Mirror of https://github.com/jpetazzo/container.training.git (synced 2026-02-15 18:19:56 +00:00)
Compare commits: `paris` ... `indexconf2` (12 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 0900d605ef | |
| | f67cfa8693 | |
| | cb8690f4a3 | |
| | 7a6d488d60 | |
| | b1d8b5eec8 | |
| | 1983d6cb4f | |
| | e565da49ca | |
| | ee33799a8f | |
| | b61426a044 | |
| | fd057d8a1e | |
| | 4b76fbcc4b | |
| | b25d40f48e | |
CHECKLIST.md (19 lines changed)

@@ -1,19 +0,0 @@
This is the checklist that I (Jérôme) use when delivering a workshop.

- [ ] Create branch + `_redirects` + push to GitHub + Netlify setup
- [ ] Add branch to index.html
- [ ] Update the slides that say which versions we are using
- [ ] Update the versions of Compose and Machine in settings
- [ ] Create chatroom
- [ ] Set chatroom in YML and deploy
- [ ] Put chat room in index.html
- [ ] Walk the room to count seats, check power supplies, lectern, A/V setup
- [ ] How many VMs do we need?
- [ ] Provision VMs
- [ ] Print cards
- [ ] Cut cards
- [ ] Last-minute merge from master
- [ ] Check that all looks good
- [ ] DELIVER!
- [ ] Shut down VMs
- [ ] Update index.html to remove chat link and move session to past events
README.md (11 lines changed)

@@ -247,17 +247,6 @@ content but you also know to skip during presentation.
- The last 15-30 minutes are for stateful services, DAB files, and questions.

### Pre-built images

There are pre-built images for the 4 components of the DockerCoins demo app: `dockercoins/hasher:v0.1`, `dockercoins/rng:v0.1`, `dockercoins/webui:v0.1`, and `dockercoins/worker:v0.1`. They correspond to the code in this repository.

There are also three variants, for demo purposes:

- `dockercoins/rng:v0.2` is broken (the server won't even start),
- `dockercoins/webui:v0.2` has a bigger font on the Y axis and a green graph (instead of blue),
- `dockercoins/worker:v0.2` is 11x slower than `v0.1`.

## Past events

Since its inception, this workshop has been delivered dozens of times,
@@ -4,12 +4,6 @@

- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this

And if you want to generate printable cards:

- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
- [jinja2](https://pypi.python.org/pypi/Jinja2) (on a Mac: `brew install jinja2`)

## General Workflow

@@ -41,16 +35,6 @@ The Docker Compose file here is used to build an image with all the dependencies
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`

If you're not using AWS, set these to placeholder values:

```
export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="foo"
export AWS_DEFAULT_REGION="foo"
```

If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS, you can ignore this.
### Update/copy `settings/example.yaml`

Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
@@ -64,7 +48,6 @@ workshopctl - the orchestration workshop swiss army knife

Commands:
ami          Show the AMI that will be used for deployment
amis         List Ubuntu AMIs in the current region
build        Build the Docker image to run this program in a container
cards        Generate ready-to-print cards for a batch of VMs
deploy       Install Docker on a bunch of running VMs
ec2quotas    Check our EC2 quotas (max instances)
@@ -72,7 +55,6 @@ help Show available commands
ids          List the instance IDs belonging to a given tag or token
ips          List the IP addresses of the VMs for a given tag or token
kube         Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest     Check that all nodes are reporting as Ready
list         List available batches in the current region
opensg       Open the default security group to ALL ingress traffic
pull_images  Pre-pull a bunch of Docker images
@@ -81,7 +63,6 @@ start Start a batch of VMs
status       List instance status for a given batch
stop         Stop (terminate, shutdown, kill, remove, destroy...) instances
test         Run tests (pre-flight checks) on a batch of VMs
wrap         Run this program in a container
```

### Summary of What `./workshopctl` Does For You
@@ -94,12 +75,12 @@ wrap Run this program in a container

- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard-coded.

### Example Steps to Launch a Batch of AWS Instances for a Workshop
### Example Steps to Launch a Batch of Instances for a Workshop
- Run `./workshopctl start N` to create `N` EC2 instances
- Your local SSH key will be synced to the instances under the `ubuntu` user
- AWS instances will be created and tagged based on the date, and their IPs stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
- If it errors or times out, you should be able to rerun it
- Requires a good connection to run all the parallel SSH connections, up to 100 in parallel (ProTip: create a dedicated management instance in the same AWS region and run all these utilities from there)
- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
@@ -107,67 +88,6 @@ wrap Run this program in a container
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate the instances.

### Example Steps to Launch Azure Instances
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account
- Customize `azuredeploy.parameters.json`
  - Required:
    - Provide the SSH public key you plan to use for instance configuration
  - Optional:
    - Choose a name for the workshop (default is "workshop")
    - Choose the number of instances (default is 3)
    - Customize the desired instance size (default is Standard_D1_v2)
- Launch instances with your chosen resource group name and your preferred region; the examples are "workshop" and "eastus":

```
az group create --name workshop --location eastus
az group deployment create --resource-group workshop --template-file azuredeploy.json --parameters @azuredeploy.parameters.json
```

The `az group deployment create` command can take several minutes and will only say `- Running ..` until it completes, unless you increase the verbosity with `--verbose` or `--debug`.
To display the IPs of the instances you've launched:

```
az vm list-ip-addresses --resource-group workshop --output table
```
If you want to put the IPs into `prepare-vms/tags/<tag>/ips.txt` for a tag of "myworkshop":

1) If you haven't yet installed `jq` and/or created your event's tags directory in `prepare-vms`:

```
brew install jq
mkdir -p tags/myworkshop
```

2) Then generate the IP list:

```
az vm list-ip-addresses --resource-group workshop --output json | jq -r '.[].virtualMachine.network.publicIpAddresses[].ipAddress' > tags/myworkshop/ips.txt
```
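For reference, the same extraction can be sketched in Python. This is a minimal illustration, not part of the repo: the nested field names are taken from the `jq` filter above, and the sample addresses are made up (documentation range).

```python
import json

# Sample of the JSON shape emitted by `az vm list-ip-addresses --output json`,
# as implied by the jq filter above (the real output has more fields).
sample = '''
[
  {"virtualMachine": {"name": "node1",
    "network": {"publicIpAddresses": [{"ipAddress": "203.0.113.10"}]}}},
  {"virtualMachine": {"name": "node2",
    "network": {"publicIpAddresses": [{"ipAddress": "203.0.113.11"}]}}}
]
'''

def extract_ips(doc):
    # Equivalent of: jq -r '.[].virtualMachine.network.publicIpAddresses[].ipAddress'
    return [
        addr["ipAddress"]
        for vm in json.loads(doc)
        for addr in vm["virtualMachine"]["network"]["publicIpAddresses"]
    ]

print("\n".join(extract_ips(sample)))  # one IP per line, like ips.txt
```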
After the workshop is over, remove the instances:

```
az group delete --resource-group workshop
```
### Example Steps to Configure Instances from a non-AWS Source

- Launch instances via your preferred method. You'll need to get the instance IPs and be able to SSH into them.
- Set placeholder values for the [AWS environment variable settings](#required-environment-variables).
- Choose a tag. It could be an event name, datestamp, etc. Ensure you have created a directory for your tag: `prepare-vms/tags/<tag>/`
- If you have not already done so, generate a file with the IPs to be configured:
  - The file should be named `prepare-vms/tags/<tag>/ips.txt`
  - Format is one IP per line, no other info needed.
- Ensure the settings file is as desired (especially the number of nodes): `prepare-vms/settings/kube101.yaml`
- For a tag called `myworkshop`, configure the instances: `workshopctl deploy myworkshop settings/kube101.yaml`
- Optionally, configure Kubernetes clusters of the size given in the settings: `workshopctl kube myworkshop`
- Optionally, test your Kubernetes clusters (they may take a little time to become ready): `workshopctl kubetest myworkshop`
- Generate cards to print and hand out: `workshopctl cards myworkshop settings/kube101.yaml`
- Print the cards file: `prepare-vms/tags/myworkshop/ips.html`
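Since `deploy` fans out over every line of `ips.txt`, a quick format check (one bare IPv4 address per line) can save a failed run. A minimal sketch; `check_ips_file` is a hypothetical helper, not part of `workshopctl`:

```python
import ipaddress

def check_ips_file(text):
    """Return the list of IPs in an ips.txt-style string.

    Hypothetical helper: raises ValueError if any non-blank line
    is not a bare IPv4 address.
    """
    ips = []
    for n, line in enumerate(text.splitlines(), 1):
        line = line.strip()
        if not line:
            continue  # tolerate blank lines
        try:
            ipaddress.IPv4Address(line)  # rejects hostnames, ranges, junk
        except ValueError as e:
            raise ValueError(f"ips.txt line {n}: {e}")
        ips.append(line)
    return ips

print(check_ips_file("203.0.113.10\n203.0.113.11\n"))
```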

## Other Tools

### Deploying your SSH key to all the machines

@@ -177,6 +97,13 @@ az group delete --resource-group workshop

- Run `pcopykey`.
### Installing extra packages

- Source `postprep.rc`.
  (This will install a few extra packages, add entries to
  /etc/hosts, generate SSH keys, and deploy them on all hosts.)

## Even More Details

#### Sync of SSH keys
@@ -205,7 +132,7 @@ Instances can be deployed manually using the `deploy` command:

    $ ./workshopctl deploy TAG settings/somefile.yaml

The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.
The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and executed.

#### Pre-pull images

@@ -215,10 +142,6 @@ The `postprep.py` file will be copied via parallel-ssh to all of the VMs and exe
    $ ./workshopctl cards TAG settings/somefile.yaml

If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html); without it, only HTML cards will be generated.

If you don't have `wkhtmltopdf` installed, you will get a warning that it is a missing dependency. If you plan to just print the HTML cards, you can ignore this.

#### List tags

    $ ./workshopctl list
@@ -1,250 +0,0 @@

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workshopName": {
      "type": "string",
      "defaultValue": "workshop",
      "metadata": {
        "description": "Workshop name."
      }
    },
    "vmPrefix": {
      "type": "string",
      "defaultValue": "node",
      "metadata": {
        "description": "Prefix for VM names."
      }
    },
    "numberOfInstances": {
      "type": "int",
      "defaultValue": 3,
      "metadata": {
        "description": "Number of VMs to create."
      }
    },
    "adminUsername": {
      "type": "string",
      "defaultValue": "ubuntu",
      "metadata": {
        "description": "Admin username for VMs."
      }
    },
    "sshKeyData": {
      "type": "string",
      "defaultValue": "",
      "metadata": {
        "description": "SSH rsa public key file as a string."
      }
    },
    "imagePublisher": {
      "type": "string",
      "defaultValue": "Canonical",
      "metadata": {
        "description": "OS image publisher; default Canonical."
      }
    },
    "imageOffer": {
      "type": "string",
      "defaultValue": "UbuntuServer",
      "metadata": {
        "description": "The name of the image offer. The default is Ubuntu"
      }
    },
    "imageSKU": {
      "type": "string",
      "defaultValue": "16.04-LTS",
      "metadata": {
        "description": "Version of the image. The default is 16.04-LTS"
      }
    },
    "vmSize": {
      "type": "string",
      "defaultValue": "Standard_D1_v2",
      "metadata": {
        "description": "VM Size."
      }
    }
  },
  "variables": {
    "vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualNetworkName'))]",
    "subnet1Ref": "[concat(variables('vnetID'),'/subnets/',variables('subnet1Name'))]",
    "vmName": "[parameters('vmPrefix')]",
    "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]",
    "publicIPAddressName": "PublicIP",
    "publicIPAddressType": "Dynamic",
    "virtualNetworkName": "MyVNET",
    "netSecurityGroup": "MyNSG",
    "addressPrefix": "10.0.0.0/16",
    "subnet1Name": "subnet-1",
    "subnet1Prefix": "10.0.0.0/24",
    "nicName": "myVMNic"
  },
  "resources": [
    {
      "apiVersion": "2017-11-01",
      "type": "Microsoft.Network/publicIPAddresses",
      "name": "[concat(variables('publicIPAddressName'),copyIndex(1))]",
      "location": "[resourceGroup().location]",
      "copy": {
        "name": "publicIPLoop",
        "count": "[parameters('numberOfInstances')]"
      },
      "properties": {
        "publicIPAllocationMethod": "[variables('publicIPAddressType')]"
      },
      "tags": {
        "workshop": "[parameters('workshopName')]"
      }
    },
    {
      "apiVersion": "2017-11-01",
      "type": "Microsoft.Network/virtualNetworks",
      "name": "[variables('virtualNetworkName')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Network/networkSecurityGroups/', variables('netSecurityGroup'))]"
      ],
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[variables('addressPrefix')]"
          ]
        },
        "subnets": [
          {
            "name": "[variables('subnet1Name')]",
            "properties": {
              "addressPrefix": "[variables('subnet1Prefix')]",
              "networkSecurityGroup": {
                "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('netSecurityGroup'))]"
              }
            }
          }
        ]
      },
      "tags": {
        "workshop": "[parameters('workshopName')]"
      }
    },
    {
      "apiVersion": "2017-11-01",
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[concat(variables('nicName'),copyIndex(1))]",
      "location": "[resourceGroup().location]",
      "copy": {
        "name": "nicLoop",
        "count": "[parameters('numberOfInstances')]"
      },
      "dependsOn": [
        "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'),copyIndex(1))]",
        "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'), copyIndex(1)))]"
              },
              "subnet": {
                "id": "[variables('subnet1Ref')]"
              }
            }
          }
        ]
      },
      "tags": {
        "workshop": "[parameters('workshopName')]"
      }
    },
    {
      "apiVersion": "2017-12-01",
      "type": "Microsoft.Compute/virtualMachines",
      "name": "[concat(variables('vmName'),copyIndex(1))]",
      "location": "[resourceGroup().location]",
      "copy": {
        "name": "vmLoop",
        "count": "[parameters('numberOfInstances')]"
      },
      "dependsOn": [
        "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), copyIndex(1))]"
      ],
      "properties": {
        "hardwareProfile": {
          "vmSize": "[parameters('vmSize')]"
        },
        "osProfile": {
          "computerName": "[concat(variables('vmName'),copyIndex(1))]",
          "adminUsername": "[parameters('adminUsername')]",
          "linuxConfiguration": {
            "disablePasswordAuthentication": true,
            "ssh": {
              "publicKeys": [
                {
                  "path": "[variables('sshKeyPath')]",
                  "keyData": "[parameters('sshKeyData')]"
                }
              ]
            }
          }
        },
        "storageProfile": {
          "osDisk": {
            "createOption": "FromImage"
          },
          "imageReference": {
            "publisher": "[parameters('imagePublisher')]",
            "offer": "[parameters('imageOffer')]",
            "sku": "[parameters('imageSKU')]",
            "version": "latest"
          }
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('nicName'),copyIndex(1)))]"
            }
          ]
        }
      },
      "tags": {
        "workshop": "[parameters('workshopName')]"
      }
    },
    {
      "apiVersion": "2017-11-01",
      "type": "Microsoft.Network/networkSecurityGroups",
      "name": "[variables('netSecurityGroup')]",
      "location": "[resourceGroup().location]",
      "tags": {
        "workshop": "[parameters('workshopName')]"
      },
      "properties": {
        "securityRules": [
          {
            "name": "default-open-ports",
            "properties": {
              "protocol": "Tcp",
              "sourcePortRange": "*",
              "destinationPortRange": "*",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "*",
              "access": "Allow",
              "priority": 1000,
              "direction": "Inbound"
            }
          }
        ]
      }
    }
  ],
  "outputs": {
    "resourceID": {
      "type": "string",
      "value": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'),'1'))]"
    }
  }
}
@@ -1,18 +0,0 @@

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "sshKeyData": {
      "value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXTIl/M9oeSlcsC5Rfe+nZr4Jc4sl200pSw2lpdxlZ3xzeP15NgSSMJnigUrKUXHfqRQ+2wiPxEf0Odz2GdvmXvR0xodayoOQsO24AoERjeSBXCwqITsfp1bGKzMb30/3ojRBo6LBR6r1+lzJYnNCGkT+IQwLzRIpm0LCNz1j08PUI2aZ04+mcDANvHuN/hwi/THbLLp6SNWN43m9r02RcC6xlCNEhJi4wk4VzMzVbSv9RlLGST2ocbUHwmQ2k9OUmpzoOx73aQi9XNnEaFh2w/eIdXM75VtkT3mRryyykg9y0/hH8/MVmIuRIdzxHQqlm++DLXVH5Ctw6a4kS+ki7 workshop"
    },
    "workshopName": {
      "value": "workshop"
    },
    "numberOfInstances": {
      "value": 3
    },
    "vmSize": {
      "value": "Standard_D1_v2"
    }
  }
}
@@ -15,6 +15,5 @@ services:

      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
      AWS_INSTANCE_TYPE: ${AWS_INSTANCE_TYPE}
      USER: ${USER}
    entrypoint: /root/prepare-vms/workshopctl
@@ -2,7 +2,7 @@

_ERR() {
    error "Command $BASH_COMMAND failed (exit status: $?)"
}
set -eE
set -e
trap _ERR ERR

die() {
@@ -39,10 +39,7 @@ _cmd_cards() {

    need_tag $TAG
    need_settings $SETTINGS

    # If you're not using AWS, populate the ips.txt file manually
    if [ ! -f tags/$TAG/ips.txt ]; then
        aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
    fi
    aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt

    # Remove symlinks to old cards
    rm -f ips.html ips.pdf
@@ -127,7 +124,7 @@ _cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"

_cmd_kube() {

    # Install packages
    pssh --timeout 200 "
    pssh "
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |
    sudo apt-key add - &&
    echo deb http://apt.kubernetes.io/ kubernetes-xenial main |

@@ -138,7 +135,7 @@ _cmd_kube() {

    kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"

    # Initialize kube master
    pssh --timeout 200 "
    pssh "
    if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
        kubeadm token generate > /tmp/token
        sudo kubeadm init --token \$(cat /tmp/token)

@@ -162,7 +159,7 @@ _cmd_kube() {

    fi"

    # Join the other nodes to the cluster
    pssh --timeout 200 "
    pssh "
    if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
        TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
        sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443

@@ -171,19 +168,6 @@ _cmd_kube() {

    sep "Done"
}

_cmd kubetest "Check that all nodes are reporting as Ready"
_cmd_kubetest() {
    # There are way too many backslashes in the command below.
    # Feel free to make that better ♥
    pssh "
    set -e
    if grep -q node1 /tmp/node; then
        for NODE in \$(awk /\ node/\ {print\ \\\$2} /etc/hosts); do
            echo \$NODE ; kubectl get nodes | grep -w \$NODE | grep -w Ready
        done
    fi"
}

_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd_ids() {
    TAG=$1
@@ -296,7 +280,7 @@ _cmd_start() {

    result=$(aws ec2 run-instances \
        --key-name $AWS_KEY_NAME \
        --count $COUNT \
        --instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
        --instance-type t2.medium \
        --client-token $TOKEN \
        --image-id $AMI)
    reservation_id=$(echo "$result" | head -1 | awk '{print $2}')

@@ -434,7 +418,6 @@ tag_is_reachable() {
}

test_tag() {
    TAG=$1
    ips_file=tags/$TAG/ips.txt
    info "Picking a random IP address in $ips_file to run tests."
    n=$((1 + $RANDOM % $(wc -l <$ips_file)))
@@ -1,24 +0,0 @@

# customize your cluster size, your cards template, and the versions

# Number of VMs per cluster
clustersize: 5

# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html

# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter

# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in

# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)

# This can be "test" or "stable"
engine_version: test

# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.18.0
machine_version: 0.13.0
@@ -1,106 +0,0 @@

{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
  {%- set workshop_name = "Docker workshop" -%}
  {%- set cluster_or_machine = "machine" -%}
  {%- set this_or_each = "this" -%}
  {%- set machine_is_or_machines_are = "machine is" -%}
  {%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
  {%- set workshop_name = "Kubernetes workshop" -%}
  {%- set cluster_or_machine = "cluster" -%}
  {%- set this_or_each = "each" -%}
  {%- set machine_is_or_machines_are = "machines are" -%}
  {%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
  {%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
  {%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
  margin: 0;
  padding: 0;
  line-height: 1em;
  font-size: 14px;
}

table {
  border-spacing: 0;
  margin-top: 0.4em;
  margin-bottom: 0.4em;
  border-left: 0.8em double grey;
  padding-left: 0.4em;
}

div {
  float: left;
  border: 1px dotted black;
  padding-top: 1%;
  padding-bottom: 1%;
  /* columns * (width+left+right) < 100% */
  width: 21.5%;
  padding-left: 1.5%;
  padding-right: 1.5%;
}

p {
  margin: 0.4em 0 0.4em 0;
}

img {
  height: 4em;
  float: right;
  margin-right: -0.4em;
}

.logpass {
  font-family: monospace;
  font-weight: bold;
}

.pagebreak {
  page-break-after: always;
  clear: both;
  display: block;
  height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>

<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>
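The template above starts a new printed page every `pagesize` (12) cards via the `loop.index0 % pagesize == 0` test. The same chunking logic can be sketched in Python; `paginate` is an illustrative helper, not part of the repo:

```python
def paginate(cards, pagesize=12):
    """Split a flat list of cards into pages of at most `pagesize` items,
    mirroring the template's `loop.index0 % pagesize == 0` page breaks."""
    return [cards[i:i + pagesize] for i in range(0, len(cards), pagesize)]

# e.g. 30 clusters -> 3 pages of 12, 12, and 6 cards
pages = paginate([f"cluster-{n}" for n in range(30)])
print(len(pages), [len(p) for p in pages])  # → 3 [12, 12, 6]
```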

@@ -1,24 +0,0 @@

# 3 nodes for k8s 101 workshops

# Number of VMs per cluster
clustersize: 3

# Jinja2 template to use to generate ready-to-cut cards
cards_template: settings/kube101.html

# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter

# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in

# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)

# This can be "test" or "stable"
engine_version: test

# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.18.0
machine_version: 0.13.0
@@ -20,7 +20,7 @@ DEPENDENCIES="
ssh
curl
jq
pssh
parallel-ssh
wkhtmltopdf
man
"

@@ -1 +1,2 @@
/* http://paris-container-training.netlify.com/:splat 200!
/ /kube-halfday.yml.html 200!
@@ -1,28 +0,0 @@

## About these slides

- All the content is available in a public GitHub repository:

  https://github.com/jpetazzo/container.training

- You can get updated "builds" of the slides there:

  http://container.training/

<!--
.exercise[
```open https://github.com/jpetazzo/container.training```
```open http://container.training/```
]
-->

--

- Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...

.footnote[.emoji[👇] Try it! The source file will be shown, and you can view it on GitHub, fork it, and edit it.]

<!--
.exercise[
```open https://github.com/jpetazzo/container.training/tree/master/slides/common/about-slides.md```
]
-->
@@ -1,12 +0,0 @@

## Clean up

- Before moving on, let's remove those containers

.exercise[

- Tell Compose to remove everything:
  ```bash
  docker-compose down
  ```

]
@@ -1,240 +0,0 @@

## Restarting in the background

- Many flags and commands of Compose are modeled after those of `docker`

.exercise[

- Start the app in the background with the `-d` option:
  ```bash
  docker-compose up -d
  ```

- Check that our app is running with the `ps` command:
  ```bash
  docker-compose ps
  ```

]

`docker-compose ps` also shows the ports exposed by the application.

---

class: extra-details

## Viewing logs

- The `docker-compose logs` command works like `docker logs`

.exercise[

- View all logs since container creation and exit when done:
  ```bash
  docker-compose logs
  ```

- Stream container logs, starting at the last 10 lines for each container:
  ```bash
  docker-compose logs --tail 10 --follow
  ```

<!--
```wait units of work done```
```keys ^C```
-->

]

Tip: use `^S` and `^Q` to pause/resume log output.

---
class: extra-details

## Upgrading from Compose 1.6

.warning[The `logs` command has changed between Compose 1.6 and 1.7!]

- Up to 1.6

  - `docker-compose logs` is the equivalent of `logs --follow`

  - `docker-compose logs` must be restarted if containers are added

- Since 1.7

  - `--follow` must be specified explicitly

  - new containers are automatically picked up by `docker-compose logs`

---

## Scaling up the application

- Our goal is to make that performance graph go up (without changing a line of code!)

--

- Before trying to scale the application, we'll figure out if we need more resources

  (CPU, RAM...)

- For that, we will use good old UNIX tools on our Docker node

---
## Looking at resource usage

- Let's look at CPU, memory, and I/O usage

.exercise[

- run `top` to see CPU and memory usage (you should see idle cycles)

<!--
```bash top```
```wait Tasks```
```keys ^C```
-->

- run `vmstat 1` to see I/O usage (si/so/bi/bo)
  <br/>(the 4 numbers should be almost zero, except `bo` for logging)

<!--
```bash vmstat 1```
```wait memory```
```keys ^C```
-->

]

We have available resources.

- Why?
- How can we use them?

---
## Scaling workers on a single node

- Docker Compose supports scaling
- Let's scale `worker` and see what happens!

.exercise[

- Start one more `worker` container:
  ```bash
  docker-compose scale worker=2
  ```

- Look at the performance graph (it should show a 2x improvement)

- Look at the aggregated logs of our containers (`worker_2` should show up)

- Look at the impact on CPU load with e.g. `top` (it should be negligible)

]

---

## Adding more workers

- Great, let's add more workers and call it a day, then!

.exercise[

- Start eight more `worker` containers:
  ```bash
  docker-compose scale worker=10
  ```

- Look at the performance graph: does it show a 10x improvement?

- Look at the aggregated logs of our containers

- Look at the impact on CPU load and memory usage

]

---
# Identifying bottlenecks
|
||||
|
||||
- You should have seen a 3x speed bump (not 10x)
|
||||
|
||||
- Adding workers didn't result in linear improvement
|
||||
|
||||
- *Something else* is slowing us down
|
||||
|
||||
--
|
||||
|
||||
- ... But what?
|
||||
|
||||
--
|
||||
|
||||
- The code doesn't have instrumentation
|
||||
|
||||
- Let's use state-of-the-art HTTP performance analysis!
|
||||
<br/>(i.e. good old tools like `ab`, `httping`...)
|
||||
|
||||
---
|
||||
|
||||
## Accessing internal services
|
||||
|
||||
- `rng` and `hasher` are exposed on ports 8001 and 8002
|
||||
|
||||
- This is declared in the Compose file:
|
||||
|
||||
```yaml
|
||||
...
|
||||
rng:
|
||||
build: rng
|
||||
ports:
|
||||
- "8001:80"
|
||||
|
||||
hasher:
|
||||
build: hasher
|
||||
ports:
|
||||
- "8002:80"
|
||||
...
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Measuring latency under load
|
||||
|
||||
We will use `httping`.
|
||||
|
||||
.exercise[
|
||||
|
||||
- Check the latency of `rng`:
|
||||
```bash
|
||||
httping -c 3 localhost:8001
|
||||
```
|
||||
|
||||
- Check the latency of `hasher`:
|
||||
```bash
|
||||
httping -c 3 localhost:8002
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
`rng` has a much higher latency than `hasher`.
|
||||
|
||||
---
|
||||
|
||||
## Let's draw hasty conclusions
|
||||
|
||||
- The bottleneck seems to be `rng`
|
||||
|
||||
- *What if* we don't have enough entropy and can't generate enough random numbers?
|
||||
|
||||
- We need to scale out the `rng` service on multiple machines!
|
||||
|
||||
Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
|
||||
|
||||
(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
|
||||
<br/>
|
||||
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)
|
||||
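If you want to verify the "never runs out" claim yourself, here is a quick standalone check (not part of the DockerCoins code) that mimics what the `rng` service does, reading 32 bytes of randomness per request:

```shell
# Read 32 bytes from /dev/urandom three times, printing each batch as hex.
# /dev/urandom is non-blocking: every read returns immediately,
# regardless of how much "entropy" the kernel estimates it has.
for i in 1 2 3; do
  head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n'   # 32 bytes -> 64 hex chars
  echo
done
```

Each line shows 64 hexadecimal characters, and the loop never stalls, no matter how fast or how often you run it.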
@@ -9,7 +9,8 @@

- We recommend having a mentor to help you ...

- ... Or be comfortable spending some time reading the Kubernetes [documentation](https://kubernetes.io/docs/) ...
- ... Or be comfortable spending some time reading the Kubernetes
  [documentation](https://kubernetes.io/docs/) ...

- ... And looking for answers on [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and other outlets

@@ -26,10 +27,41 @@ class: self-paced

- These slides include *tons* of exercises and examples

- They assume that you have access to a Kubernetes cluster
- They assume that you have access to some Docker nodes

- If you are attending a workshop or tutorial:
  <br/>you will be given specific instructions to access your cluster

- If you are doing this on your own:
  <br/>the first chapter will give you various options to get your own cluster

---

## About these slides

- All the content is available in a public GitHub repository:

  https://github.com/jpetazzo/container.training

- You can get updated "builds" of the slides there:

  http://container.training/

<!--
.exercise[
```open https://github.com/jpetazzo/container.training```
```open http://container.training/```
]
-->

--

- Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...

.footnote[.emoji[👇] Try it! The source file will be shown and you can view it on GitHub and fork and edit it.]

<!--
.exercise[
```open https://github.com/jpetazzo/container.training/tree/master/slides/common/intro.md```
]
-->
@@ -24,19 +24,13 @@ class: extra-details

## Extra details

- This slide has a little magnifying glass in the top left corner
- This slide should have a little magnifying glass in the top left corner

- This magnifying glass indicates slides that provide extra details
  (If it doesn't, it's because CSS is hard — we're only backend people, alas!)

- Feel free to skip them if:
- Slides with that magnifying glass indicate slides providing extra details

  - you are in a hurry

  - you are new to this and want to avoid cognitive overload

  - you want only the most essential information

- You can review these slides another time if you want, they'll be waiting for you ☺
- Feel free to skip them if you're in a hurry!

---

@@ -68,9 +62,9 @@ Misattributed to Benjamin Franklin

- This is the stuff you're supposed to do!

- Go to [container.training](http://container.training/) to view these slides
- Go to [indexconf2018.container.training](http://indexconf2018.container.training/) to view these slides

- Join the chat room: @@CHAT@@
- Join the chat room on @@CHAT@@

<!-- ```open http://container.training/``` -->

@@ -84,17 +78,11 @@ class: in-person

---

class: in-person, pic

![You get a cluster](images/you-get-a-cluster.jpg)

---

class: in-person

## You get a cluster of cloud VMs
## You get three VMs

- Each person gets a private cluster of cloud VMs (not shared with anybody else)
- Each person gets 3 private VMs (not shared with anybody else)

- They'll remain up for the duration of the workshop

@@ -102,7 +90,7 @@ class: in-person

- You can automatically SSH from one VM to another

- The nodes have aliases: `node1`, `node2`, etc.
- The nodes have aliases: `node1`, `node2`, `node3`.

---

@@ -159,7 +147,7 @@ class: in-person

<!--
```bash
for N in $(awk '/node/{print $2}' /etc/hosts); do
for N in $(seq 1 3); do
  ssh -o StrictHostKeyChecking=no node$N true
done
```
@@ -175,7 +163,7 @@ fi
```bash
ssh node2
```
- Type `exit` or `^D` to come back to `node1`
- Type `exit` or `^D` to come back to node1

<!-- ```bash exit``` -->


@@ -21,79 +21,6 @@

---

class: extra-details

## Compose file format version

*Particularly relevant if you have used Compose before...*

- Compose 1.6 introduced support for a new Compose file format (aka "v2")

- Services are no longer at the top level, but under a `services` section

- There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer)

- Containers are placed on a dedicated network, making links unnecessary

- There are other minor differences, but upgrade is easy and straightforward
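Put together, those points give a minimal v2 file along these lines (the service and image names below are made up for illustration, not taken from the repository):

```yaml
version: "2"        # must be the quoted string "2", not the number 2

services:           # services now live under this key, not at the top level
  web:
    build: web
    ports:
    - "8000:80"
  redis:            # reachable from "web" simply as "redis": no links needed
    image: redis
```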
---

## Service discovery in container-land

- We do not hard-code IP addresses in the code

- We do not hard-code FQDNs in the code, either

- We just connect to a service name, and container-magic does the rest

  (And by container-magic, we mean "a crafty, dynamic, embedded DNS server")

---

## Example in `worker/worker.py`

```python
redis = Redis("`redis`")


def get_random_bytes():
    r = requests.get("http://`rng`/32")
    return r.content


def hash_bytes(data):
    r = requests.post("http://`hasher`/",
                      data=data,
                      headers={"Content-Type": "application/octet-stream"})
```

(Full source code available [here](
https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))

---

class: extra-details

## Links, naming, and service discovery

- Containers can have network aliases (resolvable through DNS)

- Compose file version 2+ makes each container reachable through its service name

- Compose file version 1 required "links" sections

- Network aliases are automatically namespaced

  - you can have multiple apps declaring and using a service named `database`

  - containers in the blue app will resolve `database` to the IP of the blue database

  - containers in the green app will resolve `database` to the IP of the green database

---

## What's this application?

--
@@ -138,7 +65,7 @@ fi

- Clone the repository on `node1`:
  ```bash
  git clone git://github.com/jpetazzo/container.training
  git clone https://github.com/jpetazzo/container.training/
  ```

]
@@ -165,6 +92,7 @@ Without further ado, let's start our application.

<!--
```longwait units of work done```
```keys ^C```
-->

]
@@ -175,22 +103,29 @@ and displays aggregated logs.

---

## Our application at work
## Lots of logs

- On the left-hand side, the "rainbow strip" shows the container names

- On the right-hand side, we see the output of our containers
- The application continuously generates logs

- We can see the `worker` service making requests to `rng` and `hasher`

- For `rng` and `hasher`, we see HTTP access logs
- Let's put that in the background

.exercise[

- Stop the application by hitting `^C`

]

- `^C` stops all containers by sending them the `TERM` signal

- Some containers exit immediately, others take longer
  <br/>(because they don't handle `SIGTERM` and end up being killed after a 10s timeout)

---

## Connecting to the web UI

- "Logs are exciting and fun!" (No-one, ever)

- The `webui` container exposes a web dashboard; let's view it

.exercise[
@@ -210,94 +145,15 @@ graph will appear.

---

class: self-paced, extra-details
## Clean up

## If the graph doesn't load

If you just see a `Page not found` error, it might be because your
Docker Engine is running on a different machine. This can be the case if:

- you are using the Docker Toolbox

- you are using a VM (local or remote) created with Docker Machine

- you are controlling a remote Docker Engine

When you run DockerCoins in development mode, the web UI static files
are mapped to the container using a volume. Alas, volumes can only
work on a local environment, or when using Docker4Mac or Docker4Windows.

How to fix this?

Stop the app with `^C`, edit `dockercoins.yml`, comment out the `volumes` section, and try again.

---

class: extra-details

## Why does the speed seem irregular?

- It *looks like* the speed is approximately 4 hashes/second

- Or more precisely: 4 hashes/second, with regular dips down to zero

- Why?

--

class: extra-details

- The app actually has a constant, steady speed: 3.33 hashes/second
  <br/>
  (which corresponds to 1 hash every 0.3 seconds, for *reasons*)

- Yes, and?

---

class: extra-details

## The reason why this graph is *not awesome*

- The worker doesn't update the counter after every loop, but up to once per second

- The speed is computed by the browser, checking the counter about once per second

- Between two consecutive updates, the counter will increase either by 4, or by 0

- The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.

- What can we conclude from this?

--

class: extra-details

- "I'm clearly incapable of writing good frontend code!" 😀 — Jérôme
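To see where the lumpy readings come from, we can simulate once-per-second sampling of a counter that advances at 1 hash every 0.3 seconds (a standalone sketch, not code from the app; the app's coarser counter updates make the jumps even lumpier, down to the 4s and 0s above):

```shell
awk 'BEGIN {
  rate = 1 / 0.3                    # ~3.33 hashes per second
  prev = 0
  for (t = 1; t <= 10; t++) {       # sample the counter once per second
    count = int(t * rate)           # integer counter value at time t
    printf "t=%2ds jump=%d\n", t, count - prev
    prev = count
  }
}'
```

Every per-second jump is either 3 or 4 hashes, averaging out to about 3.3 hashes/second, even though the underlying rate is perfectly steady.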

---

## Stopping the application

- If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app

- The Docker Engine will send a `TERM` signal to the containers

- If the containers do not exit in a timely manner, the Engine sends a `KILL` signal
- Before moving on, let's remove those containers

.exercise[

- Stop the application by hitting `^C`

<!--
```keys ^C```
-->
- Tell Compose to remove everything:
  ```bash
  docker-compose down
  ```

]

--

Some containers exit immediately, others take longer.

The containers that do not handle `SIGTERM` end up being killed after a 10s timeout.


@@ -9,3 +9,14 @@ class: title, in-person
That's all, folks! <br/> Questions?

![end](images/end.jpg)

---

# Links and resources

- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
- [Local meetups](https://www.meetup.com/)
- [Microsoft Cloud Developer Advocates](https://developer.microsoft.com/en-us/advocates/)

.footnote[These slides (and future updates) are on → http://container.training/]

@@ -17,5 +17,5 @@ class: title, in-person
*Don't stream videos or download big files during the workshop.*<br/>
*Thank you!*

**Slides: http://container.training/**
]
**Slides: http://indexconf2018.container.training/**
]

Binary file not shown.
Before Width: | Height: | Size: 75 KiB |
@@ -68,32 +68,15 @@

<tr>
<!--
<td>Nothing for now (stay tuned...)</td>
thing for now (stay tuned...)</td>
-->
<td>March 14, 2018: Boosterconf — Kubernetes 101</td>
<td> </td>
<td><a class="attend" href="https://2018.boosterconf.no/talks/1179" />
<td>Nothing for now (stay tuned...)</td>
-->
<td>February 22, 2018: IndexConf — Kubernetes 101</td>
<td><a class="slides" href="http://indexconf2018.container.training/" /></td>
<td><a class="attend" href="https://developer.ibm.com/indexconf/sessions/#!?id=5474" />
</tr>

<tr>
<td>March 27, 2018: SREcon Americas — Kubernetes 101</td>
<td> </td>
<td><a class="attend" href="https://www.usenix.org/conference/srecon18americas/presentation/kromhout" />
</tr>


<tr><td class="title" colspan="4">Past workshops</td></tr>

<tr>
<!-- February 22, 2018 -->
<td>IndexConf: Kubernetes 101</td>
<td><a class="slides" href="http://indexconf2018.container.training/" /></td>
<!--
<td><a class="attend" href="https://developer.ibm.com/indexconf/sessions/#!?id=5474" />
-->
</tr>

<tr>
<td>Kubernetes enablement at Docker</td>
<td><a class="slides" href="http://kube.container.training/" /></td>

@@ -12,8 +12,7 @@ exclude:
chapters:
- common/title.md
- logistics.md
- intro/intro.md
- common/about-slides.md
- common/intro.md
- common/toc.md
- - intro/Docker_Overview.md
#- intro/Docker_History.md
@@ -41,4 +40,3 @@ chapters:
- intro/Compose_For_Dev_Stacks.md
- intro/Advanced_Dockerfiles.md
- common/thankyou.md
- intro/links.md

@@ -12,8 +12,7 @@ exclude:
chapters:
- common/title.md
# - common/logistics.md
- intro/intro.md
- common/about-slides.md
- common/intro.md
- common/toc.md
- - intro/Docker_Overview.md
#- intro/Docker_History.md
@@ -41,4 +40,3 @@ chapters:
- intro/Compose_For_Dev_Stacks.md
- intro/Advanced_Dockerfiles.md
- common/thankyou.md
- intro/links.md

@@ -90,11 +90,11 @@ COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <code>
COPY <vcode>
RUN <build code>
CMD, EXPOSE ...
```

* The build fails as soon as an instruction fails
* The build fails as soon as an instructions fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data)

@@ -1,38 +0,0 @@
## A brief introduction

- This was initially written to support in-person,
  instructor-led workshops and tutorials

- You can also follow along on your own, at your own pace

- We included as much information as possible in these slides

- We recommend having a mentor to help you ...

- ... Or be comfortable spending some time reading the Docker
  [documentation](https://docs.docker.com/) ...

- ... And looking for answers in the [Docker forums](forums.docker.com),
  [StackOverflow](http://stackoverflow.com/questions/tagged/docker),
  and other outlets

---

class: self-paced

## Hands on, you shall practice

- Nobody ever became a Jedi by spending their lives reading Wookiepedia

- Likewise, it will take more than merely *reading* these slides
  to make you an expert

- These slides include *tons* of exercises and examples

- They assume that you have access to a machine running Docker

- If you are attending a workshop or tutorial:
  <br/>you will be given specific instructions to access a cloud VM

- If you are doing this on your own:
  <br/>we will tell you how to install Docker or access a Docker environment
@@ -1 +0,0 @@
../swarm/links.md
@@ -1,11 +1,7 @@
title: |
  Deploying and Scaling Microservices
  with Kubernetes
  Kubernetes 101


#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
chat: "[Gitter](https://gitter.im/jpetazzo/workshop-20180222-sf)"

exclude:
- self-paced
@@ -13,14 +9,11 @@ exclude:
chapters:
- common/title.md
- logistics.md
- kube/intro.md
- common/about-slides.md
- common/intro.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
#- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
@@ -36,4 +29,3 @@ chapters:
- kube/rollout.md
- kube/whatsnext.md
- common/thankyou.md
- kube/links.md

@@ -11,14 +11,11 @@ exclude:
chapters:
- common/title.md
#- logistics.md
- kube/intro.md
- common/about-slides.md
- common/intro.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
@@ -34,4 +31,3 @@ chapters:
- kube/rollout.md
- kube/whatsnext.md
- common/thankyou.md
- kube/links.md

@@ -210,24 +210,12 @@ class: pic

![Kubernetes pods](images/kubernetes-pods.png)

(Diagram courtesy of Weave Works, used with permission.)

---

class: pic

![Kubernetes architecture](images/kubernetes-architecture.png)

---

## Credits

- The first diagram is courtesy of Weave Works

  - a *pod* can have multiple containers working together

  - IP addresses are associated with *pods*, not with individual containers

- The second diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha)

  - it's one of the best Kubernetes architecture diagrams available!

Both diagrams used with permission.
(Diagram courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha).)

@@ -1,33 +1,17 @@
# Daemon sets

- We want to scale `rng` in a way that is different from how we scaled `worker`
- What if we want one (and exactly one) instance of `rng` per node?

- We want one (and exactly one) instance of `rng` per node

- What if we just scale up `deploy/rng` to the number of nodes?

  - nothing guarantees that the `rng` containers will be distributed evenly

  - if we add nodes later, they will not automatically run a copy of `rng`

  - if we remove (or reboot) a node, one `rng` container will restart elsewhere
- If we just scale `deploy/rng` to 2, nothing guarantees that they spread

- Instead of a `deployment`, we will use a `daemonset`

---

## Daemon sets in practice

- Daemon sets are great for cluster-wide, per-node processes:

  - `kube-proxy`

  - `weave` (our overlay network)

  - monitoring agents

  - hardware management tools (e.g. SCSI/FC HBA agents)

  - etc.

- They can also be restricted to run [only on some nodes](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes)
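As a sketch, a DaemonSet manifest for `rng` could look like the following (this is a hypothetical illustration, not the exact manifest used in the workshop; the API version and `run=rng` label are assumptions based on the surrounding exercises, and the image is one of the pre-built DockerCoins images):

```yaml
apiVersion: apps/v1          # older clusters may need extensions/v1beta1
kind: DaemonSet
metadata:
  name: rng
spec:
  selector:
    matchLabels:
      run: rng
  template:
    metadata:
      labels:
        run: rng             # same label as the pods from the rng deployment
    spec:
      containers:
      - name: rng
        image: dockercoins/rng:v0.1   # pre-built image from the repository
        ports:
        - containerPort: 80
```

The scheduler then places exactly one `rng` pod on every (eligible) node, including nodes added later.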
@@ -396,7 +380,7 @@ Of course, option 2 offers more learning opportunities. Right?

.exercise[

- Check the logs of all `run=rng` pods to confirm that exactly one per node is now active:
- Check the logs of all `run=rng` pods to confirm that only 2 of them are now active:
  ```bash
  kubectl logs -l run=rng
  ```

@@ -4,15 +4,11 @@

- We are going to deploy that dashboard with *three commands:*

  1) actually *run* the dashboard
- one to actually *run* the dashboard

  2) bypass SSL for the dashboard
- one to make the dashboard available from outside

  3) bypass authentication for the dashboard

--

There is an additional step to make the dashboard available from outside (we'll get to that)
- one to bypass authentication for the dashboard

--

@@ -20,7 +16,7 @@ There is an additional step to make the dashboard available from outside (we'll

---

## 1) Running the dashboard
## Running the dashboard

- We need to create a *deployment* and a *service* for the dashboard

@@ -43,109 +39,11 @@ The goo.gl URL expands to:

---

## Making the dashboard reachable from outside

## 2) Bypassing SSL for the dashboard
- The dashboard is exposed through a `ClusterIP` service

- The Kubernetes dashboard uses HTTPS, but we don't have a certificate

- Recent versions of Chrome (63 and later) and Edge will refuse to connect

  (You won't even get the option to ignore a security warning!)

- We could (and should!) get a certificate, e.g. with [Let's Encrypt](https://letsencrypt.org/)

- ... But for convenience, for this workshop, we'll forward HTTP to HTTPS

.warning[Do not do this at home, or even worse, at work!]

---

## Running the SSL unwrapper

- We are going to run [`socat`](http://www.dest-unreach.org/socat/doc/socat.html), telling it to accept TCP connections and relay them over SSL

- Then we will expose that `socat` instance with a `NodePort` service

- For convenience, these steps are neatly encapsulated into another YAML file

.exercise[

- Apply the convenient YAML file, and defeat SSL protection:
  ```bash
  kubectl apply -f https://goo.gl/tA7GLz
  ```

]

The goo.gl URL expands to:
<br/>
.small[.small[https://gist.githubusercontent.com/jpetazzo/c53a28b5b7fdae88bc3c5f0945552c04/raw/da13ef1bdd38cc0e90b7a4074be8d6a0215e1a65/socat.yaml]]

.warning[All our dashboard traffic is now clear-text, including passwords!]

---

## Connecting to the dashboard

.exercise[

- Connect to http://oneofournodes:3xxxx/

<!-- ```open https://node1:3xxxx/``` -->

]

The dashboard will then ask you which authentication you want to use.

---

## Dashboard authentication

- We have three authentication options at this point:

  - token (associated with a role that has appropriate permissions)

  - kubeconfig (e.g. using the `~/.kube/config` file from `node1`)

  - "skip" (use the dashboard "service account")

- Let's use "skip": we get a bunch of warnings and don't see much

---

## 3) Bypass authentication for the dashboard

- The dashboard documentation [explains how to do this](https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges)

- We just need to load another YAML file!

.exercise[

- Grant admin privileges to the dashboard so we can see our resources:
  ```bash
  kubectl apply -f https://goo.gl/CHsLTA
  ```

- Reload the dashboard and enjoy!

]

--

.warning[By the way, we just added a backdoor to our Kubernetes cluster!]

---

## Exposing the dashboard over HTTPS

- We took a shortcut by forwarding HTTP to HTTPS inside the cluster

- Let's expose the dashboard over HTTPS!

- The dashboard is exposed through a `ClusterIP` service (internal traffic only)

- We will change that into a `NodePort` service (accepting outside traffic)
- We need a `NodePort` service instead
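Concretely, the change boils down to one field in the service spec, roughly like this (a hedged sketch: the service name, namespace, and ports are assumptions about the standard dashboard deployment, not values from these slides):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard   # assumed name of the dashboard service
  namespace: kube-system
spec:
  type: NodePort               # was: ClusterIP
  ports:
  - port: 443                  # a high port (3xxxx) gets allocated on each node
    targetPort: 8443           # assumed container port of the dashboard
```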
|
||||
.exercise[
|
||||
|
||||
@@ -170,8 +68,6 @@ The dashboard will then ask you which authentication you want to use.
|
||||
|
||||
- The dashboard was created in the `kube-system` namespace
|
||||
|
||||
--
|
||||
|
||||
.exercise[
|
||||
|
||||
- Edit the service:
|
||||
@@ -187,15 +83,56 @@ The dashboard will then ask you which authentication you want to use.
|
||||
|
||||
---
|
||||
|
||||
## Running the Kubernetes dashboard securely
|
||||
## Connecting to the dashboard
|
||||
|
||||
- The steps that we just showed you are *for educational purposes only!*
|
||||
.exercise[
|
||||
|
||||
- If you do that on your production cluster, people [can and will abuse it](https://blog.redlock.io/cryptojacking-tesla)
|
||||
- Connect to https://oneofournodes:3xxxx/
|
||||
|
||||
- For an in-depth discussion about securing the dashboard,
|
||||
<br/>
|
||||
check [this excellent post on Heptio's blog](https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca)
|
||||
- Yes, https. If you use http it will say:
|
||||
|
||||
This page isn’t working
|
||||
<oneofournodes> sent an invalid response.
|
||||
ERR_INVALID_HTTP_RESPONSE
|
||||
|
||||
- You will have to work around the TLS certificate validation warning
|
||||
|
||||
<!-- ```open https://node1:3xxxx/``` -->
|
||||
|
||||
]
|
||||
|
||||
- We have three authentication options at this point:
|
||||
|
||||
- token (associated with a role that has appropriate permissions)
|
||||
|
||||
- kubeconfig (e.g. using the `~/.kube/config` file from `node1`)
|
||||
|
||||
- "skip" (use the dashboard "service account")
|
||||
|
||||
- Let's use "skip": we get a bunch of warnings and don't see much
|
||||
|
||||
---
|
||||
|
||||
## Granting more rights to the dashboard
|
||||
|
||||
- The dashboard documentation [explains how to do this](https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges)
|
||||
|
||||
- We just need to load another YAML file!
|
||||
|
||||
.exercise[
|
||||
|
||||
- Grant admin privileges to the dashboard so we can see our resources:
|
||||
```bash
|
||||
kubectl apply -f https://goo.gl/CHsLTA
|
||||
```
|
||||
|
||||
- Reload the dashboard and enjoy!
|
||||
|
||||
]
|
||||
|
||||
--
|
||||
|
||||
.warning[By the way, we just added a backdoor to our Kubernetes cluster!]
|
||||
|
||||
---
|
||||
|
||||
@@ -248,4 +185,3 @@ The dashboard will then ask you which authentication you want to use.
|
||||
- It introduces new failure modes
|
||||
|
||||
- Example: the official setup instructions for most pod networks
|
||||
|
||||
|
||||
@@ -136,8 +136,7 @@ There is already one service on our cluster: the Kubernetes API itself.
|
||||
```
|
||||
|
||||
- `-k` is used to skip certificate verification
|
||||
|
||||
- Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc`
|
||||
- Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `$ kubectl get svc`
|
||||
|
||||
]
|
||||
|
||||
@@ -212,11 +211,9 @@ The error that we see is expected: the Kubernetes API requires authentication.
|
||||
|
||||
*Ding ding ding ding ding!*
|
||||
|
||||
The `kube-system` namespace is used for the control plane.
|
||||
|
||||
---
|
||||
|
||||
## What are all these control plane pods?
|
||||
## What are all these pods?
|
||||
|
||||
- `etcd` is our etcd server
|
||||
|
||||
@@ -235,34 +232,3 @@ The `kube-system` namespace is used for the control plane.
|
||||
- the pods with a name ending with `-node1` are the master components
|
||||
<br/>
|
||||
(they have been specifically "pinned" to the master node)
|
||||
|
||||
---

## What about `kube-public`?

.exercise[

- List the pods in the `kube-public` namespace:
```bash
kubectl -n kube-public get pods
```

]

--

- Maybe it doesn't have pods, but what secrets is `kube-public` keeping?

--

.exercise[

- List the secrets in the `kube-public` namespace:
```bash
kubectl -n kube-public get secrets
```

]

--

- `kube-public` is created by kubeadm & [used for security bootstrapping](http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html)
@@ -245,4 +245,4 @@ at the Google NOC ...
<br/>
.small[are we getting 1000 packets per second]
<br/>
-.small[of ICMP ECHO traffic from these IPs?!?”]
+.small[of ICMP ECHO traffic from Azure?!?”]
@@ -1,17 +0,0 @@
# Links and resources

- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups

- [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)

- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)

- [Azure Container Service](https://docs.microsoft.com/azure/aks/)

- [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)

- [Local meetups](https://www.meetup.com/)

- [devopsdays](https://www.devopsdays.org/)

.footnote[These slides (and future updates) are on → http://container.training/]
@@ -1,20 +0,0 @@
# Links and resources

All things Kubernetes:

- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)

All things Docker:

- [Docker documentation](http://docs.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Play With Docker Hands-On Labs](http://training.play-with-docker.com/)

Everything else:

- [Local meetups](https://www.meetup.com/)

.footnote[These slides (and future updates) are on → http://container.training/]
@@ -185,7 +185,6 @@ The curl command should now output:
- Build and push the images:
```bash
export REGISTRY
export TAG=v0.1
docker-compose -f dockercoins.yml build
docker-compose -f dockercoins.yml push
```
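Before running a build/push like the one above, a small guard can catch a missing variable early. This is a sketch and an assumption, not part of the original instructions; the helper name and the example registry address are illustrative:

```bash
# Hypothetical guard (not from the original workshop): fail fast when
# REGISTRY or TAG is missing, so nothing gets built or pushed under an
# unintended image name.
check_image_vars() {
  : "${REGISTRY:?Please set REGISTRY (e.g. 127.0.0.1:5000)}"
  : "${TAG:?Please set TAG (e.g. v0.1)}"
  echo "Will build and push images as $REGISTRY/<service>:$TAG"
}

REGISTRY=127.0.0.1:5000
TAG=v0.1
check_image_vars   # prints: Will build and push images as 127.0.0.1:5000/<service>:v0.1
```

The `${VAR:?message}` expansion aborts the script with the given message when the variable is unset or empty, which is exactly the failure mode the next slide warns about.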
@@ -221,30 +220,6 @@ services:

---

class: extra-details

## Avoiding the `latest` tag

.warning[Make sure that you've set the `TAG` variable properly!]

- If you don't, the tag will default to `latest`

- The problem with `latest`: nobody knows what it points to!

  - the latest commit in the repo?

  - the latest commit in some branch? (Which one?)

  - the latest tag?

  - some random version pushed by a random team member?

- If you keep pushing the `latest` tag, how do you roll back?

- Image tags should be meaningful, i.e. correspond to code branches, tags, or hashes
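The defaulting behavior described above is ordinary variable interpolation: when `TAG` is unset, `${TAG:-latest}` expands to `latest`. A quick illustration (the image name is just an example):

```bash
# Demonstrate how an unset TAG silently turns into the "latest" tag.
unset TAG
echo "dockercoins/worker:${TAG:-latest}"   # prints: dockercoins/worker:latest

# With TAG set explicitly, the image name is reproducible.
TAG=v0.1
echo "dockercoins/worker:${TAG:-latest}"   # prints: dockercoins/worker:v0.1
```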
---

## Deploying all the things

- We can now deploy our code (as well as a redis instance)

@@ -259,7 +234,7 @@ class: extra-details
- Deploy everything else:
```bash
for SERVICE in hasher rng webui worker; do
-  kubectl run $SERVICE --image=$REGISTRY/$SERVICE:$TAG
+  kubectl run $SERVICE --image=$REGISTRY/$SERVICE
done
```
@@ -293,7 +268,7 @@ class: extra-details

---

# Exposing services internally

- Three deployments need to be reachable by others: `hasher`, `redis`, `rng`
@@ -149,7 +149,7 @@ Our rollout is stuck. However, the app is not dead (just 10% slower).

- We want to:

-  - revert to `v0.1`
+  - revert to `v0.1` (which we now realize we didn't tag - yikes!)
  - be conservative on availability (always have desired number of available workers)
  - be aggressive on rollout speed (update more than one pod at a time)
  - give some time to our workers to "warm up" before starting more
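The availability, speed, and warm-up goals above map onto a Deployment's rollout-control fields. A minimal sketch; the `maxSurge` and `minReadySeconds` values are illustrative assumptions, not the workshop's exact manifest:

```yaml
spec:
  minReadySeconds: 10        # let each worker "warm up" before it counts as ready
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # conservative: never drop below the desired pod count
      maxSurge: 2            # aggressive: start two extra pods at a time
```

The revert itself is a separate step (changing the image back, or using a rollback), while these fields shape how any rollout proceeds.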
@@ -163,7 +163,7 @@ spec:
spec:
  containers:
  - name: worker
-    image: $REGISTRY/worker:v0.1
+    image: $REGISTRY/worker:latest
strategy:
  rollingUpdate:
    maxUnavailable: 0

@@ -192,7 +192,7 @@ spec:
spec:
  containers:
  - name: worker
-    image: $REGISTRY/worker:v0.1
+    image: $REGISTRY/worker:latest
strategy:
  rollingUpdate:
    maxUnavailable: 0
@@ -4,7 +4,7 @@

--

-- We used `kubeadm` on freshly installed VM instances running Ubuntu 16.04 LTS
+- We used `kubeadm` on Azure instances with Ubuntu 16.04 LTS

1. Install Docker

@@ -36,7 +36,7 @@

--

-- "It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme
+- "It's still twice as many steps as setting up a Swarm cluster 😕 " -- Jérôme

---

@@ -50,8 +50,6 @@

- If you are on AWS:
  [EKS](https://aws.amazon.com/eks/)
  or
  [kops](https://github.com/kubernetes/kops)

- On a local machine:
  [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/),
@@ -1,4 +1,4 @@
-## Versions installed
+## Versions Installed

- Kubernetes 1.9.3
- Docker Engine 18.02.0-ce

@@ -131,9 +131,9 @@ And *then* it is time to look at orchestration!

- shell scripts invoking `kubectl`
- YAML resource descriptions committed to a repo
-- [Brigade](https://brigade.sh/) (event-driven scripting; no YAML)
- [Helm](https://github.com/kubernetes/helm) (~package manager)
- [Spinnaker](https://www.spinnaker.io/) (Netflix' CD platform)
+- [Brigade](https://brigade.sh/) (event-driven scripting; no YAML)

---
@@ -1,16 +0,0 @@
## Intros

- Hello! We are:

  - .emoji[✨] Bridget ([@bridgetkromhout](https://twitter.com/bridgetkromhout))

  - .emoji[🌟] Joe ([@joelaha](https://twitter.com/joelaha))

- The workshop will run from 13:30-16:45

- There will be a break from 15:00-15:15

- Feel free to interrupt for questions at any time

- *Especially when you see full screen container pictures!*
@@ -2,18 +2,19 @@

- Hello! We are:

-  - .emoji[👷🏻♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)
+  - .emoji[✨] Bridget ([@bridgetkromhout](https://twitter.com/bridgetkromhout))

-  - .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Docker Inc.)
+  - .emoji[🌟] Jessica ([@jldeen](https://twitter.com/jldeen))

-- The workshop will run from 9am to 4pm
+  - .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo))

-- There will be a lunch break at noon
+- This workshop will run from 10:30am-12:45pm.

-  (And coffee breaks!)
+- Lunchtime is after the workshop!

+  (And we will take a 15min break at 11:30am!)

- Feel free to interrupt for questions at any time

- *Especially when you see full screen container pictures!*

- Live feedback, questions, help on @@CHAT@@
@@ -16,14 +16,11 @@ exclude:
chapters:
- common/title.md
- logistics.md
- swarm/intro.md
- common/about-slides.md
- common/intro.md
- common/toc.md
-- common/prereqs.md
- swarm/versions.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md

@@ -54,4 +51,3 @@ chapters:
- swarm/stateful.md
- swarm/extratips.md
- common/thankyou.md
- swarm/links.md
@@ -16,14 +16,11 @@ exclude:
chapters:
- common/title.md
- logistics.md
- swarm/intro.md
- common/about-slides.md
- common/intro.md
- common/toc.md
-- common/prereqs.md
- swarm/versions.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md

@@ -54,4 +51,3 @@ chapters:
#- swarm/stateful.md
#- swarm/extratips.md
- common/thankyou.md
- swarm/links.md
@@ -11,8 +11,7 @@ exclude:
chapters:
- common/title.md
#- common/logistics.md
- swarm/intro.md
- common/about-slides.md
- common/intro.md
- common/toc.md
-- common/prereqs.md
- swarm/versions.md

@@ -23,8 +22,6 @@ chapters:

Part 1
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md

@@ -63,4 +60,3 @@ chapters:
- swarm/stateful.md
- swarm/extratips.md
- common/thankyou.md
- swarm/links.md
@@ -11,8 +11,7 @@ exclude:
chapters:
- common/title.md
#- common/logistics.md
- swarm/intro.md
- common/about-slides.md
- common/intro.md
- common/toc.md
-- common/prereqs.md
- swarm/versions.md

@@ -23,8 +22,6 @@ chapters:

Part 1
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md

@@ -63,4 +60,3 @@ chapters:
- swarm/stateful.md
- swarm/extratips.md
- common/thankyou.md
- swarm/links.md
@@ -1,38 +0,0 @@
## A brief introduction

- This was initially written to support in-person,
  instructor-led workshops and tutorials

- You can also follow along on your own, at your own pace

- We included as much information as possible in these slides

- We recommend having a mentor to help you ...

- ... Or be comfortable spending some time reading the Docker
  [documentation](https://docs.docker.com/) ...

- ... And looking for answers in the [Docker forums](https://forums.docker.com),
  [StackOverflow](http://stackoverflow.com/questions/tagged/docker),
  and other outlets

---

class: self-paced

## Hands on, you shall practice

- Nobody ever became a Jedi by spending their lives reading Wookieepedia

- Likewise, it will take more than merely *reading* these slides
  to make you an expert

- These slides include *tons* of exercises and examples

- They assume that you have access to some Docker nodes

- If you are attending a workshop or tutorial:
  <br/>you will be given specific instructions to access your cluster

- If you are doing this on your own:
  <br/>the first chapter will give you various options to get your own cluster
@@ -1,12 +0,0 @@
# Links and resources

- [Docker Community Slack](https://community.docker.com/registrations/groups/4316)
- [Docker Community Forums](https://forums.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker Blog](http://blog.docker.com/)
- [Docker documentation](http://docs.docker.com/)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Docker on Twitter](http://twitter.com/docker)
- [Play With Docker Hands-On Labs](http://training.play-with-docker.com/)

.footnote[These slides (and future updates) are on → http://container.training/]
@@ -10,7 +10,7 @@ Otherwise: check [part 1](#part-1) to learn how to set up your own cluster.

We pick up exactly where we left you, so we assume that you have:

-- a Swarm cluster with at least 3 nodes,
+- a five-node Swarm cluster,

- a self-hosted registry,