Compare commits


89 Commits

Author SHA1 Message Date
Jerome Petazzoni
c97d7faa40 paris.container.training 2018-03-07 12:51:03 -08:00
Jerome Petazzoni
aca51901a1 Tag images properly
This tags the first build with v0.1, allowing for a smoother, more
logical rollback. Also adds a slide explaining why to stay away
from latest. @kelseyhightower would be proud :-)
2018-03-05 16:13:30 -08:00
Jerome Petazzoni
c778fc84ed Add a dump of the checklist I use when delivering 2018-03-05 14:30:39 -08:00
Jérôme Petazzoni
1981ac0b93 Merge pull request #135 from bridgetkromhout/bridget-specific
Adding Bridget-specific files
2018-03-05 13:36:06 -08:00
Jérôme Petazzoni
a8f2fb4586 Merge pull request #134 from bridgetkromhout/dedup-thanks
De-dup thanks; add comma
2018-03-05 13:35:45 -08:00
Jérôme Petazzoni
a69d3d0828 Merge pull request #133 from bridgetkromhout/no-chatroom
Makes more sense for "in person" chat
2018-03-05 13:32:51 -08:00
Jérôme Petazzoni
40760f9e98 Merge pull request #131 from bridgetkromhout/change-instance-type
Changing Azure instance type
2018-03-05 13:25:49 -08:00
Bridget Kromhout
b64b16dd67 Adding Bridget-specific files 2018-03-05 14:54:28 -06:00
Bridget Kromhout
8c2c9bc5df De-dup thanks; add comma 2018-03-05 14:51:26 -06:00
Bridget Kromhout
3a21cbc72b Makes more sense for "in person" chat 2018-03-05 14:37:10 -06:00
Bridget Kromhout
a09521ceb1 Changing Azure instance type 2018-03-05 13:44:02 -06:00
Jérôme Petazzoni
0d6501a926 Merge pull request #130 from atsaloli/patch-1
Two small fixes
2018-03-05 10:10:25 -08:00
Aleksey Tsalolikhin
c25f7a119b Fix very small typo -- remove extra "v" in "code" 2018-03-04 19:58:27 -08:00
Aleksey Tsalolikhin
1958c85a96 Fix noun plural tense (change "instructions" -> "instruction")
"An" means one. So "an instruction" rather than "an instructions".  (Small grammar fix.)
2018-03-04 19:56:03 -08:00
Jérôme Petazzoni
a7ba4418c6 Merge pull request #129 from bridgetkromhout/improve-directions
Improve directions
2018-03-03 19:52:15 -06:00
Bridget Kromhout
d6fcbb85e8 Improve directions 2018-03-03 18:44:56 -06:00
Jérôme Petazzoni
278fbf285a Merge pull request #128 from bridgetkromhout/cleanup
Cleanup
2018-03-03 14:39:56 -06:00
Bridget Kromhout
ca828343e4 Remove azure instances post-workshop. 2018-03-03 08:51:54 -06:00
Bridget Kromhout
5c663f9e09 Updating help output 2018-03-03 08:48:02 -06:00
Bridget Kromhout
9debd76816 Document kubetest 2018-03-03 08:44:58 -06:00
Bridget Kromhout
848679829d Removed -i and trailing space 2018-03-02 18:18:04 -06:00
Bridget Kromhout
6727007754 Missing variable 2018-03-02 18:11:32 -06:00
Jerome Petazzoni
03a563c172 Merge branch 'master' of github.com:jpetazzo/container.training 2018-03-02 14:17:54 -06:00
Jerome Petazzoni
cfbd54bebf Add hacky-backslashy kubetest command 2018-03-02 14:17:37 -06:00
Jérôme Petazzoni
7f1e9db0fa Missing curly brace 2018-03-02 13:08:48 -06:00
Jérôme Petazzoni
1367a30a11 Merge pull request #126 from bridgetkromhout/add-azure
Adding Azure examples
2018-03-02 12:46:02 -06:00
Bridget Kromhout
31b234ee3a Adding Azure examples 2018-03-02 12:42:55 -06:00
Jérôme Petazzoni
57dd5e295e Merge pull request #125 from bridgetkromhout/increase-timeouts
Increase timeouts
2018-03-01 17:43:29 -06:00
Bridget Kromhout
c188923f1a Increase timeouts 2018-03-01 17:39:51 -06:00
Jérôme Petazzoni
7a8716d38b Merge pull request #124 from bridgetkromhout/postprep
Postprep is now python
2018-03-01 17:17:04 -06:00
Bridget Kromhout
2e77c13297 Postprep is now python 2018-03-01 17:15:01 -06:00
Jerome Petazzoni
d5279d881d Add info about pre-built images 2018-03-01 15:13:39 -06:00
Jerome Petazzoni
34e9cc1944 Don't assume 5 nodes 2018-03-01 14:55:02 -06:00
Jerome Petazzoni
2a7498e30e A bit of rewording, and a couple of links about dashboard security 2018-03-01 14:51:00 -06:00
Jerome Petazzoni
4689d09e1f One typo and two minor tweaks 2018-03-01 14:18:48 -06:00
Jerome Petazzoni
b818a38307 Correctly report errors happening in functions
`trap ... ERR` does not automatically propagate to functions. Therefore,
our fancy error-reporting mechanism did not catch errors happening in
functions; and we do most of the actual work in functions. The solution
is to `set -E` or `set -o errtrace`.
2018-03-01 13:56:08 -06:00
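The behavior this commit fixes is easy to reproduce; a minimal sketch (the trap message format here is illustrative, not the repo's exact handler):

```shell
# Without errtrace, an ERR trap set at top level does not fire for failures
# inside shell functions; `set -eE` (or `set -o errtrace`) enables that.
run_demo() {
  bash -c '
    set -eE
    trap "echo trapped: \$BASH_COMMAND" ERR
    do_work() { false; }   # the failure happens inside a function
    do_work
  ' 2>&1 || true
}
run_demo   # prints: trapped: false
```

Dropping the `E` makes the demo print nothing at all, which is exactly the silent-failure bug described above.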
Jérôme Petazzoni
7e5d869472 Merge pull request #123 from bridgetkromhout/kube101
Kube101 & non-AWS
2018-03-01 13:23:04 -06:00
Jérôme Petazzoni
3eaf31fd48 Merge pull request #122 from bridgetkromhout/pssh-clarity
Pssh clarity
2018-03-01 13:21:05 -06:00
Bridget Kromhout
fe5e22f5ae How to set up non-AWS workshops 2018-02-28 21:45:36 -06:00
Bridget Kromhout
61da583080 Don't overwrite ip file if exists 2018-02-28 21:44:58 -06:00
Bridget Kromhout
94dfe1a0cd Adding sample file mentioned in README 2018-02-28 21:44:29 -06:00
Bridget Kromhout
412dbadafd Adding settings for kube101 2018-02-28 21:43:41 -06:00
Bridget Kromhout
8c5e4e0b09 Require pssh 2018-02-28 21:28:20 -06:00
Bridget Kromhout
2ac6072d80 Invoke as pssh 2018-02-28 21:26:17 -06:00
Jerome Petazzoni
ef4591c4fc Allow to override instance type (closes #39) 2018-02-28 13:45:08 -06:00
Jerome Petazzoni
22dfbab09b Minor formatting 2018-02-28 13:41:22 -06:00
Jérôme Petazzoni
37f595c480 Merge pull request #120 from bridgetkromhout/clarify-kube-public
Clarify kube-public; define kube-system
2018-02-27 17:42:11 -06:00
Bridget Kromhout
1fc951037d Slight clarification per request 2018-02-27 17:39:52 -06:00
Jérôme Petazzoni
affd46dd88 Merge pull request #121 from bridgetkromhout/obviate-https
Remove need for https in the workshop dashboard
2018-02-27 17:34:27 -06:00
Bridget Kromhout
cfaff3df04 Remove need for https in the workshop dashboard 2018-02-27 17:31:14 -06:00
Jérôme Petazzoni
ce2451971d Merge pull request #118 from bridgetkromhout/twice-the-steps
Proper attribution
2018-02-27 16:57:52 -06:00
Jérôme Petazzoni
8cf5d0efbd Merge pull request #119 from bridgetkromhout/naming-things
Naming things is hard; considering scope here
2018-02-27 16:40:40 -06:00
Bridget Kromhout
f61d61223d Clarify kube-public; define kube-system 2018-02-27 16:31:36 -06:00
Bridget Kromhout
6b6eb50f9a Naming things is hard; considering scope here 2018-02-27 15:26:43 -06:00
Jerome Petazzoni
89ab66335f ... and trim down kube half-day 2018-02-27 14:49:39 -06:00
Jerome Petazzoni
5bc4e95515 Clarify service discovery 2018-02-27 14:45:08 -06:00
Jerome Petazzoni
893f05e401 Move docker-compose logs to the composescale.md chapter 2018-02-27 14:38:41 -06:00
Bridget Kromhout
4abc8ce34c Proper attribution 2018-02-27 14:38:32 -06:00
Jérôme Petazzoni
34d2c610bf Merge pull request #117 from bridgetkromhout/self-deprecating-humor
Attributing humor so it doesn't sound negative
2018-02-27 14:06:58 -06:00
Jerome Petazzoni
1492a8a0bc Rephrase daemon set intro to fit even without the entropy spiel 2018-02-27 13:53:34 -06:00
Bridget Kromhout
388d616048 Attributing humor so it doesn't sound negative 2018-02-27 13:46:19 -06:00
Jerome Petazzoni
28589f5a83 Remove cluster-size specific reference 2018-02-27 13:40:52 -06:00
Jerome Petazzoni
e7a80f7bfb Merge branch 'master' of github.com:jpetazzo/container.training 2018-02-27 13:39:55 -06:00
Jerome Petazzoni
ea47e0ac05 Add link to brigade 2018-02-27 13:39:50 -06:00
Jérôme Petazzoni
09d204038f Merge pull request #116 from bridgetkromhout/versions-installed
Clarify that these are the installed versions
2018-02-27 13:36:40 -06:00
Jérôme Petazzoni
47cb0afac2 Merge pull request #115 from bridgetkromhout/any-cloud
More cloud-provider generic
2018-02-27 13:36:10 -06:00
Jerome Petazzoni
8e2e7f44d3 Break out 'scale things on a single node' section 2018-02-27 13:35:03 -06:00
Bridget Kromhout
8c7702deda Clarify that these are the installed versions
* "Brand new" is a moving target
2018-02-27 13:29:40 -06:00
Bridget Kromhout
bdc1ca01cd More cloud-provider generic 2018-02-27 13:27:11 -06:00
Jerome Petazzoni
dca58d6663 Merge Lucas awesome diagram 2018-02-27 12:22:02 -06:00
Jerome Petazzoni
a0cf4b97c0 Add Lucas' amazing diagram 2018-02-27 12:17:10 -06:00
Jerome Petazzoni
a1c239260f Add Lucas' amazing diagram 2018-02-27 12:17:02 -06:00
Jerome Petazzoni
a8a2cf54a5 Factor out links in separate files 2018-02-27 12:01:53 -06:00
Jerome Petazzoni
d5ba80da55 Replace 'five VMs' with 'a cluster of VMs' 2018-02-27 11:53:01 -06:00
Jerome Petazzoni
3f2da04763 CSS is hard but it's not an excuse 2018-02-27 09:44:32 -06:00
Jerome Petazzoni
e092f50645 Branch out intro/intro.md into per-workshop variants 2018-02-27 09:40:54 -06:00
Jérôme Petazzoni
7f698bd690 Merge pull request #114 from bridgetkromhout/master
Adding upcoming events
2018-02-27 09:28:27 -06:00
Bridget Kromhout
7fe04b9944 Adding upcoming events 2018-02-27 09:26:03 -06:00
Jerome Petazzoni
2671714df3 Move indexconf2018 to past workshops section 2018-02-27 09:11:09 -06:00
Jerome Petazzoni
630e275d99 Merge branch 'bridgetkromhout-master-updates' 2018-02-26 17:52:14 -06:00
Jerome Petazzoni
614f10432e Mostly reformatting so that slides are nice and tidy 2018-02-26 17:52:06 -06:00
Bridget Kromhout
223b5e152b Version updates 2018-02-26 16:56:45 -06:00
Bridget Kromhout
ec55cd2465 Including ACR as one of the cloud k8s offerings 2018-02-26 16:55:56 -06:00
Bridget Kromhout
c59510f921 Updates & clarifications 2018-02-26 16:54:41 -06:00
Bridget Kromhout
0f5f481213 Typo fix 2018-02-26 16:52:23 -06:00
Bridget Kromhout
b40fa45fd3 Clarifications 2018-02-26 16:50:31 -06:00
Bridget Kromhout
8faaf35da0 Clarify we didn't tag the v1 release 2018-02-26 16:48:52 -06:00
Bridget Kromhout
ce0f79af16 Updates & links for all cloud-provided k8s 2018-02-26 16:46:49 -06:00
Bridget Kromhout
faa420f9fd Clarify language and explain https use 2018-02-26 16:41:21 -06:00
51 changed files with 1478 additions and 196 deletions

CHECKLIST.md (new file, 19 additions)

@@ -0,0 +1,19 @@
This is the checklist that I (Jérôme) use when delivering a workshop.
- [ ] Create branch + `_redirects` + push to GitHub + Netlify setup
- [ ] Add branch to index.html
- [ ] Update the slides that say which versions we are using
- [ ] Update the version of Compose and Machine in settings
- [ ] Create chatroom
- [ ] Set chatroom in YML and deploy
- [ ] Put chat room in index.html
- [ ] Walk the room to count seats, check power supplies, lectern, A/V setup
- [ ] How many VMs do we need?
- [ ] Provision VMs
- [ ] Print cards
- [ ] Cut cards
- [ ] Last minute merge from master
- [ ] Check that all looks good
- [ ] DELIVER!
- [ ] Shutdown VMs
- [ ] Update index.html to remove chat link and move session to past things


@@ -247,6 +247,17 @@ content but you also know to skip during presentation.
- Last 15-30 minutes is for stateful services, DAB files, and questions.
### Pre-built images
There are pre-built images for the 4 components of the DockerCoins demo app: `dockercoins/hasher:v0.1`, `dockercoins/rng:v0.1`, `dockercoins/webui:v0.1`, and `dockercoins/worker:v0.1`. They correspond to the code in this repository.
There are also three variants, for demo purposes:
- `dockercoins/rng:v0.2` is broken (the server won't even start),
- `dockercoins/webui:v0.2` has bigger font on the Y axis and a green graph (instead of blue),
- `dockercoins/worker:v0.2` is 11x slower than `v0.1`.
## Past events
Since its inception, this workshop has been delivered dozens of times,


@@ -4,6 +4,12 @@
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
- [jinja2](https://pypi.python.org/pypi/Jinja2) (on a Mac: `brew install jinja2`)
## General Workflow
@@ -35,6 +41,16 @@ The Docker Compose file here is used to build an image with all the dependencies
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
If you're not using AWS, set these to placeholder values:
```
export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="foo"
export AWS_DEFAULT_REGION="foo"
```
If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS, you can ignore this.
### Update/copy `settings/example.yaml`
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
@@ -48,6 +64,7 @@ workshopctl - the orchestration workshop swiss army knife
Commands:
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a batch of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
@@ -55,6 +72,7 @@ help Show available commands
ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all nodes are reporting as Ready
list List available batches in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
@@ -63,6 +81,7 @@ start Start a batch of VMs
status List instance status for a given batch
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a batch of VMs
wrap Run this program in a container
```
### Summary of What `./workshopctl` Does For You
@@ -75,12 +94,12 @@ test Run tests (pre-flight checks) on a batch of VMs
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard-coded.
### Example Steps to Launch a Batch of Instances for a Workshop
### Example Steps to Launch a Batch of AWS Instances for a Workshop
- Run `./workshopctl start N` Creates `N` EC2 instances
- Your local SSH key will be synced to instances under `ubuntu` user
- AWS instances will be created and tagged based on date, and IPs stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires a good connection to run all the parallel SSH connections, up to 100 in parallel (ProTip: create a dedicated management instance in the same AWS region and run all these utilities from there)
- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
@@ -88,6 +107,67 @@ test Run tests (pre-flight checks) on a batch of VMs
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
### Example Steps to Launch Azure Instances
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account
- Customize `azuredeploy.parameters.json`
- Required:
- Provide the SSH public key you plan to use for instance configuration
- Optional:
- Choose a name for the workshop (default is "workshop")
- Choose the number of instances (default is 3)
- Customize the desired instance size (default is Standard_D1_v2)
- Launch instances with your chosen resource group name and your preferred region; the examples are "workshop" and "eastus":
```
az group create --name workshop --location eastus
az group deployment create --resource-group workshop --template-file azuredeploy.json --parameters @azuredeploy.parameters.json
```
The `az group deployment create` command can take several minutes and will only say `- Running ..` until it completes, unless you increase the verbosity with `--verbose` or `--debug`.
To display the IPs of the instances you've launched:
```
az vm list-ip-addresses --resource-group workshop --output table
```
If you want to put the IPs into `prepare-vms/tags/<tag>/ips.txt` for a tag of "myworkshop":
1) If you haven't yet installed `jq` and/or created your event's tags directory in `prepare-vms`:
```
brew install jq
mkdir -p tags/myworkshop
```
2) And then generate the IP list:
```
az vm list-ip-addresses --resource-group workshop --output json | jq -r '.[].virtualMachine.network.publicIpAddresses[].ipAddress' > tags/myworkshop/ips.txt
```
After the workshop is over, remove the instances:
```
az group delete --resource-group workshop
```
### Example Steps to Configure Instances from a non-AWS Source
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to ssh into them.
- Set placeholder values for [AWS environment variable settings](#required-environment-variables).
- Choose a tag. It could be an event name, datestamp, etc. Ensure you have created a directory for your tag: `prepare-vms/tags/<tag>/`
- If you have not already generated a file with the IPs to be configured:
- The file should be named `prepare-vms/tags/<tag>/ips.txt`
- Format is one IP per line, no other info needed.
- Ensure the settings file is as desired (especially the number of nodes): `prepare-vms/settings/kube101.yaml`
- For a tag called `myworkshop`, configure instances: `workshopctl deploy myworkshop settings/kube101.yaml`
- Optionally, configure Kubernetes clusters of the size in the settings: `workshopctl kube myworkshop`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: `workshopctl kubetest myworkshop`
- Generate cards to print and hand out: `workshopctl cards myworkshop settings/kube101.yaml`
- Print the cards file: `prepare-vms/tags/myworkshop/ips.html`
## Other Tools
### Deploying your SSH key to all the machines
@@ -97,13 +177,6 @@ test Run tests (pre-flight checks) on a batch of VMs
- Run `pcopykey`.
### Installing extra packages
- Source `postprep.rc`.
(This will install a few extra packages, add entries to
/etc/hosts, generate SSH keys, and deploy them on all hosts.)
## Even More Details
#### Sync of SSH keys
@@ -132,7 +205,7 @@ Instances can be deployed manually using the `deploy` command:
$ ./workshopctl deploy TAG settings/somefile.yaml
The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and executed.
The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.
#### Pre-pull images
@@ -142,6 +215,10 @@ The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and exe
$ ./workshopctl cards TAG settings/somefile.yaml
If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html); without that installed, only HTML cards will be generated.
If you don't have `wkhtmltopdf` installed, you will get a warning that it is a missing dependency. If you plan to just print the HTML cards, you can ignore this.
#### List tags
$ ./workshopctl list


@@ -0,0 +1,250 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"workshopName": {
"type": "string",
"defaultValue": "workshop",
"metadata": {
"description": "Workshop name."
}
},
"vmPrefix": {
"type": "string",
"defaultValue": "node",
"metadata": {
"description": "Prefix for VM names."
}
},
"numberOfInstances": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "Number of VMs to create."
}
},
"adminUsername": {
"type": "string",
"defaultValue": "ubuntu",
"metadata": {
"description": "Admin username for VMs."
}
},
"sshKeyData": {
"type": "string",
"defaultValue": "",
"metadata": {
"description": "SSH rsa public key file as a string."
}
},
"imagePublisher": {
"type": "string",
"defaultValue": "Canonical",
"metadata": {
"description": "OS image publisher; default Canonical."
}
},
"imageOffer": {
"type": "string",
"defaultValue": "UbuntuServer",
"metadata": {
"description": "The name of the image offer. The default is Ubuntu"
}
},
"imageSKU": {
"type": "string",
"defaultValue": "16.04-LTS",
"metadata": {
"description": "Version of the image. The default is 16.04-LTS"
}
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_D1_v2",
"metadata": {
"description": "VM Size."
}
}
},
"variables": {
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualNetworkName'))]",
"subnet1Ref": "[concat(variables('vnetID'),'/subnets/',variables('subnet1Name'))]",
"vmName": "[parameters('vmPrefix')]",
"sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]",
"publicIPAddressName": "PublicIP",
"publicIPAddressType": "Dynamic",
"virtualNetworkName": "MyVNET",
"netSecurityGroup": "MyNSG",
"addressPrefix": "10.0.0.0/16",
"subnet1Name": "subnet-1",
"subnet1Prefix": "10.0.0.0/24",
"nicName": "myVMNic"
},
"resources": [
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/publicIPAddresses",
"name": "[concat(variables('publicIPAddressName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "publicIPLoop",
"count": "[parameters('numberOfInstances')]"
},
"properties": {
"publicIPAllocationMethod": "[variables('publicIPAddressType')]"
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/virtualNetworks",
"name": "[variables('virtualNetworkName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('netSecurityGroup'))]"
],
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('addressPrefix')]"
]
},
"subnets": [
{
"name": "[variables('subnet1Name')]",
"properties": {
"addressPrefix": "[variables('subnet1Prefix')]",
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('netSecurityGroup'))]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "[concat(variables('nicName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "nicLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'),copyIndex(1))]",
"[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'), copyIndex(1)))]"
},
"subnet": {
"id": "[variables('subnet1Ref')]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-12-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "[concat(variables('vmName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "vmLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), copyIndex(1))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[concat(variables('vmName'),copyIndex(1))]",
"adminUsername": "[parameters('adminUsername')]",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"path": "[variables('sshKeyPath')]",
"keyData": "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile": {
"osDisk": {
"createOption": "FromImage"
},
"imageReference": {
"publisher": "[parameters('imagePublisher')]",
"offer": "[parameters('imageOffer')]",
"sku": "[parameters('imageSKU')]",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('nicName'),copyIndex(1)))]"
}
]
}
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkSecurityGroups",
"name": "[variables('netSecurityGroup')]",
"location": "[resourceGroup().location]",
"tags": {
"workshop": "[parameters('workshopName')]"
},
"properties": {
"securityRules": [
{
"name": "default-open-ports",
"properties": {
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1000,
"direction": "Inbound"
}
}
]
}
}
],
"outputs": {
"resourceID": {
"type": "string",
"value": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'),'1'))]"
}
}
}


@@ -0,0 +1,18 @@
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"sshKeyData": {
"value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXTIl/M9oeSlcsC5Rfe+nZr4Jc4sl200pSw2lpdxlZ3xzeP15NgSSMJnigUrKUXHfqRQ+2wiPxEf0Odz2GdvmXvR0xodayoOQsO24AoERjeSBXCwqITsfp1bGKzMb30/3ojRBo6LBR6r1+lzJYnNCGkT+IQwLzRIpm0LCNz1j08PUI2aZ04+mcDANvHuN/hwi/THbLLp6SNWN43m9r02RcC6xlCNEhJi4wk4VzMzVbSv9RlLGST2ocbUHwmQ2k9OUmpzoOx73aQi9XNnEaFh2w/eIdXM75VtkT3mRryyykg9y0/hH8/MVmIuRIdzxHQqlm++DLXVH5Ctw6a4kS+ki7 workshop"
},
"workshopName": {
"value": "workshop"
},
"numberOfInstances": {
"value": 3
},
"vmSize": {
"value": "Standard_D1_v2"
}
}
}


@@ -15,5 +15,6 @@ services:
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
AWS_INSTANCE_TYPE: ${AWS_INSTANCE_TYPE}
USER: ${USER}
entrypoint: /root/prepare-vms/workshopctl


@@ -2,7 +2,7 @@
_ERR() {
error "Command $BASH_COMMAND failed (exit status: $?)"
}
set -e
set -eE
trap _ERR ERR
die() {


@@ -39,7 +39,10 @@ _cmd_cards() {
need_tag $TAG
need_settings $SETTINGS
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
# If you're not using AWS, populate the ips.txt file manually
if [ ! -f tags/$TAG/ips.txt ]; then
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
fi
# Remove symlinks to old cards
rm -f ips.html ips.pdf
@@ -124,7 +127,7 @@ _cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {
# Install packages
pssh "
pssh --timeout 200 "
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |
sudo apt-key add - &&
echo deb http://apt.kubernetes.io/ kubernetes-xenial main |
@@ -135,7 +138,7 @@ _cmd_kube() {
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
# Initialize kube master
pssh "
pssh --timeout 200 "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token
sudo kubeadm init --token \$(cat /tmp/token)
@@ -159,7 +162,7 @@ _cmd_kube() {
fi"
# Join the other nodes to the cluster
pssh "
pssh --timeout 200 "
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
@@ -168,6 +171,19 @@ _cmd_kube() {
sep "Done"
}
_cmd kubetest "Check that all nodes are reporting as Ready"
_cmd_kubetest() {
# There are way too many backslashes in the command below.
# Feel free to make that better ♥
pssh "
set -e
if grep -q node1 /tmp/node; then
for NODE in \$(awk /\ node/\ {print\ \\\$2} /etc/hosts); do
echo \$NODE ; kubectl get nodes | grep -w \$NODE | grep -w Ready
done
fi"
}
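Unescaped, the remote command's awk step is simpler than the backslashes suggest; a sketch against a sample hosts file (the `/tmp/hosts.sample` path is illustrative, not part of the repo):

```shell
# Reproduce the awk from the kubetest command without the pssh escaping:
# select /etc/hosts lines containing " node" and print the hostname field.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost
10.0.0.10 node1
10.0.0.11 node2
EOF
awk '/ node/ {print $2}' /tmp/hosts.sample   # prints node1 then node2
```

Single-quoting the pattern and action locally is what eliminates most of the escaping; inside the double-quoted `pssh` string above, each of those quotes and `$` signs must be protected from two layers of shell expansion.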
_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd_ids() {
TAG=$1
@@ -280,7 +296,7 @@ _cmd_start() {
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type t2.medium \
--instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
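The instance-type override above relies on shell default-value expansion; a quick sketch of the `${VAR-default}` form (the values here are illustrative):

```shell
# ${VAR-default} substitutes the default only when VAR is unset;
# a set-but-empty VAR is kept as-is (use ${VAR:-default} to replace empty too).
unset AWS_INSTANCE_TYPE
echo "${AWS_INSTANCE_TYPE-t2.medium}"    # t2.medium
AWS_INSTANCE_TYPE=m5.large
echo "${AWS_INSTANCE_TYPE-t2.medium}"    # m5.large
```

This is what lets `AWS_INSTANCE_TYPE` in the environment (and the Compose file) override the hard-coded `t2.medium` without any extra flag parsing.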
@@ -418,6 +434,7 @@ tag_is_reachable() {
}
test_tag() {
TAG=$1
ips_file=tags/$TAG/ips.txt
info "Picking a random IP address in $ips_file to run tests."
n=$((1 + $RANDOM % $(wc -l <$ips_file)))


@@ -0,0 +1,24 @@
# customize your cluster size, your cards template, and the versions
# Number of VMs per cluster
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.18.0
machine_version: 0.13.0


@@ -0,0 +1,106 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.4em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>
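The template inserts a page break before every `pagesize`-th card; the same `loop.index0` arithmetic in plain Python (illustrative, not part of the repo):

```python
pagesize = 12  # cards per printed page, matching the template default

def pagebreak_before(index0):
    # Mirrors the Jinja condition: loop.index0 > 0 and loop.index0 % pagesize == 0
    return index0 > 0 and index0 % pagesize == 0

# With 30 cards, breaks land before 0-based indexes 12 and 24.
print([i for i in range(30) if pagebreak_before(i)])  # [12, 24]
```

The `index0 > 0` guard avoids an empty first page when the very first card would otherwise trigger a break.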


@@ -0,0 +1,24 @@
# 3 nodes for k8s 101 workshops
# Number of VMs per cluster
clustersize: 3
# Jinja2 template to use to generate ready-to-cut cards
cards_template: settings/kube101.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.18.0
machine_version: 0.13.0


@@ -20,7 +20,7 @@ DEPENDENCIES="
ssh
curl
jq
parallel-ssh
pssh
wkhtmltopdf
man
"


@@ -1,2 +1 @@
/ /kube-halfday.yml.html 200!
/* http://paris-container-training.netlify.com/:splat 200!

View File

@@ -0,0 +1,28 @@
## About these slides
- All the content is available in a public GitHub repository:
https://github.com/jpetazzo/container.training
- You can get updated "builds" of the slides there:
http://container.training/
<!--
.exercise[
```open https://github.com/jpetazzo/container.training```
```open http://container.training/```
]
-->
--
- Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
.footnote[.emoji[👇] Try it! The source file will be shown and you can view it on GitHub and fork and edit it.]
<!--
.exercise[
```open https://github.com/jpetazzo/container.training/tree/master/slides/common/about-slides.md```
]
-->

View File

@@ -0,0 +1,12 @@
## Clean up
- Before moving on, let's remove those containers
.exercise[
- Tell Compose to remove everything:
```bash
docker-compose down
```
]

View File

@@ -0,0 +1,240 @@
## Restarting in the background
- Many flags and commands of Compose are modeled after those of `docker`
.exercise[
- Start the app in the background with the `-d` option:
```bash
docker-compose up -d
```
- Check that our app is running with the `ps` command:
```bash
docker-compose ps
```
]
`docker-compose ps` also shows the ports exposed by the application.
---
class: extra-details
## Viewing logs
- The `docker-compose logs` command works like `docker logs`
.exercise[
- View all logs since container creation and exit when done:
```bash
docker-compose logs
```
- Stream container logs, starting at the last 10 lines for each container:
```bash
docker-compose logs --tail 10 --follow
```
<!--
```wait units of work done```
```keys ^C```
-->
]
Tip: use `^S` and `^Q` to pause/resume log output.
---
class: extra-details
## Upgrading from Compose 1.6
.warning[The `logs` command has changed between Compose 1.6 and 1.7!]
- Up to 1.6
- `docker-compose logs` is the equivalent of `logs --follow`
- `docker-compose logs` must be restarted if containers are added
- Since 1.7
- `--follow` must be specified explicitly
- new containers are automatically picked up by `docker-compose logs`
---
## Scaling up the application
- Our goal is to make that performance graph go up (without changing a line of code!)
--
- Before trying to scale the application, we'll figure out if we need more resources
(CPU, RAM...)
- For that, we will use good old UNIX tools on our Docker node
---
## Looking at resource usage
- Let's look at CPU, memory, and I/O usage
.exercise[
- run `top` to see CPU and memory usage (you should see idle cycles)
<!--
```bash top```
```wait Tasks```
```keys ^C```
-->
- run `vmstat 1` to see I/O usage (si/so/bi/bo)
<br/>(the 4 numbers should be almost zero, except `bo` for logging)
<!--
```bash vmstat 1```
```wait memory```
```keys ^C```
-->
]
We have available resources.
- Why?
- How can we use them?
---
## Scaling workers on a single node
- Docker Compose supports scaling
- Let's scale `worker` and see what happens!
.exercise[
- Start one more `worker` container:
```bash
docker-compose scale worker=2
```
- Look at the performance graph (it should show a x2 improvement)
- Look at the aggregated logs of our containers (`worker_2` should show up)
- Look at the impact on CPU load with e.g. top (it should be negligible)
]
---
## Adding more workers
- Great, let's add more workers and call it a day, then!
.exercise[
- Start eight more `worker` containers:
```bash
docker-compose scale worker=10
```
- Look at the performance graph: does it show a x10 improvement?
- Look at the aggregated logs of our containers
- Look at the impact on CPU load and memory usage
]
---
# Identifying bottlenecks
- You should have seen a 3x speed bump (not 10x)
- Adding workers didn't result in linear improvement
- *Something else* is slowing us down
--
- ... But what?
--
- The code doesn't have instrumentation
- Let's use state-of-the-art HTTP performance analysis!
<br/>(i.e. good old tools like `ab`, `httping`...)
---
## Accessing internal services
- `rng` and `hasher` are exposed on ports 8001 and 8002
- This is declared in the Compose file:
```yaml
...
rng:
build: rng
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
...
```
---
## Measuring latency under load
We will use `httping`.
.exercise[
- Check the latency of `rng`:
```bash
httping -c 3 localhost:8001
```
- Check the latency of `hasher`:
```bash
httping -c 3 localhost:8002
```
]
`rng` has a much higher latency than `hasher`.
---
## Let's draw hasty conclusions
- The bottleneck seems to be `rng`
- *What if* we don't have enough entropy and can't generate enough random numbers?
- We need to scale out the `rng` service on multiple machines!
Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
<br/>
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)
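Since `rng` reads from `/dev/urandom`, its `/32` endpoint essentially boils down to the following sketch (simplified for illustration; the real service wraps this in an HTTP handler):

```python
import os

def get_random_bytes(n=32):
    # os.urandom reads from the kernel CSPRNG (/dev/urandom on Linux);
    # it never blocks waiting for entropy, unlike /dev/random
    return os.urandom(n)

print(len(get_random_bytes()))  # prints 32
```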

View File

@@ -24,13 +24,19 @@ class: extra-details
## Extra details
- This slide should have a little magnifying glass in the top left corner
- This slide has a little magnifying glass in the top left corner
(If it doesn't, it's because CSS is hard — we're only backend people, alas!)
- This magnifiying glass indicates slides that provide extra details
- Slides with that magnifying glass indicate slides providing extra details
- Feel free to skip them if:
- Feel free to skip them if you're in a hurry!
- you are in a hurry
- you are new to this and want to avoid cognitive overload
- you want only the most essential information
- You can review these slides another time if you want, they'll be waiting for you ☺
---
@@ -62,9 +68,9 @@ Misattributed to Benjamin Franklin
- This is the stuff you're supposed to do!
- Go to [indexconf2018.container.training](http://indexconf2018.container.training/) to view these slides
- Go to [container.training](http://container.training/) to view these slides
- Join the chat room on @@CHAT@@
- Join the chat room: @@CHAT@@
<!-- ```open http://container.training/``` -->
@@ -78,11 +84,17 @@ class: in-person
---
class: in-person, pic
![You get a cluster](images/you-get-a-cluster.jpg)
---
class: in-person
## You get three VMs
## You get a cluster of cloud VMs
- Each person gets 3 private VMs (not shared with anybody else)
- Each person gets a private cluster of cloud VMs (not shared with anybody else)
- They'll remain up for the duration of the workshop
@@ -90,7 +102,7 @@ class: in-person
- You can automatically SSH from one VM to another
- The nodes have aliases: `node1`, `node2`, `node3`.
- The nodes have aliases: `node1`, `node2`, etc.
---
@@ -147,7 +159,7 @@ class: in-person
<!--
```bash
for N in $(seq 1 3); do
for N in $(awk '/node/{print $2}' /etc/hosts); do
ssh -o StrictHostKeyChecking=no node$N true
done
```
@@ -163,7 +175,7 @@ fi
```bash
ssh node2
```
- Type `exit` or `^D` to come back to node1
- Type `exit` or `^D` to come back to `node1`
<!-- ```bash exit``` -->

View File

@@ -21,6 +21,79 @@
---
class: extra-details
## Compose file format version
*Particularly relevant if you have used Compose before...*
- Compose 1.6 introduced support for a new Compose file format (aka "v2")
- Services are no longer at the top level, but under a `services` section
- There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer)
- Containers are placed on a dedicated network, making links unnecessary
- There are other minor differences, but upgrading is straightforward
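A minimal "v2" file illustrating those points might look like this (service and image names are made up for the example):

```yaml
version: "2"          # must be a string, not a number
services:             # services now live under this section
  web:
    build: web
    ports:
      - "8000:80"
  redis:
    image: redis      # reachable from "web" as "redis" -- no links needed
```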
---
## Service discovery in container-land
- We do not hard-code IP addresses in the code
- We do not hard-code FQDN in the code, either
- We just connect to a service name, and container-magic does the rest
(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
---
## Example in `worker/worker.py`
```python
redis = Redis("`redis`")
def get_random_bytes():
r = requests.get("http://`rng`/32")
return r.content
def hash_bytes(data):
r = requests.post("http://`hasher`/",
data=data,
headers={"Content-Type": "application/octet-stream"})
```
(Full source code available [here](
https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))
---
class: extra-details
## Links, naming, and service discovery
- Containers can have network aliases (resolvable through DNS)
- Compose file version 2+ makes each container reachable through its service name
- Compose file version 1 did require "links" sections
- Network aliases are automatically namespaced
- you can have multiple apps declaring and using a service named `database`
- containers in the blue app will resolve `database` to the IP of the blue database
- containers in the green app will resolve `database` to the IP of the green database
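For instance, both apps can ship the very same Compose file declaring a `database` service; it is the Compose *project* that scopes the network (a hypothetical file, with an illustrative image choice):

```yaml
# Running this file as two different projects (e.g. with -p blue and
# -p green) creates two isolated networks, each resolving "database"
# to its own container.
version: "2"
services:
  app:
    build: app
  database:
    image: postgres   # illustrative
```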
---
## What's this application?
--
@@ -65,7 +138,7 @@ fi
- Clone the repository on `node1`:
```bash
git clone https://github.com/jpetazzo/container.training/
git clone git://github.com/jpetazzo/container.training
```
]
@@ -92,7 +165,6 @@ Without further ado, let's start our application.
<!--
```longwait units of work done```
```keys ^C```
-->
]
@@ -103,29 +175,22 @@ and displays aggregated logs.
---
## Lots of logs
## Our application at work
- The application continuously generates logs
- On the left-hand side, the "rainbow strip" shows the container names
- On the right-hand side, we see the output of our containers
- We can see the `worker` service making requests to `rng` and `hasher`
- Let's put that in the background
.exercise[
- Stop the application by hitting `^C`
]
- `^C` stops all containers by sending them the `TERM` signal
- Some containers exit immediately, others take longer
<br/>(because they don't handle `SIGTERM` and end up being killed after a 10s timeout)
- For `rng` and `hasher`, we see HTTP access logs
---
## Connecting to the web UI
- "Logs are exciting and fun!" (No-one, ever)
- The `webui` container exposes a web dashboard; let's view it
.exercise[
@@ -145,15 +210,94 @@ graph will appear.
---
## Clean up
class: self-paced, extra-details
- Before moving on, let's remove those containers
## If the graph doesn't load
If you just see a `Page not found` error, it might be because your
Docker Engine is running on a different machine. This can be the case if:
- you are using the Docker Toolbox
- you are using a VM (local or remote) created with Docker Machine
- you are controlling a remote Docker Engine
When you run DockerCoins in development mode, the web UI static files
are mapped to the container using a volume. Alas, volumes can only
work on a local environment, or when using Docker4Mac or Docker4Windows.
How to fix this?
Stop the app with `^C`, edit `dockercoins.yml`, comment out the `volumes` section, and try again.
---
class: extra-details
## Why does the speed seem irregular?
- It *looks like* the speed is approximately 4 hashes/second
- Or more precisely: 4 hashes/second, with regular dips down to zero
- Why?
--
class: extra-details
- The app actually has a constant, steady speed: 3.33 hashes/second
<br/>
(which corresponds to 1 hash every 0.3 seconds, for *reasons*)
- Yes, and?
---
class: extra-details
## The reason why this graph is *not awesome*
- The worker doesn't update the counter after every loop, but up to once per second
- The speed is computed by the browser, checking the counter about once per second
- Between two consecutive updates, the counter will increase either by 4, or by 0
- The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.
- What can we conclude from this?
--
class: extra-details
- "I'm clearly incapable of writing good frontend code!" 😀 — Jérôme
---
## Stopping the application
- If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app
- The Docker Engine will send a `TERM` signal to the containers
- If the containers do not exit in a timely manner, the Engine sends a `KILL` signal
.exercise[
- Tell Compose to remove everything:
```bash
docker-compose down
```
- Stop the application by hitting `^C`
<!--
```keys ^C```
-->
]
--
Some containers exit immediately, others take longer.
The containers that do not handle `SIGTERM` end up being killed after a 10s timeout.

View File

@@ -9,14 +9,3 @@ class: title, in-person
That's all, folks! <br/> Questions?
![end](images/end.jpg)
---
# Links and resources
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
- [Local meetups](https://www.meetup.com/)
- [Microsoft Cloud Developer Advocates](https://developer.microsoft.com/en-us/advocates/)
.footnote[These slides (and future updates) are on → http://container.training/]

View File

@@ -17,5 +17,5 @@ class: title, in-person
*Don't stream videos or download big files during the workshop.*<br/>
*Thank you!*
**Slides: http://indexconf2018.container.training/**
]
**Slides: http://container.training/**
]

Binary file not shown.


View File

@@ -68,15 +68,32 @@
<tr>
<!--
<td>Nothing for now (stay tuned...)</td>
-->
<td>February 22, 2018: IndexConf — Kubernetes 101</td>
<td><a class="slides" href="http://indexconf2018.container.training/" /></td>
<td><a class="attend" href="https://developer.ibm.com/indexconf/sessions/#!?id=5474" />
<td>Nothing for now (stay tuned...)</td>
<td>Nothing for now (stay tuned...)</td>
-->
<td>March 14, 2018: Boosterconf — Kubernetes 101</td>
<td>&nbsp;</td>
<td><a class="attend" href="https://2018.boosterconf.no/talks/1179" />
</tr>
<tr>
<td>March 27, 2018: SREcon Americas — Kubernetes 101</td>
<td>&nbsp;</td>
<td><a class="attend" href="https://www.usenix.org/conference/srecon18americas/presentation/kromhout" />
</tr>
<tr><td class="title" colspan="4">Past workshops</td></tr>
<tr>
<!-- February 22, 2018 -->
<td>IndexConf: Kubernetes 101</td>
<td><a class="slides" href="http://indexconf2018.container.training/" /></td>
<!--
<td><a class="attend" href="https://developer.ibm.com/indexconf/sessions/#!?id=5474" />
-->
</tr>
<tr>
<td>Kubernetes enablement at Docker</td>
<td><a class="slides" href="http://kube.container.training/" /></td>

View File

@@ -12,7 +12,8 @@ exclude:
chapters:
- common/title.md
- logistics.md
- common/intro.md
- intro/intro.md
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
#- intro/Docker_History.md
@@ -40,3 +41,4 @@ chapters:
- intro/Compose_For_Dev_Stacks.md
- intro/Advanced_Dockerfiles.md
- common/thankyou.md
- intro/links.md

View File

@@ -12,7 +12,8 @@ exclude:
chapters:
- common/title.md
# - common/logistics.md
- common/intro.md
- intro/intro.md
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
#- intro/Docker_History.md
@@ -40,3 +41,4 @@ chapters:
- intro/Compose_For_Dev_Stacks.md
- intro/Advanced_Dockerfiles.md
- common/thankyou.md
- intro/links.md

View File

@@ -90,11 +90,11 @@ COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <vcode>
COPY <code>
RUN <build code>
CMD, EXPOSE ...
```
* The build fails as soon as an instructions fails
* The build fails as soon as an instruction fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data)

slides/intro/intro.md Normal file
View File

@@ -0,0 +1,38 @@
## A brief introduction
- This was initially written to support in-person,
instructor-led workshops and tutorials
- You can also follow along on your own, at your own pace
- We included as much information as possible in these slides
- We recommend having a mentor to help you ...
- ... Or be comfortable spending some time reading the Docker
[documentation](https://docs.docker.com/) ...
- ... And looking for answers in the [Docker forums](https://forums.docker.com),
[StackOverflow](http://stackoverflow.com/questions/tagged/docker),
and other outlets
---
class: self-paced
## Hands on, you shall practice
- Nobody ever became a Jedi by spending their lives reading Wookieepedia
- Likewise, it will take more than merely *reading* these slides
to make you an expert
- These slides include *tons* of exercises and examples
- They assume that you have access to a machine running Docker
- If you are attending a workshop or tutorial:
<br/>you will be given specific instructions to access a cloud VM
- If you are doing this on your own:
<br/>we will tell you how to install Docker or access a Docker environment

slides/intro/links.md Symbolic link
View File

@@ -0,0 +1 @@
../swarm/links.md

View File

@@ -1,7 +1,11 @@
title: |
Kubernetes 101
Deploying and Scaling Microservices
with Kubernetes
chat: "[Gitter](https://gitter.im/jpetazzo/workshop-20180222-sf)"
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
exclude:
- self-paced
@@ -9,11 +13,14 @@ exclude:
chapters:
- common/title.md
- logistics.md
- common/intro.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
#- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
@@ -29,3 +36,4 @@ chapters:
- kube/rollout.md
- kube/whatsnext.md
- common/thankyou.md
- kube/links.md

View File

@@ -11,11 +11,14 @@ exclude:
chapters:
- common/title.md
#- logistics.md
- common/intro.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
@@ -31,3 +34,4 @@ chapters:
- kube/rollout.md
- kube/whatsnext.md
- common/thankyou.md
- kube/links.md

View File

@@ -210,12 +210,24 @@ class: pic
![Node, pod, container](images/k8s-arch3-thanks-weave.png)
(Diagram courtesy of Weave Works, used with permission.)
---
class: pic
![One of the best Kubernetes architecture diagrams available](images/k8s-arch4-thanks-luxas.png)
(Diagram courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha).)
---
## Credits
- The first diagram is courtesy of Weave Works
- a *pod* can have multiple containers working together
- IP addresses are associated with *pods*, not with individual containers
- The second diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha)
- it's one of the best Kubernetes architecture diagrams available!
Both diagrams used with permission.

View File

@@ -1,17 +1,33 @@
# Daemon sets
- What if we want one (and exactly one) instance of `rng` per node?
- We want to scale `rng` in a way that is different from how we scaled `worker`
- If we just scale `deploy/rng` to 2, nothing guarantees that they spread
- We want one (and exactly one) instance of `rng` per node
- What if we just scale up `deploy/rng` to the number of nodes?
- nothing guarantees that the `rng` containers will be distributed evenly
- if we add nodes later, they will not automatically run a copy of `rng`
- if we remove (or reboot) a node, one `rng` container will restart elsewhere
- Instead of a `deployment`, we will use a `daemonset`
---
## Daemon sets in practice
- Daemon sets are great for cluster-wide, per-node processes:
- `kube-proxy`
- `weave` (our overlay network)
- monitoring agents
- hardware management tools (e.g. SCSI/FC HBA agents)
- etc.
- They can also be restricted to run [only on some nodes](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes)
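For reference, a daemon set manifest follows much the same shape as a deployment, minus `replicas` (this is a hand-written sketch with an illustrative image name, not the exact resource used later in this chapter):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rng
spec:
  selector:
    matchLabels:
      run: rng
  template:
    metadata:
      labels:
        run: rng
    spec:
      containers:
      - name: rng
        image: dockercoins/rng   # illustrative
        ports:
        - containerPort: 80
```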
@@ -380,7 +396,7 @@ Of course, option 2 offers more learning opportunities. Right?
.exercise[
- Check the logs of all `run=rng` pods to confirm that only 2 of them are now active:
- Check the logs of all `run=rng` pods to confirm that exactly one per node is now active:
```bash
kubectl logs -l run=rng
```

View File

@@ -4,11 +4,15 @@
- We are going to deploy that dashboard with *three commands:*
- one to actually *run* the dashboard
1) actually *run* the dashboard
- one to make the dashboard available from outside
2) bypass SSL for the dashboard
- one to bypass authentication for the dashboard
3) bypass authentication for the dashboard
--
There is an additional step to make the dashboard available from outside (we'll get to that)
--
@@ -16,7 +20,7 @@
---
## Running the dashboard
## 1) Running the dashboard
- We need to create a *deployment* and a *service* for the dashboard
@@ -39,11 +43,109 @@ The goo.gl URL expands to:
---
## Making the dashboard reachable from outside
- The dashboard is exposed through a `ClusterIP` service
## 2) Bypassing SSL for the dashboard
- We need a `NodePort` service instead
- The Kubernetes dashboard uses HTTPS, but we don't have a certificate
- Recent versions of Chrome (63 and later) and Edge will refuse to connect
(You won't even get the option to ignore a security warning!)
- We could (and should!) get a certificate, e.g. with [Let's Encrypt](https://letsencrypt.org/)
- ... But for convenience, for this workshop, we'll forward HTTP to HTTPS
.warning[Do not do this at home, or even worse, at work!]
---
## Running the SSL unwrapper
- We are going to run [`socat`](http://www.dest-unreach.org/socat/doc/socat.html), telling it to accept TCP connections and relay them over SSL
- Then we will expose that `socat` instance with a `NodePort` service
- For convenience, these steps are neatly encapsulated into another YAML file
.exercise[
- Apply the convenient YAML file, and defeat SSL protection:
```bash
kubectl apply -f https://goo.gl/tA7GLz
```
]
The goo.gl URL expands to:
<br/>
.small[.small[https://gist.githubusercontent.com/jpetazzo/c53a28b5b7fdae88bc3c5f0945552c04/raw/da13ef1bdd38cc0e90b7a4074be8d6a0215e1a65/socat.yaml]]
.warning[All our dashboard traffic is now clear-text, including passwords!]
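For the curious, the YAML behind that link is likely along these lines — a sketch only (the image, names, and exact addresses are guesses, not the gist's actual content):

```yaml
# A pod running socat to relay plain TCP to the dashboard's HTTPS port,
# plus a NodePort service exposing it. Everything here is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: dashboard-http-unwrapper
  namespace: kube-system
  labels:
    app: dashboard-http-unwrapper
spec:
  containers:
  - name: socat
    image: alpine/socat          # illustrative image choice
    args:
    - TCP-LISTEN:80,fork,reuseaddr
    - OPENSSL:kubernetes-dashboard.kube-system.svc.cluster.local:443,verify=0
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard-http
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: dashboard-http-unwrapper
  ports:
  - port: 80
```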
---
## Connecting to the dashboard
.exercise[
- Connect to http://oneofournodes:3xxxx/
<!-- ```open https://node1:3xxxx/``` -->
]
The dashboard will then ask you which authentication you want to use.
---
## Dashboard authentication
- We have three authentication options at this point:
- token (associated with a role that has appropriate permissions)
- kubeconfig (e.g. using the `~/.kube/config` file from `node1`)
- "skip" (use the dashboard "service account")
- Let's use "skip": we get a bunch of warnings and don't see much
---
## 3) Bypass authentication for the dashboard
- The dashboard documentation [explains how to do this](https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges)
- We just need to load another YAML file!
.exercise[
- Grant admin privileges to the dashboard so we can see our resources:
```bash
kubectl apply -f https://goo.gl/CHsLTA
```
- Reload the dashboard and enjoy!
]
--
.warning[By the way, we just added a backdoor to our Kubernetes cluster!]
---
## Exposing the dashboard over HTTPS
- We took a shortcut by forwarding HTTP to HTTPS inside the cluster
- Let's expose the dashboard over HTTPS!
- The dashboard is exposed through a `ClusterIP` service (internal traffic only)
- We will change that into a `NodePort` service (accepting outside traffic)
.exercise[
@@ -68,6 +170,8 @@ The goo.gl URL expands to:
- The dashboard was created in the `kube-system` namespace
--
.exercise[
- Edit the service:
@@ -83,56 +187,15 @@ The goo.gl URL expands to:
---
## Connecting to the dashboard
## Running the Kubernetes dashboard securely
.exercise[
- The steps that we just showed you are *for educational purposes only!*
- Connect to https://oneofournodes:3xxxx/
- If you do that on your production cluster, people [can and will abuse it](https://blog.redlock.io/cryptojacking-tesla)
- Yes, https. If you use http it will say:
This page isn't working
<oneofournodes> sent an invalid response.
ERR_INVALID_HTTP_RESPONSE
- You will have to work around the TLS certificate validation warning
<!-- ```open https://node1:3xxxx/``` -->
]
- We have three authentication options at this point:
- token (associated with a role that has appropriate permissions)
- kubeconfig (e.g. using the `~/.kube/config` file from `node1`)
- "skip" (use the dashboard "service account")
- Let's use "skip": we get a bunch of warnings and don't see much
---
## Granting more rights to the dashboard
- The dashboard documentation [explains how to do this](https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges)
- We just need to load another YAML file!
.exercise[
- Grant admin privileges to the dashboard so we can see our resources:
```bash
kubectl apply -f https://goo.gl/CHsLTA
```
- Reload the dashboard and enjoy!
]
--
.warning[By the way, we just added a backdoor to our Kubernetes cluster!]
- For an in-depth discussion about securing the dashboard,
<br/>
check [this excellent post on Heptio's blog](https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca)
---
@@ -185,3 +248,4 @@ The goo.gl URL expands to:
- It introduces new failure modes
- Example: the official setup instructions for most pod networks

View File

@@ -9,8 +9,7 @@
- We recommend having a mentor to help you ...
- ... Or be comfortable spending some time reading the Kubernetes
[documentation](https://kubernetes.io/docs/) ...
- ... Or be comfortable spending some time reading the Kubernetes [documentation](https://kubernetes.io/docs/) ...
- ... And looking for answers on [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and other outlets
@@ -27,41 +26,10 @@ class: self-paced
- These slides include *tons* of exercises and examples
- They assume that you have access to some Docker nodes
- They assume that you have access to a Kubernetes cluster
- If you are attending a workshop or tutorial:
<br/>you will be given specific instructions to access your cluster
- If you are doing this on your own:
<br/>the first chapter will give you various options to get your own cluster
---
## About these slides
- All the content is available in a public GitHub repository:
https://github.com/jpetazzo/container.training
- You can get updated "builds" of the slides there:
http://container.training/
<!--
.exercise[
```open https://github.com/jpetazzo/container.training```
```open http://container.training/```
]
-->
--
- Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
.footnote[.emoji[👇] Try it! The source file will be shown and you can view it on GitHub and fork and edit it.]
<!--
.exercise[
```open https://github.com/jpetazzo/container.training/tree/master/slides/common/intro.md```
]
-->

View File

@@ -136,7 +136,8 @@ There is already one service on our cluster: the Kubernetes API itself.
```
- `-k` is used to skip certificate verification
- Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `$ kubectl get svc`
- Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc`
]
@@ -211,9 +212,11 @@ The error that we see is expected: the Kubernetes API requires authentication.
*Ding ding ding ding ding!*
The `kube-system` namespace is used for the control plane.
---
## What are all these pods?
## What are all these control plane pods?
- `etcd` is our etcd server
@@ -232,3 +235,34 @@ The error that we see is expected: the Kubernetes API requires authentication.
- the pods with a name ending with `-node1` are the master components
<br/>
(they have been specifically "pinned" to the master node)
---
## What about `kube-public`?
.exercise[
- List the pods in the `kube-public` namespace:
```bash
kubectl -n kube-public get pods
```
]
--
- Maybe it doesn't have pods, but what secrets is `kube-public` keeping?
--
.exercise[
- List the secrets in the `kube-public` namespace:
```bash
kubectl -n kube-public get secrets
```
]
--
- `kube-public` is created by kubeadm & [used for security bootstrapping](http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html)

View File

@@ -245,4 +245,4 @@ at the Google NOC ...
<br/>
.small[are we getting 1000 packets per second]
<br/>
.small[of ICMP ECHO traffic from Azure ?!?”]
.small[of ICMP ECHO traffic from these IPs?!?”]

View File

@@ -0,0 +1,17 @@
# Links and resources
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
- [Azure Container Service](https://docs.microsoft.com/azure/aks/)
- [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)
- [Local meetups](https://www.meetup.com/)
- [devopsdays](https://www.devopsdays.org/)
.footnote[These slides (and future updates) are on → http://container.training/]

slides/kube/links.md Normal file
View File

@@ -0,0 +1,20 @@
# Links and resources
All things Kubernetes:
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
All things Docker:
- [Docker documentation](http://docs.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Play With Docker Hands-On Labs](http://training.play-with-docker.com/)
Everything else:
- [Local meetups](https://www.meetup.com/)
.footnote[These slides (and future updates) are on → http://container.training/]

View File

@@ -185,6 +185,7 @@ The curl command should now output:
- Build and push the images:
```bash
export REGISTRY
export TAG=v0.1
docker-compose -f dockercoins.yml build
docker-compose -f dockercoins.yml push
```
@@ -220,6 +221,30 @@ services:
---
class: extra-details
## Avoiding the `latest` tag
.warning[Make sure that you've set the `TAG` variable properly!]
- If you don't, the tag will default to `latest`
- The problem with `latest`: nobody knows what it points to!
- the latest commit in the repo?
- the latest commit in some branch? (Which one?)
- the latest tag?
- some random version pushed by a random team member?
- If you keep pushing the `latest` tag, how do you roll back?
- Image tags should be meaningful, i.e. correspond to code branches, tags, or hashes
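One way to keep tags explicit is to parameterize them in the Compose file itself, along these lines (a sketch; the repo's actual `dockercoins.yml` may differ):

```yaml
version: "2"
services:
  worker:
    build: worker
    # Compose substitutes ${REGISTRY} and ${TAG} from the environment;
    # if TAG is unset it becomes an empty string, which is why the
    # slide insists on exporting it before building and pushing
    image: ${REGISTRY}/worker:${TAG}
```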
---
## Deploying all the things
- We can now deploy our code (as well as a redis instance)
@@ -234,7 +259,7 @@ services:
- Deploy everything else:
```bash
for SERVICE in hasher rng webui worker; do
kubectl run $SERVICE --image=$REGISTRY/$SERVICE
kubectl run $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
```
@@ -268,7 +293,7 @@ services:
---
# Exposing services internally
# Exposing services internally
- Three deployments need to be reachable by others: `hasher`, `redis`, `rng`

View File

@@ -149,7 +149,7 @@ Our rollout is stuck. However, the app is not dead (just 10% slower).
- We want to:
- revert to `v0.1` (which we now realize we didn't tag - yikes!)
- revert to `v0.1`
- be conservative on availability (always have desired number of available workers)
- be aggressive on rollout speed (update more than one pod at a time)
- give some time to our workers to "warm up" before starting more
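Those goals map onto deployment fields roughly like this (a sketch; the exact values are illustrative, and `minReadySeconds` is what provides the "warm up" time):

```yaml
spec:
  minReadySeconds: 10      # let new pods "warm up" before continuing
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0    # never go below the desired number of workers
      maxSurge: 3          # start several extra pods at a time
```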
@@ -163,7 +163,7 @@ spec:
spec:
containers:
- name: worker
image: $REGISTRY/worker:latest
image: $REGISTRY/worker:v0.1
strategy:
rollingUpdate:
maxUnavailable: 0
@@ -192,7 +192,7 @@ spec:
spec:
containers:
- name: worker
image: $REGISTRY/worker:latest
image: $REGISTRY/worker:v0.1
strategy:
rollingUpdate:
maxUnavailable: 0

View File

@@ -4,7 +4,7 @@
--
- We used `kubeadm` on Azure instances with Ubuntu 16.04 LTS
- We used `kubeadm` on freshly installed VM instances running Ubuntu 16.04 LTS
1. Install Docker
@@ -36,7 +36,7 @@
--
- "It's still twice as many steps as setting up a Swarm cluster 😕 " -- Jérôme
- "It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme
---
@@ -50,6 +50,8 @@
- If you are on AWS:
[EKS](https://aws.amazon.com/eks/)
or
[kops](https://github.com/kubernetes/kops)
- On a local machine:
[minikube](https://kubernetes.io/docs/getting-started-guides/minikube/),


@@ -1,4 +1,4 @@
## Versions Installed
## Versions installed
- Kubernetes 1.9.3
- Docker Engine 18.02.0-ce


@@ -131,9 +131,9 @@ And *then* it is time to look at orchestration!
- shell scripts invoking `kubectl`
- YAML resources descriptions committed to a repo
- [Brigade](https://brigade.sh/) (event-driven scripting; no YAML)
- [Helm](https://github.com/kubernetes/helm) (~package manager)
- [Spinnaker](https://www.spinnaker.io/) (Netflix's CD platform)
- [Brigade](https://brigade.sh/) (event-driven scripting; no YAML)
---


@@ -0,0 +1,16 @@
## Intros
- Hello! We are:
- .emoji[✨] Bridget ([@bridgetkromhout](https://twitter.com/bridgetkromhout))
- .emoji[🌟] Joe ([@joelaha](https://twitter.com/joelaha))
- The workshop will run from 13:30-16:45
- There will be a break from 15:00-15:15
- Feel free to interrupt for questions at any time
- *Especially when you see full screen container pictures!*


@@ -2,19 +2,18 @@
- Hello! We are:
- .emoji[✨] Bridget ([@bridgetkromhout](https://twitter.com/bridgetkromhout))
- .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)
- .emoji[🌟] Jessica ([@jldeen](https://twitter.com/jldeen))
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Docker Inc.)
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo))
- The workshop will run from 9am to 4pm
- This workshop will run from 10:30am-12:45pm.
- There will be a lunch break at noon
- Lunchtime is after the workshop!
(And we will take a 15min break at 11:30am!)
(And coffee breaks!)
- Feel free to interrupt for questions at any time
- *Especially when you see full screen container pictures!*
- Live feedback, questions, help on @@CHAT@@


@@ -16,11 +16,14 @@ exclude:
chapters:
- common/title.md
- logistics.md
- common/intro.md
- swarm/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- swarm/versions.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md
@@ -51,3 +54,4 @@ chapters:
- swarm/stateful.md
- swarm/extratips.md
- common/thankyou.md
- swarm/links.md


@@ -16,11 +16,14 @@ exclude:
chapters:
- common/title.md
- logistics.md
- common/intro.md
- swarm/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- swarm/versions.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md
@@ -51,3 +54,4 @@ chapters:
#- swarm/stateful.md
#- swarm/extratips.md
- common/thankyou.md
- swarm/links.md


@@ -11,7 +11,8 @@ exclude:
chapters:
- common/title.md
#- common/logistics.md
- common/intro.md
- swarm/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- swarm/versions.md
@@ -22,6 +23,8 @@ chapters:
Part 1
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md
@@ -60,3 +63,4 @@ chapters:
- swarm/stateful.md
- swarm/extratips.md
- common/thankyou.md
- swarm/links.md


@@ -11,7 +11,8 @@ exclude:
chapters:
- common/title.md
#- common/logistics.md
- common/intro.md
- swarm/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- swarm/versions.md
@@ -22,6 +23,8 @@ chapters:
Part 1
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md
@@ -60,3 +63,4 @@ chapters:
- swarm/stateful.md
- swarm/extratips.md
- common/thankyou.md
- swarm/links.md

slides/swarm/intro.md Normal file

@@ -0,0 +1,38 @@
## A brief introduction
- This was initially written to support in-person,
instructor-led workshops and tutorials
- You can also follow along on your own, at your own pace
- We included as much information as possible in these slides
- We recommend having a mentor to help you ...
- ... Or be comfortable spending some time reading the Docker
[documentation](https://docs.docker.com/) ...
- ... And looking for answers in the [Docker forums](https://forums.docker.com),
[StackOverflow](http://stackoverflow.com/questions/tagged/docker),
and other outlets
---
class: self-paced
## Hands on, you shall practice
- Nobody ever became a Jedi by spending their life reading Wookieepedia
- Likewise, it will take more than merely *reading* these slides
to make you an expert
- These slides include *tons* of exercises and examples
- They assume that you have access to some Docker nodes
- If you are attending a workshop or tutorial:
<br/>you will be given specific instructions to access your cluster
- If you are doing this on your own:
<br/>the first chapter will give you various options to get your own cluster

slides/swarm/links.md Normal file

@@ -0,0 +1,12 @@
# Links and resources
- [Docker Community Slack](https://community.docker.com/registrations/groups/4316)
- [Docker Community Forums](https://forums.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker Blog](http://blog.docker.com/)
- [Docker documentation](http://docs.docker.com/)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Docker on Twitter](http://twitter.com/docker)
- [Play With Docker Hands-On Labs](http://training.play-with-docker.com/)
.footnote[These slides (and future updates) are on → http://container.training/]


@@ -10,7 +10,7 @@ Otherwise: check [part 1](#part-1) to learn how to set up your own cluster.
We pick up exactly where we left you, so we assume that you have:
- a five nodes Swarm cluster,
- a Swarm cluster with at least 3 nodes,
- a self-hosted registry,