From 773528fc2be74cee57742bd1a4b3618880765b10 Mon Sep 17 00:00:00 2001
From: AJ Bowen
Date: Fri, 7 Oct 2016 16:19:05 +0200
Subject: [PATCH 1/2] They're --> Their

---
 docs/index.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/index.html b/docs/index.html
index da776bd6..5a2abbdf 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -1662,7 +1662,7 @@ Can you see how?
 
 - They are using a different kind of ID, reflecting the fact that
   they are SwarmKit objects instead of "classic" Docker Engine objects.
 
-- They're *scope* is "swarm" instead of "local".
+- Their *scope* is `swarm` instead of `local`.
 
 - They are using the overlay driver.

From e403a005ea02a531d51d3d0c8e16c9bdc7cd3c6e Mon Sep 17 00:00:00 2001
From: AJ Bowen
Date: Fri, 7 Oct 2016 17:09:34 +0200
Subject: [PATCH 2/2] 'Set up' when it's a verb, 'setup' when it's a noun.

---
 docs/index.html | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/index.html b/docs/index.html
index 5a2abbdf..f137798f 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -1859,7 +1859,7 @@ Note: if the hash rate goes to zero and doesn't climb back up, try to `rm` and `
 
 ## Checkpoint
 
-- We've seen how to setup a Swarm
+- We've seen how to set up a Swarm
 
 - We've used it to host our own registry
 
 - We've deployed and scaled our application
 
-Let's treat ourselves with a nice pat in the back!
+Let's treat ourselves with a nice pat on the back!
 
 --
@@ -2499,7 +2499,7 @@ What we will do:
 
 - Manually send a few log entries using one-shot containers
 
-- Setup our containers to send their logs to Logstash
+- Set our containers up to send their logs to Logstash
 
 ---
@@ -2780,7 +2780,7 @@ After ~15 seconds, you should see the log messages in Kibana.
 
 **This is not a "production-grade" setup.**
 
-It is just an educational example. We did setup a single
+It is just an educational example. We did set up a single
 ElasticSearch instance and a single Logstash instance.
 
 In a production setup, you need an ElasticSearch cluster
@@ -2789,7 +2789,7 @@ need multiple Logstash instances.
 
 And if you want to withstand bursts of logs, you need
 some kind of message queue:
-Redis if you're cheap, Kafka is you want to make sure
+Redis if you're cheap, Kafka if you want to make sure
 that you don't drop messages on the floor. Good luck.
 
 ---
@@ -3500,7 +3500,7 @@ the task (it will delete+re-create on all nodes).
 
 ---
 
-## Setup Grafana
+## Set up Grafana
 
 .exercise[