Merge pull request #48 from soulshake/typo

Typo fixes
This commit is contained in:
Jérôme Petazzoni
2016-10-08 14:49:16 +02:00
committed by GitHub

@@ -1662,7 +1662,7 @@ Can you see how?
 - They are using a different kind of ID, reflecting the fact that they
   are SwarmKit objects instead of "classic" Docker Engine objects.
-- They're *scope* is "swarm" instead of "local".
+- Their *scope* is `swarm` instead of `local`.
 - They are using the overlay driver.
@@ -1859,7 +1859,7 @@ Note: if the hash rate goes to zero and doesn't climb back up, try to `rm` and `
 ## Checkpoint
-- We've seen how to setup a Swarm
+- We've seen how to set up a Swarm
 - We've used it to host our own registry
@@ -1869,7 +1869,7 @@ Note: if the hash rate goes to zero and doesn't climb back up, try to `rm` and `
 - We've deployed and scaled our application
-Let's treat ourselves with a nice pat in the back!
+Let's treat ourselves with a nice pat on the back!
 --
@@ -2499,7 +2499,7 @@ What we will do:
 - Manually send a few log entries using one-shot containers
-- Setup our containers to send their logs to Logstash
+- Set our containers up to send their logs to Logstash
 ---
@@ -2780,7 +2780,7 @@ After ~15 seconds, you should see the log messages in Kibana.
 **This is not a "production-grade" setup.**
-It is just an educational example. We did setup a single
+It is just an educational example. We did set up a single
 ElasticSearch instance and a single Logstash instance.
 In a production setup, you need an ElasticSearch cluster
@@ -2789,7 +2789,7 @@ need multiple Logstash instances.
 And if you want to withstand
 bursts of logs, you need some kind of message queue:
-Redis if you're cheap, Kafka is you want to make sure
+Redis if you're cheap, Kafka if you want to make sure
 that you don't drop messages on the floor. Good luck.
 ---
@@ -3500,7 +3500,7 @@ the task (it will delete+re-create on all nodes).
 ---
-## Setup Grafana
+## Set up Grafana
 .exercise[