Running our first Swarm service

- How do we run services? Simplified version:

  `docker run` → `docker service create`

.exercise[

- Create a service featuring an Alpine container pinging Google resolvers:

  ```bash
  docker service create alpine ping 8.8.8.8
  ```

- Check the result:

  ```bash
  docker service ps <serviceID>
  ```

]
`--detach` for service creation

(New in Docker Engine 17.05)

If you are running Docker 17.05 or later, you will see the following message:

```
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
```

Let's ignore it for now; but we'll come back to it in just a few minutes!
Checking service logs

(New in Docker Engine 17.05)

- Just like `docker logs` shows the output of a specific local container ...

- ... `docker service logs` shows the output of all the containers of a specific service

.exercise[

- Check the output of our ping command:

  ```bash
  docker service logs <serviceID>
  ```

]

The flags `--follow` and `--tail` are available, as well as a few others.

Note: by default, when a container is destroyed (e.g. when scaling down), its logs are lost.
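For reference, here is what combining those flags looks like. This is an illustrative fragment only: `<serviceID>` is a placeholder, and the command needs a running Swarm to do anything.

```shell
# Stream new log lines, starting from the last 10 lines of each task
docker service logs --follow --tail 10 <serviceID>
```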
class: extra-details

Before Docker Engine 17.05

- Docker 1.13/17.03/17.04 have `docker service logs` as an experimental feature

  (available only when enabling the experimental feature flag)

- We have to use `docker logs`, which only works on local containers

- We will have to connect to the node running our container

  (unless it was scheduled locally, of course)
class: extra-details

Looking up where our container is running

- The `docker service ps` command told us where our container was scheduled

.exercise[

- Look up the `NODE` on which the container is running:

  ```bash
  docker service ps <serviceID>
  ```

- If you use Play-With-Docker, switch to that node's tab, or set `DOCKER_HOST`

- Otherwise, `ssh` into that node or use `eval $(docker-machine env node...)`

]
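As an aside: the command is `eval $(docker-machine env node...)`, with `eval` on the *outside*. A minimal sketch of why that shape matters, using a faked `docker-machine env` output (the `tcp://192.0.2.10:2376` address is a made-up documentation value, not a real daemon):

```shell
# docker-machine env prints shell export statements; we fake one here
# (192.0.2.10 is a placeholder address, not a real Docker daemon).
env_output='export DOCKER_HOST="tcp://192.0.2.10:2376"'

# eval runs those exports in the *current* shell, so later docker
# invocations in the same session would target that remote daemon.
eval "$env_output"

echo "$DOCKER_HOST"   # prints tcp://192.0.2.10:2376
```

If the `eval` were inside the `$(...)`, the exports would run in a subshell and vanish before they could affect your session.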
class: extra-details

Viewing the logs of the container

.exercise[

- See that the container is running and check its ID:

  ```bash
  docker ps
  ```

- View its logs:

  ```bash
  docker logs <containerID>
  ```

- Go back to `node1` afterwards

]
Scale our service

- Services can be scaled in a pinch with the `docker service update` command

.exercise[

- Scale the service to ensure 2 copies per node:

  ```bash
  docker service update <serviceID> --replicas 10
  ```

- Check that we have two containers on the current node:

  ```bash
  docker ps
  ```

]
View deployment progress

(New in Docker Engine 17.05)

- Commands that create/update/delete services can run with `--detach=false`

- The CLI will show the status of the command, and exit once it's done working

.exercise[

- Scale the service to ensure 3 copies per node:

  ```bash
  docker service update <serviceID> --replicas 15 --detach=false
  ```

]

Note: `--detach=false` will eventually become the default.

With older versions, you can use e.g.: `watch docker service ps <serviceID>`
Expose a service

- Services can be exposed, with two special properties:

  - the public port is available on every node of the Swarm,

  - requests coming on the public port are load balanced across all instances.

- This is achieved with option `-p/--publish`; as an approximation:

  `docker run -p` → `docker service create -p`

- If you indicate a single port number, it will be mapped on a port starting at 30000

  (vs. 32768 for single container mapping)

- You can indicate two port numbers to set the public port number

  (just like with `docker run -p`)
Expose ElasticSearch on its default port

.exercise[

- Create an ElasticSearch service (and give it a name while we're at it):

  ```bash
  docker service create --name search --publish 9200:9200 --replicas 7 \
         --detach=false elasticsearch:2
  ```

]

Note: don't forget the `:2`!

The latest version of the ElasticSearch image won't start without mandatory configuration.
Tasks lifecycle

- During the deployment, you will be able to see multiple states:

  - assigned (the task has been assigned to a specific node)

  - preparing (this mostly means "pulling the image")

  - starting

  - running

- When a task is terminated (stopped, killed...) it cannot be restarted

  (A replacement task will be created)
class: extra-details

Test our service

- We mapped port 9200 on the nodes, to port 9200 in the containers

- Let's try to reach that port!

.exercise[

- Try the following command:

  ```bash
  curl localhost:9200
  ```

]

(If you get `Connection refused`: congratulations, you are very fast indeed! Just try again.)

ElasticSearch serves a little JSON document with some basic information about this instance, including a randomly-generated super-hero name.
Test the load balancing

- If we repeat our `curl` command multiple times, we will see different names

.exercise[

- Send 10 requests, and see which instances serve them:

  ```bash
  for N in $(seq 1 10); do
    curl -s localhost:9200 | jq .name
  done
  ```

]

Note: if you don't have `jq` on your Play-With-Docker instance, just install it:

```bash
apk add --no-cache jq
```
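If you can't reach a live cluster right now, the shape of that check can be simulated with fake responses. The `instance-N` names below are invented stand-ins for ElasticSearch's generated names, and we extract each `name` with `sed` instead of `jq` so the snippet has no dependencies:

```shell
# Simulate 10 responses round-robining across 3 fake instances,
# then count how many distinct instance names served the requests.
for N in $(seq 1 10); do
  echo "{\"name\": \"instance-$((N % 3))\"}"
done | sed -n 's/.*"name": "\([^"]*\)".*/\1/p' | sort -u | wc -l
# -> 3 distinct names
```

Against the real service, the same pipeline (with `curl -s localhost:9200` producing each response) would tell you how many of the 7 replicas answered your requests.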
Load balancing results

Traffic is handled by our cluster's TCP routing mesh.

Each request is served by one of the 7 instances, in rotation.

Note: if you try to access the service from your browser, you will probably see the same instance name over and over, because your browser (unlike curl) will try to re-use the same connection.
Under the hood of the TCP routing mesh

- Load balancing is done by IPVS

- IPVS is a high-performance, in-kernel load balancer

- It's been around for a long time (merged in the kernel since 2.4)

- Each node runs a local load balancer

  (Allowing connections to be routed directly to the destination, without extra hops)
Managing inbound traffic

There are many ways to deal with inbound traffic on a Swarm cluster.

- Put all (or a subset) of your nodes in a DNS `A` record

- Assign your nodes (or a subset) to an ELB

- Use a virtual IP and make sure that it is assigned to an "alive" node

- etc.
class: btw-labels

Managing HTTP traffic

- The TCP routing mesh doesn't parse HTTP headers

- If you want to place multiple HTTP services on port 80, you need something more

- You can set up NGINX or HAProxy on port 80 to do the virtual host switching

- Docker Universal Control Plane provides its own HTTP routing mesh

  - add a specific label starting with `com.docker.ucp.mesh.http` to your services

  - labels are detected automatically and dynamically update the configuration
class: btw-labels

You should use labels

- Labels are a great way to attach arbitrary information to services

- Examples:

  - HTTP vhost of a web app or web service

  - backup schedule for a stateful service

  - owner of a service (for billing, paging...)

  - etc.
Pro-tip for ingress traffic management

- It is possible to use local networks with Swarm services

- This means that you can do something like this:

  ```bash
  docker service create --network host --mode global traefik ...
  ```

  (This runs the `traefik` load balancer on each node of your cluster, in the `host` network)

- This gives you native performance (no iptables, no proxy, no nothing!)

- The load balancer will "see" the clients' IP addresses

- But: a container cannot simultaneously be in the `host` network and another network

  (You will have to route traffic to containers using exposed ports or UNIX sockets)
class: extra-details

Using local networks (host, macvlan...) with Swarm services

- Using the `host` network is fairly straightforward

  (With the caveats described on the previous slide)

- It is also possible to use drivers like `macvlan`

  - see this guide to get started on `macvlan`

  - see this PR for more information about local network drivers in Swarm mode
Visualize container placement

- Let's leverage the Docker API!

.exercise[

- Get the source code of this simple-yet-beautiful visualization app:

  ```bash
  cd ~
  git clone git://github.com/dockersamples/docker-swarm-visualizer
  ```

- Build and run the Swarm visualizer:

  ```bash
  cd docker-swarm-visualizer
  docker-compose up -d
  ```

]
Connect to the visualization webapp

- It runs a web server on port 8080

.exercise[

- Point your browser to port 8080 of your node1's public IP

  (If you use Play-With-Docker, click on the (8080) badge)

]

- The webapp updates the display automatically (you don't need to reload the page)

- It only shows Swarm services (not standalone containers)

- It shows when nodes go down

- It has some glitches (it's not Carrier-Grade Enterprise-Compliant ISO-9001 software)
Why This Is More Important Than You Think

- The visualizer accesses the Docker API from within a container

- This is a common pattern: run container management tools in containers

- Instead of viewing your cluster, this could take care of logging, metrics, autoscaling ...

- We can run it within a service, too! We won't do it, but the command would look like:

  ```bash
  docker service create \
         --mount source=/var/run/docker.sock,type=bind,target=/var/run/docker.sock \
         --name viz --constraint node.role==manager ...
  ```
Credits: the visualization code was written by Francisco Miranda.

Mano Marks adapted it to Swarm and maintains it.
Terminate our services

- Before moving on, we will remove those services

- `docker service rm` can accept multiple service names or IDs

- `docker service ls` can accept the `-q` flag

- A shell snippet a day keeps the cruft away

.exercise[

- Remove all services with this one-liner:

  ```bash
  docker service ls -q | xargs docker service rm
  ```

]
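One caveat with that one-liner: if `docker service ls -q` prints nothing, plain `xargs` still runs `docker service rm` once with no arguments, which then errors out. GNU xargs has `-r` (`--no-run-if-empty`) to skip the run instead; `-r` is a GNU extension, so this assumes a Linux box. The demonstration below substitutes `echo rm` for the docker commands so it can run anywhere:

```shell
# With empty input, plain (GNU) xargs still invokes the command once:
printf '' | xargs echo rm        # prints "rm"

# GNU xargs -r skips the invocation entirely when the input is empty:
printf '' | xargs -r echo rm     # prints nothing
```

So on a cluster with no services, `docker service ls -q | xargs -r docker service rm` exits quietly instead of complaining about a missing service name.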