diff --git a/www/htdocs/index.html b/www/htdocs/index.html
index ee313eaa..9b3bfe3a 100644
--- a/www/htdocs/index.html
+++ b/www/htdocs/index.html
@@ -850,10 +850,142 @@ So, what do‽

---

# Network plumbing on Swarm

- We will share *network namespaces*

--

- Other available options:

  - injecting service addresses in environment variables

  - implementing service discovery in the application

  - using an overlay network like Weave or Pipework

---

## Network namespaces

- Two (or more) containers can share a network stack

- They will have the same IP address

- They will be able to connect over `localhost`

- Other containers can be added later

---

## Connecting over localhost

.exercise[

- Start a container running redis:
  <br>
`docker run -d --name myredis redis`

- Start another container in the same network namespace:
  <br>
`docker run -ti --net container:myredis ubuntu`

- In the 2nd container, install telnet:
  <br>
`apt-get update && apt-get install telnet`

- In the 2nd container, connect to redis on localhost:
  <br>
`telnet localhost 6379`

]

Some Redis commands: `SET key value`, `GET key`

---

## Same IP address

- Let's confirm that our containers share the same IP address

.exercise[

- Run a couple of times:
  <br>
`docker run ubuntu ip addr ls`

- Now run a couple of times:
  <br>
`docker run --net container:myredis ubuntu ip addr ls`

]

---

## Our plan for service discovery

- Replace all `links` with static `/etc/hosts` entries

- Those entries will map to `127.0.0.X`
  <br>
(with different `X` for each service)

- Example: `redis` will point to `127.0.0.2`
  <br>
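For instance, after patching, a service's `/etc/hosts` could contain entries along these lines (a sketch: only the `redis` mapping is given by the slides; the `rng` and `worker` addresses are illustrative):

```
127.0.0.1 localhost
127.0.0.2 redis
127.0.0.3 rng
127.0.0.4 worker
```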
(instead of a container address)

- Start all services; scale them if we want
  <br>
(at this point, they will all fail to connect)

- Start ambassadors in the services' namespaces;
  <br>
each ambassador will listen on the right `127.0.0.X`

.icon[![Warning](warning.png)] Services should try to reconnect!

---

## .icon[![Warning](warning.png)] Work in progress

- Ideally, we would use `--add-host`
  (and its Docker Compose counterpart, `extra_hosts`)
  to populate `/etc/hosts`

- Unfortunately, this does not work yet
  <br>
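For reference, this is roughly what the intended usage would look like if it worked (a sketch; the image name `myapp` and the `rng` address are placeholders, the `redis` mapping follows the scheme above):

```shell
# Inject static hosts entries at container creation time (not yet working on Swarm).
docker run -d --add-host redis:127.0.0.2 --add-host rng:127.0.0.3 myapp
```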
(See [Swarm issue #908](https://github.com/docker/swarm/issues/908) for details)

- We'll populate `/etc/hosts` manually instead
  <br>
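Each patch command could look roughly like this (a sketch; `$CONTAINER` is a placeholder for the target container's name or ID):

```shell
# Append a loopback alias for the redis service to the container's hosts file.
docker exec "$CONTAINER" sh -c "echo 127.0.0.2 redis >> /etc/hosts"
```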
(with `docker exec`)

---

## Our tools

- `unlink-services.py`

  - replaces all `links` with `extra_hosts` entries

- `connect-services.py`

  - scans running containers

  - generates commands to patch `/etc/hosts`

  - generates commands to start ambassadors

---

## Putting it together

.exercise[

- Generate the new Compose YAML file:
  <br>
`../unlink-services.py docker-compose.yml-XXX deploy.yml`

- Start our services:
  <br>
`docker-compose -f deploy.yml up -d`
  <br>
`docker-compose -f deploy.yml scale worker=10`
  <br>
`docker-compose -f deploy.yml scale rng=10`

- Generate plumbing commands:
  <br>
`../connect-services.py deploy.yml`

]

Review the plumbing commands, then execute them.

---

# Scaling on Swarm