In this chapter, you will learn how Docker containers are networked when using frameworks like Kubernetes, Docker Swarm, and Mesosphere.
We will cover the following topics:
Docker Swarm is a native clustering system for Docker. Docker Swarm exposes the standard Docker API, so any tool that communicates with the Docker daemon can communicate with Docker Swarm as well. The basic aim is to allow a pool of Docker hosts to be created and used together. Swarm's cluster manager schedules containers based on the available resources in the cluster, and resource constraints can also be specified for a container while deploying it. Swarm is designed to pack containers onto a host, saving the remaining hosts' resources for heavier and bigger containers rather than scheduling containers randomly across the cluster.
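As a hedged sketch of how such constraints look in practice, standard `docker run` resource flags are taken into account by Swarm when placing a container (the manager endpoint matches the one used later in this chapter; the image name is an assumption for illustration):

```shell
# Ask the Swarm manager to place a container that reserves
# 512 MB of memory; Swarm subtracts this from the chosen
# node's available resources when ranking nodes.
docker -H tcp://192.168.59.134:5001 run -d \
    -m 512m \
    --name constrained-web \
    nginx
```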
Similar to other Docker projects, Docker Swarm follows a plug-and-play architecture. Docker Swarm uses pluggable backend discovery services to maintain the list of IP addresses in your Swarm cluster. Several backends are supported, such as etcd, Consul, and ZooKeeper; even a static file can be used. Docker Hub also provides a hosted discovery service, which is used in the normal configuration of Docker Swarm.
Docker Swarm scheduling uses multiple strategies, such as spread, binpack, and random, in order to rank nodes. When a new container is created, Swarm places it on the node with the highest computed rank according to the chosen strategy.
Docker Swarm also uses filters in order to schedule containers, such as affinity, health, constraint, port, and dependency filters. A constraint filter, for example, restricts scheduling to nodes carrying a particular label:

environment=production
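As a hedged sketch, a constraint such as the one above is applied by passing an environment variable when starting the container; Swarm then only considers nodes whose engine was started with the matching label (the manager endpoint matches the one used later in this chapter, and the image name is an assumption):

```shell
# The Docker engine on eligible nodes is assumed to have been
# started with: --label environment=production

# Ask the Swarm manager to schedule the container only on
# nodes labeled environment=production.
docker -H tcp://192.168.59.134:5001 run -d \
    -e constraint:environment==production \
    nginx
```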
The following figure explains various components of a Docker Swarm cluster:
Let's set up a Docker Swarm cluster, which will have one master and two nodes.
We will be using a Docker client in order to access the Docker Swarm cluster. A Docker client can be set up on a machine or laptop and should have access to all the machines present in the Swarm cluster.
After installing Docker on all three machines, we will restart the Docker service from the command line so that it can be accessed on TCP port 2375 on all interfaces (0.0.0.0:2375) or on a specific host IP address, and can also allow connections through the Unix socket, on all the Swarm nodes, as follows:
$ docker -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -d &
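Once the daemon is listening on TCP, its reachability can be checked from the Docker client machine; this is a hedged sketch using the master node's IP address from the example that follows:

```shell
# Query the remote daemon over TCP; a successful response
# confirms that port 2375 is open and the daemon is running.
docker -H tcp://192.168.59.134:2375 version
docker -H tcp://192.168.59.134:2375 info
```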
The Docker Swarm image is required to be deployed as a Docker container on the master node. In our example, the master node's IP address is 192.168.59.134. Replace it with your Swarm master node's address. From the Docker client machine, we will be installing Docker Swarm on the master node using the following command:
$ sudo docker -H tcp://192.168.59.134:2375 run --rm swarm create
Unable to find image 'swarm' locally
Pulling repository swarm
e12f8c5e4c3b: Download complete
cf43a42a05d1: Download complete
42c4e5c90ee9: Download complete
22cf18566d05: Download complete
048068586dc5: Download complete
2ea96b3590d8: Download complete
12a239a7cb01: Download complete
26b910067c5f: Download complete
4fdfeb28bd618291eeb97a2096b3f841
The Swarm token generated after the execution of the command should be noted, as it will be used for the Swarm setup. In our case, it is this:
"4fdfeb28bd618291eeb97a2096b3f841"
The following are the steps to set up a two-node Docker Swarm cluster:
The docker command is required to be executed with Node 1's IP address (in our case, 192.168.59.135) and the Swarm token generated in the preceding code in order to add it to the Swarm cluster:

$ docker -H tcp://192.168.59.135:2375 run -d swarm join --addr=192.168.59.135:2375 token://4fdfeb28bd618291eeb97a2096b3f841
Unable to find image 'swarm' locally
Pulling repository swarm
e12f8c5e4c3b: Download complete
cf43a42a05d1: Download complete
42c4e5c90ee9: Download complete
22cf18566d05: Download complete
048068586dc5: Download complete
2ea96b3590d8: Download complete
12a239a7cb01: Download complete
26b910067c5f: Download complete
e4f268b2cc4d896431dacdafdc1bb56c98fed01f58f8154ba13908c7e6fe675b
Node 2 (in our case, 192.168.59.136) can be added to the cluster with the same join command. The Swarm manager can then be started as a container on the master node using the same token:

$ sudo docker -H tcp://192.168.59.134:2375 run -d -p 5001:2375 swarm manage token://4fdfeb28bd618291eeb97a2096b3f841
f06ce375758f415614dc5c6f71d5d87cf8edecffc6846cd978fe07fafc3d05d3
The Swarm cluster is set up and can be managed using the Swarm manager residing on the master node. To list all the nodes, the following command can be executed using a Docker client:
$ sudo docker -H tcp://192.168.59.134:2375 run --rm swarm list token://4fdfeb28bd618291eeb97a2096b3f841
192.168.59.135:2375
192.168.59.136:2375
The info command, run against the Swarm manager's port, provides details of the cluster:

$ sudo docker -H tcp://192.168.59.134:5001 info
Containers: 0
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 2
 agent-1: 192.168.59.136:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 1.023 GiB
 agent-0: 192.168.59.135:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 1.023 GiB
The ubuntu container can be launched onto the cluster by specifying the name as swarm-ubuntu and using the following command:

$ sudo docker -H tcp://192.168.59.134:5001 run -it --name swarm-ubuntu ubuntu /bin/sh
The container can be listed using the Swarm manager's address:

$ sudo docker -H tcp://192.168.59.134:5001 ps
This completes the setup of a two-node Docker Swarm cluster.
Docker Swarm networking is integrated with libnetwork and even provides support for overlay networks. libnetwork provides a Go implementation for connecting containers; it implements a robust Container Network Model that provides a network abstraction for applications and a programming interface for containers. Docker Swarm is fully compatible with the new networking model in Docker 1.9 (note that we will be using Docker 1.9 in the following setup). A key-value store is required for overlay networks; it holds information such as discovery data, networks, endpoints, and IP addresses.
In the following example, we will be using Consul to understand Docker Swarm networking in a better way:
First, we create a VM named sample-keystore using docker-machine:

$ docker-machine create -d virtualbox sample-keystore
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
To see how to connect Docker to this machine, run: docker-machine.exe env sample-keystore
Then, we deploy the progrium/consul container on the sample-keystore machine on port 8500 with the following command:

$ docker $(docker-machine config sample-keystore) run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap
Unable to find image 'progrium/consul:latest' locally
latest: Pulling from progrium/consul
3b4d28ce80e4: Pull complete
e5ab901dcf2d: Pull complete
30ad296c0ea0: Pull complete
3dba40dec256: Pull complete
f2ef4387b95e: Pull complete
53bc8dcc4791: Pull complete
75ed0b50ba1d: Pull complete
17c3a7ed5521: Pull complete
8aca9e0ecf68: Pull complete
4d1828359d36: Pull complete
46ed7df7f742: Pull complete
b5e8ce623ef8: Pull complete
049dca6ef253: Pull complete
bdb608bc4555: Pull complete
8b3d489cfb73: Pull complete
c74500bbce24: Pull complete
9f3e605442f6: Pull complete
d9125e9e799b: Pull complete
Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
Status: Downloaded newer image for progrium/consul:latest
1a1be5d207454a54137586f1211c02227215644fa0e36151b000cfcde3b0df7c
Set the local environment to the sample-keystore machine and verify that the Consul container is running:

$ eval "$(docker-machine env sample-keystore)"
$ docker ps
CONTAINER ID  IMAGE            COMMAND             CREATED        STATUS        PORTS                                                                            NAMES
1a1be5d20745  progrium/consul  /bin/start -server  5 minutes ago  Up 5 minutes  53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp  cocky_bhaskara
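The key-value store can also be checked directly over Consul's standard HTTP status API; this is a hedged sketch that is not part of the Swarm setup itself:

```shell
# Ask Consul for its current leader; a non-empty address in
# the response confirms the server has bootstrapped and is
# reachable on port 8500.
curl "http://$(docker-machine ip sample-keystore):8500/v1/status/leader"
```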
Next, we create the Swarm cluster using docker-machine. The two machines can be created in VirtualBox; one can act as the Swarm master. As we create each Swarm node, we will pass the options required for Docker Engine to have an overlay network driver:

$ docker-machine create -d virtualbox --swarm --swarm-image="swarm" --swarm-master --swarm-discovery="consul://$(docker-machine ip sample-keystore):8500" --engine-opt="cluster-store=consul://$(docker-machine ip sample-keystore):8500" --engine-opt="cluster-advertise=eth1:2376" swarm-master
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
To see how to connect Docker to this machine, run: docker-machine env swarm-master
The parameters used in the preceding command are as follows:
--swarm: This is used to configure a machine with Swarm.
--engine-opt: This option is used to define arbitrary daemon options that need to be supplied. In our case, we supply the engine daemon with the --cluster-store option at creation time, which tells the engine the location of the key-value store used by the overlay network. The --cluster-advertise option puts the machine on the network at the specified port.
--swarm-discovery: This is used to discover the service to use with Swarm; in our case, consul will be that service.
--swarm-master: This is used to configure a machine as the Swarm master.

Next, we create swarm-node-1 with the same discovery and engine options, but without the --swarm-master flag:

$ docker-machine create -d virtualbox --swarm --swarm-image="swarm:1.0.0-rc2" --swarm-discovery="consul://$(docker-machine ip sample-keystore):8500" --engine-opt="cluster-store=consul://$(docker-machine ip sample-keystore):8500" --engine-opt="cluster-advertise=eth1:2376" swarm-node-1
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
To see how to connect Docker to this machine, run: docker-machine env swarm-node-1
$ docker-machine ls
NAME             ACTIVE  DRIVER      STATE    URL                        SWARM
sample-keystore  -       virtualbox  Running  tcp://192.168.99.100:2376
swarm-master     -       virtualbox  Running  tcp://192.168.99.101:2376  swarm-master (master)
swarm-node-1     -       virtualbox  Running  tcp://192.168.99.102:2376  swarm-master
Set the Docker environment to swarm-master:

$ eval $(docker-machine env --swarm swarm-master)
The overlay network can now be created from the master:

$ docker network create --driver overlay sample-net
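As a hedged sketch, details of the newly created network, such as its driver, scope, and IPAM configuration, can be examined with docker network inspect:

```shell
# Show the driver, scope, subnet, and attached endpoints of
# the overlay network just created.
docker network inspect sample-net
```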
$ docker network ls
NETWORK ID    NAME        DRIVER
9f904ee27bf5  sample-net  overlay
7fca4eb8c647  bridge      bridge
b4234109be9b  none        null
cf03ee007fb4  host        host
$ eval $(docker-machine env swarm-node-1)
$ docker network ls
NETWORK ID    NAME        DRIVER
7fca4eb8c647  bridge      bridge
b4234109be9b  none        null
cf03ee007fb4  host        host
9f904ee27bf5  sample-net  overlay
Switch the environment back to the Swarm master:

$ eval $(docker-machine env swarm-master)
The ubuntu container can now be launched with the constraint environment variable set to the first node:

$ docker run -itd --name=os --net=sample-net --env="constraint:node==swarm-master" ubuntu
Using the ifconfig command, we can see that the container has two network interfaces, and it will be accessible from any container deployed using the Swarm manager on any other host.
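A hedged sketch of verifying this connectivity: start a second container on the other node, attached to the same overlay network, and ping the first container by name (the container names are assumptions continuing the example above; in Docker 1.9, containers on the same overlay network can resolve each other's names):

```shell
# Launch a second container on swarm-node-1, attached to the
# same overlay network as the container named 'os'.
docker run -itd --name=os2 --net=sample-net \
    --env="constraint:node==swarm-node-1" ubuntu

# Ping the first container across the overlay network.
docker exec os2 ping -c 2 os
```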