Services and tasks

Along with the new orchestration engine, Docker 1.12 introduced the abstractions of services and tasks. A service consists of one or more instances of a task; we call each instance a replica. Each task runs on a Docker node in the form of a container.

A service can be created using the following command:

$ docker service create \
    --replicas 3 \
    --name web \
    -p 80:80 \
    --constraint node.role==worker \
    nginx

This web service consists of three tasks, as specified with --replicas. The orchestration engine submits these tasks to run on selected nodes. The service's name, web, which we specify with --name, can be resolved via a virtual IP address, so other services on the same network, in this case maybe a reverse proxy service, can refer to it by name.

We continue the discussion of the details of this command in the following diagram:

Figure 2.6: Swarm cluster in action

We assume that our cluster consists of one manager node and five worker nodes. There is no high-availability setup for the manager; setting one up is left as an exercise for the reader.

We start at the manager. The manager's availability is set to drain because we do not want it to accept any scheduled tasks. This is a best practice, and we can drain a node as follows:

$ docker node update --availability drain mg0

This service will be published on port 80 via the routing mesh. The routing mesh is the mechanism that performs load balancing in Swarm mode. Port 80 will be opened on every node to serve this service; when a request comes in, the routing mesh automatically routes it to a task's container on one of the nodes.

The routing mesh relies on a Docker network with the overlay driver, namely ingress. We can use docker network ls to list all active networks:

$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
c32139129f45   bridge            bridge    local
3315d809348e   docker_gwbridge   bridge    local
90103ae1188f   host              host      local
ve7fj61ifakr   ingress           overlay   swarm
489d441af28d   none              null      local

We find a network with the ID ve7fj61ifakr, which is an overlay network with the swarm scope. As the scope implies, this kind of network works only in Docker Swarm mode. To see the details of this network, we use the docker network inspect ingress command:
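As a side note, when many networks are present, the swarm-scoped entries can be picked out of such a listing with a quick filter. The following is a minimal sketch that runs awk over a captured copy of the listing; on a live cluster you would pipe docker network ls into the same filter:

```shell
# Keep only the rows whose fourth column (SCOPE) is "swarm".
# The listing is embedded here as a here-document; on a real cluster:
#   docker network ls | awk '$4 == "swarm"'
awk '$4 == "swarm"' <<'EOF'
NETWORK ID     NAME              DRIVER    SCOPE
c32139129f45   bridge            bridge    local
3315d809348e   docker_gwbridge   bridge    local
90103ae1188f   host              host      local
ve7fj61ifakr   ingress           overlay   swarm
489d441af28d   none              null      local
EOF
```

Only the ingress line survives the filter. The header row drops out as well, because its fourth column is DRIVER rather than swarm.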

$ docker network inspect ingress
[
    {
        "Name": "ingress",
        "Id": "ve7fj61ifakr8ybux1icawwbr",
        "Created": "2017-10-02T23:22:46.72494239+07:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.255.0.0/16",
                    "Gateway": "10.255.0.1"
                }
            ]
        }
    }
]

We can see that the ingress network has a subnet of 10.255.0.0/16, which means that 65,536 IP addresses are available on this network by default. This is the maximum number of tasks (containers) that can be created with docker service create -p on a single Swarm mode cluster. The number is not affected when we use docker container run -p outside the Swarm.
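The figure of 65,536 follows directly from the prefix length: a /16 leaves 32 - 16 = 16 host bits, and 2^16 = 65,536. A quick shell sketch of the arithmetic, using the subnet string from the inspect output above:

```shell
# Derive the address count of a CIDR block from its prefix length.
subnet="10.255.0.0/16"         # from the ingress network's IPAM config
prefix=${subnet#*/}            # strip everything up to the slash -> 16
echo $(( 1 << (32 - prefix) )) # 2^(32-16) = 65536
```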

To create a Swarm scoped overlay network, we use the docker network create command:

$ docker network create --driver overlay appnet
lu29kfat35xph3beilupcw4m2

$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
lu29kfat35xp   appnet            overlay   swarm
c32139129f45   bridge            bridge    local
3315d809348e   docker_gwbridge   bridge    local
90103ae1188f   host              host      local
ve7fj61ifakr   ingress           overlay   swarm
489d441af28d   none              null      local

We can check again with the docker network ls command and see the appnet network with the overlay driver and swarm scope there. Your network's ID will be different. To attach a service to a specific network, we can pass the network name to the docker service create command. For example:

$ docker service create --name web --network appnet -p 80:80 nginx

The preceding example creates the web service and attaches it to the appnet network. This command works if, and only if, appnet is a swarm-scoped network.

We can dynamically attach networks to, or detach them from, a running service using the docker service update command with --network-add or --network-rm, respectively. Try the following command:

$ docker service update --network-add appnet web
web

Here, we can observe the result with docker inspect web. You will find a chunk of JSON printed out with the last block looking as follows:

$ docker inspect web

...

        "UpdateStatus": {
            "State": "completed",
            "StartedAt": "2017-10-09T15:45:03.413491944Z",
            "CompletedAt": "2017-10-09T15:45:21.155296293Z",
            "Message": "update completed"
        }
    }
]
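If only the update state is of interest, there is no need to scroll through the whole chunk of JSON. The sketch below pulls the State field out of a captured copy of the block above with sed; on a live cluster, docker inspect's --format flag with a Go template (something along the lines of --format '{{.UpdateStatus.State}}') should achieve the same thing more directly:

```shell
# Extract the value of "State" from captured `docker inspect` output.
sed -n 's/.*"State": "\([a-z]*\)".*/\1/p' <<'EOF'
"UpdateStatus": {
    "State": "completed",
    "StartedAt": "2017-10-09T15:45:03.413491944Z",
    "CompletedAt": "2017-10-09T15:45:21.155296293Z",
    "Message": "update completed"
}
EOF
```

This prints completed; the StartedAt and CompletedAt lines do not match the pattern because they lack the literal "State": " substring.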

This means that the service has been updated and the update process has completed. The web service is now attached to the appnet network:

Figure 2.7: The Gossip communication mechanism for Swarm-scope overlay networks

Overlay networks rely on the gossip protocol implementation over port 7946, for both TCP and UDP, accompanied by Linux's VXLAN over UDP port 4789. The overlay network is implemented with performance in mind. A network will cover only the necessary hosts and gradually expand when needed.
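In practice, this means that on hosts with a firewall between the nodes, these ports must be open for the overlay network to function. A sketch of the corresponding rules, assuming ufw is the firewall front end in use (adapt the commands to your own tooling):

```shell
# Allow Swarm overlay-network traffic between cluster nodes
# (a sketch; run on every node, assumes the ufw firewall front end)
ufw allow 7946/tcp   # gossip-based control traffic
ufw allow 7946/udp   # gossip-based control traffic
ufw allow 4789/udp   # VXLAN data-plane traffic
```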

We can scale a service by increasing or decreasing the number of its replicas. Scaling the service can be done using the docker service scale command. For example, if we would like to scale the web service to five replicas, we could issue the following command:

$ docker service scale web=5

When the service is scaled, and its task is scheduled on a new node, all related networks bound to this service will be expanded to cover the new node automatically. In the following diagram, we have two replicas of the app service, and we would like to scale it from two to three with the command docker service scale app=3. The new replica app.3 will be scheduled on the worker node w03. Then the overlay network bound to this app service will be expanded to cover node w03 too. The network-scoped gossip communication is responsible for the network expansion mechanism:

Figure 2.8: Swarm-scoped network expansion