In this chapter, we're going to discuss and deepen our knowledge of two very important topics related to Docker and orchestration systems: networking and consensus. In particular, we'll see how to:
Libnetwork is the networking stack designed from the ground up to work with Docker regardless of platform, environment, operating system, or infrastructure. Libnetwork is not just an interface for network drivers, nor merely a library for managing VLAN or VXLAN networks; it does much more.
Libnetwork is a full networking stack and consists of three planes, the Management Plane, the Control Plane, and the Data Plane, as shown in the following diagram:
Following the usual Docker UX principle that components should just work in any environment, the networking stack must also be portable. To make Docker's networking stack portable, its design and implementation must be solid. For example, the Management Plane cannot be controlled by any other component, and the Control Plane cannot be replaced by other components. If we allowed that, the networking stack could break when we move our application from one environment to another.
The Data Plane, by contrast, is designed to be pluggable: it can be managed by built-in or external plugins. For example, MacVLAN was implemented as a plugin in Docker 1.12 without affecting other parts of the system.
The most remarkable thing is that we can have several drivers and plugins on the same networking stack, and they work without interfering with one another. So typically, in Swarm, we can have an overlay network, a bridge network, and a host network running on the same cluster.
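To see why multiple drivers can coexist cleanly, it helps to picture a registry that dispatches network operations to whichever driver owns each network. The following is a minimal sketch of that idea; the class and method names are illustrative assumptions, not Libnetwork's actual Go API.

```python
# Toy driver registry: built-in and external drivers live side by side,
# and each network is handled only by the driver that created it.
# (Illustrative only; not Libnetwork's real plugin interface.)

class NetworkDriver:
    """Base class for a pluggable Data Plane driver."""
    name = "base"

    def create_network(self, net_id):
        # Each driver would program its own data plane here.
        return {"id": net_id, "driver": self.name}

class OverlayDriver(NetworkDriver):
    name = "overlay"

class BridgeDriver(NetworkDriver):
    name = "bridge"

class DriverRegistry:
    """Holds all registered drivers; operations are routed by driver name."""
    def __init__(self):
        self._drivers = {}

    def register(self, driver):
        self._drivers[driver.name] = driver

    def create_network(self, driver_name, net_id):
        return self._drivers[driver_name].create_network(net_id)

registry = DriverRegistry()
registry.register(OverlayDriver())
registry.register(BridgeDriver())

# Two networks backed by different drivers on the same "cluster":
ingress = registry.create_network("overlay", "ingress")
local = registry.create_network("bridge", "docker0")
print(ingress["driver"], local["driver"])
```

Because each network carries a reference to exactly one driver, an overlay network and a bridge network never touch each other's data plane, which is the isolation property described above.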
Libnetwork is designed and implemented to serve Docker Swarm's requirements for running Docker's distributed applications. That is, Libnetwork is effectively Docker's networking fabric. The foundation of Libnetwork is a model called the Container Network Model (CNM). It is a well-defined basic model that describes how containers connect to given networks. The CNM consists of three components:
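The three CNM components are the Sandbox (a container's network stack), the Endpoint (the attachment point), and the Network (a group of endpoints that can communicate). The sketch below models their relationships with simple Python classes; these are illustrative data structures, not Libnetwork's real types.

```python
# Sketch of the CNM's three components and how they relate:
# a Sandbox joins a Network through an Endpoint.
# (Illustrative only; Libnetwork's actual implementation is in Go.)

class Network:
    """A group of endpoints that can communicate with each other."""
    def __init__(self, name):
        self.name = name
        self.endpoints = []

class Sandbox:
    """A container's network configuration: interfaces, routes, DNS."""
    def __init__(self, container_id):
        self.container_id = container_id
        self.endpoints = []

class Endpoint:
    """Joins exactly one Sandbox to exactly one Network."""
    def __init__(self, network, sandbox):
        self.network = network
        self.sandbox = sandbox
        network.endpoints.append(self)
        sandbox.endpoints.append(self)

backend = Network("backend")
web = Sandbox("web-container")
Endpoint(backend, web)  # the container is now attached to the network
print(len(backend.endpoints), len(web.endpoints))
```

A container with two endpoints into two different networks can reach both, which is how a single Sandbox participates in, say, a front-end and a back-end network at once.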
The drivers represent the Data Plane. Every driver, be it overlay, bridge, or MacVLAN, comes in the form of a plugin, and each plugin works in its own specific Data Plane.
By default, the system ships with a built-in IPAM. This is important because each container must have an IP address attached: a built-in IPAM system allows each container to connect to the others, as in the traditional way, and gives others an IP address with which to talk to the container. We also need to define subnets as well as ranges of IP addresses. In addition, the system is designed so that IPAM is pluggable, which means we can bring our own DHCP driver or plumb the system into an existing DHCP server.
As previously mentioned, Libnetwork supports multihost networking out of the box. The components worth discussing for multihost networking are its Data and Control Planes.
The Control Plane included in Docker 1.12 uses a gossip mechanism as the general discovery system for nodes. This gossip-protocol-based network works on another layer, in parallel with the Raft consensus system. Basically, we have two different membership mechanisms working at the same time. Libnetwork allows drivers from other plugins to share this common control plane.
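The essence of gossip-based membership is that each node periodically exchanges its view of the cluster with a random peer, and all views converge without any central coordinator. The following is a toy push-pull simulation of that convergence; the real control plane uses a SWIM-style protocol with failure detection, which this sketch does not attempt to model.

```python
import random

def gossip_round(views, rng):
    """One round: every node exchanges (push-pull) its membership view
    with one random peer, and both keep the union of the two views."""
    nodes = list(views)
    for node in nodes:
        peer = rng.choice([n for n in nodes if n != node])
        merged = views[node] | views[peer]
        views[node] = set(merged)
        views[peer] = set(merged)

rng = random.Random(42)  # seeded for reproducibility
# Initially, each node only knows about itself.
views = {n: {n} for n in ["node1", "node2", "node3", "node4"]}
for _ in range(20):
    gossip_round(views, rng)

converged = all(len(v) == 4 for v in views.values())
print(converged)
```

Note that gossip gives only eventual, probabilistic consistency of membership, whereas Raft (running in parallel for manager state) provides strongly consistent agreement; that is why the two mechanisms coexist.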
These are the features of Libnetwork's Control Plane:
Docker 1.12 implements VIP-based service discovery in Swarm. This works by mapping a virtual IP address to each service's DNS record, and all DNS records are then shared via gossip. With the introduction of the concept of a service in Docker 1.12, this notion fits directly into the concept of discovery.
In Docker 1.11 and previous versions, it was necessary instead to use container names and aliases to "simulate" service discovery, and to rely on DNS round-robin to perform a primitive kind of load balancing.
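The difference between the two approaches can be shown with a toy resolver (not Docker's embedded DNS server; the names and addresses below are made up for illustration). With a VIP, clients always resolve one stable address and load balancing happens behind it; with round-robin DNS, the resolved address itself rotates across container IPs.

```python
from itertools import cycle

# Docker 1.12 style: the service name resolves to one stable virtual IP.
# Load balancing is done behind the VIP, not in DNS.
vip_records = {"web": "10.0.0.100"}

def resolve_vip(name):
    return vip_records[name]

# Pre-1.12 style: the alias resolves to the containers' IPs in rotation
# (DNS round-robin as a primitive load balancer).
rr_records = {"web": cycle(["10.0.0.2", "10.0.0.3"])}

def resolve_rr(name):
    return next(rr_records[name])

print(resolve_vip("web"), resolve_vip("web"))  # same VIP every time
print(resolve_rr("web"), resolve_rr("web"))    # alternating container IPs
```

The VIP approach also avoids a classic round-robin pitfall: clients that cache DNS answers keep hammering one container, whereas a stable VIP lets the load balancer spread connections regardless of client-side caching.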
Libnetwork carries on the principle of "batteries included, but removable," which is implemented through the plugin system. In the future, Libnetwork will gradually expand the plugin system to cover other networking areas, for example, load balancing.