A driver owns a network and is responsible for making it work and managing it, including IP Address Management (IPAM). The network controller provides an API to configure a driver with specific labels/options that are not directly visible to the user but are transparent to libnetwork and can be handled by drivers directly. Drivers can be both in-built (such as bridge, host, or overlay) and remote (from plugin providers), covering various use cases and deployment scenarios. The following figure explains the process:
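To make this contract concrete, the following Go sketch shows a trimmed-down driver interface. It is loosely modeled on libnetwork's driverapi package, but the method set and signatures here are simplified assumptions for illustration, not the exact API:

package sketch

// Driver is a simplified sketch of the contract a network driver
// fulfills toward libnetwork. The real driverapi interface carries
// more methods and richer argument types; only the lifecycle calls
// discussed in the text are kept here.
type Driver interface {
	// Config passes controller-level labels/options to the driver.
	Config(options map[string]interface{}) error

	// CreateNetwork asks the driver to realize a network; options
	// carry driver-specific labels that libnetwork leaves opaque.
	CreateNetwork(networkID string, options map[string]interface{}) error
	DeleteNetwork(networkID string) error

	// CreateEndpoint attaches a container-facing interface (for the
	// bridge driver, one end of a veth pair) to the network.
	CreateEndpoint(networkID, endpointID string, options map[string]interface{}) error
	DeleteEndpoint(networkID, endpointID string) error

	// Type returns the driver name, for example "bridge" or "overlay".
	Type() string
}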
The following are the in-built drivers:

Null: The null driver corresponds to docker --net=none. This option exists primarily for the case when no networking is required.

Bridge: A bridge driver is a wrapper on a Linux bridge that acts as a network for libcontainer. It creates a veth pair for each endpoint; one end is connected to the container and the other end is connected to the bridge. The following data structure represents the bridge driver:
type driver struct {
	config      *configuration
	network     *bridgeNetwork
	natChain    *iptables.ChainInfo // iptables chain for NAT rules
	filterChain *iptables.ChainInfo // iptables chain for filter rules
	networks    map[string]*bridgeNetwork
	store       datastore.DataStore
	sync.Mutex
}
Some of the actions performed by a bridge driver include creating the Linux bridge, creating the veth pairs for endpoints, and programming the iptables NAT and filter chains referenced in the preceding structure.
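For instance, the NAT step boils down to a single masquerade rule. The following Go snippet installs the default POSTROUTING rule that Docker's bridge driver sets up for the 172.17.0.0/16 subnet; shelling out to the iptables binary here is a simplification of libnetwork's own iptables package, and the snippet must run as root:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Masquerade traffic leaving containers on the default bridge
	// subnet (172.17.0.0/16) through any non-docker0 interface.
	// This mirrors the POSTROUTING rule the bridge driver installs.
	args := []string{
		"-t", "nat", "-A", "POSTROUTING",
		"-s", "172.17.0.0/16",
		"!", "-o", "docker0",
		"-j", "MASQUERADE",
	}
	if out, err := exec.Command("iptables", args...).CombinedOutput(); err != nil {
		log.Fatalf("iptables failed: %v: %s", err, out)
	}
}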
The following diagram shows how the network is represented, with veth pairs connecting endpoints to the docker0 bridge:
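As a concrete illustration of this wiring, the sketch below creates a veth pair and attaches the host end to the docker0 bridge using the github.com/vishvananda/netlink package. The interface names are arbitrary assumptions, error handling is condensed, and this stands in for (rather than reproduces) the bridge driver's own code; it must run as root:

package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Look up the docker0 bridge set up by the bridge driver.
	br, err := netlink.LinkByName("docker0")
	if err != nil {
		log.Fatalf("docker0 not found: %v", err)
	}

	// Create a veth pair; the peer end would later be moved into
	// the container's network namespace and renamed eth0.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "vethhost0"}, // hypothetical names
		PeerName:  "vethcont0",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatalf("veth creation failed: %v", err)
	}

	// Enslave the host end to docker0 and bring it up.
	if err := netlink.LinkSetMasterByIndex(veth, br.Attrs().Index); err != nil {
		log.Fatalf("attach to bridge failed: %v", err)
	}
	if err := netlink.LinkSetUp(veth); err != nil {
		log.Fatalf("link up failed: %v", err)
	}
}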
The overlay driver in libnetwork uses VXLAN along with a Linux bridge to create an overlaid address space, and it supports multi-host networking:
const (
	networkType  = "overlay"
	vethPrefix   = "veth"
	vethLen      = 7
	vxlanIDStart = 256
	vxlanIDEnd   = 1000
	vxlanPort    = 4789
	vxlanVethMTU = 1450
)

type driver struct {
	eventCh      chan serf.Event
	notifyCh     chan ovNotify
	exitCh       chan chan struct{}
	bindAddress  string
	neighIP      string
	config       map[string]interface{}
	peerDb       peerNetworkMap
	serfInstance *serf.Serf
	networks     networkTable
	store        datastore.DataStore
	ipAllocator  *idm.Idm
	vxlanIdm     *idm.Idm
	once         sync.Once
	joinOnce     sync.Once
	sync.Mutex
}
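The constants above map directly onto the Linux VXLAN device that the driver creates for a network. The following sketch creates such a device with the first VNI in the driver's range and the standard VXLAN UDP port, again via github.com/vishvananda/netlink; the device name and the standalone setup are illustrative assumptions, not the overlay driver's exact steps:

package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Create a VXLAN device using values matching the driver's
	// constants: the first VNI in its range, the standard VXLAN
	// UDP port, and the reduced MTU that leaves room for the
	// VXLAN encapsulation overhead.
	vxlan := &netlink.Vxlan{
		LinkAttrs: netlink.LinkAttrs{
			Name: "vxlan256", // hypothetical name
			MTU:  1450,       // vxlanVethMTU
		},
		VxlanId:  256,  // vxlanIDStart
		Port:     4789, // vxlanPort
		Learning: true, // let the kernel learn peer MACs
	}
	if err := netlink.LinkAdd(vxlan); err != nil {
		log.Fatalf("vxlan creation failed: %v", err)
	}
	if err := netlink.LinkSetUp(vxlan); err != nil {
		log.Fatalf("link up failed: %v", err)
	}
}

In the driver itself, the VNI for each network is allocated out of the vxlanIDStart to vxlanIDEnd range by the vxlanIdm allocator shown in the preceding structure.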