Flannel is a virtual network layer that provides a subnet to each host for use with Docker containers. It is packaged with CoreOS but can also be configured on other Linux distributions. Flannel creates the overlay by connecting itself to the Docker bridge, to which the containers are attached, as shown in the following figure. To set up Flannel, two host machines or VMs are required, which can run CoreOS or another Linux distribution, as shown in this figure:
Flannel comes preinstalled on CoreOS. On other flavors of Linux, the code can be cloned from GitHub and built locally, if required, as shown here:
# git clone https://github.com/coreos/flannel.git
Cloning into 'flannel'...
remote: Counting objects: 2141, done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 2141 (delta 6), reused 0 (delta 0), pack-reused 2122
Receiving objects: 100% (2141/2141), 4.
Checking connectivity... done.
# sudo docker run -v `pwd`:/opt/flannel -i -t google/golang /bin/bash -c "cd /opt/flannel && ./build"
Building flanneld...
CoreOS machines can be easily configured using Vagrant and VirtualBox, as per the tutorial mentioned in the following link:
https://coreos.com/os/docs/latest/booting-on-vagrant.html
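Flannel reads its cluster-wide network configuration from etcd. As a sketch (using Flannel's default configuration key, `/coreos.com/network/config`, and the same 10.1.0.0/16 range that appears in the `subnet.env` files later in this section), the address space can be stored before `flanneld` starts:

```shell
# Store the cluster-wide Flannel address range in etcd; flanneld on each
# host reads this key at startup and leases itself a subnet from it.
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```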
After the machines are created and logged in to, we will find a Flannel bridge automatically created using the etcd configuration:
# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.1.30.0  netmask 255.255.0.0  destination 10.1.30.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 243  bytes 20692 (20.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 304  bytes 25536 (24.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
The Flannel environment can be checked by viewing subnet.env:
# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.30.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
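These variables are meant to be consumed by sourcing the file; the values then feed the Docker daemon's --bip and --mtu flags. A minimal local sketch (recreating the file under /tmp for illustration, rather than touching the live /run/flannel/subnet.env):

```shell
# Recreate the first host's subnet.env locally for illustration
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.30.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
EOF

# Source it; the variables become available to the shell that restarts Docker
source /tmp/subnet.env
echo "bridge IP: ${FLANNEL_SUBNET}  MTU: ${FLANNEL_MTU}"
```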
The Docker daemon needs to be restarted with the following commands in order to get its networking re-instantiated with the subnet from the Flannel bridge:
# source /run/flannel/subnet.env
# sudo rm /var/run/docker.pid
# sudo ifconfig docker0 ${FLANNEL_SUBNET}
# sudo docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} &
INFO[0000] [graphdriver] using prior storage driver "overlay"
INFO[0000] Option DefaultDriver: bridge
INFO[0000] Option DefaultNetwork: bridge
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
INFO[0000] Firewalld running: false
INFO[0000] Loading containers: start.
..............
INFO[0000] Loading containers: done.
INFO[0000] Daemon has completed initialization
INFO[0000] Docker daemon commit=cedd534-dirty execdriver=native-0.2 graphdriver=overlay version=1.8.3
The Flannel environment for the second host can also be checked by viewing subnet.env:
# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.31.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
A different subnet is allocated to the second host. The Docker service can also be restarted on this host by pointing it to the Flannel bridge:
# source /run/flannel/subnet.env
# sudo ifconfig docker0 ${FLANNEL_SUBNET}
# sudo docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} &
INFO[0000] [graphdriver] using prior storage driver "overlay"
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
INFO[0000] Option DefaultDriver: bridge
INFO[0000] Option DefaultNetwork: bridge
INFO[0000] Firewalld running: false
INFO[0000] Loading containers: start.
....
INFO[0000] Loading containers: done.
INFO[0000] Daemon has completed initialization
INFO[0000] Docker daemon commit=cedd534-dirty execdriver=native-0.2 graphdriver=overlay version=1.8.3
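The two leases illustrate the allocation scheme: flanneld carves per-host /24 subnets out of the shared /16, so container addresses are unique cluster-wide. A small sketch of the arithmetic (hypothetical variable names; the values are taken from the two subnet.env files shown above):

```shell
HOST1_SUBNET=10.1.30.1/24   # lease held by the first host
HOST2_SUBNET=10.1.31.1/24   # lease held by the second host

# Number of /24 subnets available inside the /16: 2^(24-16) = 256 hosts
echo "available /24 leases: $((2 ** (24 - 16)))"

# The per-host network prefix is everything before the last octet;
# the prefixes differ, so the hosts' containers never share addresses.
echo "host1 prefix: ${HOST1_SUBNET%.*}  host2 prefix: ${HOST2_SUBNET%.*}"
```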
Docker containers can be created on their respective hosts, and the Flannel overlay network connectivity between them can be tested using the ping command.
For Host 1, use the following commands:
# docker run -it ubuntu /bin/bash
INFO[0013] POST /v1.20/containers/create
INFO[0013] POST /v1.20/containers/1d1582111801c8788695910e57c02fdba593f443c15e2f1db9174ed9078db809/attach?stderr=1&stdin=1&stdout=1&stream=1
INFO[0013] POST /v1.20/containers/1d1582111801c8788695910e57c02fdba593f443c15e2f1db9174ed9078db809/start
INFO[0013] POST /v1.20/containers/1d1582111801c8788695910e57c02fdba593f443c15e2f1db9174ed9078db809/resize?h=44&w=80
root@1d1582111801:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0a:01:1e:02
          inet addr:10.1.30.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe01:1e02/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:969 (969.0 B)  TX bytes:508 (508.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
For Host 2, use the following commands:
# docker run -it ubuntu /bin/bash
root@ed070166624a:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0a:01:1f:02
          inet addr:10.1.31.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe01:1f02/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:18 errors:0 dropped:2 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1544 (1.5 KB)  TX bytes:598 (598.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
root@ed070166624a:/# ping 10.1.30.2
PING 10.1.30.2 (10.1.30.2) 56(84) bytes of data.
64 bytes from 10.1.30.2: icmp_seq=1 ttl=60 time=3.61 ms
64 bytes from 10.1.30.2: icmp_seq=2 ttl=60 time=1.38 ms
64 bytes from 10.1.30.2: icmp_seq=3 ttl=60 time=0.695 ms
64 bytes from 10.1.30.2: icmp_seq=4 ttl=60 time=1.49 ms
Thus, in the preceding example, we can see the complexity that Flannel reduces: the flanneld agent runs on each host and is responsible for allocating a subnet lease out of a preconfigured address space. Flannel internally uses etcd to store the network configuration and other details, such as the host IPs and the allocated subnets. Packet forwarding between hosts is achieved using the configured backend strategy, such as UDP encapsulation or VXLAN.
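The backend is also selected through the etcd network configuration. As a sketch (again using Flannel's default key, `/coreos.com/network/config`), the VXLAN backend could be chosen instead of the default UDP encapsulation as follows:

```shell
# Select the VXLAN backend in Flannel's network configuration in etcd
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.1.0.0/16", "Backend": { "Type": "vxlan" } }'
```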
Flannel also aims to resolve the problem of deploying Kubernetes on cloud providers other than GCE, where a Flannel overlay mesh network can ease the issue of assigning a unique IP address to each pod by creating a subnet for each server.