The following are the steps for creating containers using an overlay network:
1. Launch container c0 on mhs-demo0 and connect it to the my-net network:

$ eval $(docker-machine env mhs-demo0)
$ sudo docker run -i -t --name=c0 --net=my-net ubuntu /bin/bash
root@843b16be1ae1:/#
2. Execute ifconfig to find the IP address of c0. In this case, it is 10.0.0.4:

root@843b16be1ae1:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0a:00:00:04
          inet addr:10.0.0.4  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe00:4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1474 (1.4 KB)  TX bytes:1474 (1.4 KB)

eth1      Link encap:Ethernet  HWaddr 02:42:ac:12:00:03
          inet addr:172.18.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe12:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
3. Launch container c1 on mhs-demo1 and connect it to the my-net network:

$ eval $(docker-machine env mhs-demo1)
$ sudo docker run -i -t --name=c1 --net=my-net ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
0bf056161913: Pull complete
1796d1c62d0c: Pull complete
e24428725dd6: Pull complete
89d5d8e8bafb: Pull complete
Digest: sha256:a2b67b6107aa640044c25a03b9e06e2a2d48c95be6ac17fb1a387e75eebafd7c
Status: Downloaded newer image for ubuntu:latest
root@2ce83e872408:/#
4. Execute ifconfig to find the IP address of c1. In this case, it is 10.0.0.3:

root@2ce83e872408:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0a:00:00:03
          inet addr:10.0.0.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe00:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1066 (1.0 KB)  TX bytes:578 (578.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:ac:12:00:02
          inet addr:172.18.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe12:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:578 (578.0 B)  TX bytes:578 (578.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
5. Ping c0 (10.0.0.4) from c1 (10.0.0.3) and vice versa:

root@2ce83e872408:/# ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.370 ms
64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=0.443 ms
64 bytes from 10.0.0.4: icmp_seq=3 ttl=64 time=0.441 ms
The Container Network Interface (CNI) is a specification that defines how executable plugins can be used to configure network interfaces for Linux application containers. The official CNI GitHub repository provides a Go library for implementing the specification.
The container runtime first creates a new network namespace for the container, then determines which network the container should belong to and which plugins need to be executed. The network configuration, written in JSON, defines at container startup which plugin should be executed for the network. CNI is an evolving open source technology derived from the rkt networking proposal. Each CNI plugin is implemented as an executable and is invoked by a container management system such as Docker or rkt.
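As an illustration of such a JSON network configuration, the following is a minimal sketch for a bridge-type plugin with host-local address management; the name, bridge, and subnet values are assumptions chosen for the example, not taken from the setup above:

```json
{
    "cniVersion": "0.3.1",
    "name": "my-net",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.0.0.0/24",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
```

The runtime would look for a plugin executable named after the type field (here bridge) and pass it this configuration on stdin.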
The runtime then inserts the container into the network namespace, typically by attaching one end of a veth pair to the container and the other end to a bridge. It then assigns an IP address to the interface and sets up routes consistent with IP address management by invoking the appropriate IPAM plugin.
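The operations a simple bridge-type plugin performs can be sketched with plain ip commands; this is a simplified illustration that must run as root and assumes an existing bridge, and the names ctr-ns and cni0 and the addresses are placeholders, not part of the setup above:

```shell
# Create a network namespace standing in for the container's namespace
ip netns add ctr-ns
# Create a veth pair; one end stays on the host, the other moves into the namespace
ip link add veth-host type veth peer name veth-ctr
ip link set veth-ctr netns ctr-ns
# Attach the host end to the bridge and bring it up
ip link set veth-host master cni0
ip link set veth-host up
# Assign the IPAM-provided address and default route inside the namespace
ip netns exec ctr-ns ip addr add 10.0.0.5/24 dev veth-ctr
ip netns exec ctr-ns ip link set veth-ctr up
ip netns exec ctr-ns ip route add default via 10.0.0.1
```

A real CNI plugin performs the same steps programmatically and reports the assigned address back to the runtime.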
The CNI model is currently used for networking by the kubelet in Kubernetes. The kubelet is the primary agent running on every Kubernetes node and is responsible for running the containers scheduled onto that node.
The CNI package for the kubelet is defined in the following Kubernetes package:
const (
    CNIPluginName        = "cni"
    DefaultNetDir        = "/etc/cni/net.d"
    DefaultCNIDir        = "/opt/cni/bin"
    DefaultInterfaceName = "eth0"
    VendorCNIDirTemplate = "%s/opt/%s/bin"
)

func ProbeNetworkPlugins(pluginDir string) []network.NetworkPlugin