FlowVisor slicing

In this section, you will learn how to slice your OpenFlow network, construct logical networks over a physical infrastructure, and have each slice controlled by its own OpenFlow controller. Along the way, you will also learn the concept of flowspaces and see how the centralized control feature of OpenFlow enables flexible network slicing. The network topology for this exercise is shown in the following diagram, which includes four OpenFlow switches and four hosts.

Switches s1 and s4 are connected to each other through s2 via a low-bandwidth path (that is, 1 Mbps links, defined as LBW_path in the following custom topology script) and through s3 via a high-bandwidth path (that is, 10 Mbps links, defined as HBW_path in the same script):

Network topology

This network topology can be constructed using the following Mininet command (assuming that the flowvisor_topo.py file is available in the current directory). Mininet installation was presented in Chapter 3, Implementing the OpenFlow Switch, and Mininet was utilized as part of the OpenFlow laboratory in Chapter 5, Setting Up the Environment:

$ sudo mn --custom flowvisor_topo.py --topo slicingtopo --link tc \
  --controller remote --mac --arp

The customized Python script defines a topology named slicingtopo, which then becomes accessible on the command line of Mininet:

#!/usr/bin/python 
# flowvisor_topo.py 
from mininet.topo import Topo 
class FVTopo(Topo): 
    def __init__(self): 
        # Initialize topology 
        Topo.__init__(self) 
        # Create configuration templates for hosts and links 
        hconfig = {'inNamespace':True} 
        LBW_path = {'bw': 1} 
        HBW_path = {'bw': 10} 
        host_link_config = {} 
        # Create switch nodes 
        for i in range(4): 
            sconfig = {'dpid': "%016x" % (i+1)} 
            self.addSwitch('s%d' % (i+1), **sconfig) 
        # Create host nodes (h1, h2, h3, h4) 
        for i in range(4): 
            self.addHost('h%d' % (i+1), **hconfig) 
        # Add switch links according to the topology 
        self.addLink('s1', 's2', **LBW_path) 
        self.addLink('s2', 's4', **LBW_path) 
        self.addLink('s1', 's3', **HBW_path) 
        self.addLink('s3', 's4', **HBW_path) 
        # Add host links 
        self.addLink('h1', 's1', **host_link_config) 
        self.addLink('h2', 's1', **host_link_config) 
        self.addLink('h3', 's4', **host_link_config) 
        self.addLink('h4', 's4', **host_link_config) 
# The topos dictionary must be at module level (outside the class) so that
# the --topo slicingtopo option is recognized on the mn command line
topos = { 'slicingtopo': ( lambda: FVTopo() ) } 
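
Once Mininet is up with this topology, you can optionally confirm from the Mininet console that the nodes and links match the diagram (net and dump are standard Mininet CLI commands; pings will not succeed yet, because no controller has been attached to the switches):

mininet> net
mininet> dump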

After building the network topology, the next step is to create a configuration for FlowVisor, which will be run in a new console terminal. Assuming that you have already installed FlowVisor on a separate virtual machine, the following command creates this configuration:

$ sudo -u flowvisor fvconfig generate /etc/flowvisor/config.json

The fvadmin password can be left blank by just hitting the return (Enter) key when prompted. To activate this configuration, simply start FlowVisor:

$ sudo /etc/init.d/flowvisor start

Using the fvctl utility, enable the FlowVisor topology controller. The -f command-line parameter points to a password file; since no password has been set for FlowVisor, it can simply point to /dev/null. For this change to take effect, FlowVisor must be restarted:

$ fvctl -f /dev/null set-config --enable-topo-ctrl
$ sudo /etc/init.d/flowvisor restart

All the OpenFlow switches in the Mininet network should connect to FlowVisor once it is started. Retrieve the FlowVisor configuration to ensure that it is running properly:

$ fvctl -f /dev/null get-config

If FlowVisor is running properly, you will see its configuration (in JSON format), similar to the following screen output:

FlowVisor configuration in JSON format

Using the following command, list the existing slices and ensure that fvadmin (the default slice) is the only one shown in the output:

$ fvctl -f /dev/null list-slices

Issue the following command to print the existing flowspaces and ensure that there are none yet:

$ fvctl -f /dev/null list-flowspace

Listing the datapaths confirms that all the switches have connected to FlowVisor. You can check this by executing the following fvctl command; you might have to wait a few seconds before running it, to give the switches (s1, s2, s3, and s4) enough time to connect:

$ fvctl -f /dev/null list-datapaths  

In the next step, ensure that all the network links are active by running the following command:

$ fvctl -f /dev/null list-links

The output lists, for each link, the DPIDs and the source and destination ports that are connected to each other.

Now, we are ready to slice the network. In this experiment, we will create two slices, named upper and lower, which are shown as Slice: Upper and Slice: Lower in the following diagram:

Upper and Lower slices of the experimental network

Each slice is controlled by a separate controller, which handles all the packet traffic in its own slice. The following command creates a slice named upper and connects it to a controller listening on tcp:localhost:10001:

$ fvctl -f /dev/null add-slice upper tcp:localhost:10001 admin@upperslice

Leave the slice password empty by pressing return (Enter) when prompted. Similarly, you can create a slice named lower and connect it to a controller listening on tcp:localhost:10002. Again, leave the slice password empty by hitting return (Enter) when prompted:

$ fvctl -f /dev/null add-slice lower tcp:localhost:10002 admin@lowerslice

Now, by executing the list-slices command, ensure that the slices were successfully added:

$ fvctl -f /dev/null list-slices

Besides the default fvadmin slice, you should be able to see both the upper and lower slices, and all of them should be enabled. In the next step, you will create flowspaces. A flowspace associates packets of a particular type with a specific slice; when a packet matches more than one flowspace, FlowVisor assigns it to the flowspace with the highest priority number. A flowspace match is described by a series of comma-separated field=value assignments. You can learn more about the add-flowspace command as follows:

$ fvctl add-flowspace -h
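
In general, the add-flowspace invocations used in the remainder of this section follow the form shown below; the angle-bracket placeholders are only descriptive, standing for the name of the flowspace, the datapath ID of the switch, the priority number, the match description, and the slice permissions:

$ fvctl -f <password-file> add-flowspace <flowspace-name> <dpid> <priority> <match> <slice-perm>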

Now, we create a flowspace named dpid1-port1 (with priority value 1) that maps all the traffic on port 1 of switch s1 to the upper slice in the network topology. This can be done by executing the following command:

$ fvctl -f /dev/null add-flowspace dpid1-port1 1 1 in_port=1 upper=7

Here, we give the upper slice all permissions: DELEGATE (1), READ (2), and WRITE (4), that is, 1 + 2 + 4 = 7. In a similar way, we create a flowspace named dpid1-port3 that maps all the traffic on port 3 of switch s1 to the upper slice in the network:

$ fvctl -f /dev/null add-flowspace dpid1-port3 1 1 in_port=3 upper=7

By using the match value of any, we can create a flowspace for matching all the traffic at a switch. So, we add switch s2 to the upper slice by running the following command:

$ fvctl -f /dev/null add-flowspace dpid2 2 1 any upper=7 

Now, we create two more flowspaces (dpid4-port1 and dpid4-port3) to add ports 1 and 3 of switch s4 to the upper slice:

$ fvctl -f /dev/null add-flowspace dpid4-port1 4 1 in_port=1 upper=7
$ fvctl -f /dev/null add-flowspace dpid4-port3 4 1 in_port=3 upper=7

Ensure that these flowspaces are correctly added by running the following command:

$ fvctl -f /dev/null list-flowspace

You should see all the flowspaces (five in total) that you just created. Now, we create flowspaces for the lower slice:

$ fvctl -f /dev/null add-flowspace dpid1-port2 1 1 in_port=2 lower=7
$ fvctl -f /dev/null add-flowspace dpid1-port4 1 1 in_port=4 lower=7
$ fvctl -f /dev/null add-flowspace dpid3 3 1 any lower=7
$ fvctl -f /dev/null add-flowspace dpid4-port2 4 1 in_port=2 lower=7
$ fvctl -f /dev/null add-flowspace dpid4-port4 4 1 in_port=4 lower=7

Again, ensure that the flowspaces are correctly added:

$ fvctl -f /dev/null list-flowspace

Now you can launch two OpenFlow controllers on your local host, listening on ports 10001 and 10002 and corresponding to the upper and lower slices, respectively. Each controller should run a small Net App that reactively installs routes based on the destination MAC address; one possible way to launch them is shown below. After a short delay, both controllers should connect to FlowVisor, and you can then verify that host h1 can ping h3 but not h2 and h4 (and vice versa).
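
One option, assuming POX is available from the earlier OpenFlow laboratory, is to use its forwarding.l2_learning component as a stand-in for such a Net App and start two instances in two separate terminals (the --port values must match the ports given to add-slice):

$ ./pox.py openflow.of_01 --port=10001 forwarding.l2_learning
$ ./pox.py openflow.of_01 --port=10002 forwarding.l2_learning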

Run the following commands in the Mininet console:

mininet> h1 ping -c1 h3
mininet> h1 ping -c1 -W1 h2
mininet> h1 ping -c1 -W1 h4

Verify that h2 can ping h4 but not h1 and h3 (and vice versa). Run the following commands in the Mininet console:

mininet> h2 ping -c1 h4
mininet> h2 ping -c1 -W1 h1
mininet> h2 ping -c1 -W1 h3

This concludes a simple network slicing exercise based on switch ports. However, by defining other slicing rules and developing other Net Apps, you can provide interesting and innovative services for each slice. For instance, you can differentiate traffic and treat it accordingly across the upper and lower network slices; we leave this to you as an exercise.
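
As a hypothetical starting point for such an exercise (the flowspace name, priority, and match values below are only illustrative), note that a flowspace can match on more than the input port. For example, the following rule would steer HTTP traffic (IPv4, TCP destination port 80) arriving on port 1 of s1 to the upper slice ahead of the earlier port-based flowspace, because it carries a higher priority:

$ fvctl -f /dev/null add-flowspace dpid1-http 1 2 in_port=1,dl_type=0x0800,nw_proto=6,tp_dst=80 upper=7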

FlowVisor is a powerful slicing tool, but it has certain limitations because some resources cannot be virtualized. Virtual links in a FlowVisor topology depend on the physical infrastructure that currently exists, so creating a complex virtual topology would require future modifications to FlowVisor.

The flowspace itself is another limitation of FlowVisor. For instance, two slices might both want to use the IP address block 20.0.0.0/16, but with FlowVisor it is impossible for both slices to control this flowspace simultaneously.
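
To see this limitation in practice, consider a hypothetical pair of overlapping flowspaces (the names, priorities, and address block are only illustrative):

$ fvctl -f /dev/null add-flowspace shared-upper 1 10 nw_src=20.0.0.0/16 upper=7
$ fvctl -f /dev/null add-flowspace shared-lower 1 5 nw_src=20.0.0.0/16 lower=7

Because both flowspaces match the same packets, FlowVisor hands those packets to the flowspace with the higher priority, so the lower slice never sees traffic from this address block.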
