Chapter 8. PfR Provisioning


This chapter covers the following topics:

Image Dual-data-center enterprise topology

Image Overlay routing

Image Hub site configuration

Image Transit site configuration

Image Single CPE branch configuration

Image Dual CPE branch configuration

Image Path selection and the use of transit site and path preference


Configuring PfR for the IWAN domain includes the following steps:

Image Configuring the hub site: This is the central site that contains the Hub MC, which is responsible for distributing the PfR domain policies to all other sites. Configuration tasks at the hub site include the definition of the Hub MC and multiple BRs. The BRs are DMVPN hub routers. Every BR terminates only one overlay network. There is only one hub site in an IWAN domain.

Image Configuring one or more transit sites: This is another type of central site that houses DMVPN hub routers. Network traffic from branch sites may terminate at servers located in this central site itself or at other locations. Transit sites can provide branch-to-branch communications, where branches do not connect to the same WAN transport. Configuration tasks at the transit site include the definition of one MC and one or more Transit BRs per transit site. Every BR terminates only one overlay network. An IWAN domain can have multiple transit sites.

Image Configuring branch sites: Branch sites are remote sites that house DMVPN spoke routers. Configuration tasks include the definition of the MC and one or more BRs. A BR on a branch can support multiple DMVPN tunnels.

IWAN Domain

The IWAN domain includes multiple central sites and several branch sites. At each site, the MC is the local decision maker and controls the BRs responsible for performance measurement and path enforcement.

One of the central sites is defined as the hub site. The MC defined in this hub site is the Hub MC and acts as

Image A local MC for the site: It makes decisions and instructs the local BRs how to forward TCs.

Image A global domain controller for the IWAN domain: It is responsible for defining and distributing the PfR policies for that IWAN domain.

Branch sites typically include a single customer premises equipment (CPE) router or two CPEs. Each site is connected to multiple paths that have various SLAs. Every branch site has its own local MC and one or multiple BRs.

Topology

Figure 8-1 displays the dual-hub and dual-cloud topology used in this book to explain the configuration of PfR. The transport-independent design is based on DMVPN with two providers, one being considered the primary (MPLS) and one the secondary (Internet). Branch sites are connected to both DMVPN clouds, and both tunnels are up.

Image

Figure 8-1 Overlay Network

DMVPN tunnel 100 uses 192.168.100.0/24 and is associated with the MPLS transport, and DMVPN tunnel 200 uses 192.168.200.0/24 for the Internet transport. PfR does not have knowledge about the transport (underlay) network and only controls traffic destined for sites that are connected to the DMVPN network.
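
For reference, the corresponding overlay addressing on a spoke such as R31 looks like the following sketch. The host portion of each tunnel address (.31 for R31) is an assumption for illustration, consistent with the subnets just described and with the NHRP mappings shown later in Example 8-1.


R31
interface Tunnel100
 description DMVPN-MPLS
 ip address 192.168.100.31 255.255.255.0
!
interface Tunnel200
 description DMVPN-INET
 ip address 192.168.200.31 255.255.255.0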

The dual-hub and dual-cloud topology provides active-active WAN paths that enable connectivity to each DMVPN hub for a transport, and it provides redundancy in the event of a hub failure. In the topology, a DMVPN hub router connects to only one transport on the Transit BRs (Site 1 and Site 2). R11 and R21 are the MPLS DMVPN hub routers for tunnel 100 (192.168.100.0/24). R12 and R22 are the Internet DMVPN hub routers for tunnel 200 (192.168.200.0/24).


Note

At the time of this writing, it is mandatory to use only one transport per DMVPN hub to guarantee that spoke-to-spoke tunnels will be established. Normal traffic flow follows the Cisco Express Forwarding path, so in a dual-tunnel hub scenario a packet from one spoke (R31) to a different spoke (R41) can enter the hub on one tunnel interface and exit on the other, preventing a spoke-to-spoke tunnel from forming. This topology demonstrates one transport per DMVPN hub.


Site 3 and Site 4 do not have redundant routers, so R31 and R41 connect to both transports via DMVPN tunnels 100 and 200. However, at Site 5, redundant routers have been deployed. R51 connects to the MPLS transport with DMVPN tunnel 100, and R52 connects to the Internet transport with DMVPN tunnel 200. R51 and R52 establish connectivity with each other via a cross-link.


Note

At a branch site, the BRs must be directly connected to the Branch MC. This can be a direct link, a transit VLAN through a switch, or a GRE tunnel.
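
As a sketch of the GRE tunnel option, a point-to-point tunnel between R51 (Branch MC) and R52 (BR) could be built over the cross-link. The tunnel number, the 10.5.12.0/30 addressing, and R52's 10.5.0.52 loopback are illustrative assumptions; only R51's 10.5.0.51 loopback appears elsewhere in this chapter.


R51
interface Tunnel10
 ip address 10.5.12.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 10.5.0.52

R52
interface Tunnel10
 ip address 10.5.12.2 255.255.255.252
 tunnel source Loopback0
 tunnel destination 10.5.0.51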


Table 8-1 provides the network site, site subnet, Loopback0 IP address, and transport connectivity for the routers.

Image

Table 8-1 Topology Table

As part of the dual-hub and dual-cloud topology, every DMVPN spoke router has two NHRP mappings for every DMVPN tunnel interface. The NHRP mapping correlates to the DMVPN hub router for the transport intended for that DMVPN tunnel. The NHRP mappings are as follows:

Image Tunnel 100 (MPLS) uses R11 and R21.

Image Tunnel 200 (Internet) uses R12 and R22.

Example 8-1 demonstrates R31’s NHRP mappings for DMVPN tunnel 100.

Example 8-1 R31’s Tunnel 100 NHRP Mapping Configuration


interface Tunnel100
 description DMVPN-MPLS
 ip nhrp nhs 192.168.100.11 nbma 172.16.11.1 multicast
 ip nhrp nhs 192.168.100.21 nbma 172.16.21.1 multicast



Note

The topology uses the DMVPN configuration displayed in Examples 3-48 and 3-49 and includes the use of the front-door VRF explained in Chapter 3, “Dynamic Multipoint VPN.”


Overlay Routing

As explained in Chapter 4, “Intelligent WAN (IWAN) Routing,” transit routing at branch locations is restricted because it introduces suboptimal routing or unpredictable traffic patterns. NHRP redirect messages establish spoke-to-spoke tunnels when multiple paths exist for a branch site connecting to a network behind a different branch site.

Transit routing at the centralized sites (hub or transit sites) is acceptable and commonly occurs in the following two scenarios:

Image Every centralized site advertises a unique set of network prefixes. Each site advertises its local networks with a network summary, and in addition all sites advertise broader network prefixes that encompass the entire enterprise (all data centers, campuses, and remote locations). The more specific prefix is used for optimal direct routing, but in failure scenarios, transit connectivity can be established through the alternative site’s broader network advertisements, as shown in Figure 8-2.

Image

Figure 8-2 Site 1 and Site 2 Advertising Two Different Sets of Prefixes

Image Both sites advertise the same set of network prefixes. These sites (Site 1 and Site 2) provide transit connectivity to the real data centers connected over a WAN core that connects to both Site 1 and Site 2. This scenario is shown in Figure 8-3.

Advertising Site Local Subnets

In this scenario, each site advertises its local subnets and a network summary for all the routes in the WAN topology. Figure 8-2 illustrates the scenario where Site 1 and Site 2 host their own data centers with two different sets of prefixes. In this scenario, Site 1 advertises the 10.1.0.0/16 network, and Site 2 advertises the 10.2.0.0/16 network.

Table 8-2 displays the potential PfR next-hop addresses for a prefix for a specific tunnel interface. Notice that there is only one potential next-hop address for PfR to use.

Image

Table 8-2 Routing Table—Different Prefix

Advertising the Same Subnets

In this scenario, each site advertises its local subnets and a network summary for all the routes in the WAN topology. These sites (Site 1 and Site 2) provide transit connectivity to the real data centers connected over a WAN core that connects to both Site 1 and Site 2.

Figure 8-3 displays a dual-data-center topology where Site 1 and Site 2 advertise the same set of prefixes from all the hub routers (R11, R12, R21, and R22).

Image

Figure 8-3 Site 1 and Site 2 Advertising the Same Prefixes

Table 8-3 displays the potential PfR next-hop addresses for a prefix for a specific tunnel interface. Notice that there are two potential next-hop addresses for PfR to use. The order is not significant.

Image

Table 8-3 Routing Table—Same Prefix

In this scenario, each branch router has multiple next hops available for each tunnel interface. Support for multiple next hops per DMVPN interface, and for the same prefix advertised from multiple PfR sites, was introduced in IOS XE 3.15 and IOS 15.5(2)T under the name Transit Site support.


Note

Multiple next hops per DMVPN tunnel are supported only for traffic flowing from branch sites to central sites (hub or transit sites).


Traffic Engineering for PfR

As shown in Chapter 5, it is a good practice to manipulate the routing protocols so that traffic flows across the preferred transport. Influencing the routing table ensures that when PfR is disabled, traffic follows the Cisco Express Forwarding table derived from the RIB and is forwarded over the preferred DMVPN tunnel (transport).

The following logic applies for the transit routing scenarios provided earlier:

Image Advertising the site local subnets (Figure 8-2)—R11 is preferred over R12 for 10.1.0.0/16 and R21 is preferred over R22 for prefix 10.2.0.0/16.

Image Advertising the same subnets (Figure 8-3)—R11 and R21 are preferred over R12 and R22 for all prefixes.
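
As a sketch of how this preference can be expressed, assuming an EIGRP overlay as described in Chapter 5, the interface delay can be raised on the Internet tunnels so that the RIB prefers the MPLS path. The delay values below are illustrative assumptions only and would be applied consistently on the hub and branch tunnel interfaces.


All routers with DMVPN tunnels
interface Tunnel100
 delay 1000
!
interface Tunnel200
 delay 2000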

PfRv3 always checks for a parent route of any destination prefix before creating a channel or controlling a TC. PfR selects next hops in the following order of lookup:

Image Check to see if there is an NHRP shortcut route (branch only).

Image If not, check in the order of BGP, EIGRP, static, and RIB.

Image If at any point an NHRP shortcut route appears, PfRv3 picks it up and stops using the parent route learned from the routing protocols.


Note

It is essential to make sure that all destination prefixes are reachable over all available paths so that PfR can create the corresponding channels and control the TCs. Remember that PfR checks within the BGP or EIGRP topology table.


Specific NHRP shortcut routes are installed at the spokes by NHRP as needed. The validity of each NHRP shortcut is determined by the presence of a less specific (covering) IGP route in the RIB. When a preferred path is defined in the routing configuration, it is key to disable the NHRP Route Watch feature on the secondary tunnel because the covering prefix is not available in the RIB for that tunnel.

The command no nhrp route-watch disables the NHRP Route Watch feature and allows the creation of a spoke-to-spoke tunnel over the secondary DMVPN network.

Example 8-2 illustrates how to disable NHRP Route Watch.

Example 8-2 Branch Routers Internet Tunnel Configuration with NHRP Route Watch Disabled


R31, R41, and R51
interface Tunnel 200
 no nhrp route-watch


PfR Components

Figure 8-4 illustrates the PfR components used in an IWAN domain. You need to configure an MC per site controlling one or more BRs. One of the MCs also hosts the domain controller component.

Image

Figure 8-4 PfR Components


Note

It is mandatory to use a dedicated Hub MC and dedicated Transit MCs.


Site 1 is defined as the hub site for the domain. R10 resides in Site 1 and is defined as the Hub MC (domain controller). PfR policies are defined on the Hub MC, and all other MCs in the domain peer with R10. In Site 1, R11 is a Transit BR for the DMVPN MPLS transport, and R12 is a Transit BR for the DMVPN Internet transport.

Site 2 is defined as a transit site with a POP ID of 1 (POP ID 1). R20 is defined as a Transit MC. R21 is a Transit BR for the DMVPN MPLS network, and R22 is a Transit BR for the DMVPN INET network.

Site 3, Site 4, and Site 5 are defined as branch sites. R31 and R41 each act as both MC and BR for their sites. R51 is a Branch MC/BR (dual function) for Site 5, and R52 is a BR for Site 5.

All sites are assigned a site ID based on the IP address of the loopback interface of the local MC as shown in Table 8-4.

Image

Table 8-4 PfR Component Table


Note

This book uses the label INET for the Internet-based transport interfaces and related configuration so that it maintains the same length as MPLS.


PfR Configuration

All the PfR components need to communicate with each other in a hierarchical fashion. BRs communicate with their local MC, and the local MCs communicate with the Hub MC. Proper PfR design allocates a loopback interface for communication for the following reasons:

Image Loopback interfaces are virtual and are always in an up state.

Image They ensure that the address is consistent and easy to identify.

Typically, most networks create a dedicated loopback interface for management (SSH, TACACS, or SNMP) and for routing protocols. The same loopback interface can be used for PfR. The IP address assigned to the loopback interface should have a 255.255.255.255 (/32) subnet mask.
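
A minimal sketch of such a loopback, using R31’s MC address (visible later in Example 8-6) with the required /32 mask:


R31
interface Loopback0
 ip address 10.3.0.31 255.255.255.255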

Master Controller Configuration

The MC serves as the control plane and decision maker for PfR. It collects performance measurements, bandwidth, and link utilization from the local BRs. Based upon the configured policy, the MC influences the forwarding behavior on BRs to ensure that network traffic uses the best interface based upon the defined policy.

This section focuses on the configuration of the MCs in an IWAN domain. An MC exists in all of the IWAN sites.

Hub Site MC Configuration

The Hub MC is located at the hub site in the IWAN topology. R10 is the Hub MC for the book’s sample topology. R10 controls two BRs in Site 1: R11 and R12.

It is also essential to remember that the Hub MC actually supports two very different roles:

Image It is a local MC for the site and as such peers with the local BRs and controls them. In that role, it is similar to a Transit MC.

Image It is a global domain controller with the PfR policies for the entire IWAN domain and as such peers with all MCs in the domain.

The Hub MC should run on a standalone platform, either a physical device or a virtual machine (CSR 1000V).

The process for configuring a Hub MC is given in the following steps:

Step 1. Define the IWAN domain name.

All PfR-related configuration is defined within the domain section. The command domain {default | domain-name} defines the PfR domain name. The domain name must be consistent for all devices participating in the same PfRv3 configuration.

Step 2. Define the VRF.

PfR is configured per VRF. The command vrf {default | vrf-name} defines the VRF for PfR. You can use the default VRF with the command vrf default or use any VRF defined on the router with the command vrf {vrf-name}.

Step 3. Identify the router as the Hub MC.

The command master hub enters MC configuration mode and configures the router as the Hub MC. When the Hub MC is configured, EIGRP SAF auto-configuration is enabled by default, and requests from remote sites are sent to the Hub MC.

Step 4. Define the source interface for PfR communication.

A source interface is defined on the router for communication with other routers in the IWAN domain. The source interface is the loopback interface described earlier in this section. The source interface is configured with the command source-interface interface-id.

The source interface loopback also serves as a site ID for this particular site.

Step 5. Define a password (optional).

The command password password is used to secure the peering between the MC and BRs.

Step 6. Define the Hub MC site prefix.

The site-prefix prefix list defines static site prefixes for the local Hub MC site. The static site prefix list is required for Hub and Transit MCs. A site-prefix prefix list is optional on Branch MCs.

The site prefix is defined under the MC configuration with the command site-prefixes prefix-list prefix-list-name. Example 8-20 provides R10 and R20’s site-prefix prefix list configuration.

Example 8-3 provides the Hub MC configuration that is deployed to R10.

Example 8-3 R10 Hub MC Configuration


R10 (Hub MC)
ip prefix-list SITE_PREFIX seq 10 permit 10.1.0.0/16
!
domain IWAN
 vrf default
  master hub
   site-prefixes prefix-list SITE_PREFIX
   source-interface Loopback0
   password cisco123



Note

The site prefix must be defined on the Hub MC or the Hub MC status will be in a down state. Failure to define the site prefix might result in messages that look like “%TCP-6-BADAUTH: No MD5 digest from 10.1.0.10(17749) to 10.1.0.11(52576) (RST) tableid - 0.”


Transit Site MC Configuration

The Transit MC is located at all transit sites in an IWAN topology. It should run on a standalone platform, either a physical device or a virtual machine (CSR 1000V).

R20 is the Transit MC for the book’s sample topology. R20 controls two BRs in Site 2: R21 and R22. A Transit MC needs to peer with the domain controller (Hub MC) to get the policies, monitor configurations, and global parameters.

The process for configuring a Transit MC is given in the following steps:

Step 1. Define the IWAN domain name.

All PfR-related configuration is defined within the domain section. The command domain {default | domain-name} defines the PfR domain name. The domain name must be consistent for all devices participating in the same PfRv3 configuration.

Step 2. Define the VRF.

PfR is configured per VRF. The command vrf {default | vrf-name} defines the VRF for PfR.

Step 3. Identify the router as the Transit MC.

The command master transit pop-id enters MC configuration mode and configures the MC instance as a Transit MC. EIGRP SAF auto-configuration is enabled by default, and requests are sent to the Hub MC. The POP ID must be unique among the Transit MCs in the same IWAN domain.

Step 4. Define the source interface for PfR communication.

A source interface is defined on the router for communication with other routers in the IWAN domain. The source interface is the loopback interface defined at the beginning of this section. The source interface is configured with the command source-interface interface-id.

The source interface loopback also serves as a site ID for this particular transit site.

Step 5. Configure the Hub MC IP address.

This is the IP address of the source interface defined on the Hub MC. The book’s example topology uses R10’s loopback, 10.1.0.10. Every MC peers with the Hub MC to get the domain policies, monitor configuration, and global parameters.

Step 6. Define a password (optional).

The command password password is used to secure the peering between the MC and BRs.

Example 8-4 displays R20’s transit site MC configuration. Notice that R20 assigns the POP ID of 1 for Site 2. Remember that the hub site is assigned a POP ID of 0 by default.

Example 8-4 R20 Transit MC Configuration


R20 (Transit MC)
domain IWAN
 vrf default
  master transit 1
   source-interface Loopback0
   password cisco123
   hub 10.1.0.10


Branch Site MC Configuration

A branch site is a site where no transit traffic is allowed. The PfR configuration is minimal at a branch because the policies are defined on the Hub MC and received by the local Branch MC.

The MC at a branch site typically does not require a dedicated router to act as an MC. This implies that at least one branch site router contains the MC and the BR roles. If a branch site has two routers for resiliency, the MC is typically connected to the primary transport circuit (MPLS in the book’s examples) and is the HSRP active router when the branch routers provide the Layer 2/Layer 3 delineation (all interfaces facing the LAN are Layer 2 only).

The process for configuring a Branch MC is given in the following steps:

Step 1. Define the IWAN domain name.

All PfR-related configuration is defined within the domain section. The command domain {default | domain-name} defines the PfR domain name. The domain name must be consistent for all devices participating in the same PfRv3 configuration.

Step 2. Define the VRF.

PfR is configured per VRF. The command vrf {default | vrf-name} defines the VRF for PfR. You can use the default VRF or use any VRF defined on the router.

Step 3. Identify the router as the Branch MC.

The command master branch enters MC configuration mode and configures the MC instance as a branch. EIGRP SAF auto-configuration is enabled by default, and requests are sent to the Hub MC.

Step 4. Define the source interface for PfR communication.

A source interface is defined on the router for communication with other routers in the IWAN domain. The source interface is the loopback interface defined at the beginning of this section. The source interface is configured with the command source-interface interface-id.

The source interface loopback also serves as the site ID for this particular branch site.

Step 5. Configure the Hub MC IP address.

This is the IP address of the source interface defined on the Hub MC. The book’s example topology uses R10’s loopback, 10.1.0.10. Every MC peers with the Hub MC to get the domain policies, monitor configuration, and global parameters.

Step 6. Define a password (optional).

The command password password is used to secure the peering between MC and BRs.

Example 8-5 displays a branch site MC configuration.

Example 8-5 Branch Site MC Configuration


R31, R41, and R51 (Branch MCs)
domain IWAN
 vrf default
  master branch
   source-interface Loopback0
   password cisco123
   hub 10.1.0.10


MC Status Verification

The status of an MC is displayed with the command show domain domain-name vrf vrf-name master status.


Note

If the default VRF (global routing table) is used, the specific VRF name can be omitted.


Example 8-6 verifies the status of the Site 3 MC (R31) for the global routing table. The output provides the configured and operational status and the list of the BRs that are controlled. External interfaces are listed with their corresponding path names. Notice that the BR is actually also the MC, but the command gives the IP address of the BR(s). The MC and BR are two completely independent components even if they run on the same platform.

Example 8-6 R31 Single CPE Branch MC Status


R31-Spoke# show domain IWAN master status
  *** Domain MC Status ***

 Master VRF: Global
  Instance Type:    Branch
  Instance id:      0
  Operational status:  Up
  Configured status:  Up
  Loopback IP Address: 10.3.0.31
  Load Balancing:
   Operational Status: Up
   Max Calculated Utilization Variance: 27%
   Last load balance attempt: 00:00:13 ago
   Last Reason:  No Controlled Traffic Classes Yet for load balancing
   Total unbalanced bandwidth:
         External links: 55 Kbps  Internet links: 0 Kbps
  External Collector: 10.1.200.1 port: 9995
  Route Control: Enabled
  Transit Site Affinity: Enabled
  Load Sharing: Enabled
  Mitigation mode Aggressive: Disabled
  Policy threshold variance: 20
  Minimum Mask Length: 28
  Syslog TCA suppress timer: 180 seconds
  Traffic-Class Ageout Timer: 5 minutes
  Minimum Packet Loss Calculation Threshold: 15 packets
  Minimum Bytes Loss Calculation Threshold: 1 bytes
  Minimum Requirement: Met

  Borders:
    IP address: 10.3.0.31
    Version: 2
    Connection status: CONNECTED (Last Updated 4d01h ago )
    Interfaces configured:
      Name: Tunnel100 | type: external | Service Provider: MPLS | Status: UP |
              Zero-SLA: NO | Path of Last Resort: Disabled
          Number of default Channels: 2

          Path-id list: 0:1 1:1

      Name: Tunnel200 | type: external | Service Provider: INET | Status: UP |
              Zero-SLA: NO | Path of Last Resort: Disabled
          Number of default Channels: 2

          Path-id list: 0:2 1:2

    Tunnel if: Tunnel0



Note

The BRs are already configured in Example 8-6.


Example 8-7 verifies the status of the Site 2 MC (R20). The output provides the configured and operational status of the MC and lists the local BRs that are controlled (R21 and R22). External interfaces are listed with their corresponding path names but also include the path identifiers because R21 and R22 are Transit BRs.

Example 8-7 R20 Standalone MC Status


R20-MC# show domain IWAN master status
  *** Domain MC Status ***

 Master VRF: Global
  Instance Type:    Transit
  POP ID:      1
  Instance id:      0
  Operational status:  Up
  Configured status:  Up
  Loopback IP Address: 10.2.0.20
  Load Balancing:
   Operational Status: Up
   Max Calculated Utilization Variance: 0%
   Last load balance attempt: never
   Last Reason:  Variance less than 20%
   Total unbalanced bandwidth:
         External links: 0 Kbps  Internet links: 0 Kbps
  External Collector: 10.1.200.1 port: 9995
  Route Control: Enabled
  Transit Site Affinity: Enabled
  Load Sharing: Enabled
  Mitigation mode Aggressive: Disabled
  Policy threshold variance: 20
  Minimum Mask Length: 28
  Syslog TCA suppress timer: 180 seconds
  Traffic-Class Ageout Timer: 5 minutes
  Minimum Packet Loss Calculation Threshold: 15 packets
  Minimum Bytes Loss Calculation Threshold: 1 bytes
  Minimum Requirement: Met

  Borders:
    IP address: 10.2.0.21
    Version: 2
    Connection status: CONNECTED (Last Updated 00:54:59 ago )
    Interfaces configured:
      Name: Tunnel100 | type: external | Service Provider: MPLS path-id:1 |
              Status: UP | Zero-SLA: NO | Path of Last Resort: Disabled
          Number of default Channels: 0

    Tunnel if: Tunnel0

   IP address: 10.2.0.22
    Version: 2
    Connection status: CONNECTED (Last Updated 00:54:26 ago )
    Interfaces configured:
      Name: Tunnel200 | type: external | Service Provider: INET path-id:2 |
               Status: UP | Zero-SLA: NO | Path of Last Resort: Disabled
          Number of default Channels: 0

    Tunnel if: Tunnel0


BR Configuration

The BR is the forwarding plane and selects the path based upon the MC’s decisions. The BRs are responsible for creating performance probes and for reporting channel performance metrics, TC bandwidth, and interface bandwidth to the MC so that it can make appropriate decisions based upon the PfR policy.

Transit BR Configuration

A Transit BR is the BR at a hub or transit site. DMVPN interfaces terminate at the BRs. The PfR path name and path identifier are defined on the tunnel interfaces. At the time of this writing, only one DMVPN tunnel is supported on a Transit BR. This limitation is overcome by using multiple BR devices.

The configuration for BRs at a hub or transit site is the same and is given in the following steps:

Step 1. Define the IWAN domain name.

All PfR-related configuration is defined within the domain section. The command domain {default | domain-name} defines the PfR domain name. The domain name must be consistent for all devices participating in the same PfRv3 configuration.

Step 2. Define the VRF.

PfR is configured per VRF. The command vrf {default | vrf-name} defines the VRF for PfR. You can use the default VRF or use any VRF defined on the router.

Step 3. Identify the router as a BR.

The command border enters border configuration mode. Upon defining the router as a BR, EIGRP SAF auto-configuration is enabled by default, and requests are sent to the local MC.

Step 4. Define the source interface.

The source interface is one of the loopback interfaces defined on the router. The command source-interface interface-id configures the loopback used as a source for peering with the local MC.

Step 5. Define the site MC IP address.

The BR needs to communicate with the site MC. This is the IP address of the loopback interface defined in the site MC.

The MC is identified with the command master ip-address.

Step 6. Define a password (optional).

The command password password is used to secure the peering between MC and BRs.


Note

It is required that only a single transport exist per Transit BR to guarantee that spoke-to-spoke tunnels will be established. Normal traffic flow follows the Cisco Express Forwarding path, so in a dual-tunnel hub scenario a packet from spoke 1 to spoke 2 can enter the hub on one tunnel interface and exit on the other, keeping the traffic spoke-hub-spoke.


Step 7. Configure the path name and path index.

The DMVPN tunnel must define the path name and a path identifier. The path name uniquely identifies a transport network. For example, this book uses a primary transport network called MPLS and a secondary transport network called INET. The path identifier uniquely identifies a path on a site. This book uses path-id 1 for DMVPN tunnel 100 connected to MPLS and path-id 2 for tunnel 200 connected to INET.

The path name and path identifier are stored inside discovery probes. The BRs extract the information from the probe payload and store the mapping between WAN interfaces and the path name. That information is then transmitted to the site’s local MC so that the local MC knows about the WAN interface availability for all the site’s BRs.

The path name is the name of the DMVPN network and is limited to eight characters. The path name should be meaningful, such as MPLS or INET. The path identifier is a unique index for every tunnel per site. The path name and path identifier are defined under the DMVPN tunnel interface on the hub router with the command domain {default | domain-name} path path-name path-id path-id.

Step 8. Configure the path of last resort (optional).

PfR’s path of last resort feature provides a backup mechanism when all the other defined transports are unavailable. PfR does not check the path of last resort’s path characteristics (delay, jitter, packet loss) as a part of path selection. When other transports are available, PfR mutes (does not send smart probes on) all channels on the path of last resort. When the defined transports fail, PfR unmutes the default channel, which is probed at a slower rate.

The path of last resort is typically used on metered transports that charge based on data consumption (megabytes per month) rather than bandwidth (megabits per second). The DMVPN tunnel and routing protocols must remain established for PfR’s path of last resort to work properly; this provides a faster failover technique than the one described in Chapter 3 for deployments on cellular modem technologies.

The path of last resort is configured under the tunnel interface on the Transit BR that connects to the path of last resort with the command domain {default | domain-name} path path-name path-id path-id path-last-resort.
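
As an illustration, a Transit BR connected to a hypothetical cellular transport could mark its tunnel as the path of last resort as follows; the tunnel number, path name LTE, and path-id 3 are assumptions and are not part of this book’s sample topology.


interface Tunnel300
 domain IWAN path LTE path-id 3 path-last-resort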

Step 9. Configure zero SLA (optional).

The zero SLA (0-SLA) feature enables users to reduce probing frequency in their network infrastructure. Reduction in the probing process helps to reduce costs, especially when ISPs charge based on traffic, and helps to optimize network performance when ISPs provide limited bandwidth. When this feature is configured, a probe is sent only on the DSCP-0 channel. For all other DSCPs, channels are created if there is traffic, but no probing is performed. The reachability of other channels is learned from the DSCP-0 channel that is available at the same branch site.

The zero SLA feature is configured under the tunnel interface on the Transit BR that connects to the path with the command domain {default | domain-name} path path-name path-id path-id zero-sla.
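
As a sketch, enabling zero SLA on R12’s Internet path requires only appending the keyword to the existing path definition (shown here as an assumption; the sample topology in this chapter does not enable it):


R12
interface Tunnel200
 domain IWAN path INET path-id 2 zero-sla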


Note

It is essential that the path name for a specific transport be exactly the same as the path name for other transit or hub sites. The path identifier needs to be unique per site. To simplify troubleshooting, maintain a consistent path ID per transport across sites.


Example 8-8 provides the BR configuration for the hub site (Site 1) and the transit site (Site 2) of this book’s sample topology. The path names and path identifiers were taken from Table 8-5.

Image

Table 8-5 Site 2 PfR BRs, Path Names, and Path Identifiers

Example 8-8 Hub Site and Transit Site BR Configuration


R11
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master 10.1.0.10
!
interface Tunnel100
 domain IWAN path MPLS path-id 1


R12
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master 10.1.0.10
!
interface Tunnel200
 domain IWAN path INET path-id 2


R21
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master 10.2.0.20
!
interface Tunnel100
 domain IWAN path MPLS path-id 1


R22
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master 10.2.0.20
!
interface Tunnel200
 domain IWAN path INET path-id 2


Branch Site BR Configuration

A branch site BR is the BR for a branch site that does not allow transit routing. A BR can have one or multiple DMVPN tunnels connected to it.

The configuration for BRs at a branch site is essentially the same as for transit or hub site BRs with one exception: the path names and identifiers are not configured but automatically discovered through the discovery probes.

The process for configuring a Branch BR is given in the following steps:

Step 1. Define the IWAN domain name.

All PfR-related configuration is defined within the domain section. The command domain {default | domain-name} defines the PfR domain name. The domain name must be consistent for all devices participating in the same PfRv3 configuration.

Step 2. Define the VRF.

PfR is configured per VRF. The command vrf {default | vrf-name} defines the VRF for PfR. You can use the default VRF or use any VRF defined on the router.

Step 3. Identify the router as a BR.

The command border enters border configuration mode. When the BR is configured, EIGRP SAF auto-configuration is enabled by default, and requests are sent to the local MC.

Step 4. Define the source interface.

The source interface is one of the loopback interfaces defined on the router. The command source-interface interface-id configures the loopback used as a source for peering with the local MC.

Step 5. Define the site MC IP address.

The BR needs to communicate with the local MC. This is the IP address for the source interface specified at the branch’s MC configuration. The MC is identified with the command master {local | ip-address}. If the router is both an MC and a BR, the keyword local should be used.

Step 6. Define a password (optional).

The command password password is used to secure the peering between the MC and the BRs.

Example 8-9 demonstrates the branch site BR configuration for the book’s sample topology.

Example 8-9 Branch Site BR Configuration


R31, R41, and R51
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master local


R52
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master 10.5.0.51


BR Status Verification

The status of a BR is displayed with the command show domain domain-name vrf vrf-name border status. Example 8-10 provides the status of a BR in Site 1 (R11) for the global routing table. The output displays the status of the connection with the site MC and the local path.

Example 8-10 Hub BR Status


R11-Hub# show domain IWAN border status
  **** Border Status ****

Instance Status: UP
Present status last updated: 4d02h ago
Loopback: Configured Loopback0 UP (10.1.0.11)
Master: 10.1.0.10
Master version: 2
Connection Status with Master: UP
MC connection info: CONNECTION SUCCESSFUL
Connected for: 4d02h
External Collector: 10.1.200.1  port: 9995
Route-Control: Enabled
Asymmetric Routing: Disabled
Minimum Mask length: 28
Sampling: off
Channel Unreachable Threshold Timer: 1 seconds
Minimum Packet Loss Calculation Threshold: 15 packets
Minimum Byte Loss Calculation Threshold: 1 bytes
Monitor cache usage: 2000 (20%) Auto allocated
Minimum Requirement: Met
External Wan interfaces:
     Name: Tunnel100 Interface Index: 15 SNMP Index: 12 SP: MPLS path-id: 1
              Status: UP Zero-SLA: NO Path of Last Resort: Disabled

Auto Tunnel information:

   Name:Tunnel0 if_index: 16
   Virtual Template: Not Configured
   Borders reachable via this tunnel:  10.1.0.12


Example 8-11 provides the status of a BR in Site 3 (R31). It gives the status of the connection with the site MC and the two local paths. Notice that even when the BR is colocated with the MC, there is still a connection with the MC. The BR and MC are two completely independent components that can run on different platforms or be colocated on the same router, as shown in this example.

Example 8-11 R31 Branch BR Status


R31-Spoke# show domain IWAN border status
  **** Border Status ****

Instance Status: UP
Present status last updated: 4d02h ago
Loopback: Configured Loopback0 UP (10.3.0.31)
Master: 10.3.0.31
Master version: 2
Connection Status with Master: UP
MC connection info: CONNECTION SUCCESSFUL
Connected for: 4d02h
Route-Control: Enabled
Asymmetric Routing: Disabled
Minimum Mask length: 28
Sampling: off
Channel Unreachable Threshold Timer: 1 seconds
Minimum Packet Loss Calculation Threshold: 15 packets
Minimum Byte Loss Calculation Threshold: 1 bytes
Monitor cache usage: 2000 (20%) Auto allocated
Minimum Requirement: Met
External Wan interfaces:
     Name: Tunnel100 Interface Index: 15 SNMP Index: 12 SP: MPLS Status: UP
              Zero-SLA: NO Path of Last Resort: Disabled Path-id List: 0:1, 1:1
     Name: Tunnel200 Interface Index: 16 SNMP Index: 13 SP: INET Status: UP
              Zero-SLA: NO Path of Last Resort: Disabled Path-id List: 0:2, 1:2

Auto Tunnel information:
   Name:Tunnel0 if_index: 18
   Virtual Template: Not Configured
   Borders reachable via this tunnel:


NetFlow Exports

MCs and BRs export useful information to a NetFlow collector using the NetFlow v9 export protocol. A network management application can build reports based on the information received. Exports include TCA events, route changes, TC bandwidth, and performance metrics. To globally enable NetFlow export, a single command must be added to the domain configuration on the Hub MC.


Note

NetFlow export can be configured globally on the Hub MC. The NetFlow collector IP address and port are pushed to all MCs in the domain. A manual configuration on a specific branch is also available.


Table 8-6 gives the NetFlow records that are exported from the MC and BRs.

Image

Table 8-6 PfR NetFlow Records Table

The process for configuring NetFlow export on the Hub MC is given in the following steps:

Step 1. Define the IWAN domain name.

All PfR-related configuration is defined within the domain section. The command domain {default | domain-name} defines the PfR domain name. The domain name must be consistent for all devices participating in the same PfRv3 configuration.

Step 2. Define the VRF.

PfR is configured per VRF. The command vrf {default | vrf-name} defines the VRF for PfR. You can use the default VRF or use any VRF defined on the router.

Step 3. Enter the Hub MC configuration.

The command master hub enters MC configuration mode.

Step 4. Define the collector IP address and port.

The MC and BR need to send NetFlow records to the collector IP address with a specific port. The IP address and port should match what is defined on the collector. The command collector {collector-ip} port {collector-port} defines the collector IP address and port used.

Example 8-12 provides the configuration of a NetFlow collector on the Hub MC. This information is distributed to all MCs in the domain.

Example 8-12 NetFlow Collector Configuration


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   source-interface Loopback0
   collector 10.1.200.1 port 9999


Domain Policies

PfR policies are rules for how PfR should monitor and control traffic. PfR policies are global to the IWAN domain; they are defined on the Hub MC and distributed to Branch and Transit MCs using the EIGRP SAF infrastructure. The following sections explain the configuration of the PfR domain policies.

PfR policies include the following:

Image Administrative policies: These policies specify constraints such as path preference or transit site preference. Critical applications or media applications are forwarded over the preferred path that provides the best expected QoS (typically MPLS) and fail over to the fallback path (INET) only if performance is out of policy.


Note

Transit site preference was added to the IWAN 2.1 release. The PfR policy selects the preferred central site first, and the path second. For example, Site 1 is preferred over Site 2, and MPLS is preferred over the INET path. The hubs are selected in the following order: R11 (MPLS), R12 (INET), R21 (MPLS), and then R22 (INET).


Image Performance policies: These policies specify constraints such as delay, jitter, and loss threshold. They define the boundaries within which an application is correctly supported and user experience is good.

Image Load-balancing policy: When load balancing is enabled, all the traffic that falls in PfR’s default class is load balanced.

Image Monitor interval: This configures the interval time on ingress monitors that track the status of the probes. Lowering the value from the default setting provides a faster failover to alternative paths.

Performance Policies

PfR policies can be defined on a per-application basis or by DSCP QoS marking. Policies are grouped into classes. Class groups are primarily used to define classification order priority but also to group all traffic that shares the same administrative policies. Each class group is assigned a sequence number, and PfR evaluates the policies following the sequence numbers.

There are three core principles that should be considered:

Image PfR does not support the mixing and matching of DSCP- and application-based policies in the same class group.

Image PfR supports the use of predefined policies from templates as well as custom policies.

Image Traffic that does not match any of the class group match statements falls into a default bucket called the default class.

The load-balancing configuration is used to define the behavior of all the traffic that falls into PfR’s default class. When load balancing is enabled, TCs that fall into PfR’s default class are load balanced. When load balancing is disabled, PfRv3 deletes this default class and those TCs are forwarded based on the routing table information.


Note

There is a difference between QoS’s default class and PfR’s default class. PfR’s default class is any traffic that is not defined in an administrative or performance policy regardless of the QoS DSCP markings.


Configuring PfR policies for an IWAN domain includes the following:

Image A definition of class names with their respective sequence numbers

Image A defined policy based on DSCP or application name

Image The use of custom or predefined policies

Image Defined path preference

The process for configuring Hub MC policies is given in the following steps:

Step 1. Enter PfR domain configuration mode.

The command domain {default | domain-name} enters configuration mode for the specified PfR domain. Enter the domain you previously configured to enable the Hub MC.

Step 2. Define the VRF.

PfR is configured per VRF. The command vrf {default | vrf-name} defines the VRF for PfR.

Step 3. Enter Hub MC configuration mode.

The configuration related to performance policies is defined within the master hub section. The command master hub enters MC configuration mode.

Step 4. Enter policy class configuration mode.

Each class is defined with a name and a sequence number that gives the order of operations for PfR. You cannot mix and match DSCP- and application-based policies in the same class group. All traffic in a class follows the same administrative policies. The command class class-name sequence sequence-number defines the class group.

Step 5. Configure policy on per-application or DSCP basis.

You can select a DSCP value from 0 to 63 and assign a policy, or you can use application names. If you use application-based policies, NBAR2 is automatically enabled. You should make sure that all MCs and BRs use the same NBAR2 Protocol Packs. You can then select one of the predefined policies or use custom policies. The command match {application application-name | dscp dscp-value} policy policy-name is used to define a policy with the corresponding threshold values.

You can select the following policy types:

Image best-effort

Image bulk-data

Image low-latency-data

Image real-time-video

Image scavenger

Image voice

Image custom

The configuration can leverage the use of predefined policies or can be entirely based on custom policies where you can manually define all thresholds for delay, loss, and jitter.

Step 6. Define custom policies (optional).

Configure the user-defined threshold values for loss, jitter, and one-way delay for the policy type. Threshold values are defined in milliseconds. The command priority priority-number [jitter | loss | one-way-delay] threshold threshold-value is used to define the threshold values.

Class-type priorities can be configured only for a custom policy. Multiple priorities can be configured for custom policies.

A common example includes the definition of a class group for voice, interactive video, and critical data. Table 8-7 provides a list of class groups with their respective policies.

Image

Table 8-7 Class Groups and Policies

Table 8-8 gives the predefined templates that can be used for policy definition.

Image

Table 8-8 PfR Predefined Policy Templates

Example 8-13 demonstrates the use of DSCP-based custom policies.

Example 8-13 Hub MC Custom Policy Configuration


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   class VOICE sequence 10
    match dscp ef policy custom
     priority 2 loss threshold 5
     priority 1 one-way-delay threshold 150
   class INTERACTIVE-VIDEO sequence 20
    match dscp af41 policy custom
     priority 2 loss threshold 5
     priority 1 one-way-delay threshold 150
    match dscp cs4 policy custom
     priority 2 loss threshold 5
     priority 1 one-way-delay threshold 150
   class CRITICAL-DATA sequence 30
    match dscp af21 policy custom
     priority 2 loss threshold 10
     priority 1 one-way-delay threshold 600


Example 8-14 demonstrates the use of DSCP-based predefined policies with a more comprehensive list of groups.

Example 8-14 Hub MC Policy Configuration with Predefined Templates


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   load-balance
   class VOICE sequence 10
    match dscp ef policy voice
    path-preference MPLS fallback INET
   class REAL_TIME_VIDEO sequence 20
    match dscp cs4 policy real-time-video
    match dscp af41 policy real-time-video
    match dscp af42 policy real-time-video
    match dscp af43 policy real-time-video
    path-preference MPLS fallback INET
   class LOW_LATENCY_DATA sequence 30
    match dscp cs3 policy low-latency-data
    match dscp cs2 policy low-latency-data
    match dscp af21 policy low-latency-data
    match dscp af22 policy low-latency-data
    match dscp af23 policy low-latency-data
    path-preference MPLS fallback INET
   class BULK_DATA sequence 40
    match dscp af11 policy bulk-data
    match dscp af12 policy bulk-data
    match dscp af13 policy bulk-data
    path-preference MPLS fallback INET
   class SCAVENGER sequence 50
    match dscp cs1 policy scavenger
    path-preference MPLS fallback INET
   class DEFAULT sequence 60
    match dscp default policy best-effort
    path-preference INET fallback MPLS


The use of class names and sequence numbers helps differentiate network traffic when the design needs a dedicated policy for a specific application within a defined group.

Example 8-15 displays a scenario where a network architect wants to define a policy for Apple FaceTime and a different policy for DSCP AF41 traffic. The network architect cannot put these two definitions in the same class because the TelePresence application may also use DSCP AF41, and PfR would not be able to select which policy to apply. If the network architect uses two different classes and gives FaceTime a higher classification priority (a lower sequence number is evaluated first), PfR is able to assign the correct policy.

Example 8-15 Use of Sequence Number to Prioritize Apple FaceTime


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   class FACETIME sequence 10
    match application facetime policy real-time-video
    path-preference INET fallback MPLS
   class INTERACTIVE-VIDEO sequence 20
    match dscp cs4 policy real-time-video
    match dscp af41 policy real-time-video
    match dscp af42 policy real-time-video
    path-preference MPLS fallback INET


A more advanced example would use application-based policies and leverage NBAR2, the IOS classification engine.

Load-Balancing Policy

Traffic classes that do not have any performance policies assigned fall into the default class. PfR can dynamically load balance these traffic classes over all available paths based on the tunnel bandwidth utilization. The default utilization variance threshold is 20%.

The process for configuring load balancing is given in the following steps:

Step 1. Enter PfR domain configuration mode.

The command domain {default | domain-name} enters configuration mode for the specified PfR domain. Enter the domain you previously configured to enable the Hub MC.

Step 2. Define the VRF.

PfR is configured per VRF. The command vrf {default | vrf-name} defines the VRF for PfR.

Step 3. Enter Hub MC configuration mode.

The command master hub enters MC configuration mode.

Step 4. Configure load balancing.

When load balancing is enabled, all the traffic that falls into the PfR default class is load balanced. When load balancing is disabled, PfRv3 deletes this default class and traffic is not load balanced but routed based on the routing table information. The command load-balance enables load balancing globally in the domain.

Example 8-16 demonstrates the use of load balancing.

Example 8-16 Load-Balancing Configuration


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   load-balance



Note

PfR load balancing is based on tunnel bandwidth, whereas PfR load sharing occurs within a tunnel across multiple next hops using a hash algorithm.


Path Preference Policies

PfR looks at all paths that satisfy the policies defined on the Hub MC. Some of the available transports may have a better SLA than others. The PfR policy should set the primary path for business applications on the transports with better SLAs.

The process for configuring PfR path preference policies is given in the following steps:

Step 1. Enter PfR domain configuration mode.

The command domain {default | domain-name} enters configuration mode for the specified PfR domain. Enter the domain you previously configured to enable the Hub MC.

Step 2. Define the VRF.

PfR is configured per VRF. The command vrf {default | vrf-name} defines the VRF for PfR.

Step 3. Enter Hub MC configuration mode.

The command master hub enters MC configuration mode.

Step 4. Enter policy class configuration mode.

Each class is defined with a name and a sequence number that gives the order of operations for PfR. You cannot mix and match DSCP- and application-based policies in the same class group. All traffic in a class follows the same administrative policies.

Step 5. Configure path preference per class.

Configure the path preference for applications. Policies sharing the same purpose can be grouped under the same class path preference. You cannot configure different path preferences under the same class. PfR supports up to three paths per preference level, so you can configure path-preference {path1} {path2} {path3} fallback {path4} {path5} {path6} next-fallback {path7} {path8} {path9}.

Step 6. Configure the policy for path of last resort (optional).

Configure the policy for path of last resort per class as an option to the path preference configuration defined in Step 5. The path of last resort is defined with the command path-last-resort path.
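
The following sketch combines Steps 5 and 6 for a single class, assuming a hypothetical LTE path in addition to the MPLS and INET paths used in this book:


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   class CRITICAL-DATA sequence 30
    match dscp af21 policy low-latency-data
    path-preference MPLS fallback INET
    path-last-resort LTE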

In Example 8-17, the MPLS path should be preferred over the INET path for business and media applications. PfR allows the definition of a three-level path preference hierarchy. With path preference configured, PfR first considers all paths belonging to the primary group and then goes to the fallback group and finally to the next fallback group.

Example 8-17 Hub MC Custom Policy Configuration


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   load-balance
   class VOICE sequence 10
    match dscp ef policy custom
     priority 2 loss threshold 5
     priority 1 one-way-delay threshold 150
    path-preference MPLS fallback INET
   class INTERACTIVE-VIDEO sequence 20
    match dscp af41 policy custom
     priority 2 loss threshold 5
     priority 1 one-way-delay threshold 150
    match dscp cs4 policy custom
     priority 2 loss threshold 5
     priority 1 one-way-delay threshold 150
    path-preference MPLS fallback INET
   class CRITICAL-DATA sequence 30
    match dscp af21 policy custom
     priority 2 loss threshold 10
     priority 1 one-way-delay threshold 600
   path-preference MPLS fallback INET


Quick Monitor

When PfR is enabled on a BR, it instantiates three different monitors. One of them is applied on ingress and is used to measure the performance of the channel. The default monitor interval is 30 seconds.

This monitor interval can be lowered for critical applications to achieve a fast failover to the secondary path. This is known as quick monitor. You can define one quick monitor interval for all DSCP values associated with critical applications.

The process for configuring the quick monitor is given in the following steps:

Step 1. Enter PfR domain configuration mode.

The command domain {default | domain-name} enters configuration mode for the specified PfR domain. Enter the domain you previously configured to enable the Hub MC.

Step 2. Define the VRF.

PfR is configured per VRF. The command vrf {default | vrf-name} defines the VRF for PfR.

Step 3. Enter Hub MC configuration mode.

The command master hub enters MC configuration mode.

Step 4. Define the monitor interval for specific QoS DSCPs.

The default monitor interval is 30 seconds. PfR allows lowering of the monitor interval for critical applications to achieve a fast failover to the secondary path. The command monitor-interval seconds dscp value is used to define the quick monitor interval.

Example 8-18 demonstrates defining a monitor interval.

Example 8-18 Configuration for Quick Monitor


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   source-interface Loopback0
   enterprise-prefix prefix-list ENTERPRISE_PREFIX
   site-prefixes prefix-list SITE_PREFIX
   monitor-interval 4 dscp af21
   monitor-interval 4 dscp cs4
   monitor-interval 4 dscp af41
   monitor-interval 4 dscp ef


Hub Site Master Controller Settings

The enterprise-prefix prefix list defines the boundary for all the internal enterprise prefixes. A prefix that is not in the enterprise prefix list is considered a PfR Internet prefix. PfR does not monitor performance (delay, jitter, byte loss, or packet loss) for traffic destined for Internet prefixes. The enterprise-prefix prefix list is defined only on the Hub MC under the MC configuration with the command enterprise-prefix prefix-list prefix-list-name.


Note

When PfR parses the prefix list, deny statements are ignored and do not exclude any prefixes.


Example 8-19 demonstrates the configuration of the enterprise-prefix prefix list.

Example 8-19 PfR Enterprise-Prefix Prefix List


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   enterprise-prefix prefix-list ENTERPRISE_PREFIX
!
ip prefix-list ENTERPRISE_PREFIX seq 10 permit 10.0.0.0/8


Hub, Transit, or Branch Site Specific MC Settings

The site-prefix prefix list defines the static site prefixes for the local site and disables automatic site-prefix learning on the BR. The static site prefix list is required only for Hub and Transit MCs. A site-prefix prefix list is optional on Branch MCs. The site prefix is defined under the MC configuration with the command site-prefixes prefix-list prefix-list-name.

Example 8-20 provides R10 and R20’s site-prefix prefix list configuration. Notice that a second entry is added to R10’s SITE_PREFIX prefix list to accommodate the DCI between Site 1 and Site 2.

Example 8-20 PfR Site-Prefix Prefix List


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   site-prefixes prefix-list SITE_PREFIX
!
ip prefix-list SITE_PREFIX seq 10 permit 10.1.0.0/16
ip prefix-list SITE_PREFIX seq 20 permit 10.2.0.0/16


R20 (Transit MC)
domain IWAN
 vrf default
  master transit 1
   source-interface Loopback0
   site-prefixes prefix-list SITE_PREFIX
   hub 10.1.0.10
!
ip prefix-list ENTERPRISE_PREFIX seq 10 permit 10.0.0.0/8
ip prefix-list SITE_PREFIX seq 10 permit 10.1.0.0/16
ip prefix-list SITE_PREFIX seq 20 permit 10.2.0.0/16



Note

When statically configuring a site prefix list, breaking the prefix list into smaller prefixes provides granularity to support load balancing. Creating smaller prefixes creates additional TCs for the same DSCP, which then provides more path choices for PfR.

For example, instead of creating a 10.1.0.0/16, using a 10.1.0.0/17 and a 10.1.128.0/17 provides more TCs.
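
A sketch of this split applied to R10’s SITE_PREFIX prefix list from Example 8-20, with the 10.1.0.0/16 entry broken into two /17 entries:


ip prefix-list SITE_PREFIX seq 10 permit 10.1.0.0/17
ip prefix-list SITE_PREFIX seq 20 permit 10.1.128.0/17
ip prefix-list SITE_PREFIX seq 30 permit 10.2.0.0/16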

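The following is a hypothetical variation of R10's SITE_PREFIX list that applies this guidance, splitting the 10.1.0.0/16 site prefix into two /17 entries so that PfR creates twice as many TCs for that range:


R10 (Hub MC)
ip prefix-list SITE_PREFIX seq 10 permit 10.1.0.0/17
ip prefix-list SITE_PREFIX seq 15 permit 10.1.128.0/17
ip prefix-list SITE_PREFIX seq 20 permit 10.2.0.0/16
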

Complete Configuration

The PfR configuration section explained the configuration in a step-by-step fashion to provide a thorough understanding of the configuration. Example 8-21 provides a complete sample PfR configuration for the hub site routers.

Example 8-21 Hub Site PfR Configuration


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   source-interface Loopback0
   enterprise-prefix prefix-list ENTERPRISE_PREFIX
   site-prefixes prefix-list SITE_PREFIX
   password cisco123
   monitor-interval 2 dscp af21
   monitor-interval 2 dscp cs4
   monitor-interval 2 dscp af41
   monitor-interval 2 dscp ef
   collector 10.1.200.200 port 2055
!
ip prefix-list ENTERPRISE_PREFIX seq 10 permit 10.0.0.0/8
ip prefix-list SITE_PREFIX seq 10 permit 10.1.0.0/16
ip prefix-list SITE_PREFIX seq 20 permit 10.2.0.0/16


R11
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master 10.1.0.10
!
interface Tunnel100
 description DMVPN-MPLS
 domain IWAN path MPLS path-id 1


R12
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master 10.1.0.10
!
interface Tunnel200
 description DMVPN-INET
 domain IWAN path INET path-id 2


Example 8-22 provides a complete sample PfR configuration for the transit site routers.

Example 8-22 Transit Site PfR Configuration


R20 (Transit MC)
domain IWAN
 vrf default
  master transit 1
   source-interface Loopback0
   password cisco123
   site-prefixes prefix-list SITE_PREFIX
   hub 10.1.0.10
!
ip prefix-list SITE_PREFIX seq 10 permit 10.1.0.0/16
ip prefix-list SITE_PREFIX seq 20 permit 10.2.0.0/16


R21
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master 10.2.0.20
!
interface Tunnel100
 description DMVPN-MPLS
 domain IWAN path MPLS path-id 1


R22
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master 10.2.0.20
!
interface Tunnel200
 description DMVPN-INET
 domain IWAN path INET path-id 2


Example 8-23 provides a complete sample PfR configuration for the branch site routers.

Example 8-23 Branch Site PfR Configuration


R31, R41, and R51
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master local
  master branch
   source-interface Loopback0
   password cisco123
   hub 10.1.0.10


R52
domain IWAN
 vrf default
  border
   source-interface Loopback0
   password cisco123
   master 10.5.0.51

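After the branch routers are configured, each device's registration with its MC (and each MC's registration with the Hub MC) can be checked. The following is a minimal verification sketch using standard PfRv3 show commands (output omitted):


R31# show domain IWAN master status
R31# show domain IWAN border status
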

Advanced Parameters

PfR configuration has been streamlined and simplified; most parameters have default values that work for the majority of deployments. The PfR advanced mode is used on the Hub MC to change the default values for the entire IWAN domain.

Unreachable Timer

A BR declares a channel unreachable in both directions if no packets are received from the peer within the unreachable timer. The unreachable timer defaults to one second and can be tuned if needed.

The command channel-unreachable-timer seconds configures the unreachable timer on the Hub MC.

Example 8-24 provides a sample PfR advanced configuration for the unreachable timer for the Hub MC.

Example 8-24 Hub MC PfR Advanced Configuration for Unreachable Timer


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   advanced
    channel-unreachable-timer 4


Smart Probes Ports

Smart probes are generated by a BR when there is no user traffic for a specific channel on a specific path. These probes use the following parameters:

Image Source address: Source site MC

Image Destination address: Destination site MC

Image Source port: 18000

Image Destination port: 19000

Source and destination ports can be changed under PfR advanced mode on the Hub MC with the commands smart-probes source-port src-port-number and smart-probes destination-port dst-port-number.

Example 8-25 provides a sample PfR advanced configuration for the smart probes source and destination ports for the Hub MC.

Example 8-25 Hub MC PfR Advanced Configuration for Smart Probes Source and Destination Ports


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   advanced
    smart-probes source-port 18001
    smart-probes destination-port 19001


Transit Site Affinity

Transit Site Affinity (also known as Transit Site Preference) is used in the context of a multiple-transit-site deployment with the same set of prefixes advertised from all central sites (Figure 8-3). When routing is configured so that one of the central sites is preferred, PfR prioritizes the next hops available at that site.

The command no transit-site-affinity disables the transit site preference and is configured under PfR advanced mode on the Hub MC.

Example 8-26 provides a sample PfR advanced configuration for the Hub MC.

Example 8-26 Hub MC PfR Advanced Configuration for Transit Site Preference


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   advanced
    no transit-site-affinity


Path Selection

Path selection is a combination of the routing configuration and decisions made by PfR administrative and performance policies.

Routing—Candidate Next Hops

As explained in Chapter 7, “Introduction to Performance Routing (PfR),” routing configuration and design are key to PfR and determine the use of possible next hops. That is especially critical for the direction from branches to central sites (hub or transit sites). A branch can also have multiple next hops over a single tunnel interface for a specific destination prefix. These next hops can be spread across multiple transit sites as depicted in Figure 8-3 (Site 1 and Site 2 advertise the same set of prefixes).

The spoke router considers all the paths (multiple next hops) toward the POPs and maintains a per-prefix, per-interface list of active and standby candidate next hops, derived from the routing configuration:

Image Active next hop: A next hop is considered active for a given prefix if it has the best metric.

Image Standby next hop: A next hop is considered standby for a given prefix if it advertises a route for the prefix but does not have the best metric.

Image Unreachable next hop: A next hop is considered unreachable for a given prefix if it does not advertise any route for the prefix.

Routing—No Transit Site Preference

The same set of prefixes can be advertised from multiple transit sites, with routing configured so that no particular transit site is preferred. The goal is to load-balance traffic across all transit sites to reach the data centers.

Figure 8-5 illustrates all BRs in Site 1 and Site 2 advertising prefixes with the same BGP local preference (the default value of 100). The same effect can be achieved with EIGRP.

Image

Figure 8-5 Site 1 and Site 2 Advertising Prefixes with the Same BGP Local Preference

Table 8-9 displays the PfR table and the status. PfR picks the next-hop information (parent route) from BGP and lists all next hops for prefix 10.1.0.0/16 as active.

Image

Table 8-9 PfR Table—Same Prefix, Same BGP Local Preference
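
Although BGP still elects a single best path through its tiebreakers, PfR treats every next hop carrying the best (here, identical) local preference as active. The following sketch, modeled on the output format of Example 8-28, shows what R31's BGP table might look like in this scenario:


R31-Spoke# show bgp ipv4 unicast | begin Network
     Network          Next Hop            Metric LocPrf Weight Path
 * i 10.1.0.0/16      192.168.200.22           0    100  50000 i
 * i                  192.168.100.21           0    100  50000 i
 * i                  192.168.200.12           0    100  50000 i
 *>i                  192.168.100.11           0    100  50000 i
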

Routing—Site Preference

You may want to define one of the transit sites as a preferred site in the WAN design. In the book's generic topology, Site 1 could be the primary site for all spokes. Another use case is geographic proximity: branches located in the United States prefer Site 1, which is in the United States, and branches located in Europe prefer Site 2, which is in Europe, because proximity reduces latency.

Figure 8-6 illustrates the use of different BGP local preferences on the BRs (DMVPN hub routers) located in Site 1 and Site 2. The WAN design prefers the MPLS transport over the Internet (INET) and Site 1 over Site 2. The local preference values set on the BRs are provided in Table 8-10.

Image

Figure 8-6 Site 1 and Site 2 Advertising Prefixes with Different BGP Local Preferences

Image

Table 8-10 BGP Local Preference Definition


Note

The values used in the BGP configuration should make reading the BGP table easy. A path is more preferred if it has a higher local preference. If the local preference is not displayed or set, a router assumes the default value of 100 during the BGP best-path calculation.


Defining the local preference is a matter of design choice. If all the sites share the same routing policy, it is probably easier to set this preference directly on the Transit BRs. If the routing policy varies at each branch, setting the local preference with an inbound route map that matches a BGP community is the best technique, as sketched below.
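
The following is a minimal, hypothetical sketch of the inbound technique on a spoke. The community value (10:100) and the community-list and route-map names are illustrative, and the sketch assumes the hub BRs tag their advertisements with communities (send-community is already enabled toward the spokes in Example 8-27):


R31-Spoke (hypothetical)
ip community-list standard CL-SITE1-MPLS permit 10:100
!
route-map BGP-LP-IN permit 10
 match community CL-SITE1-MPLS
 set local-preference 100000
! Routes without the community keep the default local preference
route-map BGP-LP-IN permit 20
!
router bgp 10
 address-family ipv4
  neighbor 192.168.100.11 route-map BGP-LP-IN in
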

Example 8-27 provides the relevant BGP configuration for setting the local preference on R11. The complete BGP configuration was provided in Chapter 4, “Intelligent WAN (IWAN) Routing.”

Example 8-27 R11 BGP Local Preference Configuration


R11-Hub
router bgp 10
 bgp listen range 192.168.100.0/24 peer-group MPLS-SPOKES
 neighbor MPLS-SPOKES peer-group
 neighbor MPLS-SPOKES remote-as 10
!
 address-family ipv4
  aggregate-address 10.1.0.0 255.255.0.0 summary-only
  aggregate-address 10.0.0.0 255.0.0.0 summary-only
  neighbor MPLS-SPOKES activate
  neighbor MPLS-SPOKES send-community
  neighbor MPLS-SPOKES route-reflector-client
  neighbor MPLS-SPOKES weight 50000
  neighbor MPLS-SPOKES route-map BGP-MPLS-SPOKES-OUT out
exit-address-family
!
route-map BGP-MPLS-SPOKES-OUT permit 10
set local-preference 100000


As a result, PfR picks the next-hop information (parent route) from BGP and builds Table 8-11.

Image

Table 8-11 PfR Table—Same Prefix with Different BGP Local Preference

Example 8-28 displays R31's BGP table entries for the 10.1.0.0/16 prefix, which produce the next-hop status shown in Table 8-11.

Example 8-28 R31 Next-Hop Information with Local Preference


R31-Spoke# show bgp ipv4 unicast
! Output omitted for brevity
BGP table version is 12, local router ID is 10.3.0.31
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
              x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

     Network          Next Hop            Metric LocPrf Weight Path
 * i 10.1.0.0/16      192.168.200.22           0    400  50000 i
 * i                  192.168.100.21           0   3000  50000 i
 * i                  192.168.200.12           0  20000  50000 i
 *>i                  192.168.100.11           0 100000  50000 i



Note

R12 is not the best path for BGP given the lower local preference compared to R11 but is still considered a valid next hop for PfR. Remember that PfR picks at least one next hop for every external tunnel.


PfR Path Preference

Based on available next-hop information, PfR uses its own administrative policies to choose the appropriate next hops per prefix. At the time of this writing, PfR has two main administrative policies:

Image Transit site preference (3.16.1 onward)

Image Path preference

PfR supports up to three paths at each level of the path-preference logic. The command path-preference {path1} {path2} {path3} fallback {path4} {path5} {path6} next-fallback {path7} {path8} {path9} is used in a class group to configure the path preference for applications.

Image {path1}, {path2}, and {path3} are the preferred paths for the class group.

Image {path4}, {path5}, and {path6} are the secondary paths and are used if all preferred paths are out of policy.

Image {path7}, {path8}, and {path9} are then used when all preferred and secondary paths are out of policy.

A generic PfR path preference configuration is shown in Example 8-29.

Example 8-29 Path Preference Definition


R10 (Hub MC)
domain IWAN
 vrf default
  master hub
   load-balance
   class VOICE sequence 10
    match dscp ef policy custom
     priority 2 loss threshold 5
     priority 1 one-way-delay threshold 150
     path-preference MPLS1 MPLS2 MPLS3 fallback INET1 INET2 INET3 next-fallback
       INET4 INET5 INET6



Note

The keyword next-fallback was introduced with IOS 15.5(3)M and IOS XE 3.16. Initial code supported five paths for path preference and four paths for fallback.


With path preference configured, PfR first considers all the links that belong to the preferred path (both the active and standby links on that path) and then falls back to the provider links in the fallback group. Without path preference configured, PfR prefers active channels over standby channels (active/standby status is per prefix), subject to its performance and policy decisions.

PfR Transit Site Preference

Transit site preference is used in the context of a multiple-transit-site deployment with the same set of prefixes advertised from all central sites (Figure 8-3).

A specific transit site is preferred for a specific prefix as long as that site has available in-policy channels. Transit site preference is a higher-priority filter and takes precedence over path preference. PfR uses the active and standby next-hop states, derived from the routing metrics and advertised mask lengths, to determine the preferred transit site for a given prefix. For example, if the best metric for a given prefix is on Site 1, all the next hops on that site for all the paths are tagged as active (only for that prefix, as shown earlier in Table 8-11).

Figure 8-7 illustrates the transit site preference with Site 3 preferring Site 1 and Site 4 preferring Site 2 to reach the same prefix.

Image

Figure 8-7 Transit Site Preference


Note

Active and standby channels are per prefix and span the POPs. A spoke chooses among the active channels with a hash.


Using Transit Site Preference and Path Preference

The BGP local preference has been configured as in Table 8-10 and is displayed in Figure 8-6 for the 10.1.0.0/16 prefix. The active/standby next-hop tagging happens regardless of whether transit site affinity is enabled or disabled. The resulting next-hop status is listed in Table 8-12.

Image

Table 8-12 PfR Table—Next-Hop Status

The path selection results are listed in Table 8-13.

Image

Table 8-13 PfR Table—Path Selection Algorithm
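
The path that PfR ultimately selects for each TC can be observed on the branch MC. The following is a minimal sketch using standard PfRv3 show commands (output omitted; the exact fields vary by release):


R31# show domain IWAN master traffic-classes
R31# show domain IWAN master channels
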


Note

Transit site preference is enabled by default. Transit site preference was introduced with IOS 15.5(3)M1 and IOS XE 3.16.1.


Summary

This chapter focused on the configuration of Cisco PfR. PfRv3's configuration is drastically simplified compared to earlier versions, and PfR provisioning is streamlined now that all of the domain's PfR policies are centralized on the Hub MC. Intelligence and automation in the background simplify path discovery, site-prefix discovery, and site discovery. As a result, the majority of the configuration occurs on the Hub MC, and even those policies have been simplified.

PfR configuration involves the configuration of the MCs for the IWAN domain. The Transit and Branch MC configurations simply identify the PfR source interface and the Hub MC IP address. The Hub MC contains the logical domain controller functionality and acts as the local MC for the hub site; it should be a dedicated, standalone device.

The BR configuration consists of identifying the PfR source interface and the local MC IP address. Transit BRs additionally require a path name and path ID on their DMVPN tunnel interfaces.

Combining PfR policies with the routing configuration determines the paths used to reach the transit sites.

Further Reading

Cisco. “Performance Routing Version 3.” www.cisco.com/go/pfr

Cisco. “PfRv3 Configuration.” www.cisco.com
