Chapter 1. Evolution of the WAN

A router’s primary job is to provide connectivity between networks. Designing and maintaining a LAN is straightforward because equipment selection, network design, and the ability to install or modify cabling are directly under the control of the network engineers.

WANs provide connectivity between multiple LANs that are spread across a broad area. Designing and supporting a WAN add complexity because of the variety of network transports, associated limitations, design choices, and costs of each WAN technology.

WAN Connectivity

WAN connectivity uses a variety of technologies, but the predominant methods come from service providers (SPs) with three primary solutions: leased circuits, Internet, and Multiprotocol Label Switching (MPLS) VPNs.

Leased Circuits

The cost to secure land rights and to purchase and install cables between two locations can present a financial barrier to most companies. Service providers can deliver dedicated circuits between two locations at a specific cost. Leased circuits can provide high-bandwidth and secure connectivity. Regardless of link utilization, leased lines provide guaranteed bandwidth between two locations because the circuits are dedicated to a specific customer.

Internet

The Internet was originally created to meet the U.S. Department of Defense's need for communication to continue even if a network segment was destroyed. The Internet's architecture has evolved so that it now supports IP (IPv4 and IPv6) and consists of a global public network connecting multiple SPs. A key benefit of using the Internet as a WAN transport is that both locations do not have to use the same SP; a company can easily establish connectivity between sites using different SPs.

When a company purchases Internet connectivity, bandwidth is guaranteed only to networks under the control of the same SP. If the path between networks crosses multiple SPs, bandwidth is not guaranteed because the peering link can be oversubscribed depending upon the peering agreement between SPs. Bandwidth for peering links is typically smaller than the bandwidth of the native SP network. At times congestion may occur on the peering link, adding delay or packet loss as packets traverse the peering link.

Figure 1-1 illustrates a sample topology in which bandwidth contention can occur on peering links. AS100 guarantees 1 Gbps of connectivity to R1 and 10 Gbps of connectivity to R3. AS200 guarantees 10 Gbps of connectivity to R4, and AS300 guarantees 1 Gbps of connectivity to R2. AS100 and AS200 peer with a 10 Gbps circuit, and AS200 peers with AS300 with two 10 Gbps circuits. With normal traffic flows R1 can communicate at 1 Gbps rates with R2. However, if R3 is transmitting 10 Gbps of data to R4, 11 Gbps of traffic must travel across the 10 Gbps circuit into AS200. Because the peering links are not dedicated to a specific customer, some traffic is delayed or dropped because of oversubscription of the 10 Gbps link. Bandwidth or latency cannot be guaranteed when packets travel across peering links.


Figure 1-1 Bandwidth Is Not Guaranteed on the Internet
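The oversubscription in Figure 1-1 reduces to simple arithmetic, sketched below. This is a hypothetical model, not an SP tool: a peering link delivers at most its capacity, and any excess traffic is delayed or dropped.

```python
# Illustrative model of the Figure 1-1 scenario: offered traffic on a
# peering link that exceeds its capacity is delayed or dropped.

def peering_link_loss(offered_gbps, capacity_gbps):
    """Return (delivered, dropped) traffic in Gbps for an oversubscribed link."""
    delivered = min(offered_gbps, capacity_gbps)
    return delivered, offered_gbps - delivered

# R1 -> R2 traffic (1 Gbps) plus R3 -> R4 traffic (10 Gbps) share the
# 10 Gbps AS100-AS200 peering circuit.
offered = 1 + 10
delivered, dropped = peering_link_loss(offered, 10)
print(f"Offered: {offered} Gbps, delivered: {delivered} Gbps, dropped: {dropped} Gbps")
```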

Quality of service (QoS) is based on granting preference to one type of network traffic over another. QoS design is based on trust boundaries, classification, and prioritization. Because the Internet is composed of multiple SPs, the trust boundary continually changes. Internet SPs trust and prioritize only network traffic that originates from their devices. QoS is considered a best effort when using the Internet as a transport. Some organizations may deem the Internet unacceptable because this requirement cannot be met.

Multiprotocol Label Switching VPNs (MPLS VPNs)

Service providers use MPLS to provide a scalable peer-to-peer architecture in which packets are tunneled dynamically from SP router to SP router without any inspection of the packets' contents. Such networks forward traffic based upon the outermost label of the packet and do not require examination of the packet's header or payload. As packets cross the core of the network, the source and destination IP addresses are not checked as long as a destination label exists in the packet. Only the SP's provider edge (PE) routers need to know how to forward unlabeled packets toward the customer router.

MPLS VPNs can carry customer traffic in two ways, depending upon the customer's requirements:

Layer 2 VPN (L2VPN): The SP provides connectivity to customer routers by creating a virtual circuit between the nodes. The SP emulates a cable or network switch and does not exchange any network information with the customer routers.

Layer 3 VPN (L3VPN): The SP routers create a virtual routing context, known as a virtual routing and forwarding (VRF) instance, for each customer. Every VRF provides a method for a router to maintain a separate routing and forwarding table for each VPN network. The SP communicates and exchanges routes with the customer edge (CE) routers. L3VPN exchanges IPv4 and IPv6 packets between PE routers.
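The VRF isolation described above can be sketched with a toy lookup. The names are hypothetical, and real VRFs live in router software, not Python; the point is that each customer receives its own routing table, so overlapping prefixes from different customers do not collide.

```python
# Toy model of per-customer VRF tables: the same prefix can exist in two
# VPNs and resolve to different next-hop interfaces.

vrfs = {
    "CUSTOMER_A": {"10.0.0.0/24": "GigabitEthernet0/1"},
    "CUSTOMER_B": {"10.0.0.0/24": "GigabitEthernet0/2"},  # same prefix, different VPN
}

def lookup(vrf_name, prefix):
    """Resolve a prefix inside one customer's isolated table."""
    return vrfs[vrf_name].get(prefix)

print(lookup("CUSTOMER_A", "10.0.0.0/24"))  # GigabitEthernet0/1
print(lookup("CUSTOMER_B", "10.0.0.0/24"))  # GigabitEthernet0/2
```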

The SPs own all the network components in an MPLS VPN network and can guarantee specific QoS levels to the customer. They price their services based on service-level agreements (SLAs) that specify bandwidth, QoS, end-to-end latency, uptime, and additional guarantees. The price of the connectivity typically rises with the demands of the SLA, to offset the additional capacity and redundancy required in the SP's infrastructure.

Increasing Demands on Enterprise WANs

WAN traffic patterns have evolved since the 1990s. At first, a majority of network traffic remained on the LAN because people with similar job functions were grouped together in a building. Sharing files and interacting via email was localized, so WAN links typically carried traffic between email servers or from users accessing the corporate intranet. Over time, WAN circuits have seen an increase in network traffic, as explained in the following sections.

Server Virtualization and Consolidation

Server CPUs have become faster and faster, allowing servers to do more processing. IT departments realized that consolidating file or email servers consumed fewer resources (power, network, servers, and staff) and lowered operational costs. Server consolidation reached a new height with the introduction of x86 server virtualization. Companies virtualized physical servers into virtual machines. An unintended consequence of server consolidation was that WAN utilization increased because servers were located at data centers (DCs), not in branch offices.

Cloud-Based Services

An organization’s IT department is responsible for maintaining business applications such as word processing, email, and e-commerce. Application sponsors must work with IT to accommodate costs for staffing, infrastructure (network, workstations, and servers) for day-to-day operations, architecture, and disaster recovery.

Cloud-based providers such as Salesforce.com, Amazon, Microsoft, and Google have emerged. Cloud SPs assume responsibility for the cost of disaster recovery, licensing, staff, and hardware while providing flexibility and lower costs to their customers. The cost of a cloud-based solution can be spread across the length of the contract. Changing vendors in a cloud-based model does not have the same financial impact as reimplementing an application with in-house resources.

Connectivity to cloud providers is established with dedicated circuits or through Internet portals. Some companies prefer a dedicated circuit because they manage the security aspect of the application at the point of attachment. However, providing connectivity through the Internet gives employees the same experience whether they are in the office or working remotely.

Collaboration Services

Enterprise organizations historically maintained a network for voice and a network for computer data. Phone calls between cities were classified as long distance, allowing telephone companies to charge the party initiating the call on a per-minute basis.

By consolidating phone calls onto the data network using voice over IP (VoIP), organizations were able to reduce their operating costs. Companies did not have to maintain both voice and data circuits between sites. Legacy private branch exchanges (PBXs) no longer needed to be maintained at all the sites, and calls between users in different sites used the WAN circuit instead of incurring per-minute long-distance charges.

Expanding upon the concepts of VoIP, collaboration tools such as Cisco WebEx now provide virtual meeting capability by combining voice, computer screen sharing, and interactive webcam video. These tools allow employees to meet with other employees, meet with customers, or provide training seminars without requiring attendees to be in the same geographic location. WebEx provides a significant reduction in operating costs because travel is no longer required. Management has realized the benefits of WebEx but has found video conferencing or Cisco TelePresence even more effective. These tools provide immersive face-to-face interaction, involving all participants in the meeting, thereby increasing the attention of all attendees. Decisions are made faster because of the reduced delay, and people are more likely to interact and share information with others over video.

Voice and video network traffic requires prioritization on a network. Voice traffic is sensitive to latency between endpoints, which should be less than 150 ms one way. Video traffic is more tolerant of latency than voice. Latency by itself causes a delay before the voice is heard, turning a phone call (two-way audio) into a CB radio (one-way). While this is annoying, people can still communicate. Jitter is the varying delay between packets as they arrive in a network and can cause gaps in the playback of voice or video streams. If packet loss, jitter, or latency is too high, users can become frustrated with choppy/distorted audio, video tiling, or one-way phone calls that drastically reduce the effectiveness of these technologies.
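The jitter concept above can be illustrated with a short calculation. This is a simplified measure (mean variation between consecutive inter-arrival gaps), not the smoothed estimator that real VoIP endpoints compute, and the timestamps are hypothetical.

```python
# Simplified jitter measure: how much consecutive inter-arrival gaps vary.

def jitter_ms(arrival_times_ms):
    """Mean absolute variation between consecutive inter-arrival gaps."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    variations = [abs(b - a) for a, b in zip(gaps, gaps[1:])]
    return sum(variations) / len(variations)

# Voice packets sent every 20 ms arrive unevenly after crossing the network.
arrivals = [0, 20, 45, 60, 85, 100]  # ms
print(f"Jitter: {jitter_ms(arrivals):.1f} ms")
```

A playback buffer at the receiver absorbs small amounts of jitter; gaps larger than the buffer cause the choppy audio described above.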

Bring Your Own Device (BYOD)

In 2010, employees began to use their personal computers, smartphones, and tablets for work. This trend is known as bring your own device (BYOD). Companies allowed their employees to BYOD because they anticipated an increase in productivity, cost savings, and employee satisfaction as a result.

However, because these devices are not centrally managed, corporations must take steps to ensure that their intellectual property is not compromised. Properly designed networks ensure that BYOD devices are separated from corporate-managed devices.

Smartphones and tablets used for BYOD contain a variety of applications. Some may be used for work, but others are not. Application updates average 2 MB to 25 MB in size; some operating system updates are 150 MB to 750 MB. When users update multiple applications or the operating system (OS) on their devices, the updates consume network bandwidth that business-related applications would otherwise use.


Note

Some users connect their smartphones and tablets to corporate networks purely to avoid data usage fees associated with their wireless carrier contracts.


Guest Internet Access

Many organizations offer guest networks for multiple reasons, including convenience and security:

Convenience: Enterprises commonly provide their vendors, partners, and visitors with Internet access as a convenience. Providing connectivity allows visitors to reach their own company's network for email, VPN access to files, or a lab environment, making meetings and projects productive.

Security: Separating the secured corporate resources (workstations, servers, and so on) from unmanaged devices creates a security boundary. If an unmanaged device becomes compromised by malware or a virus, it cannot communicate with corporate devices.

Quality of Service for the WAN

Network users expect timely responsiveness from their network applications. Most LAN environments provide gigabit connectivity to desktops, with adequate links between network devices to prevent link saturation. Network engineers deploy QoS policies to grant preference to one type of network traffic over another. Although QoS policies should be deployed everywhere in a network, they are a vital component of any WAN edge design, where bandwidth is often limited because of cost and/or availability.

Media applications (voice and/or video) are sensitive to delay and packet loss and are often granted the highest priority in QoS policies. Typically, non-business-related traffic (Internet) is assigned the lowest QoS priority (best effort). All other business-related traffic is categorized and assigned an appropriate QoS priority and bandwidth based upon the business justification.

A vital component of QoS is the classification of network traffic according to the packet’s header information. Typically traffic is classified by class maps, which use a combination of protocol (TCP/UDP) and communication ports. Application developers have encountered issues with traffic passing through corporate firewalls on nonstandard ports or protocols. They have found methods to tunnel their application traffic over port 80, allowing instant messaging (IM), web conferencing, voice, and a variety of other applications to be embedded in HTTP. In essence, HTTP has become the new TCP.
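The limitation of port-based classification can be sketched as a lookup. The class names are hypothetical, and on a router this logic lives in class maps; the point is that every application tunneled over port 80 lands in the same class.

```python
# Toy port/protocol classifier, mirroring what a class map matches on.
# It cannot distinguish applications tunneled inside HTTP.

CLASS_MAP = {
    ("udp", 5060): "voice-signaling",
    ("tcp", 80): "best-effort-web",   # everything on port 80 looks the same
    ("tcp", 443): "best-effort-web",
}

def classify(protocol, port):
    """Return the traffic class for a flow, defaulting to class-default."""
    return CLASS_MAP.get((protocol, port), "class-default")

# A voice stream tunneled over HTTP is misclassified as ordinary web traffic.
print(classify("tcp", 80))    # best-effort-web
print(classify("udp", 5060))  # voice-signaling
```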

HTTP itself is not sensitive to latency or packet loss because it relies on TCP to detect loss and retransmit. Network engineers might assume that all web-browsing traffic can be marked as best effort because it uses HTTP, but applications nested in HTTP would then be marked incorrectly as well.

Deep packet inspection is the process of looking at the packet header and payload to determine the actual application for that packet. Packets that carry HTTP or HTTPS header information should undergo deep packet inspection to accurately identify the application for proper QoS marking. Accurate traffic classification ensures that network engineers can deploy QoS correctly for every application.

Branch Internet Connectivity and Security

The Internet provides a wealth of knowledge and new methods of exchanging information with others. Businesses host web servers known as e-commerce servers to provide company information or allow customers to shop online. Just as with any aspect of society, criminals try to obtain data illegally for personal gain or blackmail. Security is deployed in a layered approach to provide effective solutions to this problem.

Firewalls restrict network traffic to e-commerce servers by specifying explicit destination IP addresses, protocols, and ports. Email servers scan email messages for viruses and phishing attempts. Hackers have become successful at inserting viruses and malware into well-known and respected websites. Content-filtering servers can restrict access to websites based on the domain-based classification and can dynamically scan websites for malicious content.

Internet access is provided to the branch with either a centralized or a distributed model. Both models are explained in the following sections.

Centralized Internet Access

In the centralized Internet access model, one centralized or regional site provides Internet connectivity. This model simplifies the management of Internet security policy and device configuration because network traffic flows through a minimal number of access points. This reduces the size of the security infrastructure and its associated maintenance costs.

The downside of the centralized model is that all network traffic from remote locations to the Internet is also backhauled across the WAN circuit. This can cause congestion on the enterprise WAN and centralized Internet access circuits during peak usage periods unless the Internet circuit contains sufficient bandwidth for all sites and the WAN circuits are sized to accommodate internal network traffic as well as the backhauled Internet traffic. Although Internet circuits have a low cost, the backhauled network traffic travels on more expensive WAN circuits. In addition, backhauling Internet traffic may add latency between the clients and servers on the Internet. The latency occurs for recreational web browsing as well as access to corporate cloud-based applications.

Figure 1-2 illustrates the centralized Internet model. All Internet traffic from R2 or R3 must cross the WAN circuit where it is forwarded out through the headquarters Internet connection.


Figure 1-2 Centralized Internet Connectivity Model

Distributed Internet Access

In the distributed Internet access model, Internet access is available at all sites. Access to the Internet is more responsive for users in the branch, and WAN circuits carry only internal network traffic. Figure 1-3 illustrates the distributed Internet model. R2 and R3 are branch routers that can provide access to the Internet without having to traverse the WAN links. R2 and R3 route packets to the Internet out of their Internet circuits, reducing the need to backhaul Internet traffic across costly WAN circuits.


Figure 1-3 Distributed Internet Connectivity Model

This model requires that the security policy be consistent at all sites, and that appropriate devices be located at each site to enforce those policies. These requirements can be a burden to some companies’ network and/or security teams.

Cisco Intelligent WAN

Cisco Intelligent WAN (IWAN) architecture provides organizations with the capability to supply more usable WAN bandwidth at a lower cost without sacrificing performance, security, or reliability. Cisco IWAN is based upon four pillars: transport independence, intelligent path control, application optimization, and secure connectivity.

Transport Independence

Cisco IWAN uses Dynamic Multipoint VPN (DMVPN) to provide transport independence via overlay routing. Overlay routing provides a level of abstraction that simplifies the control plane for any WAN transport. It allows organizations to deploy a consistent routing design across any transport, facilitates better traffic control and load sharing, and removes barriers to equal-cost multipathing (ECMP). Transport independence lets a customer select any WAN technology: MPLS VPN (L2 or L3), Metro Ethernet, direct Internet, broadband, cellular 3G/4G/LTE, or high-speed radios. It also makes it easy to mix and match transport options or change SPs to meet business requirements.

For example, a new branch office requires network connectivity. Installing a physical circuit can take an SP six to 12 weeks to provision after the order is placed. If the order is not placed soon enough or complications are encountered, WAN connectivity for the branch is delayed. Cisco IWAN’s transport independence allows the temporary use of a cellular modem until the physical circuit is installed without requiring changes to the router’s routing protocol configuration, because DMVPN resides over the top of the cellular transport. Changing transports does not impact the overlay routing design.

Intelligent Path Control

Routers forward packets based upon destination address, and the methodology for path calculation varies from routing protocol to routing protocol. Routing protocols do not take into consideration packet loss, delay, jitter, or link utilization during path calculation, which can lead to using an unsuitable path for an application. Technologies such as IP SLAs can measure the path’s end-to-end characteristics but do not modify the path selected by the routing protocol.

Performance Routing (PfR) provides intelligent path control on an application basis. It monitors application performance on a traffic class basis and can forward packets on the best path for that application. In the event that a path becomes unacceptable, PfR can switch the path for that application until the original path is within application specifications again. In essence, PfR ensures that the path taken meets the requirements set for that application.

PfR has been enhanced multiple times for Cisco intelligent path control, integrating with DMVPN and making it a vital component of the IWAN architecture. It provides improved application monitoring, faster convergence, simple centralized configuration, service orchestration capability, automatic discovery, and single-touch provisioning.

Providing a highly available network requires elimination of single points of failure (SPoFs) to accommodate hardware failure and other failures in the SP infrastructure. In addition to redundancy, the second circuit can provide additional bandwidth with the use of transport independence and PfR. This can reduce WAN operating expenses in any of the IWAN deployment models.

Figure 1-4 depicts a topology that provides R1 connectivity to R5 across two different paths. The routing protocol on R1 and R5 has identified DMVPN tunnel 100 as the best path, so they send VoIP traffic across that tunnel. R1 uses the same tunnel for file transfers. The total amount of network traffic exceeds tunnel 100's bandwidth capacity. The QoS policies on the tunnel ensure that the VoIP traffic is not impacted, but the file transfer traffic is. With intelligent path control, DMVPN tunnel 200 could be used to transfer the files instead.


Figure 1-4 Path Optimizations with Intelligent Path Control

PfR overcomes scenarios like the one described previously. With PfR, R1 can send VoIP traffic across DMVPN tunnel 100 and send file transfer traffic toward DMVPN tunnel 200. PfR allows both DMVPN tunnels to be used while still supporting application requirements and not dropping packets.
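The path decision PfR makes can be sketched with a toy policy model. The tunnel names echo Figure 1-4, but the metrics, thresholds, and selection logic below are illustrative assumptions, not Cisco's actual algorithm.

```python
# Toy per-application path selection: prefer a tunnel, but fall back to any
# path whose measured metrics still satisfy the application's policy.

PATHS = {
    "tunnel100": {"loss_pct": 0.1, "delay_ms": 40},
    "tunnel200": {"loss_pct": 0.5, "delay_ms": 80},
}

# Hypothetical per-application thresholds and preferred paths.
POLICY = {
    "voip":          {"loss_pct": 1.0, "delay_ms": 150, "preferred": "tunnel100"},
    "file-transfer": {"loss_pct": 5.0, "delay_ms": 500, "preferred": "tunnel200"},
}

def meets(metrics, policy):
    return (metrics["loss_pct"] <= policy["loss_pct"]
            and metrics["delay_ms"] <= policy["delay_ms"])

def select_path(app):
    policy = POLICY[app]
    preferred = policy["preferred"]
    if meets(PATHS[preferred], policy):
        return preferred
    for name, metrics in PATHS.items():   # fall back to any compliant path
        if meets(metrics, policy):
            return name
    return preferred                      # no compliant path; keep preferred

print(select_path("voip"))           # tunnel100
print(select_path("file-transfer"))  # tunnel200
```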


Note

Some network engineers might correlate PfR with MPLS traffic engineering (TE). MPLS TE supports the capability to send specific marked QoS traffic down different TE tunnels but lacks the granularity that PfR provides for identifying an application.


Application Optimization

Most users assume that application responsiveness across a WAN is directly related to available bandwidth on the network link. This is an incorrect assumption because application responsiveness directly correlates to the following variables: bandwidth, path latency, congestion, and application behavior.

Most applications do not take network characteristics into account and rely on underlying protocols like TCP for communicating between computers. Applications are typically designed for LAN environments, which provide high-speed links without congestion, and many are "chatty": they transmit multiple packets in a back-and-forth manner, requiring an acknowledgment in each direction.

Cisco Wide Area Application Services (WAAS) and Akamai Connect technologies provide a complete solution for overcoming the variables that impact application performance across a WAN circuit. They are transparent to the endpoints (clients and servers) as well as to the devices between the WAAS/Akamai Connect devices.

Cisco WAAS incorporates data redundancy elimination (DRE) technology to identify data patterns in network traffic and replace the patterns with signatures as the packets traverse the network. Cisco WAAS examines packets, looking for patterns in 256-byte, 1 KB, 4 KB, and 16 KB increments, and creates a signature for each of those patterns. If a pattern is sent a second time, the first WAAS device replaces the data with its signature. The signature is sent across the WAN link, and the second WAAS device replaces the signature with the original data. This drastically reduces the size of the packet as it crosses the WAN but preserves the original payload between the communicating devices.
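The DRE idea can be illustrated with a toy implementation. The fixed 8-byte chunks and truncated SHA-1 signatures are simplifications for readability; real WAAS uses the pattern sizes listed above and keeps the peers' dictionaries synchronized.

```python
# Toy DRE: replace previously seen data chunks with short signatures;
# the receiving peer expands them back from its dictionary.
import hashlib

def compress(data, dictionary, chunk=8):
    out = []
    for i in range(0, len(data), chunk):
        piece = data[i:i + chunk]
        sig = hashlib.sha1(piece).digest()[:4]  # short signature
        if sig in dictionary:
            out.append(("sig", sig))            # repeat: send signature only
        else:
            dictionary[sig] = piece
            out.append(("raw", piece))          # first time: send the data
    return out

def expand(stream, dictionary):
    result = b""
    for kind, value in stream:
        if kind == "raw":                       # learn new chunks as they arrive
            dictionary[hashlib.sha1(value).digest()[:4]] = value
            result += value
        else:
            result += dictionary[value]
    return result

tx_dict, rx_dict = {}, {}
payload = b"ABCDEFGH" * 4                       # four identical 8-byte chunks
wire = compress(payload, tx_dict)
print([kind for kind, _ in wire])               # ['raw', 'sig', 'sig', 'sig']
print(expand(wire, rx_dict) == payload)         # True
```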

Cisco WAAS and Akamai Connect also provide a method of caching objects locally. Caching repeat content locally shrinks the path between two devices and can reduce latency on chatty applications. For example, the latency between a branch PC and the local object cache (WAAS/Akamai Connect) is 5 ms, which is a shorter delay than waiting for the file to be retrieved from the server, which takes approximately 100 ms. Only the initial file transfer takes the 100 ms delay, and subsequent requests for the same file are provided locally from the cache with a 5 ms response time.
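Using the 100 ms and 5 ms figures above, the benefit of the local cache reduces to simple arithmetic: only the first request pays the WAN round trip.

```python
# Average response time when repeat requests for the same object are served
# from the local cache (5 ms) instead of the remote server (100 ms).

def avg_latency_ms(requests, first_ms=100, cached_ms=5):
    """First request crosses the WAN; the rest hit the local cache."""
    return (first_ms + (requests - 1) * cached_ms) / requests

print(f"{avg_latency_ms(10):.1f} ms")  # ten requests for the same file
```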

Secure Connectivity

A router’s primary goal is to forward packets to a destination. However, corporate routers with direct Internet access need to be configured appropriately to protect them from malicious outsiders so that users may access external content (Internet) but external devices can access only appropriate corporate resources. The following sections describe the components of IWAN’s secure connectivity pillar.

Zone-Based Firewall

Access control lists (ACLs) provide the first capability for filtering network traffic on a router. They control access based on protocol, source IP address, destination IP address, and ports used. Unfortunately, they are stateless and do not inspect packets to detect whether hackers are using a port that they have found open.

Stateful firewalls are capable of looking into Layers 4 through 7 of a network packet and verifying the state of the transmission. Stateful firewalls can detect if a port is being piggybacked and can mitigate distributed denial of service (DDoS) intrusions.

Cisco Zone-Based Firewall (ZBFW) is the latest integrated stateful firewall in Cisco routers that reduces the need for a second security device. Cisco ZBFW uses a zone-based configuration. Router interfaces are assigned to specific security zones, and then interzone traffic is explicitly permitted or denied based on the security policy. This model provides flexibility and overcomes the administration burden on routers with multiple interfaces in the same security zone.
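The zone model can be sketched as a policy lookup. The interface, zone, and action names below are hypothetical; on a Cisco router this is configured with zone, zone-pair, and policy-map commands. The point is that interzone traffic is dropped unless a zone pair explicitly permits it.

```python
# Toy zone-based policy: interfaces belong to zones, and traffic between
# zones is allowed only if a zone pair explicitly permits it.

ZONE_OF = {
    "GigabitEthernet0/0": "INSIDE",
    "GigabitEthernet0/1": "OUTSIDE",
    "GigabitEthernet0/2": "INSIDE",   # many interfaces can share one zone
}

# Permitted (source zone, destination zone) pairs; all else is dropped.
ZONE_PAIRS = {("INSIDE", "OUTSIDE"): "inspect"}

def action(in_iface, out_iface):
    src, dst = ZONE_OF[in_iface], ZONE_OF[out_iface]
    if src == dst:
        return "pass"  # traffic within one zone is permitted by default
    return ZONE_PAIRS.get((src, dst), "drop")

print(action("GigabitEthernet0/0", "GigabitEthernet0/1"))  # inspect
print(action("GigabitEthernet0/1", "GigabitEthernet0/0"))  # drop (no return pair)
```

Adding a new interface to an existing zone immediately applies that zone's policies, which is what eliminates the per-interface administration burden described above.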

Cloud Web Security

Ensuring a consistent security policy for distributed Internet access can cause headaches for the network and information security (InfoSec) engineers who must support it. Most businesses deploy a content security device at each location, resulting in additional hardware cost and a larger fleet of security devices to manage.

Cisco IWAN routers can use distributed Internet access while also providing a consistent, centrally managed security policy by connecting to Cisco Cloud Web Security (CWS). Cisco CWS provides content security with unmatched zero-day threat and malware protection for any organization without the burden of purchasing and managing a dedicated security appliance for each branch location.

In essence, all HTTP/HTTPS traffic exiting a branch over the WAN is redirected (proxied) to one of the closest CWS global data centers. At that time, the access policy is checked based on the requesting user, location, or device to verify proper access to the Internet. All traffic is scanned for potential security threats. This allows organizations to use a distributed Internet access architecture while maintaining the security required by InfoSec engineers.

Software-Defined Networking (SDN) and Software-Defined WAN (SD-WAN)

Managing all the components of a network can be a daunting task. Software-defined networking (SDN) enables organizations to accelerate application deployment and delivery, dramatically reducing IT costs through policy-enabled workflow automation. The SDN technology enables cloud architectures by delivering automated, on-demand application delivery and mobility at scale. It enhances the benefits of DC virtualization, increasing resource flexibility and utilization and reducing infrastructure costs and overhead.

SDN accomplishes these business objectives by consolidating the management of network and application services into centralized, extensible orchestration platforms that can automate the provisioning and configuration of the entire infrastructure. Common centralized IT policies bring together disparate IT groups and workflows. The result is a modern infrastructure that can deliver new applications and services in minutes, rather than days or weeks as was required in the past.

SDN has been primarily focused on the LAN. Software-defined WAN (SD-WAN) is SDN directed strictly toward the WAN. It provides a complete end-to-end solution for the WAN and should remove the complexity of deciding on transports while meeting the needs of the applications that ride on top of it. As applications are deployed, the centralized policy can be updated to accommodate the new application. Combining the Cisco prescriptive IWAN architecture with the Application Policy Infrastructure Controller—Enterprise Module (APIC-EM) simplifies WAN deployments by providing a highly intuitive, policy-based interface that helps IT abstract network complexity and design for business intent. The business policy is automatically translated into network policies that are propagated across the network. This solution enables IT to quickly realize the benefits of an SD-WAN by lowering costs, simplifying IT, increasing security, and optimizing application performance.

Summary

This chapter provided an overview of the technologies and challenges faced by network architects tasked with deploying a WAN for organizations of any size. The Cisco IWAN architecture was developed to deliver an uncompromised user experience over any WAN technology. Network engineers can address the evolution and increase of WAN traffic and provide more bandwidth to their remote sites by

Removing complications with a specific WAN circuit type by providing transport independence with DMVPN tunnels

Increasing application performance and link utilization for all circuits while using intelligent path control (PfRv3)

Implementing optimization technologies that reduce bandwidth consumption across a WAN circuit and enabling a local cache to reduce latency

Migrating from a centralized Internet connectivity model to a distributed Internet connectivity model while maintaining a consistent security policy with Cisco CWS

The Cisco Intelligent WAN solution provides a scalable architecture with an ROI that can be measured in months and not years. In addition to cost savings, improved application responsiveness directly correlates to increased business productivity. The Cisco policy and orchestration tools (APIC-EM) simplify the deployment of services and deliver a complete SD-WAN solution.
