
Domain 2

Communications & Network Security

THE 2012 VERIZON DATA BREACH INVESTIGATION REPORT found that over 97 percent of breaches were avoidable if security measures classified as simple or intermediate had been in place [1].

The communications and network security domain addresses the security concerns related to the critical role of telecommunications and networks in today’s distributed computing environments. The security architect understands the risks to communications networks, whether they are data, voice, or multimedia networks. This includes an understanding of communications processes and protocols, threats and countermeasures, support for organizational growth and operations, and the ability to design, implement, and monitor secure network architectures.

TOPICS

- Determine Communications Architecture
  - Unified communication (e.g., convergence, collaboration, messaging)
  - Content type (e.g., data, voice, video, facsimile)
  - Transport mechanisms (e.g., satellite, landlines, microwave, radio, fiber)
  - Communication topology (e.g., centralized, distributed, cloud, mesh)
- Determine Network Architecture
  - Network types (e.g., public, private, hybrid)
  - Protocols
  - Securing common services (e.g., wireless, e-mail, VoIP)
- Protect Communications and Networks
  - Communication and network policies
  - Boundary protection (e.g., firewalls, VPNs, airgaps)
  - Gateways, routers, switches and architecture (e.g., access control segmentation, out-of-band management, OSI layers)
  - Detection and response
  - Content monitoring, inspection and filtering (e.g., email, web, data)
  - Device control
- Identify Security Design Considerations and Associated Risks
  - Interoperability
  - Auditability (e.g., regulatory, legislative, forensic requirements, segregation, verifiability of high assurance systems)
  - Security configuration (e.g., baselines)
  - Remote access
  - Monitoring (e.g., sensor placement, time reconciliation, span of control, record compatibility)
  - Network configuration (e.g., physical, logical, high availability)
  - Operating environment (e.g., virtualization, cloud computing)
  - Secure sourcing strategy

OBJECTIVES

The security architect must be ever vigilant to recognize the threats and available countermeasures in order to ensure the provisioning of secure communications. Key areas of knowledge include:

- Secure voice and fax communications
- Data communications architecture
- Network topologies
- Network protocols
- Network security devices
- Accountability and monitoring
- Data and network protection
- Telecommunications security management and techniques
- Remote access protocols
- Network design validation

This chapter focuses on the interrelationship between telecommunications and network security. The term telecommunications can have several meanings based on the period in which the term was used. This chapter uses the most modern definition of the term telecommunications, which describes the transmission of voice and facsimile information over both circuit-switched and packet-switched networks.

After discussing voice and facsimile communications and the convergence of packet- and circuit-switched networks, the chapter moves to an overview of voice security, voice protocols, and the various hardware and software that contribute to protecting networks. In concluding this chapter, the focus is on a series of network design issues related to enhancing security and how organizations can configure and validate the security architect’s efforts.

Voice and Facsimile Communications

Both voice and facsimile communications were originally developed to be transported via analog transmission. Although a typical person's voice spans frequencies up to roughly 20 kHz, the frequency range of a communications channel was limited by low-pass and high-pass filters to an approximately 3 kHz passband, with multiple conversations between two locations carried by analog multiplexing, which shifts conversations onto the predefined channels of frequency division multiplexers (FDMs) [1]. FDM was prone to frequency shifting, which required the use of guard bands, limiting the number of channels the technology could transport. Figure 2.1 illustrates a frequency division multiplexer, in which multiple voice conversations are shifted up in frequency, with guard bands of a set width to minimize the effect of voice drift. The entire bandwidth is then output onto a trunk circuit, which enables multiple voice conversations to be carried on a common line between cities. The Y-axis of Figure 2.1 is deliberately omitted, as its values depend on the type of voice channel multiplexed. For example, a 3000 Hz voice channel typically has a 75 Hz guard band, while a 48 kHz wideband voice channel uses a much wider guard band of approximately 1 kHz. Although once commonly used in North America, FDMs are now obsolete due to carriers' conversion to digital technology [2].
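The channel-stacking arithmetic behind an FDM trunk can be sketched in a few lines. This is an illustrative calculation only; the 3000 Hz passband and 75 Hz guard band come from the figures above, while the base frequency is an arbitrary example value.

```python
def fdm_slots(num_channels, base_hz=12000, channel_hz=3000, guard_hz=75):
    """Return (low, high) passband edges for each multiplexed voice channel,
    separated by guard bands to tolerate frequency drift."""
    slots = []
    freq = base_hz
    for _ in range(num_channels):
        slots.append((freq, freq + channel_hz))
        freq += channel_hz + guard_hz  # step past the channel plus its guard band
    return slots

# Four conversations stacked onto one trunk: each occupies a 3 kHz slot,
# with 75 Hz guard bands between adjacent slots.
slots = fdm_slots(4)
```

The guard bands cost bandwidth: four 3 kHz channels occupy 12,225 Hz of trunk spectrum rather than 12,000 Hz, which is exactly the channel-count limitation the text describes.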


Figure 2.1 - Frequency Division Multiplexer (FDM)

Similar to voice communications, facsimile transmission was initially an analog system. Due to the use of low-pass and high-pass filters by communications carriers, facsimile transmission was restricted to the use of an approximate 3 kHz channel, which represented the standard telephone analog bandwidth.

Pulse Code Modulation (PCM) [3]

With the development of the computer during World War II, technology became increasingly focused on the design of digital products. Within a short period of time, pulse code modulation (PCM) was used by the Allies to encrypt voice. PCM represents one of the earliest methods developed to digitize an analog signal, such as a human voice or a facsimile transmission. First, the analog signal is sampled at predefined time intervals. Next, each sample, whose amplitude can take on an infinite number of values, is quantized to the predefined level closest to the height of the signal. Then, the resulting level is encoded into a series of bits. Early PCM systems used 7 bits per quantized value, while more modern systems use 8 bits. Using a sample rate of 8000 samples per second with 8 bits per sample, a voice conversation digitized via PCM results in a data rate of 64 kbps.
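The three PCM steps (sample, quantize, encode) can be sketched by digitizing one second of a tone. Linear quantization is used here for simplicity; real telephony PCM applies mu-law or A-law companding.

```python
import math

SAMPLE_RATE = 8000        # samples per second, the telephony standard
BITS_PER_SAMPLE = 8       # modern PCM uses 8 bits per quantized value

def quantize(sample, bits=BITS_PER_SAMPLE):
    """Map a sample in [-1.0, 1.0] to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits
    code = round((sample + 1.0) / 2.0 * (levels - 1))
    return max(0, min(levels - 1, code))

# One second of a 1 kHz tone, sampled and quantized
codes = [quantize(math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE))
         for n in range(SAMPLE_RATE)]

data_rate = SAMPLE_RATE * BITS_PER_SAMPLE  # 64,000 bps: the DS0 rate
```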

PCM was used by AT&T and other communications carriers to develop a digital highway for transporting calls between telephone company offices. First, 24 voice calls were sampled and encoded into 8 bits each, and a framing bit was added to provide a pattern used for synchronization. This was the well-known T1 frame, which comprises 193 bits (8 × 24 + 1). Because sampling occurs 8000 times per second, the data rate of the now ubiquitous T1 line became 193 bits/frame × 8000 frames/second, or 1.544 Mbps. Figure 2.2 illustrates the format of a T1 line. Note that when structured to hold 24 voice conversations, the T1 line is referred to as a "channelized" T1, while when used to transport data, such as for Internet access, it is referred to as a "nonchannelized" T1 line [4].
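The frame arithmetic above is easy to verify directly:

```python
# Reproducing the T1 arithmetic from the text: 24 channels of 8 bits each,
# plus 1 framing bit, sent 8000 times per second.
CHANNELS = 24
BITS_PER_CHANNEL = 8
FRAMING_BITS = 1
FRAMES_PER_SECOND = 8000

frame_bits = CHANNELS * BITS_PER_CHANNEL + FRAMING_BITS  # 193 bits per frame
t1_rate_bps = frame_bits * FRAMES_PER_SECOND             # 1,544,000 bps = 1.544 Mbps
```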


Figure 2.2 - Forming a channelized T1 frame

Moving up the initial digital highway are the T2 and T3 lines. A T2, used between telephone company offices, consists of four multiplexed T1 lines plus additional framing and operates at 6.312 Mbps, while a T3, used for high-capacity communications, consists of 28 multiplexed T1 lines plus framing and operates at 44.736 Mbps. Table 2.1 provides a summary of the initially developed digital highway in North America [5]. Note that the DS0 (pronounced digital signal level zero) signal level references the basic voice-bandwidth data channel encoded via PCM.

Digital Signal Level/Transmission Facility   Data Rate     Number of DS0s
DS0                                          64 kbps       1
T1                                           1.544 Mbps    24
T2                                           6.312 Mbps    96
T3                                           44.736 Mbps   672

Table 2.1 - The Initial North American Digital Highway
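The rates in Table 2.1 are not simple multiples of the T1 rate, because each multiplexing level adds its own framing. A quick check shows the added overhead:

```python
# Rates from Table 2.1, in bits per second
T1 = 1_544_000
T2 = 6_312_000
T3 = 44_736_000

# A T2 multiplexes 4 T1s and a T3 multiplexes 28 T1s; anything beyond the
# sum of the tributaries is additional framing overhead.
t2_overhead = T2 - 4 * T1    # 136,000 bps of added framing
t3_overhead = T3 - 28 * T1   # 1,504,000 bps of added framing
```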

Circuit-Switched versus Packet-Switched Networks

Due to the relatively high cost of long-distance communications prior to the 1980s, it was expensive to access remote computers. Both dial-up and leased lines were expensive, with dial-up long-distance charges based on the time of day a call occurred, its duration, and the distance between the calling and called telephone numbers. Using dial-up resulted in the telephone company network establishing a series of switched network segments within its infrastructure to connect the caller to the called party. Thus, the term switched or circuit-switched network came to refer to a dialed call. Initially, frequency division multiplexing (FDM) was used to enable multiple calls to flow between telephone company offices. With the development of digital technology, time division multiplexing (TDM) replaced FDM, with signaling software routing DS0s, as 64 kbps PCM data streams, onto and off various TDMs along a path established to link the caller to the called party. Once the circuit-switched path is set up, the digitized voice conversation flows over that path with no loss or interruptions [6].

The high cost associated with the use of the public switched telephone network resulted in the development of a new type of communications that was at first designed to transport data. Referred to as packet switching, vendors such as Telenet and Tymnet established networks consisting of modems in various cities, minicomputers located in those cities, and high-speed communications lines that connected the minicomputers to form a mesh-structured network. A customer would dial a Telenet or Tymnet telephone number to connect a terminal device to the network, then enter an authorization code, which the service provider used both to grant the customer use of the network and for billing. This would be followed by an access code identifying the network-connected resource the customer wished to access. The minicomputer would packetize the data received from each customer in a particular city, placing a series of identifiers in each packet that indicated the packet's source and destination addresses and its sequence number. This enabled packets from different customers to flow over the circuits that formed the backbone of the packet network.

Figure 2.3 illustrates a packet-switched network consisting of many nodes, shown as circles, where initially minicomputers were used for examining packet information and forwarding packets based on the contents of certain packet fields. Later, routers replaced the use of minicomputers in most packet networks.


Figure 2.3 - Using a packet network

In examining Figure 2.3, note that a client is shown dialing into the network. After the client obtains authorization to use the network, packets are examined and a path is established through the packet network, which is indicated by heavy lines, to a destination computer located in Chicago and connected to the network via a leased line. Although the connection shown would be “taken down” once the client or computer completes the session, which is referred to as an “on demand” session, a connection can also be “permanent”; however, other users can have their data transmitted over most or all of the same connection paths as the permanent connection.

The use of packet-switched networks offered certain advantages over the use of the telephone network for transporting data [7]. First, numerous data sources could be routed over common high-speed circuits to either different or the same destination based on the connection desired by users. Second, each packet had its integrity checked via a Cyclic Redundancy Check (CRC) character appended to it. The CRC was computed by treating the data in the packet as a long binary number, dividing that number by a fixed polynomial, discarding the quotient, and keeping the remainder as the CRC. This CRC was referred to as the local CRC because it was computed locally. At the next minicomputer or router, the received packet was buffered and another CRC was computed and compared to the CRC in the packet. If they matched, the packet was forwarded toward its destination. If the two CRCs did not match, the packet was rejected, and the sender was requested to send another copy of the packet.
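The divide-and-keep-the-remainder process described above can be sketched as a bitwise long division. This is an illustrative CRC only, not the exact (reflected, pre-inverted) CRC-16 variant the X.25 link layer actually specifies; the CCITT generator polynomial is used here as an example.

```python
def crc_remainder(data: bytes, poly: int = 0x1021, width: int = 16) -> int:
    """Treat the data as one long binary number, divide it by the generator
    polynomial (modulo 2), discard the quotient, and keep the remainder."""
    reg = 0
    for byte in data:
        for bit in range(7, -1, -1):
            reg = (reg << 1) | ((byte >> bit) & 1)
            if reg >> width:                  # high bit set: "subtract" (XOR) the divisor
                reg ^= (1 << width) | poly
    for _ in range(width):                    # flush so every data bit is divided
        reg <<= 1
        if reg >> width:
            reg ^= (1 << width) | poly
    return reg

packet = b"some packet data"
local_crc = crc_remainder(packet)                        # appended by the sender
assert crc_remainder(packet) == local_crc                # match: forward the packet
assert crc_remainder(b"some packet dbta") != local_crc   # mismatch: reject, retransmit
```

A single corrupted byte is a burst of at most 8 bits, which a 16-bit CRC is guaranteed to detect, hence the second assertion.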

The use of CRCs for error checking on packet networks provides a higher level of data integrity than when asynchronous data is transmitted via the telephone network. This is because most asynchronous communications used parity checking, which cannot detect multiple bit errors commonly caused by machinery turning on or off, electric ballasts, and even sunspots. In comparison, the use of CRC checking reduces the probability of an undetected error to one in tens of millions of bits. Thus, packet networks offer a higher level of data integrity than the telephone network.
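The weakness of parity checking is easy to demonstrate: a single flipped bit changes the parity, but two flipped bits (a typical short noise burst) leave it unchanged.

```python
def even_parity(bits):
    """Parity bit for even parity: 0 if the count of 1s is already even."""
    return sum(bits) % 2

word = [1, 0, 1, 1, 0, 1, 0]
parity = even_parity(word)

single_error = word[:]
single_error[2] ^= 1                         # flip one bit
assert even_parity(single_error) != parity   # detected

double_error = word[:]
double_error[2] ^= 1
double_error[5] ^= 1                         # flip two bits
assert even_parity(double_error) == parity   # passes undetected
```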

Other features common to early packet networks included reverse charges, which was similar to a collect call, and alternate routing. Concerning the latter, if a packet network node, typically a minicomputer, failed or a circuit linking two nodes became inoperative, the network would use a series of predefined algorithms to route around the impediments. Once the problem was fixed, the alternate routing would terminate. Today, alternate routing is built into most of the routers on the Internet, enabling traffic to be moved around bottlenecks caused by both line outages and high occupancy without the user being able to tell they are on an alternate route, unless they use a traffic routing display program, such as traceroute, to determine the path from source to destination.

Although packet networks have significant advantages over circuit-switched networks, they also have disadvantages. Foremost among them was the delay resulting from the need to retransmit packets because of CRC mismatches caused by spurious hits on circuits, resulting primarily from machinery and weather conditions. Fortunately, most impairments were due to the use of high-speed analog circuits by packet network operators during the 1970s; as fiber-optic cables began to interconnect cities, their use significantly reduced the error rate associated with the older analog connections. Another problem associated with packet networks is data loss. Unlike circuit switching, which establishes a dedicated connection between source and destination that prevents data loss, packet networks will drop packets as they become overloaded. This is because network engineers size transmission facilities to maximize revenue at minimum cost, knowing that a dropped packet will be retransmitted by the originator if a response (negative or positive acknowledgment) from the next node is not received within a predefined period of time.

The development of packet networks was based primarily on economics. In their prime, they could transport data from New York to San Francisco for the equivalent of 30 cents per minute, while a long-distance call might cost well over $1 per minute. However, as the cost of telephone calls decreased, packet networks lost much of their price advantage, and most of the initial packet networks shut down in the late 1980s.

The packet networks previously described were based on the X.25 protocol and were often referred to as X.25 networks [8]. Their development paved the way for the growth of a new type of packet network based on the TCP/IP protocol suite, on which the Internet is built. Although originally developed to convey data between computers, advances in a series of technologies resulted in the transmission of digitized voice along with data, producing the convergence of voice and data on a common network.

VoIP Architecture Concerns

There are several key areas of concern in the development of a network architecture designed to move digitized voice over a packet network originally developed to transport data. Those concerns include the end-to-end delay associated with packets carrying digitized voice, jitter, the method of voice digitization used, the packet loss rate, and security.

End-to-End Delay

The end-to-end delay affects the ability of a user to know when the person at the other end of the connection has finished speaking. When the end-to-end delay is too long, a normal conversational pause becomes so noticeable that the other party may begin to speak when the talker has merely paused, resulting in a disjointed conversation. Table 2.2 compares five important characteristics of circuit-switched and packet-switched networks.


Table 2.2 - Comparing Circuit-Switched and Packet-Switched Network Characteristics

Jitter

Jitter represents the variation in packet transit delay caused by queuing, contention, and the propagation of data through a network [9]. In general, the telephone network provides a fixed path for the transmission of data on an end-to-end basis, resulting in a near-uniform amount of jitter arising primarily from transmission propagation. In comparison, on a packet network, where multiple data sources can contend for transmission on a common backbone circuit, heavily congested links can result in variable jitter. This in turn can make the reconstructed voice sound awkward. To compensate for jitter, most VoIP devices employ a jitter buffer, allowing data arriving with different delays to be played out uniformly with respect to time.
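A jitter buffer can be sketched as a fixed playout delay: playback of packet i is scheduled at a uniform interval behind the first arrival, and anything arriving after its slot is treated as lost. All numbers here are illustrative.

```python
PACKET_INTERVAL_MS = 20   # one voice packet every 20 ms
BUFFER_MS = 60            # playout delay absorbs up to 60 ms of jitter

def playout_schedule(arrivals_ms):
    """arrivals_ms[i] is the arrival time of packet i (sent at i * 20 ms).
    Returns the uniform playout time per packet, or None if it arrived late."""
    base = arrivals_ms[0] + BUFFER_MS
    schedule = []
    for i, arrival in enumerate(arrivals_ms):
        slot = base + i * PACKET_INTERVAL_MS
        schedule.append(slot if arrival <= slot else None)
    return schedule

# Arrivals wobble by tens of milliseconds (packet 3 is badly delayed),
# yet the surviving packets play out at perfectly uniform 20 ms intervals.
schedule = playout_schedule([100, 135, 128, 245, 195])
```

A larger BUFFER_MS absorbs more jitter at the cost of added end-to-end delay, which is exactly the trade-off between the two VoIP concerns discussed in this section.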

Method of Voice Digitization Used

Currently, over ten voice digitization methods are used to encode a voice conversation. While it is fairly obvious that the decoding method must match the encoding method, less obvious but very important is the data rate the chosen encoding method produces. Voice-encoding methods range in scope from the 64 kbps data stream generated via PCM encoding to more modern methods that require as little as 2400 bps. While, in general, a lower encoding rate enables more voice conversations to be transported over a packet network, there is usually a reduction in the quality of the reconstructed voice when an encoding method generates a lower digitization rate, especially when the rate falls below 8000 bps.

Packet Loss Rate

A packet network experiences peaks and valleys with respect to packet flow, similar to a highway. However, instead of a traffic backup occurring when too many vehicles enter the roadway, on a modern packet network routers will drop packets. Although significant improvements have occurred in the selection of which packets are dropped, due to various expedited traffic flow methods, packets transporting voice will periodically be dropped along with data packets. While data packets can be retransmitted without adversely affecting data integrity, packets transporting digitized voice cannot be retransmitted if a real-time conversation is in effect. Thus, dropping too many packets transporting digitized voice will adversely affect a conversation.

Security

Perhaps an often-overlooked voice architecture concern is security. After all, many people feel that if their voice conversation somehow becomes known to others, minimal harm will result. While this may be true, it may be possible for an unauthorized party to take control of a PBX, router, or voice server; the unauthorized party can then use the company's hardware to dial international destinations or other pay numbers and run up an expensive communications bill that could endanger the organization's financial health. While the preceding just touches the surface of voice security concerns, the following sections discuss some applicable policies and procedures.

Voice Security Policies and Procedures

There are several areas associated with voice that security architects must consider. Those areas include encryption, administrative change control, authentication, integrity, and availability.

Encryption

The use of encryption is significantly different when voice is transmitted via packets instead of being digitized and sent over a circuit-switched network. When voice is encrypted on a circuit-switched telephone network, the encryption and decryption process occurs on an analog waveform. Although the voice conversation is digitized, digitization occurs at the telephone company central office, and the subscriber line to the central office transports voice in analog form. To accomplish encryption, portions of the frequency spectrum are shifted through the use of expensive filters. In comparison, when voice is transmitted over a packet network, there are two significant differences. First, each packet represents a digitized bit stream, so digital encryptors can be used. Second, encryption cannot cover the full packet, as each packet header contains one or more routing fields, as well as other data, that routers within the network must be able to examine and act upon. This limits encryption to hardware and software products specifically developed to operate on packet data.


Figure 2.4 - Encryption and Decryption

Figure 2.4 illustrates the ease with which digital data, including digitized voice, can be encrypted via simple modulo 2 addition and modulo 2 subtraction (both of which are the exclusive-OR operation). At the top of the referenced figure, the encryption process is shown: the data to be encrypted is modulo 2 added to a pseudo-random key, and the resulting encrypted data is transmitted. In the lower portion of Figure 2.4, the decryption process is shown: the same pseudo-random key is modulo 2 subtracted from the received encrypted data to reproduce the original data [10].
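Because modulo 2 addition and modulo 2 subtraction are both the exclusive-OR operation, one function serves for encryption and decryption alike. The keystream below is a toy stand-in for a real pseudo-random generator:

```python
def xor_with_keystream(data: bytes, keystream: bytes) -> bytes:
    """Modulo 2 add (XOR) each data byte with the matching keystream byte."""
    return bytes(d ^ k for d, k in zip(data, keystream))

plaintext = b"digitized voice frame"
# Toy keystream for illustration only -- NOT cryptographically secure
keystream = bytes((17 * i + 91) % 256 for i in range(len(plaintext)))

ciphertext = xor_with_keystream(plaintext, keystream)   # modulo 2 addition
recovered = xor_with_keystream(ciphertext, keystream)   # modulo 2 subtraction
assert recovered == plaintext
assert ciphertext != plaintext
```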

Through the use of encryption, it becomes very difficult for an unauthorized third party to listen in on voice conversations. Thus, if someone were able to tap into a circuit connecting a business office to a packet network, the unauthorized party would normally not be able to learn, for example, that the company was prepared to bid up to $4 million on a project. However, an adversary with the resources of the NSA might be able to decrypt the voice conversation and determine what was said, which explains why, between the two Gulf wars, Iraq transferred most of its military communications onto fiber-optic lines in place of microwave towers, which were relatively easy to monitor [11].

Authentication

Authentication represents the process of determining whether someone or something is, in fact, who or what it is declared to be. In voice communications, one can use Caller ID to authenticate the calling number and, if known, the person's voice to authenticate the calling party. In the data world, authentication is commonly implemented through the use of logon passwords, where knowledge of the password is assumed to guarantee that the user is authentic. Within the past few years, two-factor authentication has gained prominence. In this technique, a user has a key fob or similar display device that changes its numeric display every minute or so. The user must enter both the number displayed on the device and a "secret" password to gain access to the computer or network; hence, the term two-factor authentication.
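The changing number on the key fob can be sketched with the HOTP/TOTP idea: an HMAC over the current time interval, truncated to six digits. The 60-second step and the shared secret below are illustrative assumptions, not any particular vendor's scheme.

```python
import hmac, hashlib, struct

def token_code(secret: bytes, t_seconds: float, step: int = 60, digits: int = 6) -> str:
    """Derive the displayed code for the time interval containing t_seconds."""
    counter = int(t_seconds // step)                     # which 60-second window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation, as in HOTP
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

secret = b"shared-seed"      # provisioned into both the fob and the server
code = token_code(secret, 0.0)
# Fob and server agree within the same 60 s window; the user supplies this
# code ("something you have") plus a password ("something you know").
assert code == token_code(secret, 59.0)
```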

Administrative Change Control

The process of administrative change control in a voice security environment refers to examining hardware and software to locate and modify the default settings that control access to a product's administrative controls. Most security architects have used Wi-Fi in the home, in the office, or as a "road warrior." The transmission between the user's portable device and the Internet via an Internet Service Provider (ISP) occurs through a wireless router. That wireless router has a set of administrative controls that govern the type of data traffic permitted, the hours when such traffic can flow to the Internet, the key that governs encryption, and other settings. All too often, routers have a default administrative password that is printed in the manual and never changed by the administrator. This allows a third party to simply gather a list of default passwords from the vendor manuals located on the Internet and try one after another to gain control of the router. Once control is achieved, the unauthorized person, depending on the router's capability, may be able to mirror the data traffic flowing through the router to another location for analysis, in a manner totally transparent to the router's users. Similarly, in a voice environment, and even for large Web servers and other types of computer and communications hardware, many products are shipped with default passwords, and changing them should be the first item on the security architect's list when configuring such equipment.

Integrity

In a communications environment, it is important that what a person says be received correctly. Integrity refers to communications being received as sent. In a voice environment, several mechanisms can result in a loss of integrity, including the recording and selective replay of a conversation, spoofing (someone pretending to be a person he or she is not), and the injection of speech into an existing conversation to distort its meaning. Although integrity is rarely compromised, it represents a threat that must be considered by the security architect.

Availability

In the field of communications, the term availability refers to the period of time that a system, subsystem, or circuit is operable and can function to perform its mission. As an example of availability, consider a voice answering system that is operational for 8,750 of the 8,760 hours in a year, i.e., 10 hours of downtime. Its availability then becomes

A = Uptime / (Uptime + Downtime) = 8750 / (8750 + 10) ≈ 0.9989, or approximately 99.89 percent
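The calculation is easily checked; the figures below come from the example in the text (8,750 hours of uptime against 10 hours of downtime):

```python
def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Fraction of time the system is operable."""
    return uptime_hours / (uptime_hours + downtime_hours)

a = availability(8750, 10)
# About 0.99886 -- roughly "three nines" of availability
assert 0.9988 < a < 0.9989
```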

Voice Protocols

While numerous voice protocols have attained a degree of prominence, this section will focus on one umbrella protocol and two signaling protocols. The umbrella protocol is the H.323 Recommendation, which defines a series of protocols to support audiovisual communications on packet networks. Session Initiation Protocol (SIP) defines the signaling required to set up and tear down communications, including voice and video calls, flowing over a packet network. The third voice protocol discussed is Signaling System 7 (SS7), the signaling protocol originally used for establishing and tearing down calls made over the world's public switched telephone networks. To make a call over a packet network such as the Internet, however, SS7 information must still be conveyed, which occurs by transporting SS7 over the Internet Protocol (IP).

The H.323 Protocol [12]

The H.323 standard can be considered an umbrella recommendation from the International Telecommunications Union (ITU) that covers a variety of standards for audio, video, and data communications across packet-based networks and, more specifically, IP-based networks such as the Internet and corporate intranets. The H.323 standard was specified within the ITU Telecommunication Standardization Sector by Study Group 16. The original standard was promulgated in 1996, and further enhancements have been developed in the intervening years [13].

One of the functions of H.323 is to define standards for multimedia communications over local area networks (LANs) that do not provide a guaranteed quality of service (QoS). Such networks represent the vast majority of corporate desktop connectivity and include packet-switched TCP/IP over Fast Ethernet, Gigabit Ethernet, and the now-obsolete Token Ring network technologies. Thus, the H.323 standards represent important building blocks for a broad range of collaborative, LAN-based applications for multimedia communications. This umbrella standard includes parts of H.225.0, including RAS (Registration, Admission, and Status), Q.931, H.245, and RTP/RTCP (Real-Time Transport Protocol/Real-Time Control Protocol).

H.225 is a call signaling protocol for packet-based multimedia communication systems [14]. RAS, as its name implies, is concerned with registration, admission, and status. Q.931 is ISDN's connection control protocol, which is roughly comparable to TCP in the Internet protocol stack. Q.931 does not provide flow control or perform retransmission, because the underlying layers are assumed to be reliable and the circuit-oriented nature of ISDN allocates bandwidth in fixed increments of 64 kbps. However, Q.931 manages the connection setup and teardown process [15].

H.245 is a control signaling protocol in the H.323 multimedia communication architecture, used for the exchange of end-to-end H.245 messages between communicating H.323 endpoints/terminals. H.245 control messages are carried over an H.245 control channel, with logical channel 0 permanently open, unlike media channels. Messages carried include exchanges of terminal capabilities as well as the opening and closing of logical channels. After a connection has been set up via the call signaling procedure, the H.245 call control protocol is used to resolve the call media type and establish the media flow [16].

RTP defines a standardized packet format for delivering audio and video over the Internet, while RTCP provides out-of-band control information for an RTP flow [17]. The RTP header includes fields carrying a sequence number, a time stamp (useful in ensuring that playback at a receiver occurs correctly), a synchronization source identifier, and a field that can define up to 15 contributing sources, which enables a conference with many participants to be held. RTCP partners with RTP in the delivery and packaging of multimedia data but does not transport any data itself. RTCP is used to periodically transmit control packets to participants in a streaming multimedia session. Thus, the primary function of RTCP is to provide feedback on the QoS provided by RTP. One of the key features of RTCP is its statistics-gathering capability: it gathers statistics such as bytes sent, packets sent, lost packets, packet jitter, and round-trip delay, which an application can use to perform different functions, such as improving QoS by limiting data flow or selecting a different codec. As previously mentioned, media streams are transported via RTP/RTCP: RTP carries the actual media, and RTCP carries status and control information. Signaling is transported reliably over TCP. The H.323 standard defines the following components:
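The fixed RTP header fields just described can be packed directly. The layout follows RFC 3550; the sequence number, timestamp, SSRC, and payload type values below are arbitrary examples.

```python
import struct

def rtp_header(seq: int, timestamp: int, ssrc: int, payload_type: int = 0) -> bytes:
    """Build the 12-byte fixed RTP header (version 2, no padding/extension/CSRCs)."""
    version, padding, extension, csrc_count, marker = 2, 0, 0, 0, 0
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (marker << 7) | payload_type
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

header = rtp_header(seq=1, timestamp=160, ssrc=0x1234ABCD)
assert len(header) == 12        # fixed RTP header size
assert header[0] >> 6 == 2      # RTP version 2
```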

- Terminal
- Gateway
- Gatekeeper
- MCU (multipoint control unit)
- Multipoint controller
- Multipoint processor
- H.323 proxy

Terminal

An H.323 terminal (client) represents an endpoint in a LAN that participates in real-time, two-way communications with another H.323 terminal, gateway, or multipoint control unit (MCU). Under the H.323 standard, a terminal must support audio communication and can also support audio with video, audio with data, or a combination of all three.

Gateway

An H.323 gateway (GW) provides the physical and logical connections from a packet-switched network to and from circuit-switched networks. The gateway can perform a variety of functions, such as translation between H.323 conferencing endpoints on a LAN and other compliant terminals on other ITU-compliant circuit-switched and packet-switched networks. Such services include a translation between transmission formats and communications procedures. A gateway may also be required to perform the translation between audio and video CODECs as well as perform call setup and call clearing operations.

Gatekeeper

Gatekeepers are optional devices within an H.323 network. When present, they perform three important call control housekeeping functions, which assist in preserving the integrity of the packet network: admission control, address translation, and bandwidth management. Address translation is used to associate the LAN aliases of terminals and gateways with IP or IPX addresses. Under bandwidth management, the gatekeeper can be configured to permit a maximum number of simultaneous conferences on a LAN. Once that limit is reached, the gatekeeper refuses additional connection requests. The result of this action limits the bandwidth of voice or video to a predefined fraction of the total bandwidth available, with the rest left for Web surfing, file transfers, e-mail, and other data applications.

MCU

A Multipoint Control Unit (MCU) represents an endpoint on a LAN that provides the capability for three or more terminals and gateways to participate in a multipoint conference. It controls and mixes video, audio, and data from terminal devices to create a video conference. An MCU can also connect two terminals in a point-to-point conference that can later develop into a multipoint conference. The collection of all terminals, gateways, and multipoint control units managed by a single gatekeeper is known as an H.323 Zone.

Multipoint Controller

A multipoint controller that is H.323 compliant negotiates capabilities with terminals to carry out different types of communications. The multipoint controller can also control conference resources, such as video multicasting.

Figure 2.5 illustrates an H.323 zone that is connected via a gateway to other LANs or terminal devices via the public switched telephone network.

Images

Figure 2.5 - An H.323 zone communicating with other devices.

Network Calling

The various H.323 components illustrated in Figure 2.5 show three PCs with voice cards as H.323 terminal devices. All three are connected to a common Ethernet LAN. The LAN is in turn connected to the switched public telephone network, which enables call originating on the top network to be routed over the public switched telephone network to the client’s other LANs or to other voice and video terminal devices.

SIP

The Session Initiation Protocol (SIP) represents an application layer signaling protocol that enables telephony and VoIP services to be delivered over a packet network. This protocol is used for establishing, manipulating, and tearing down sessions in an IP network. A session can vary from a simple two-way telephone call to a collaborative multimedia conference. The ability to establish a variety of calls allows a number of innovative services to be developed, such as Web page click-to-dial, instant messaging with buddy lists, and IP Centrex services. The major goal of SIP is to assist session originators in delivering invitations to potential session participants wherever they may be. SIP was modeled after the Hypertext Transfer Protocol (HTTP), using Uniform Resource Locators (URLs) for addressing and the Session Description Protocol (SDP) to convey session information.

SIP is a text-based protocol that uses UTF-8 encoding, transmitting on port 5060 for both UDP and TCP. SIP supports such common Internet telephony features as calling, media transfer, multiuser conference calling, call holding, call transfer, and call termination.

SIP uses an “invite” message to create sessions that transport descriptions which allow participants to agree on a set of compatible media types. During the negotiation process, SIP recognizes that not all parties support the same features; thus, SIP negotiates a common set of features that all of the parties can support. In addition, SIP can issue a “reinvite” message to change an established session and a “cancel” message to cancel an invite.

SIP makes use of proxy servers to help route requests to a user’s current location, authenticate and authorize users for services, implement provider call-routing policies, and provide numerous other features to users. SIP also provides a registration function that allows users to upload their current locations for use by proxy servers. This enables a call to reach a called party wherever he or she is located. Once a session is established, SIP can be used to terminate the session through the use of a “bye” message, which hangs up the session18.
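Because SIP is text based, its messages are easy to inspect. The fragment below builds a minimal, purely hypothetical INVITE (the addresses, tag, and Call-ID are invented for illustration) and splits it into its method, request-URI, and header fields:

```python
# A minimal, hypothetical SIP INVITE; hosts, tag, and Call-ID are illustrative.
invite = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP host.example.com:5060\r\n"
    "From: <sip:alice@example.com>;tag=1928301774\r\n"
    "To: <sip:bob@example.com>\r\n"
    "Call-ID: a84b4c76e66710\r\n"
    "CSeq: 314159 INVITE\r\n"
    "Content-Type: application/sdp\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

def parse_sip_request(message: str):
    """Split a SIP request into its method, request-URI, and header fields."""
    head, _, _body = message.partition("\r\n\r\n")   # body would carry the SDP
    lines = head.split("\r\n")
    method, uri, _version = lines[0].split(" ", 2)   # e.g. INVITE <uri> SIP/2.0
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return method, uri, headers
```

The same start-line-plus-headers shape applies to the "reinvite" (another INVITE within a dialog), CANCEL, and BYE messages discussed above, which is why SIP messages are easy to read in a packet capture but larger on the wire than H.323's binary encoding.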

Comparing H.323 with SIP

A comparison of the H.323 protocol to SIP underscores the considerable difference between the two protocols. The H.323 protocol defines a unified system to support multimedia communications over IP networks, providing support for audio, video, and even data conferencing. Within the umbrella protocol, H.323 defines methods for handling device failures, such as using alternative gatekeepers and endpoints, and messages are encoded in binary. In comparison, SIP was developed to initiate a call, referred to as a session, between two devices and has no support for multimedia conferencing. In addition, SIP does not define procedures for handling device failures, and messages are encoded in ASCII text. The latter results in larger messages that are less suitable for use on low-bandwidth circuits, but are easier to interpret than the binary messages associated with the H.323 protocol.

SS719

SS7, a mnemonic for Signaling System No. 7, represents a global telecommunications standard defined by the ITU. This standard defines the manner in which public switched telephone networks (PSTNs) perform call setup and breakdown, routing, and control by exchanging signaling information over a digital signaling network that is separate from the network that actually transports calls. SS7 supports both landline or hardwired calls and cellular or mobile calls, with the latter including subscriber authentication and wireless roaming. Through the use of SS7, such enhanced features as call forwarding, caller identification, and three-way calling become possible. In addition, such products as toll-free calling via the 800, 888, 877, and other prefixes, as well as toll services via the well-known 900 prefix, become possible.

Although the PSTN was at one time restricted to circuit-switched technology, over the past decade telephone companies have moved a considerable amount of traffic onto their own internal IP networks. Using VoIP, telephone companies have saved considerable funds because the use of packet-switched technology and better voice digitization techniques permits more conversations to be transported per unit of bandwidth.

Because calls originated over the PSTN can be transported over IP, a method was required to transport signaling information over an IP network. That method is referred to as SS7-over-IP and employs protocols defined by the Signaling Transport (sigtran) working group of the Internet Engineering Task Force (IETF), the international organization responsible for recommending Internet standards. The actual conversion of SS7 signals to packets transported via IP is performed by a signaling gateway. The signaling gateway can perform such functions as terminating SS7 signaling or translating and relaying messages over the IP network to a media gateway, media gateway controller, or another signaling gateway. Due to their critical role in integrated voice networks, signaling gateways are often deployed in groups of two or more to ensure high availability.

The function of the media gateway is to terminate voice calls originating on interswitch trunks from the public switched telephone network, compress and packetize the voice data, and deliver compressed voice packets to the IP network. For voice calls originating in an IP network, the media gateway performs these functions in reverse order. For ISDN calls from the PSTN, Q.931 signaling information is transported from the media gateway to the media gateway controller for call processing. In comparison, the media gateway controller handles registration and management of resources at the media gateways. A media gateway controller exchanges messages with the PSTN central office switches via a signaling gateway. Because media gateway controllers are often created on a computer platform through the use of off-the-shelf software, a media gateway controller is sometimes referred to as a softswitch.

Facsimile Security

When discussing modern facsimile transmission, the Group 3 Facsimile Protocol (G3)20 is actually being discussed. G3 dates to 1980, when the International Telecommunications Union published its initial set of standards. Those standards included T.4, which specifies the image transfer protocol, and T.30, which specifies session management procedures that support the establishment of a fax transmission21.

Because there are over 100 million G3-compatible facsimile devices in use around the world, the ability of one device to communicate with another is provided by the G3 protocol. While this provides worldwide compatibility, it also results in a number of security-related problems. Those can range from the lack of a policy defining the use of facsimile devices to the failure to use a cover sheet that specifies the sender, the recipient, and the number of pages “faxed.” Two of the major facsimile security-related problems are the failure to verify the facsimile number dialed, which can result in a misdirected fax, and the failure to configure the local facsimile device to print a confirmation of delivery that includes the number of pages transmitted and the receiving telephone number. Other facsimile security-related issues include having a secure location for a fax device and ensuring that incoming faxes are delivered to the correct recipient.

By itself, the G3 standard does not directly deal with security. Although a modified Huffman coding is employed to reduce transmission time of each scanned line, anyone who has the knowledge to tap a transmission can more than likely decode the transmission. Because the transmission of a fax is not secure, there are military standards that govern the encryption of fax transmission. In addition, because a fax machine radiates energy at certain frequencies that could be “read” by an unauthorized party, most military facsimile devices are “Tempest”-hardened by placing them in a secure area that is shielded from emitting frequencies that an uninvited third party sitting in a van in a parking lot might “read”22. Table 2.3 provides a sample TEMPEST Separation Matrix.

Images

Table 2.3 - TEMPEST Separation Matrix

Network Architecture

This section focuses on an examination of network architecture and terminology. Doing so will provide the security architect with the ability to better understand methods they can use to control and secure network facilities.

Redundancy and Availability

From a network engineering perspective, redundancy represents the duplication of circuits and equipment, with the goal that the additional components result in an increase in network availability. For example, an Internet service provider might connect its hub in one city to peering points at two different locations. Thus, if the connection to one peering point should become inoperative, data flow to and from the Internet could continue via the second peering point.

Internet versus Intranet

Most end users have little control over the network architecture of the Internet, with the exception of their access method. Concerning the latter, it is common for organizations to have multiple ISPs, because the failure of one vendor’s network would usually not affect the second vendor. To take full advantage of redundant vendors, one would ensure that the connection from the organization to each vendor occurs over different communication facilities. Figure 2.6 illustrates the use of two ISPs to provide redundant communications to the Internet from a customer premises.

Images

Figure 2.6 - Using two ISPs for redundant access to the Internet

Extranet

An extranet is a private network that, while resembling an intranet, extends the internal IP-based network of an organization to suppliers, vendors, and other types of business partners. Because an extranet is created by one or more organizations for their exclusive use, those organizations can control the network architecture of the extranet. Thus, they can order network circuits as well as equipment such as routers, DNS servers, and other devices to match the level of reliability and availability they both desire and can afford.

Network Types

The use of varied network architectures can benefit the security architect as they strive to create the appropriate balance for remote user connectivity, federation with one or more partner or vendor entities, as well as secure internal access to resources. The three main architectures traditionally identified are:

Images Private Networks

Images Public Networks

Images Hybrid Networks

Private networks are usually associated with internal-only access to data and resources for employees of a company. These networks are made available to physical endpoints through the use of private IP address schemes based on the Internet Assigned Numbers Authority (IANA) having reserved the following three blocks of the IPv4 address space for private internets:23

10.0.0.0   -   10.255.255.255 (10/8 prefix)

172.16.0.0   -   172.31.255.255 (172.16/12 prefix)

192.168.0.0   -   192.168.255.255 (192.168/16 prefix)
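A quick way to test whether an address falls within one of these reserved blocks is Python's standard ipaddress module; the helper below is a simple sketch of that check.

```python
import ipaddress

# The three reserved private IPv4 blocks listed above.
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private_ipv4(address: str) -> bool:
    """Return True if the IPv4 address falls in a reserved private block."""
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in PRIVATE_BLOCKS)
```

Note that the 172.16/12 block ends at 172.31.255.255, so 172.32.0.1 is a routable public address even though it looks superficially similar.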

Internet Protocol Version 6 (IPv6) is the latest revision of the Internet Protocol (IP). It is intended to replace IPv4, which still carries the vast majority of Internet traffic as of 2013. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion. IPv6 uses a 128-bit address, allowing for 2^128, or approximately 3.4×10^38, addresses, which is more than 7.9×10^28 times as many as IPv4, which uses 32-bit addresses and allows for only approximately 4.3 billion addresses.

IPv6 addresses consist of eight groups of four hexadecimal digits separated by colons, for example 2013:0db8:72b3:0082:1090:3c6b:0547:7264.

The hexadecimal digits are not case-sensitive; e.g., the groups 0DB8 and 0db8 are equivalent.

IPv6 is described in Internet standard document RFC 2460, published in December 199824. In addition to offering more addresses, IPv6 also implements features not present in IPv4. It simplifies aspects of address assignment (stateless address autoconfiguration), network renumbering and router announcements when changing network connectivity providers. It simplifies processing of packets by routers by placing the need for packet fragmentation into the end points. The IPv6 subnet size has been standardized by fixing the size of the host identifier portion of an address to 64 bits to facilitate an automatic mechanism for forming the host identifier from link-layer media addressing information (MAC address). Network security is also integrated into the design of the IPv6 architecture, including the option of IPsec.

An IPv6 address may be abbreviated by using one or more of the following rules: (Initial address: 2013:0db8:0000:0000:0000:ff00:0026:9734)

  1. Remove the leading zeroes from one or more groups of hexadecimal digits; this is usually done either to all of the leading zeroes or to none of them. (For example, convert the group 0026 to 26.)

  2. Omit one or more consecutive sections of zeroes, using a double colon (::) to denote the omitted sections. The double colon may only be used once in any given address, as the address would be indeterminate if the double colon was used multiple times. (For example, 2013:db8::1:2 is valid, but 2013:db8::1::2 is not permitted.)

The following are the text representations of these addresses:25

Initial address: 2013:0db8:0000:0000:0000:ff00:0026:9734

After removing all leading zeroes: 2013:db8:0:0:0:ff00:26:9734

After omitting consecutive sections of zeroes: 2013:0db8::ff00:0026:9734

After doing both: 2013:db8::ff00:26:9734
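The two abbreviation rules can be verified with Python's standard ipaddress module, which applies both rules when producing the compressed form:

```python
import ipaddress

# The worked example above, expanded and compressed with the standard library.
initial = "2013:0db8:0000:0000:0000:ff00:0026:9734"
addr = ipaddress.IPv6Address(initial)

compressed = addr.compressed   # both rules applied: zeroes stripped, :: inserted
expanded = addr.exploded       # full form, with all leading zeroes restored
```

Running this yields the "after doing both" form shown above, and exploding the compressed form recovers the original, confirming that the abbreviation is lossless.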

IPv6 addresses are classified by three types of networking methodologies: unicast addresses identify each network interface, anycast addresses identify a group of interfaces, usually at different locations of which the nearest one is automatically selected, and multicast addresses are used to deliver one packet to many interfaces. The broadcast method is not implemented in IPv6. Each IPv6 address has a scope, which specifies in which part of the network it is valid and unique. Some addresses are unique only on the local (sub-)network. Others are globally unique.

Some IPv6 addresses are reserved for special purposes, such as loopback, 6to4 tunneling, and Teredo tunneling, as outlined in RFC 515626. Also, some address ranges are considered special, such as link-local addresses for use on the local link only, Unique Local addresses (ULA), as described in RFC 419327, and solicited-node multicast addresses used in the Neighbor Discovery Protocol.

Public networks are made up of computers that are connected to each other to create a network. These networks are often configured with “public” Internet Protocol (IP) addresses -- that is, the devices on the network are “visible” to devices outside the network (from the Internet or another network).

Computers on a public network have both the advantage and the disadvantage of being completely visible to the Internet. As such, they have no boundaries between themselves and the rest of the Internet community. This visibility frequently becomes a distinct disadvantage, since it can lead to the exploitation of vulnerabilities if the devices on the public network have not been properly secured.

Hybrid networks use a combination of any two or more topologies to create a network design that leverages the best elements of the topologies being combined.

Newer architectural approaches that security architects also need to be able to address are the various service architectures available through the “cloud”. The most common ones discussed are:

Images SaaS

Images PaaS

Images IaaS

The Software-as-a-Service (SaaS) service model involves the cloud provider installing and operating application software in the cloud, which users run from their endpoint clients over the Internet. The users’ client machines require no installation of any application-specific software, because cloud applications run on servers in the cloud. The cloud users do not manage the cloud infrastructure and platform on which the application is running. This eliminates the need to install and run the application on the cloud user’s own computers, simplifying maintenance and support.

Platform as a Service (PaaS) is a cloud computing service that provides end users with application platforms and databases as a service. This is the equivalent to middleware in the traditional (non-cloud computing) delivery of application platforms and databases. In the PaaS model, cloud providers deliver a computing platform typically including operating system, programming language execution environment, database, and web server.

Infrastructure as a Service (IaaS) takes the physical hardware and goes completely virtual. IaaS providers offer computers, as physical or more often as virtual machines, and other resources to customers on a fee-based schedule. To deploy their applications, cloud users install operating system images and their application software on the cloud infrastructure. In this model, it is the cloud user who is responsible for patching and maintaining the operating systems and application software.

Perimeter Controls

Products that can be used to control the flow of data at the entryway to the network are referred to as perimeter controls; devices that can be used include routers, firewalls, and special types of modems. The place at the network perimeter where such equipment is commonly installed is referred to as the network demilitarized zone (DMZ).

Images

Figure 2.7 - Creating a DMZ

Figure 2.7 illustrates a common architecture for a corporate DMZ. In this example, a router provides a connection to the Internet while the firewall, which is sometimes referred to as a corporate gateway, resides between two LANs, one of which has a router as its only device while the second LAN is populated by terminals, routers, various types of servers such as e-mail servers or gateways, Web servers and VPN servers, and other networking devices protected by the firewall. This architecture ensures that all data flow to and from the corporate network and Internet are examined by the firewall. Although the devices behind the firewall are protected from many types of attacks, this architecture does not protect devices from persons using USB memory devices to off-load corporate data from computers and servers. This is why many organizations prohibit the use of USB devices, and use special software to deactivate USB ports.

In Figure 2.8, a revised corporate DMZ is shown. In this example, a bank of security modems was added to the upper LAN, between the firewall and the corporate terminals, servers, and other devices on the protected network. To better understand how each device operates, let’s review the operation of each of the three communications devices, with particular emphasis on their security role.

Images

Figure 2.8 - Boundary router

In addition to providing a communications capability that takes LAN frames and strips the header and trailer to convert them to IP datagrams for transport on the Internet, routers have a key role as the first line of defense in many organizations. Through the use of rule-based access lists, it becomes possible to filter packets based on a variety of data carried in the packet. Although most packet filtering occurs on the fields within a packet header, some boundary routers extend filtering into the packet, making it difficult to functionally separate the security features of a router from a firewall.

Figure 2.9 illustrates the delivery of TCP/IP application data onto a LAN. Note that as data delivery occurs, a string of headers is appended to the application data. Each header has a number of fields that can be examined by a router or firewall that can allow or deny the flow of data based on predefined criteria.

Images

Figure 2.9 - The delivery of TCP/IP application data onto a LAN

Figure 2.10 illustrates the composition of the IPv4 header, which is appended to either a TCP header or UDP header to form an IP datagram. By looking into the IP header, the router can perform many security-related operations, such as accepting or rejecting datagrams based on the source or destination IP address within the IP header. Unfortunately, source addresses are not checked by devices on the Internet, so filtering on a source address can be problematic. For example, one could program their computer to send constant pings to www.whitehouse.gov and use the source address for the United States Federal Bureau of Investigation (FBI) gateway. Although not recommended, this would result in a stream of pings to the White House server that appeared to originate from the FBI.

Images

Figure 2.10 - The IPv4 header
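To make the filtering discussion concrete, the sketch below unpacks the 20-byte fixed IPv4 header and returns the fields a boundary router most commonly filters on. It is illustrative only and ignores IP options that a longer header may carry.

```python
import struct
import socket

def parse_ipv4_header(datagram: bytes):
    """Extract the IPv4 header fields a boundary router most often filters on."""
    (ver_ihl, _tos, total_len, _ident, _flags_frag,
     ttl, proto, _checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", datagram[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len": (ver_ihl & 0x0F) * 4,   # IHL is counted in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                     # 1 = ICMP, 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),          # source address: spoofable!
        "dst": socket.inet_ntoa(dst),
    }
```

The source address field is exactly the one the spoofing example above exploits: nothing in the header authenticates it, which is why filtering on source addresses alone is problematic.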

Images

Figure 2.11 - The Transport Control Protocol.

As previously mentioned, filtering based on the contents of packet headers, such as the headers in IP, TCP, and UDP, is commonly incorporated into firewalls. With applicable programming, the security architect can configure the router to reject packets inbound, outbound, or both, based on source or destination address, type of packet, or both. Concerning filtering based on packet type, this is accomplished by using port numbers to filter TCP or UDP packets.

Figure 2.11 illustrates the fields within the TCP header. Of particular importance, and used by both router access lists and firewalls for filtering purposes, are the source and destination port field values. TCP is used to transport connection-oriented, reliable data, such as control information. In comparison, UDP is used to transport connectionless data; reliability, if required, is provided by higher layers in the protocol stack. For example, setting up a VoIP call would require TCP to convey the dialed number and other control information, while UDP would be used to transport the digitized voice. By default, most router and firewall vendors disable the flow of data on all ports on each interface. Thus, because many applications use both TCP and UDP, it is quite common for routers and firewalls to be programmed to enable the applicable ports on both devices so that corporate users can use certain types of Internet applications. In addition, many security devices can be programmed to support time-of-day functions, allowing router and firewall administrators to open “holes” through their devices by tying data flow through certain ports on specific interfaces to the time of day. For example, a corporation could allow employees access to Amazon and eBay during the lunch hour while blocking such access during the rest of the workday.

In addition to the previously mentioned types of filtering, many routers can be programmed to block all or certain types of Internet Control Message Protocol (ICMP) packets as well as some widely employed hacker attacks, such as the well-known SYN attack. Thus, many routers when programmed correctly can become an organization’s first line of network defense.
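The rule-based filtering described above, including protocol, port-based, and time-of-day controls, can be sketched as a first-match access list. The rules, ports, and hours below are hypothetical examples chosen to mirror the text, not a recommended policy.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    """One entry in a hypothetical, simplified access list."""
    action: str                      # "permit" or "deny"
    protocol: str = "any"            # "tcp", "udp", "icmp", or "any"
    dst_port: Optional[int] = None   # None matches any destination port
    hours: range = range(0, 24)      # hours of the day the rule is active

    def matches(self, protocol: str, dst_port: Optional[int], hour: int) -> bool:
        return (self.protocol in ("any", protocol)
                and self.dst_port in (None, dst_port)
                and hour in self.hours)

def evaluate(rules, protocol, dst_port, hour):
    """First matching rule wins; deny by default, as most devices do."""
    for rule in rules:
        if rule.matches(protocol, dst_port, hour):
            return rule.action
    return "deny"

# Example policy: drop all ICMP, permit web traffic only during the lunch
# hour (12:00-12:59), and permit inbound mail around the clock.
policy = [
    Rule("deny", protocol="icmp"),
    Rule("permit", protocol="tcp", dst_port=80, hours=range(12, 13)),
    Rule("permit", protocol="tcp", dst_port=25),
]
```

The implicit deny at the end of evaluate() mirrors the default-closed posture described above, in which all ports are disabled until the administrator explicitly opens them.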

Security Modems28

A security modem represents a special type of modem that allows remote access from trusted locations, may encrypt data, and may support Caller ID to verify the calling telephone number. When security modems first appeared on the market, they were configured with a list of allowable callback numbers and passwords. A remote user who wished to gain access to the corporate LAN would first dial the telephone number associated with the dial-in security modem. Upon establishing a connection, the person would be prompted to enter his or her callback number and the password associated with that callback phone number. If the password was correct, the security modem would disconnect the connection and dial back the callback number.
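The classic callback sequence just described can be sketched as follows; the directory, telephone number, and password are hypothetical, and a real modem would also handle the hang-up and dial-back steps on the telephone line itself.

```python
# Hypothetical callback directory: pre-registered remote number -> password.
CALLBACK_DIRECTORY = {
    "+1-555-0142": "s3cret-passphrase",
}

def authorize_callback(claimed_number: str, password: str):
    """Mimic the classic security-modem flow: verify the password tied to a
    pre-registered callback number; return the number to dial back, or None
    if the connection should be refused."""
    expected = CALLBACK_DIRECTORY.get(claimed_number)
    if expected is not None and password == expected:
        return claimed_number   # modem hangs up and dials this number back
    return None                 # unknown number or bad password: refuse
```

Because the modem only ever dials numbers it already knows, a stolen password alone is not enough; the attacker must also control the telephone line at the registered number.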

Modern security modems have considerably evolved from a simple list of authorized locations that would be dialed back upon the entry of an applicable password. Today, in addition to a callback feature, security modems may be capable of using Caller ID and passwords to authenticate a user and encrypt data based on the key entered by a verified user. In addition, some security modems provide the authenticated user with the ability to select an encryption algorithm from a series of supported algorithms, such as 3DES and various versions of the Advanced Encryption Standard (AES).

The rationale behind the use of a security modem is the fact that the PSTN assigns telephone numbers to fixed locations and cell phone numbers are assigned to known persons, with the exception of prepaid cell phones. Thus, an organization can decide the telephone numbers that can receive connections to the corporate network and then associate passwords with those numbers. This means that not only will the security modem call predefined numbers but, in addition, to do so the person at that number must first call the security modem and enter an applicable password. Through the addition of encryption to security modems, it becomes possible to minimize potential threats while transmitting data over the public switched telephone network. Although the use of security modems as well as modems in general has to a large degree been replaced by the use of VPNs communicating over the Internet, certain applications continue to use security modems. For example, sales personnel, government investigators, and other travelers who must communicate securely and cannot use the Internet due to lack of availability or cost considerations frequently dial security modems at a corporate location29. While mobility can adversely affect the use of callback, the use of cell phones provides a fixed telephone number that avoids the problem of coordination and the reconfiguring of callback numbers as sales personnel move from motel to vendor location and need to quickly check the status of an order or the latest price of a product30.

One of the major problems associated with the callback feature of security modems results from the use of Local Area Signaling Service (LASS) codes. LASS codes are numbers entered on a telephone touchpad to access special features of the telephone system. Two well-known LASS codes are *67, which toggles Caller-ID blocking, and *69, which performs Call Return. By knowing how to use LASS codes, a hacker may be able to exploit the configuration of the callback feature of a security modem31.

Communications and Network Policies

On December 10, 2012, Federal Communications Commission of the United States (FCC) chairman Julius Genachowski announced the formation of the “Technology Transitions Policy Task Force” to deal with the task of creating policy for the next generation communications network, coordinating efforts on IP interconnection and the reliability and resiliency of the next generation networks, with a particular focus on voice services. According to the FCC, “the Task Force will conduct a data-driven review and provide recommendations to modernize the Commission’s policies in a process that encourages the technological transition, empowers and protects consumers, promotes competition, and ensures network resiliency and reliability.”32

Every security architect should have a good understanding of the importance of standards, policies, and procedures33. The need to document all the information that is pertinent to the secure operation of the network is one of the most important responsibilities that the security architect has. Along with the responsibility to document, the security architect also has the obligation to strictly adhere to a change control regime that places all documentation and system configurations under tight scrutiny and efficient control. The combination of complete documentation and the change control systems that keep that documentation current is the foundation on which the security architect builds policies and procedures for the users of the network and systems. These policies and procedures should be based on standards when and where it is appropriate to do so, such as NIST guidance, COBIT, the Payment Card Industry Data Security Standard (PCI-DSS), or the ISO 27000 series. The policies and procedures then need to be communicated to all users within the organization who will be affected by them. This is typically carried out through security awareness training, which needs to be conducted at a minimum on a yearly basis, although the requirements for organizations will vary based on regulatory compliance concerns. The security architect should consider conducting ongoing awareness training as needed to support any major revisions carried out through change management to policies and procedures.

Overview of Firewalls34

Firewalls have been available to the security architect in one form or another for many years now. There are multiple generations of firewalls that many security architects will have deployed into their networks over the last number of years as the technology has continued to evolve. Firewalls are devices or programs that control the flow of network traffic between networks or hosts that employ differing security postures. Organizations often need to use firewalls to meet security requirements from mandates such as PCI-DSS, which specifically requires firewalling35.

The most basic feature of a firewall is the packet filter. Stateless inspection firewalls that were only packet filters were essentially routing devices that provided access control functionality for host addresses and communication sessions. These devices did not keep track of the state of each flow of traffic as it passed through the firewall. Unlike more advanced filters, packet filters are not concerned about the content of packets. Their access control functionality is governed by a set of directives referred to as a ruleset.

In their most basic form, firewalls with packet filters operate at the network layer. This provides network access control based on several pieces of information contained in a packet, including:

Images   The packet’s source IP address—the address of the host from which the packet originated (such as 1.2.3.4)

Images   The packet’s destination address—the address of the host the packet is trying to reach (e.g., 12.1.2.1)

Images   The network or transport protocol being used to communicate between source and destination hosts, such as TCP, UDP, or ICMP

Images   Possibly some characteristics of the transport layer communications sessions, such as session source and destination ports

Images   The interface being traversed by the packet, and its direction (inbound or outbound)
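The evaluation just described can be sketched in a few lines of Python. The rule fields and first-match-wins semantics below are illustrative only, not any particular vendor's ruleset syntax:

```python
# Minimal sketch of stateless packet filtering: each rule matches on
# addresses, protocol, and destination port, and the first matching
# rule decides the action. A rule field of None acts as a wildcard.

def matches(rule, packet):
    for field in ("src_ip", "dst_ip", "protocol", "dst_port"):
        if rule.get(field) is not None and rule[field] != packet.get(field):
            return False
    return True

def filter_packet(ruleset, packet, default="deny"):
    """Return the action of the first matching rule (implicit deny by default)."""
    for rule in ruleset:
        if matches(rule, packet):
            return rule["action"]
    return default

ruleset = [
    {"src_ip": None, "dst_ip": "12.1.2.1", "protocol": "tcp",
     "dst_port": 80, "action": "permit"},
    {"src_ip": "1.2.3.4", "dst_ip": None, "protocol": None,
     "dst_port": None, "action": "deny"},
]

print(filter_packet(ruleset, {"src_ip": "1.2.3.4", "dst_ip": "12.1.2.1",
                              "protocol": "tcp", "dst_port": 80}))  # permit
```

Note that rule order matters: the same packet from 1.2.3.4 would be denied if the second rule were listed first, which is why ruleset ordering is a common source of firewall misconfiguration.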

Stateful inspection improves on the functionality of packet filters by tracking the state of connections and blocking packets that deviate from the expected state. This is accomplished by incorporating greater awareness of the transport layer into the firewall. As with packet filtering, stateful inspection intercepts packets at the network layer and inspects them to see if they are permitted by an existing firewall rule, but unlike packet filtering, stateful inspection keeps track of each connection in a state table. While the details of state table entries vary by firewall product, they typically include source IP address, destination IP address, port numbers, and connection state information. Three major states exist for TCP traffic: connection establishment, usage, and termination. Stateful inspection in a firewall examines certain values in the TCP headers to monitor the state of each connection. The firewall compares each new packet against its state table to determine whether the packet contradicts the connection's expected state.
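The state table mechanism can be illustrated with a deliberately reduced sketch. The field names and the two-state model below are invented for illustration; real products track far more detail per connection:

```python
# Simplified stateful firewall: connections are tracked in a table keyed
# by the 5-tuple. Packets belonging to a tracked connection pass; a new
# connection is only admitted if a SYN is permitted by the ruleset.

state_table = {}  # (src_ip, src_port, dst_ip, dst_port, proto) -> state

def handle_packet(pkt, permitted_by_rules):
    key = (pkt["src_ip"], pkt["src_port"],
           pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
    reverse = (pkt["dst_ip"], pkt["dst_port"],
               pkt["src_ip"], pkt["src_port"], pkt["proto"])
    if key in state_table or reverse in state_table:
        if pkt.get("flags") == "FIN":        # connection termination
            state_table.pop(key, None)
            state_table.pop(reverse, None)
        return "permit"                      # part of a tracked connection
    if pkt.get("flags") == "SYN" and permitted_by_rules:
        state_table[key] = "ESTABLISHING"    # connection establishment
        return "permit"
    return "deny"                            # deviates from expected state

syn = {"src_ip": "10.0.0.5", "src_port": 51515, "dst_ip": "192.0.2.1",
       "dst_port": 80, "proto": "tcp", "flags": "SYN"}
print(handle_packet(syn, permitted_by_rules=True))  # permit
```

The key behavior is that reply traffic is permitted because the reversed 5-tuple is found in the table, whereas an unsolicited packet that matches no tracked connection is denied even if no explicit rule blocks it.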

The addition of a stateful protocol analysis capability creates deep packet inspection functionality in the application firewall. Stateful protocol analysis improves upon standard stateful inspection by adding basic intrusion detection technology via an inspection engine that analyzes protocols at the application layer to compare vendor-developed profiles of benign protocol activity against observed events to identify deviations. This allows a firewall to allow or deny access based on how an application is running over the network.

An application-proxy gateway is a feature of advanced firewalls that combines lower-layer access control with upper-layer functionality. These firewalls contain a proxy agent that acts as an intermediary between two hosts that wish to communicate with each other, and never allows a direct connection between them. Each successful connection attempt actually results in the creation of two separate connections: one between the client and the proxy server, and another between the proxy server and the true destination. The proxy is meant to be transparent to the two hosts; from their perspectives there is a direct connection. Like application firewalls, the proxy gateway operates at the application layer and can inspect the actual content of the traffic. These gateways also perform the TCP handshake with the source system and are able to protect against exploitation at each step of a communication. In addition, gateways can make decisions to permit or deny traffic based on information in the application protocol headers or payloads. Application-proxy gateways are quite different from application firewalls. First, an application-proxy gateway can offer a higher level of security for some applications because it prevents direct connections between two hosts and inspects traffic content to identify policy violations. Another potential advantage is that some application-proxy gateways have the ability to decrypt packets (e.g., SSL-protected payloads), examine them, and re-encrypt them before sending them on to the destination host.

The term Unified Threat Management Gateway (UTM) was coined by IDC in 2004; a year earlier, in 2003, Internet Security Systems (ISS) had launched a new product called Proventia, an “all-in-one protection product” that unified firewall, virtual private network (VPN), anti-virus, and intrusion detection and prevention into one box. A UTM product will typically co-locate a stateful firewall and an IPS in one device, or use limited DPI (just a stateful packet inspection firewall with some IDS/IPS signatures), which usually suffers from performance issues and limited visibility into network traffic. UTMs are a combination of network layer firewalls and application layer firewalls.

Web application firewalls are a relatively new technology, as compared to other firewall technologies, and the type of threats that they mitigate are still changing frequently. Because they are put in front of web servers to prevent attacks on the server, they are often considered to be very different than traditional firewalls.

Network activity that passes directly between virtualized operating systems within a host cannot be monitored by an external firewall. However, some virtualization systems offer built-in firewalls or allow third-party software firewalls to be added as plug-ins. Using firewalls to monitor virtualized networking is a relatively new area of firewall technology, and it is likely to change significantly as virtualization usage continues to increase.

Firewalls vs. Routers

The major differences between a router and a firewall lie in three areas: the transfer of packets based on routing tables; the degree of packet inspection; and acting as an intermediate device that hides the addresses of clients from users on the Internet, a technique referred to as acting as a proxy.

A router has routing tables that associate IP addresses with ports on the device. When a packet arrives at a router port, the device examines the destination address in the IP header. Then, through a table lookup process that associates IP addresses with router ports, the router forwards the packet out through the port listed in the routing table, and the packet flows onto the communications connection attached to that port. That connection depends on the type of router port, ranging from an Ethernet or Token Ring LAN to a serial port connected to a 56 Kbps, T1, or even a T3 circuit. In comparison, a firewall performs only one type of basic packet processing: if a packet fails a test, it is discarded; otherwise, the packet is forwarded through the firewall to its destination.
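The routing-table lookup described above amounts to a longest-prefix match against the destination address. A minimal sketch, assuming a made-up table of prefixes and port names:

```python
# Hedged sketch of a router's forwarding decision: find every routing
# table entry whose prefix contains the destination, and forward out
# the most specific (longest-prefix) match.
import ipaddress

routing_table = {
    "12.1.0.0/16": "eth0",
    "12.1.2.0/24": "serial0",   # more specific route wins
    "0.0.0.0/0":   "eth1",      # default route
}

def route(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    candidates = [(ipaddress.ip_network(prefix), port)
                  for prefix, port in routing_table.items()]
    best = max((c for c in candidates if dst in c[0]),
               key=lambda c: c[0].prefixlen)
    return best[1]

print(route("12.1.2.1"))   # serial0 (the /24 beats the /16)
print(route("8.8.8.8"))    # eth1 (only the default route matches)
```

The default route guarantees a match for any destination, which is why the sketch does not need to handle an empty candidate set.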

A second significant difference between a router and a firewall governs the degree of packet inspection. A router typically examines the headers in IP, TCP, and UDP. In comparison, a firewall looks deeper into packets, in some cases, examining the contents of the data transported within the packet, looking for repetitive potentially dangerous operations, such as attempted sign-ons to different IP addresses that might represent different corporate servers.

A third difference between a router and a firewall is that the firewall may perform proxy services. In doing so, the firewall services the requests of its clients by forwarding those requests onto the Internet. In this situation, a client connects to the proxy service of the firewall, requesting some type of service, such as a File Transfer Protocol (FTP) operation or Web page access. The proxy service of the firewall provides the resource by connecting to the specified IP address and requesting the service on behalf of the client. In doing so, the proxy service may use a single source IP address for all clients, keeping track of client sessions by using different port numbers to associate each client's real IP address with the common IP address used for all clients. Hiding the IP addresses of clients makes them more difficult to attack. If the proxy service passes all requests and replies in their original form, the service is usually referred to as a tunneling proxy service.
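The port-based client tracking described above can be sketched as follows; the proxy address and port range are hypothetical:

```python
# Sketch of how a proxy multiplexes many clients behind one source IP:
# each outbound session gets a distinct proxy source port, and a table
# maps replies arriving on that port back to the owning client.
import itertools

PROXY_IP = "203.0.113.10"          # illustrative shared address
_ports = itertools.count(20000)    # hypothetical port allocator
sessions = {}                      # proxy source port -> (client_ip, client_port)

def outbound(client_ip, client_port):
    """Rewrite an outgoing request; remember which client owns the port."""
    port = next(_ports)
    sessions[port] = (client_ip, client_port)
    return PROXY_IP, port

def inbound(proxy_port):
    """Map a reply arriving on a proxy port back to the original client."""
    return sessions[proxy_port]

src = outbound("10.0.0.5", 51515)
print(src)                 # ('203.0.113.10', 20000)
print(inbound(src[1]))     # ('10.0.0.5', 51515)
```

From the Internet side, every session appears to originate from 203.0.113.10; only the proxy's table can associate a given port with a real internal client.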

There are two basic types of firewall proxy services: circuit level and application. The application proxy service was discussed previously. In comparison, a circuit-level proxy is limited to providing a controlled network connection between internal and external systems. A circuit-level proxy results in a virtual “circuit” being established between the internal client and the proxy server. Internet requests are then routed through the circuit to the proxy server, and the proxy server forwards those requests to the Internet after changing the IP address of the internal client. Thus, external users see only the IP address of the proxy server. In the reverse direction, responses are received by the proxy server and sent back through the circuit to the client. Although traffic is allowed to flow through the proxy, external systems never see the internal systems. This type of connection is often used to connect “trusted” internal users to the Internet.

Images

Figure 2.12 - Example of a proxy service

Figure 2.12 illustrates an example of a proxy service. In this example, the highlighted middle computer acts as the proxy server between the other two devices.

Demilitarized Zone (DMZ) Perimeter Controls

The perimeter network represents an additional network between the protected network and the unprotected network, which provides an additional layer of security. By controlling access from the “untrusted” network through the perimeter network to a “trusted” network, security is enhanced. However, it is important to realize that the perimeter network also represents a vulnerability. Thus it is extremely important to ensure that equipment is correctly configured and that software operates at the latest release to provide an effective level of protection.

IDS/IPS36

An IDS represents hardware or software that is specifically designed to detect unwanted attempts at accessing, manipulating, and even disabling networking hardware and computers connected to a network. Such attempts can be made by hackers or even disgruntled existing or former employees. Here, the key to an IDS system is its ability to detect attacks. It is important to note however, that unless an IDS system has access to keys used for encryption, the IDS cannot directly detect attacks within properly encrypted traffic.

The capabilities of an IDS can significantly vary from vendor to vendor. Because the goal of an IDS is to detect malicious behavior that can adversely affect computer or communications hardware, at a minimum it should detect a variety of malware directed at computers, such as viruses, Trojans, and worms; denial of service (DoS) attacks; logon attempts that cycle through passwords and IDs against a host or set of computers; and attempts to use guest or other accounts to gain access to sensitive files, such as the corporate payroll.

IDS Architecture

A typical IDS consists of a console that monitors events reported by sensors, controls such sensors, and generates alerts. Alerts can range from simple messages displayed on a console to the transmission of an e-mail or pager message and prerecorded telephone or cell phone calls.

Images

Figure 2.13 - A centralized detection architecture

Figure 2.13 illustrates a distributed IDS system with a centralized monitoring facility. In this example, agents designated by the letter A in boxes are placed at the entry point to remotely located subnets. Of course, one or more agents may also be located on the monitoring network, but they are not shown. This type of IDS represents a Network IDS (NIDS). Sensors or agents can be physical hardware or software that is designed to operate promiscuously and examine all traffic flowing on a network segment. This is comparable to a home alarm system. Windows and doors have sensors that can be considered to represent agents. Those agents are connected to a central panel in the home that broadcasts messages to control panels, typically located in a master bedroom and home entryways. When a door or window is open, a signal transmits to the control panel. The control panel examines the state of the alarm (off, at home, etc.) and generates a preplanned action, such as contacting a monitoring company. Similarly, when an intrusion is detected, the IDS will perform some predefined action based on its configuration. Most NIDS implementations use sensors located at choke points in the network to be monitored, such as the DMZ or at network borders. The sensors capture all network traffic, analyzing the contents of those packets for malicious traffic. Although many vendors market distributed NIDS systems, in less complex IDS implementations, all components are combined in a single device or network appliance.

There are numerous types of IDS that are designed to perform specific functions. For example, one common type of IDS is a protocol-based intrusion detection system, commonly implemented in software residing on servers and examining, for example, the HTTP data stream on a Web server. Another common type is the Host-based IDS (HIDS), software tailored to operate on different types of computers, ranging from small Web servers to large IBM mainframes. A typical HIDS consists of an agent on a host that identifies intrusions by analyzing system calls, application logs, file-system modifications (such as password and access threshold files), and other host activities and state. A third common type is the Network Behavior Analysis (NBA) IDS, which examines network traffic to identify threats that generate unusual traffic flows, such as Denial of Service (DoS) attacks, certain forms of malware, and policy violations (e.g., a client system providing network services to other systems).

Host-based IDS agents are most commonly deployed to critical hosts such as publicly accessible servers and servers containing sensitive information. Host-based IDSs typically perform extensive logging of data related to detected events. This data can be used to confirm the validity of alerts, to investigate incidents, and to correlate events between the host-based IDS and other logging sources. Data fields commonly logged by host-based IDSs include the following:

Images   Timestamp (usually date and time)

Images   Event or alert type

Images   Rating (e.g., priority, severity, impact, confidence)

Images   Event details specific to the type of event, such as IP address and port information, application information, filenames and paths, and user IDs

Images   Prevention action performed (if any)

NBA sensors are usually available only as appliances. Some sensors are similar to network-based IDS sensors in that they sniff packets to monitor network activity on one or a few network segments. Other NBA sensors do not monitor the networks directly, but instead rely on network flow information provided by routers and other networking devices. Flow refers to a particular communication session occurring between hosts. There are many standards for flow data formats, including NetFlow, sFlow, and IPFIX37. Typical flow data particularly relevant to intrusion detection and prevention includes the following:

Images   Source and destination IP addresses

Images   Source and destination TCP or UDP ports or ICMP types and codes

Images   Number of packets and number of bytes transmitted in the session

Images   Timestamps for the start and end of the session
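Flow records with these fields are straightforward to aggregate. A small sketch, using invented NetFlow-style records, shows how an NBA sensor might total bytes per source host to spot unusual traffic volumes:

```python
# Illustrative NetFlow-style flow records using the fields listed above:
# endpoints, ports, packet/byte counts, and session start/end timestamps.
flows = [
    {"src": "10.0.0.5", "dst": "192.0.2.1", "sport": 51515, "dport": 80,
     "packets": 120, "bytes": 90000, "start": 1000, "end": 1010},
    {"src": "10.0.0.5", "dst": "192.0.2.2", "sport": 51516, "dport": 443,
     "packets": 10, "bytes": 4000, "start": 1001, "end": 1002},
    {"src": "10.0.0.9", "dst": "192.0.2.1", "sport": 40000, "dport": 80,
     "packets": 3, "bytes": 200, "start": 1005, "end": 1006},
]

def bytes_by_source(flow_records):
    """Total bytes sent per source address across all flows."""
    totals = {}
    for f in flow_records:
        totals[f["src"]] = totals.get(f["src"], 0) + f["bytes"]
    return totals

print(bytes_by_source(flows))  # {'10.0.0.5': 94000, '10.0.0.9': 200}
```

A real NBA product would compare such per-host totals against a learned baseline rather than raw sums, but the aggregation step is essentially this.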

NBA technologies typically perform extensive logging of data related to detected events. This data can be used to confirm the validity of alerts, to investigate incidents, and to correlate events between the NBA solution and other logging sources. Data fields commonly logged by NBA software include the following:

Images   Timestamp (usually date and time)

Images   Event or alert type

Images   Rating (e.g., priority, severity, impact, confidence)

Images   Network, transport, and application layer protocols

Images   Source and destination IP addresses

Images   Source and destination TCP or UDP ports, or ICMP types and codes

Images   Additional packet header fields (e.g., IP time-to-live [TTL])

Images   Number of bytes and packets sent by the source and destination hosts for the connection

Images   Prevention action performed (if any)

Security architects also need to decide where the IDS sensors should be located. Sensors can be deployed in one of two modes:

Inline Sensor

An inline sensor is deployed so that the network traffic it is monitoring must pass through it, much like the traffic flow associated with a firewall. Inline sensors are typically placed where network firewalls and other network security devices would be placed—at the divisions between networks, such as connections with external networks and borders between different internal networks that should be segregated.

Passive Sensor

A passive sensor is deployed so that it monitors a copy of the actual network traffic; no traffic actually passes through the sensor. Passive sensors are typically deployed so that they can monitor key network locations, such as the divisions between networks, and key network segments, such as activity on a demilitarized zone (DMZ) subnet. Passive sensors can monitor traffic through various methods, including the following:

Spanning Port

Many switches have a spanning port, which is a port that can see all network traffic going through the switch. Connecting a sensor to a spanning port can allow it to monitor traffic going to and from many hosts.

Network Tap

A network tap is a direct connection between a sensor and the physical network media itself, such as a fiber optic cable. The tap provides the sensor with a copy of all network traffic being carried by the media.

IDS Load Balancer

An IDS load balancer is a device that aggregates and directs network traffic to monitoring systems, including IDS sensors. A load balancer can receive copies of network traffic from one or more spanning ports or network taps and aggregate traffic from different networks (e.g., reassemble a session that was split between two networks). The load balancer then distributes copies of the traffic to one or more listening devices, including IDS sensors, based on a set of rules configured by an administrator.

Network-based IDSs typically perform extensive logging of data related to detected events. This data can be used to confirm the validity of alerts, to investigate incidents, and to correlate events between the IDS and other logging sources. Data fields commonly logged by network-based IDSs include the following:

Images   Timestamp (usually date and time)

Images   Connection or session ID (typically a consecutive or unique number assigned to each TCP connection or to like groups of packets for connectionless protocols)

Images   Event or alert type

Images   Rating (e.g., priority, severity, impact, confidence)

Images   Network, transport, and application layer protocols

Images   Source and destination IP addresses

Images   Source and destination TCP or UDP ports, or ICMP types and codes

Images   Number of bytes transmitted over the connection

Images   Decoded payload data, such as application requests and responses

Images   State-related information (e.g., authenticated username)

Images   Prevention action performed (if any)

There are also Wireless IDSs (WIDSs), which monitor wireless network traffic and analyze it to identify suspicious activity involving the wireless networking protocols themselves. Unlike a network-based IDS, which can see all packets on the networks it monitors, a wireless IDS works by sampling traffic. There are two frequency bands to monitor (2.4 GHz and 5 GHz), and each band is separated into channels. Wireless sensors are available in multiple forms:

Dedicated

A dedicated sensor is a device that performs wireless IDS functions but does not pass network traffic from source to destination. Dedicated sensors are often completely passive, functioning in a Radio Frequency (RF) monitoring mode to sniff wireless network traffic. Some dedicated sensors perform analysis of the traffic they monitor, while other sensors forward the network traffic to a management server for analysis. The sensor is typically connected to the wired network (e.g., Ethernet cable between the sensor and a switch).

Bundled with an AP

Several vendors have added IDS capabilities to APs. A bundled AP typically provides a less rigorous detection capability than a dedicated sensor because the AP needs to divide its time between providing network access and monitoring multiple channels or bands for malicious activity.

Wireless IDSs typically perform extensive logging of data related to detected events. This data can be used to confirm the validity of alerts, to investigate incidents, and to correlate events between the IDS and other logging sources. Data fields commonly logged by wireless IDSs include the following:

Images   Timestamp (usually date and time)

Images   Event or alert type

Images   Priority or severity rating

Images   Source MAC address (the vendor is often identified from the address)

Images   Channel number

Images   ID of the sensor that observed the event

Images   Prevention action performed (if any)

Intrusion Prevention System

Intrusion prevention systems (IPSs) can be considered to represent an evolution in security progress from IDS technology. Whereas an IDS represents a passive system, an IPS represents an active system that detects and responds to predefined events. Thus, the IPS represents technology built on an IDS system. This means that the ability of the IPS to prevent intrusions from occurring is highly dependent on the underlying IDS.

An IPS represents a software or a hardware appliance that monitors a network or system activities for malicious or unwanted behavior, such as repeated attempts to log onto a computer or gain access to a router’s command interface and will in real time react to either block or prevent those activities. Of course, it will also issue one or more alarms via a console, e-mail, or dialing a predefined telephone number to alert applicable persons of the event. A network-based IPS, for example, will operate in-line to monitor all network traffic for malicious code or attacks. When an attack on a router’s command port is detected, it can drop the offending packets while still allowing all other traffic to pass.

To operate effectively, an IPS must have an excellent intrusion detection capability. This also means that the software or hardware appliance itself should not become a liability by becoming subject to one or more types of network or computer attacks. Thus, some IPS products are designed to be installed without an IP network address. Instead, they operate promiscuously, examining each packet flowing on the network and responding to predefined attacks by dropping packets, changing equipment settings, and generating a variety of alerts. Thus, unlike a firewall that has an IP address, resides at the perimeter of a network, and will usually filter packets based on predefined packet addresses and packet content, the IPS can reside behind the firewall, has no IP address, and operates invisibly on the network.

In addition to identifying incidents and supporting incident response efforts, organizations have found other uses for IDS/IPSs, including the following38:

Identifying security policy problems. An IDS/IPS can provide some degree of quality control for security policy implementation, such as duplicating firewall rulesets and alerting when it sees network traffic that should have been blocked by the firewall but was not because of a firewall configuration error.

Documenting the existing threat to an organization. IDS/IPSs log information about the threats that they detect. Understanding the frequency and characteristics of attacks against an organization’s computing resources is helpful in identifying the appropriate security measures for protecting the resources. The information can also be used to educate management about the threats that the organization faces.

Deterring individuals from violating security policies. If individuals are aware that their actions are being monitored by IDS/IPS technologies for security policy violations, they may be less likely to commit such violations because of the risk of detection.

Security Information & Event Management (SIEM) Considerations

Security Information and Event Management (SIEM) tools emerged around 2000-2001. Historically, SIEM consisted of two distinct offerings: SEM (security event management), which collected, aggregated, and acted upon security events; and SIM (security information management), which correlated, normalized, and reported on the collected security event data.

SIEM technology is typically deployed to support three primary use cases:

Images   Compliance through log management and compliance reporting

Images   Threat management through real-time monitoring of user activity, data access, and application activity and incident management

Images   A deployment that provides a mix of compliance and threat management capabilities

Security Information and Event Management (SIEM) systems are designed to accept log event and flow information from a broad range of systems, including traditional security systems, management systems, or any other systems that provide a relevant data output that, when correlated and analyzed, is relevant for the enterprise. The SIEM system establishes an early warning capability to take preventative actions. An effective early warning system detects threats based on a holistic perspective and provides in-depth information about them. The information collected by the SIEM is typically aggregated into a single stream and translated into a standardized format, to reduce duplicates and to expedite subsequent analysis. It is then correlated between data sources and analyzed against a set of human-defined rules, or vendor-supplied or analyst-programmed correlation algorithms, to provide real-time reporting and alerting on incidents and events that may require intervention. The resulting data is typically stored in a manner that prevents tampering, to enable its use as evidence in any investigations or to meet compliance requirements.

The key features in SIEM systems include:

Images   Log Aggregation — Collection and aggregation of log records from the network, security, servers, databases, identity systems, and applications.

Images   Correlation — Attack identification by analyzing multiple data sets from multiple devices to identify patterns not obvious when looking at only one data source.

Images   Alerting — Defining rules and thresholds to display console alerts based on customer-defined prioritization of risk and/or asset value.

Images   Dashboards — An interface which presents key security indicators to identify problem areas and facilitate investigation.

Images   Forensics — The ability to investigate incidents by indexing and searching relevant events.

Images   Reporting — Documentation of control sets and other relevant security operations and compliance activities.

In order to deploy a SIEM system successfully, the first thing that the security architect needs to do is identify which systems will be forwarding events, typically all switches, routers, servers, applications, and security systems (network/host intrusion prevention, firewalls, anti-malware, etc.). The number of devices forwarding events to the SIEM will depend on how much money an organization is willing to spend on the event collectors that receive and normalize events, and on the storage necessary to keep all of the data secured.

Deciding what events to send to the SIEM is often challenging. The security architect needs to be aware of two capacity limits that SIEM systems have:

  1. Storage. How much space will the events take? To get a rough estimate, go to every system that will be forwarding events, report on how much space it logged in a day, multiply that by the retention policy, and add the results together. For instance: (firewall logs for the day * 60 days) + (IPS logs for the day * 60 days) = required storage.

  2. Events per second. Go to all of the devices that will be forwarding events, report on how many events each generated in a day, and divide the total by 86,400 (the number of seconds in a day). This yields an approximate number of total events per second, which will determine the number and size of event collectors.
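The two estimates above translate directly into arithmetic; the daily volumes below are hypothetical figures for illustration:

```python
# Direct translation of the two SIEM sizing estimates described above.

def required_storage(daily_log_bytes, retention_days):
    """Sum (daily volume * retention) across all forwarding devices."""
    return sum(b * retention_days for b in daily_log_bytes)

def events_per_second(daily_event_counts):
    """Total daily events divided by 86,400 seconds in a day."""
    return sum(daily_event_counts) / 86400

# e.g., firewall logs 2 GB/day, IPS logs 1 GB/day, 60-day retention:
gb = 1024 ** 3
print(required_storage([2 * gb, 1 * gb], 60) / gb)          # 180.0 (GB)
print(round(events_per_second([4_000_000, 1_500_000]), 1))  # 63.7 (EPS)
```

These are rough planning numbers; in practice an architect would also add headroom for growth and for burst rates well above the daily average.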

The other major area for the security architect to be aware of is rule sets. Due to the nature of SIEM systems, and the varied approaches that vendors take to rule creation, rules may need to be modified slightly to become effective. For example, a correlation rule monitoring for TCP port 31453 is going to trigger backdoor alerts, and firewall events will trigger it occasionally because of ordinary outbound connections. The reason is that when a computer initiates a connection to a web server on TCP port 80, it must also open a random source port between 1024 and 65535, and that random port is what could trigger an alert. Modifying the rule to monitor for 31453 as a destination port only would be a good way to tune this rule.
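The tuning described can be shown concretely; the event fields here are illustrative, not any vendor's rule language:

```python
# Untuned rule: alert when 31453 appears as either port. This fires on
# benign connections whose ephemeral source port happens to be 31453.
def backdoor_rule_untuned(event):
    return 31453 in (event["src_port"], event["dst_port"])

# Tuned rule: alert only when 31453 is the destination port, i.e. a
# host is actually connecting TO the suspected backdoor service.
def backdoor_rule_tuned(event):
    return event["dst_port"] == 31453

benign = {"src_port": 31453, "dst_port": 80}     # normal web request
suspect = {"src_port": 49152, "dst_port": 31453} # connection to backdoor port

print(backdoor_rule_untuned(benign), backdoor_rule_tuned(benign))    # True False
print(backdoor_rule_untuned(suspect), backdoor_rule_tuned(suspect))  # True True
```

The tuned rule keeps the true positive while eliminating the ephemeral-port false positive, which is the essence of most SIEM rule tuning.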

There are two specific areas that the security architect should begin to focus on as they look to deploy SIEM systems into the enterprise:

  1. Bandwidth Utilization

    The most common way to get this data would be to use switch and router flow events. There may be other ways depending on the environment, such as forwarding Network Intrusion Prevention events to the SIEM. Regardless, this can take some time to benchmark and tune because bandwidth utilization is typically somewhat sporadic.

    To detect potential DDoS attacks a good place to start would be with monitoring for ingress traffic targeted to a handful of critical systems that would prevent the organization from functioning should they become inaccessible. The security architect would create a rule that would look something like “if the bandwidth directed to my web servers is greater than 40Mb/s for 10 minutes or more, trigger an alert.”
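The quoted rule can be sketched as a simple threshold check over per-minute bandwidth samples; the figures below are invented:

```python
# Sketch of the rule "if bandwidth to my web servers is greater than
# 40 Mb/s for 10 minutes or more, trigger an alert": alert only when
# every sample in the trailing window exceeds the threshold.

def ddos_alert(mbps_per_minute, threshold_mbps=40, window_minutes=10):
    if len(mbps_per_minute) < window_minutes:
        return False
    recent = mbps_per_minute[-window_minutes:]
    return all(m > threshold_mbps for m in recent)

normal = [12, 15, 11, 55, 14, 13, 12, 16, 11, 15]  # one brief spike
attack = [45, 52, 60, 58, 49, 51, 47, 55, 62, 50]  # sustained flood
print(ddos_alert(normal))  # False
print(ddos_alert(attack))  # True
```

Requiring the full window to exceed the threshold is what keeps short, legitimate bursts (like the single 55 Mb/s sample above) from paging anyone.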

  2. HTTP Tunneling

    If a network is enforcing a least privilege architecture, the user network will be able to send HTTP and HTTPS traffic from the inside network out to the Internet. All of the SMTP traffic should go to the internal mail relay. If users are tunneling other protocols through HTTP they are likely attempting to evade controls, or it could be malware attempting to evade controls. The security architect will need to create a rule that monitors for TCP port 80 or 443 traffic that is NOT HTTP protocol based. On the SIEM, monitor for one of these events to be received to trigger the alert.

    This rule would require a Network Intrusion Detection/Prevention System or Application Layer Firewall to be in place and feeding events to the SIEM system. The security architect should be aware of the need to tune the rule on the log generating device(s) and/or filter certain hosts from triggering the correlation rule in order to balance the system to the appropriate sensitivity level.
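A heavily simplified sketch of the detection logic follows. In practice the protocol identification is done by the IDS/IPS or application-layer firewall itself, but the idea reduces to checking whether traffic on a web port actually looks like HTTP:

```python
# Hedged sketch of spotting non-HTTP traffic on TCP 80/443. The method
# list and the payload check are deliberately naive; real protocol
# analyzers inspect far more than the first bytes of a session.

HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"OPTIONS ")

def non_http_on_web_port(dst_port, payload, tls=False):
    if dst_port not in (80, 443):
        return False
    if dst_port == 443 and tls:
        return False  # properly encrypted HTTPS is opaque by design
    return not payload.startswith(HTTP_METHODS)

print(non_http_on_web_port(80, b"GET /index.html HTTP/1.1"))  # False
print(non_http_on_web_port(80, b"SSH-2.0-OpenSSH_8.9"))       # True (tunneled protocol)
```

An SSH banner on port 80, as in the second example, is exactly the kind of event the correlation rule above would forward to the SIEM for alerting.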

The deployment of a SIEM system needs to be carefully planned to meet clear and articulated requirements, with the architecture designed for the size and organization of the purchasing entity. Most of all, the security architect needs to make sure that they are operationalizing the tool, which requires ongoing resources to keep the platform tuned, relevant, and complete. New devices and applications will be added, and those need to feed data into the SIEM system. New attacks will surface and new data types will emerge that must be integrated into the tool. SIEM is not a set-it-and-forget-it technology, and expecting the system to hum along without care and maintenance is a recipe for failure.

Wireless Considerations

No discussion of network security would be complete without discussing wireless networks.

Wireless LANs consist of computers with wireless adapters, either built in or inserted into card slots, which are collectively referred to as stations, and one or more access points. An access point normally functions as a multiport bridge, with a wireless port and one or more wired ports. As data flows between wireless stations, from a wireless station to the Internet, or from a wireless station across the corporate LAN to a server, the access point operates as a bridge and broadcasts data onto all other ports. This makes it relatively easy for a person with a laptop and a promiscuous-mode adapter to read traffic to or from other stations, including stations residing on the LAN as they communicate with wireless stations.

Architectures

One or more stations and an access point are referred to as a Basic Service Set (BSS). To differentiate one BSS from another, each access point is assigned a Service Set Identifier (SSID). The SSID is periodically broadcast by the access point, which enables a station to examine the names of networks within range and connect to the most appropriate one. One popular method of increasing wireless security, which is not particularly practical when facing network-savvy hackers, is to turn off SSID broadcasting. While the network name is not shown, one can easily connect to the network by configuring a station to select the “unknown” network or to use the network name “any.” In addition, others can capture the SSID in cleartext by observing association frames from legitimate clients.

Images

Figure 2.14 - The Independent Basic Service Set

Figure 2.14 illustrates the formation of an independent BSS. Wireless LANs can communicate in two different ways, referred to as peer-to-peer and infrastructure modes. In peer-to-peer mode, stations communicate directly with one another. In infrastructure mode, stations communicate via an access point. Thus, Figure 2.14 shows how three stations can communicate with one another without an access point.

The wireless access point, more popularly referred to as a wireless router when used in a home or small business, is the most common communications product used to connect wireless stations to a corporate LAN. In actuality, the basic access point is a two-port bridge, with one port representing the wireless interface and the second the wired interface. When functioning as a bridge, the access point operates according to the three-F rule (flooding, filtering, and forwarding) as it builds a table of MAC addresses associated with each port. As the access point evolved, many manufacturers added a routing capability to the device as well as several Ethernet switch ports, referring to the device as a wireless router. Most wireless routers, however, perform limited routing at layer 3 and primarily operate at layer 2, performing bridging among their wired and wireless ports. Unfortunately, this device is similar to other networking hardware with respect to a security loophole many people fail to close: it is configured at the factory with a default password, such as “admin” or the name of the manufacturer, needed to access its configuration settings. Thus, the first thing a security architect should do after the device is set up is change the administrative password from its default setting.
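The three-F rule can be illustrated with a toy learning bridge; the port names and MAC addresses here are invented, and a real access point performs this logic in firmware or hardware.

```python
# Toy learning bridge demonstrating flood, filter, and forward decisions
# based on a table of MAC addresses learned per port.
class Bridge:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}   # MAC address -> port it was learned from

    def receive(self, src_mac, dst_mac, in_port):
        """Process a frame; return the list of ports it is sent out on."""
        self.table[src_mac] = in_port            # learn the source
        out = self.table.get(dst_mac)
        if out is None:                          # unknown destination: flood
            return [p for p in self.ports if p != in_port]
        if out == in_port:                       # same segment: filter (drop)
            return []
        return [out]                             # known destination: forward

br = Bridge(["wired", "wireless"])
print(br.receive("aa:aa", "bb:bb", "wireless"))  # ['wired']   - flood (bb unknown)
print(br.receive("bb:bb", "aa:aa", "wired"))     # ['wireless'] - forward (aa learned)
print(br.receive("cc:cc", "bb:bb", "wired"))     # []          - filter (same port)
```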

Images

Figure 2.15 - The Infrastructure Basic Service Set

Figure 2.15 illustrates an Infrastructure BSS, which is the most common type of BSS used. In this example two wireless stations are shown communicating via a common access point which is in turn cabled to a wired hub or switch, which provides connectivity to the corporate network. If default settings are not changed on the access point, not only can a hacker easily access the administrative functions of the access point, but, in addition, he or she can change the settings to either cause havoc to the organization or silently transmit a stream of data flowing through the device to a third party address for analysis.

When two BSSs are connected via a repeater or wired connection, they form an Extended Service Set (ESS). The ESS has an identifier or network name referred to as an Extended Service Set Identifier (ESSID). The ESSID can be considered as the network identifier for the wireless network. Devices may be set to “any” or to a specific ESSID. When set, they will only communicate with other devices using the same ESSID. Figure 2.16 illustrates the relationship between the BSS and ESS for two BSSs linked via a wired LAN. In this example, each BSS could be located in different buildings on a campus, and the movement of a notebook user from one building to another would occur similar to cell phone roaming. The connection between the two BSSs is referred to as a distribution system (DS). The DS can be a wired LAN, a leased line, or even a wireless LAN repeater to extend the range between the two service sets.

Images

Figure 2.16 - The Extended Service Set and the distribution system

Security Issues

The original security for wireless LANs, referred to as Wired Equivalent Privacy (WEP), as its name implies, permits the equivalent of wired network privacy and nothing more. WEP was broken many years ago, and many improvements were made to the security technology, including rotating WEP keys and the use of RADIUS servers to strengthen wireless security. Other security enhancements include permitting only predefined MAC addresses via filtering, the use of better encryption beyond WEP, and layer-3 security measures associated with Web browsing39. While these security techniques made it a bit more difficult for hackers to recover the WEP key in use, they still represented security vulnerabilities.

In an attempt to minimize the vulnerability of wireless transmissions, several additional security-related techniques were developed. These techniques included two versions of Wi-Fi Protected Access (WPA and WPA2) and two new wireless-security-related standards from the IEEE, referred to as 802.11i and 802.1X. Concerning the former, the 802.11i standard includes a security protocol referred to as the Temporal Key Integrity Protocol (TKIP).

WPA and WPA2

Both WPA and WPA2 represent security protocols created by the Wi-Fi Alliance to secure wireless transmission, and they resulted from the security weaknesses of WEP. The protocols implement a large portion of the IEEE wireless security standard referred to as 802.11i, and WPA included the use of TKIP to enhance data encryption. TKIP was designed to add a level of security beyond that provided by WEP. To do so, TKIP added a key mixing function, a sequence counter that protects against replay attacks, and a 64-bit message integrity check that defeats forgery and tampering attacks. TKIP was introduced in 2002 and has been superseded by more robust encryption methods, such as AES and CCMP40.

Under WPA2, two modes of operation are supported: Personal mode and Enterprise mode. Personal mode was developed to support wireless security in the home and small office environment that lacked access to an authentication server. This mode of operation is referred to as Pre-shared key (PSK), and its use requires wireless network devices to encrypt traffic using a 256-bit key. That key can be entered as a passphrase of 8 to 63 printable ASCII characters or as a string of 64 hex digits. Because WPA-PSK automatically changes encryption keys, a technique referred to as rekeying, it provides a level of security significantly beyond that of WEP.
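The conversion of a passphrase into the 256-bit key is a PBKDF2 derivation over the SSID using HMAC-SHA1 and 4,096 iterations, which can be sketched in a few lines; the passphrase and SSID below are made up for illustration.

```python
# Deriving the WPA2 pre-shared key from a passphrase and SSID.
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit (32-byte) pre-shared key via PBKDF2-HMAC-SHA1."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                               4096, 32)

key = wpa2_psk("correct horse battery staple", "HomeOffice")
print(len(key) * 8)    # 256 - the key length in bits
print(key.hex())       # the equivalent 64-hex-digit form a user could enter
```

This is why entering either the passphrase or the 64 hex digits produces the same key on every device, and why a mismatched passphrase on one station silently breaks connectivity.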

It is important to note that although WPA and WPA2 are not IEEE standards, they implement the majority of the IEEE 802.11i standard, with WPA2 supporting the Advanced Encryption Standard (AES). AES supports three key lengths: AES-128, AES-192, and AES-256. While the block size is always 128 bits, the keys can be 128, 192, or 256 bits, resulting in the terms used to reference each portion of the standard. Today, most wireless products sold for use in the home or small office support WPA2. Because setup involves a few clicks within the operating system to enter a passphrase or string of hex digits, the major difficulty reported by users typically involves the failure to use the same passphrase or hex code on each wireless device.

IEEE 802.11i and 802.1X

While WPA and WPA2 implement a majority of the 802.11i standard, they are not fully compatible with it. While 802.11i makes use of the AES block cipher, both the original WEP and WPA use the RC4 stream cipher. Another difference is that the 802.11i architecture includes support for the 802.1X standard as an authentication mechanism, based on the use of the Extensible Authentication Protocol (EAP) and an authentication server. It also specifies the Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP), an encryption protocol based on AES that provides confidentiality, integrity, and origin authentication. These additions in the 802.11i standard are well suited for the enterprise.

802.1X41

The IEEE 802.1X standard provides port-based authentication, requiring a wireless device to be authenticated prior to gaining access to a LAN and its resources. Under this standard, the client node is referred to as a supplicant, while the authenticator is usually an access point or a wired Ethernet switch. By default, the authenticator bars the supplicant’s access to the network. The authenticator passes the supplicant’s request to access the network to an authentication server. If the authentication server accepts the supplicant’s request, the authenticator opens the port to the supplicant’s traffic; otherwise, it remains blocked. Messages among the supplicant, authenticator, and authentication server are transported via EAP.

In addition to the previously mentioned wireless enhancements, another technique commonly used to provide a high level of security is the use of a layer 3 VPN. Because VPNs are described later in this chapter, a detailed discussion is not warranted here; however, it is worth mentioning that they provide an alternative security mechanism that can be valuable when users are traveling or when their organization does not fully support the 802.1X standard.

Zones of Control

Through the use of virtual LANs, it becomes possible to partition switch-based networks into zones of control. Not only does this restrict who can access devices attached to specific switch ports, but in addition, this can enhance throughput by limiting broadcast traffic.

Figure 2.17 illustrates an 8-port LAN switch subdivided into two networks based on port associations. In this example, ports 1, 2, 3, and 4 are assigned to VLAN1, while the other ports are assigned to VLAN2. Note that traffic in VLAN1 is never seen by users in VLAN2 and vice versa, which provides a degree of both administrative control and security. Concerning the former, all accounting personnel could be assigned to VLAN1, while all engineering personnel could be assigned to VLAN2. Concerning security, the partition of the switch into two VLANs would then preclude accountants from accessing the engineering server, and vice versa. Also note that the segmentation of a switch into two or more VLANs can enhance performance, because broadcasts are restricted to each VLAN. VLANs can be further strengthened by adding encryption endpoints that encrypt all traffic across a VLAN.

Images

Figure 2.17 - VLAN switch partitioned by ports.
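The port-based partitioning in Figure 2.17 can be sketched as a simple membership check; the port-to-VLAN map mirrors the example, while the forwarding function itself is illustrative.

```python
# Port-based VLAN assignment for an 8-port switch, per the Figure 2.17 example.
VLAN_MAP = {1: "VLAN1", 2: "VLAN1", 3: "VLAN1", 4: "VLAN1",
            5: "VLAN2", 6: "VLAN2", 7: "VLAN2", 8: "VLAN2"}

def can_forward(in_port: int, out_port: int) -> bool:
    """Frames are only forwarded between ports belonging to the same VLAN."""
    return VLAN_MAP[in_port] == VLAN_MAP[out_port]

print(can_forward(1, 4))  # True: both in VLAN1 (e.g., accounting)
print(can_forward(2, 7))  # False: VLAN1 to VLAN2 is blocked
```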

Network Security

This section will examine some specific network security measures, including the use of generic products as well as the different types of tunneling and endpoint security measures the security architect needs to be familiar with.

Content Filtering

Content filtering represents a technique whereby packets are either blocked or allowed based on an analysis of their content, rather than on an IP address or other criteria. The most prominent use of content filtering is in programs that operate as add-ons to Web browsers or at a corporate gateway, blocking unacceptable messages that might be pornographic or racist. In an e-mail environment, content filtering is designed to place e-mail advertisements and similar types of junk mail, based on subject, content, or both, in a spam folder that most persons ignore.

Images

Figure 2.18 - The three-way handshake.

Anti-Malware

Anti-malware software can be considered a special type of content filter. However, instead of examining the content of packets for pornography, racist remarks, and similar content, this software is focused on detecting viruses, worms, Trojans, and other potentially harmful software. Once such software is detected, the anti-malware software will, based on its configuration, either block the packets or quarantine them. Often, anti-malware is sold as a virus-checking system that operates on a separate e-mail server in a corporate environment and checks for a variety of potentially malicious software.

One special type of software product that is incorporated into many routers is designed to block DoS attacks. One type of DoS attack occurs due to the manner in which TCP operates, which results in a three-way handshake. Figure 2.18 illustrates an example of the TCP three-way handshake process, which results in the exchange of SYN, SYN/ACK, and ACK messages. First, a client accessing a server transmits a SYN message to the server. The server responds with a SYN/ACK message, to which the client would normally respond with an ACK. However, if a hacker spoofs the IP source address, the SYN/ACK message will not receive an ACK. Although the server will eventually time out the connection, during the period it remains open it takes resources away from the server. When a hacker floods the server with spoofed IP addresses in a series of connection requests, the result is a DoS attack, which limits the ability of real clients to access the server.

Through the use of DoS prevention, most routers can be configured to restrict the number of open connections at a specific time or from a specific address.
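A minimal sketch of such a per-source limit on half-open (embryonic) connections follows; the limit value and the bookkeeping are assumptions for illustration, not any router vendor’s actual implementation.

```python
# Per-source limit on half-open TCP connections: a SYN from a source that
# already has too many unanswered SYN/ACKs outstanding is dropped.
from collections import Counter

MAX_HALF_OPEN = 3   # assumed per-source limit

class SynGuard:
    def __init__(self, limit=MAX_HALF_OPEN):
        self.limit = limit
        self.half_open = Counter()   # source IP -> embryonic connections

    def on_syn(self, src_ip) -> bool:
        """Return True if the SYN is admitted, False if it is dropped."""
        if self.half_open[src_ip] >= self.limit:
            return False
        self.half_open[src_ip] += 1
        return True

    def on_ack(self, src_ip):
        """Handshake completed: the connection is no longer half-open."""
        if self.half_open[src_ip] > 0:
            self.half_open[src_ip] -= 1

g = SynGuard()
results = [g.on_syn("203.0.113.9") for _ in range(5)]  # spoofed source never ACKs
print(results)  # [True, True, True, False, False]
```

A legitimate client that completes the handshake frees its slot via `on_ack`, so only sources that leave connections dangling are throttled.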

Anti-Spam

Content filtering is the building block upon which anti-spam products operate. For example, an e-mail spam filter could examine the originator, subject, or content of e-mails to decide whether to pass the mail to the recipient or place it in his or her spam folder. Figure 2.19 illustrates a portion of a spam folder on Yahoo mail. Note that the “From” column has names instead of e-mail addresses, which makes it easy to differentiate spam from genuine e-mail. Note also that at the time this author’s spam folder was captured, spam e-mailers were using improper dates, which represents another way to filter their junk mail. Between the two columns is the column labeled Subject, which can also be used to filter junk mail. Here, such keywords as lenders, free, and medication can be used to route e-mail to the spam folder.

Images

Figure 2.19 - Examining a Yahoo spam folder.
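The subject-keyword test described above can be sketched as follows; the keyword list comes from the example and is obviously far cruder than a production spam filter.

```python
# Toy subject-line spam classifier using the keywords from the example.
SPAM_KEYWORDS = {"lenders", "free", "medication"}

def classify(subject: str) -> str:
    """Route a message to 'spam' if its subject contains a flagged keyword."""
    words = {w.strip(".,!").lower() for w in subject.split()}
    return "spam" if words & SPAM_KEYWORDS else "inbox"

print(classify("Get FREE medication today!"))   # spam
print(classify("Meeting notes for Tuesday"))    # inbox
```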

Outbound Traffic Filtering

There are several types of communications devices that can be used to perform outbound traffic filtering. Such devices are primarily used to control the use of e-mail and Web access. The security architect should be familiar with the issues that this can create with regards to the deployment of secure gateway and proxy solutions, and the advanced deep inspection capabilities that these products may have to “look into” encrypted traffic, such as HTTPS, as it transits the network42.

When filtering outbound e-mail, some organizations use a special server to encrypt messages to certain third parties, forcing the recipient to register to receive mail as well as to either set up a user ID and password to access the mail server or to use a private key to decrypt the message encrypted with a public key. In other situations, an organization may configure a mail server to block mail sent to certain addresses, such as those with the domain .xxx.

The filtering of outbound Web traffic is commonly employed by several well-known security programs. Although the primary goal of outbound Web traffic filtering is to block users from accessing predefined URLs, such as phishing sites, sites considered racist, or sites offering gambling or pornography, a secondary goal is to enhance employee productivity by limiting the time that workers can access the Internet.

The ability to restrict outbound Web traffic to certain periods of time is commonly incorporated into routers, firewalls, and certain network appliances. For example, an organization might restrict outbound Web traffic to all destinations other than servers at its own organizational locations, with the exception of the lunch hour, when employees are allowed to pay bills, shop, or perform other Internet-related chores.

Mobile Code

Another type of outbound traffic filtering involves blocking mobile code. This type of code is software obtained from a remote system, transmitted over a network, and then downloaded and executed on a local system, all without the computer operator being aware of the activity taking place. Some common examples of mobile code include code developed using script languages such as JavaScript and VBScript, Java applets, ActiveX controls, Flash animations, and even macros embedded within Microsoft Office documents such as Excel and Word files. A mobile code attack can occur when a hacker scans a network for holes in the perimeter and sends mobile code to specific addresses, or uses e-mail attachments that execute code when clicked.

Some mobile code can be harmful, consisting of Distributed Denial of Service (DDoS) agents designed to attack a target or list of targets at specific times, viruses, worms, and other harmful software. By examining the content of outbound packets, the spread of malware may be contained; however, it does not rid the computer of the problematic software. To do so, a virus checker or the alert message of the device performing the outbound traffic filtering must be examined.

Policy Enforcement Design

Content-aware Data Loss Prevention (DLP) tools enable the dynamic application of policy based on the classification of content determined at the time of an operation. Content-aware DLP describes a set of technologies and inspection techniques used to classify information content contained within an object such as a file, email, packet, application or data store while at rest (in storage), in use (during an operation) or in transit (across a network); and the ability to dynamically apply a policy such as log, report, classify, relocate, tag and encrypt and/or apply enterprise data rights management protections. DLP technologies help organizations develop, educate and enforce better business practices concerning the handling and transmission of sensitive data. There are three broad categories of DLP that the security architect needs to be familiar with as they plan the deployment of a solution:

Images   Enterprise DLP solutions, which provide organizations with advanced content-aware inspection capabilities and robust management consoles.

Images   Channel DLP, which consists of content-aware DLP capabilities that are integrated within an existing application — typically email.

Images   DLP-lite, a new subcategory of offerings that group a specific set of capabilities in a way that addresses a niche market typically by requirement, such as discovery only, or for a specific use case, such as small or midsize business (SMB), where a need may exist to monitor only a few protocols and provide a simplified management console or workflow.

Gartner inquiry data through 2011 indicates several major observations that should help security architects to develop appropriate requirements and select the right technology for their needs43:

Images   About 30% of enterprises led their content-aware DLP deployments with network requirements — 30% began with discovery requirements, and 40% started with endpoint requirements. Enterprises that began with network or endpoint capabilities nearly always deploy data discovery functions next. The majority of large enterprises purchase at least two of the three primary channels (network, endpoint and discovery) in an initial purchase, but few deploy all of them simultaneously.

Images   Many enterprises struggle to define their strategic content-aware DLP needs clearly and comprehensively.

Images   The primary appeal of endpoint technologies continues to be the protection of IP and other valuable enterprise data from insider theft and accidental leakage (full disk encryption mitigates the external theft and compliance issues). The value of network and discovery solutions, by contrast, lies in helping management to identify and correct faulty business processes, in identifying and preventing accidental disclosures of sensitive data, and in providing a mechanism for supporting compliance and audit activities.

Images   DLP solution providers continue to focus on text-based data in their analysis of content. Although a few vendors are making inroads into identifying non-text data, such as images, video, audio and other media, these remain in the early stage.

Images   Many DLP deployments are sold on the basis of being a tool to assist in risk management activities; however, most DLP solution reporting capabilities do not provide dashboard or feedback relevant for this function.

It is imperative that the security architect continue to be aware of the absolute need to involve non-IT stakeholders in the planning and operationalization of DLP. Although IT/IT security can play a role in ensuring the day-to-day operation of a DLP system, ultimately, the business needs to decide when an event is a policy violation and what the appropriate remedies for the incident are. The security architect needs to then be able to act as a bridge for those conversations, and link the outputs directly back to the security architecture as required, incorporating the feedback from the organization in an ongoing way.

Application and Transport Layer Security

The TCP/IP protocol suite in effect combines the upper three layers of the OSI model (application, presentation, and session), as shown in Figure 2.20, into a single layer that is commonly referenced as a TCP/IP application. Over the past two decades, several application protocols were developed to support secure e-commerce and in turn safe browsing and purchasing from different Web sites. This section briefly discusses social media technologies, and describes and discusses some security-related protocols that enable safe credit card transactions and deposits and withdrawals from checking accounts.

Images

Figure 2.20 - Comparing the TCP/IP protocol suite to the ISO reference model

Social Media

Social media is defined as online content created by people using highly accessible and scalable publishing technologies. Twitter, Facebook, LinkedIn, MySpace, YouTube, and even wikis are all good examples of social media. Other subcategories of social media include social networking, blogging, micro-blogging, and more. Social networking sites, which provide users the ability to connect, communicate, and share with others, also serve as a platform for the advertising industry. They allow businesses to become known globally with ease, since social networking site users are distributed across different geographical locations. They also allow business owners to have a “personal” connection with customers and a place to find and get to know potential employees.

One of the main challenges for the security architect with regards to social media and more broadly, social networking technologies in the enterprise, comes from the intersection of the tremendous increase in smart device capabilities and the Bring Your Own Device (BYOD) phenomenon that has become prevalent in recent years44. The increased capabilities that smart devices offer to end users are proving to be a boon for productivity in many cases. However, these gains are coming at the expense of security and access control to sensitive data. As more and more unregulated end points are allowed to connect to a network, the ability for the security architect to enforce access control policy and information security governance continues to erode.

For cybercriminals, the shift from desktop-based applications to Web-based ones, particularly those on social networking sites, presents a new vector for abuse. As more and more people communicate through social networks, the more viable social networks become as malware distribution platforms. An example of the issues that the security architect faces from social-network-based communication in the enterprise is KOOBFACE, a revolutionary piece of malware that was the first to have a successful and continuous run propagating through social networks.

“KOOBFACE is composed of various components, each with specific functionalities. While most malware cram their functionalities into one file, KOOBFACE divides each capability into different files that work together to form the KOOBFACE botnet. A typical KOOBFACE infection starts with spam sent through Facebook, Twitter, MySpace, or other social networking sites containing a catchy message with a link to a “video.” KOOBFACE can also send messages to the inbox of a user’s social network friends. Clicking the link will redirect the user to a website designed to mimic YouTube (but is actually named YuoTube), which asks the user to install an executable (.EXE) file to be able to watch the video. The .EXE file is, however, not the actual KOOBFACE malware but a downloader of KOOBFACE components. The components may be subdivided into the following:

Images   KOOBFACE downloader

Images   Social network propagation components

Images   Web server component

Images   Ads pusher and rogue antivirus (AV) installer

Images   CAPTCHA breaker

Images   Data stealer

Images   Web search hijackers

Images   Rogue Domain Name System (DNS) changer

The KOOBFACE downloader is also known as the fake “Adobe Flash component” or video codec the fake YouTube site claims the user needs to view a video that turns out to be nonexistent. The downloader’s actual purpose includes the following:

Images   Determine what social networks the affected user is a member of.

Images   Connect to the KOOBFACE Command & Control (C&C).

Images   Download the KOOBFACE components the C&C instructs it to download.

In order to determine what social networks the affected user is a member of, the KOOBFACE downloader checks the Internet cookies in the user’s machine. The KOOBFACE downloader checks the cookies for the following social networking sites:

Images   Facebook

Images   Tagged

Images   MySpace

Images   Bebo

Images   Hi5

Images   Netlog

Images   Friendster

Images   fubar

Images   myYearbook

Images   Twitter

The presence of cookies means the user has logged in to any of the above-mentioned social networking sites. The KOOBFACE downloader then reports all found social networking site cookies to the KOOBFACE C&C. Apart from the necessary social network propagation components, the KOOBFACE C&C may also instruct the KOOBFACE downloader to download and install other KOOBFACE malware that act as Web servers, ads pushers, rogue AV installers, CAPTCHA breakers, data stealers, Web search hijackers, and rogue DNS changers.”45

The social network propagation components of KOOBFACE may be referred to as the actual KOOBFACE worm, since these are responsible for sending out messages in social networking sites that eventually lead to the KOOBFACE downloader. The components of the KOOBFACE botnet owed their continued proliferation to the gratuitous link-sharing behaviors seen commonly on social networking sites. It is this link-sharing behavior that is the most problematic for the security architect with regard to access control and information security governance. Because most of the content that individuals share links to is hosted outside the security boundaries of the organization, the security architect has no direct control over the content and, as a result, cannot apply access control mechanisms, content filtering, malware and virus scanning, packet inspection, or any other forms of screening and policy-based content controls that would normally be applied to data as it transits the organization’s network and infrastructure. Finding ways to address these issues through the separation of social and work-related activities via physical and logical controls, down to the device level, is one of the key challenges that security architects face in the area of social media today.

Secure E-Commerce Protocols

A protocol represents a set of rules that govern communication between two network entities. Some of the most widely used protocols are members of the TCP/IP family, such as the Internet Protocol (IP), the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Control Message Protocol (ICMP). A security protocol is a communication protocol that is specifically designed to provide secure communications. Several security protocols are either in use or being developed for use on the Internet. Such security protocols are designed for different applications ranging from the use of credit cards to spending micro dollars, which add up cumulatively to a large amount spread over hundreds to thousands of Web sites. In addition, each security protocol may provide different benefits, depending on where it is positioned in the TCP/IP protocol suite.

The most widely used security protocol is the Secure Sockets Layer (SSL) because it is built into every popular Web server and browser. Despite its name, SSL does not secure a transaction directly; instead, it provides a secure connection for any information flowing between a browser and server via the HyperText Transfer Protocol (HTTP). Over the past few years, SSL has migrated to a derivative IETF standard referred to as Transport Layer Security (TLS), which is very similar to SSL Version 3.0; these standards will be referred to interchangeably in this chapter. SSL resides just above TCP but below the application it protects and is transported by underlying protocols, so it does not require modification to the operating system’s networking software and does not affect data or document structures. Figure 2.21 illustrates the relationship of SSL to other layers in the TCP/IP protocol stack.

Images

Figure 2.21 - SSL and the TCP/IP protocol stack

SSL/TLS and the TCP/IP Protocol Stack

As the name Secure Sockets Layer indicates, SSL connections act like sockets connected by TCP. Therefore, one can think of SSL connections as secure TCP connections because the place for SSL in the protocol stack is right above TCP. It is important to note, however, that SSL does not support some TCP features, such as out-of-band data.

The SSL protocol was developed by Netscape Communications Corporation in 1994. SSL allows clients, such as Web browsers and HTTP servers, to communicate over a secure communications connection. To accomplish this, SSL supports encryption, source authentication, and data integrity as key mechanisms that are used to protect information exchanged over insecure public networks such as the Internet. There are several versions of SSL, with SSL 3.0 being the latest version, which is universally supported. A newer “version” of SSL known as the Transport Layer Security (TLS) is an improvement over SSL 3.0, was promulgated as an Internet standard, and is supported by just about all recent software.

After building a TCP connection, the SSL handshake is started by the client. The client, which can be a browser or any other program such as Windows Update or PuTTY, sends a number of specifications: which version of SSL/TLS it is running, what ciphersuites it wants to use, and what compression methods it wants to use. The server determines the highest SSL/TLS version supported by both sides, picks a ciphersuite from the client’s options (if it supports one), and optionally picks a compression method.

After this basic setup is done, the server sends its certificate. This certificate must be trusted either by the client itself or by a party that the client trusts. For example, if the client trusts CA ABC, then the client can trust the certificate from 123.com, because ABC cryptographically signed 123.com’s certificate.

Once the client has verified the certificate and is certain that this server really is who it claims to be (and not a man in the middle), a key is exchanged. This can be a public key, a preshared secret, or simply nothing, depending on the chosen ciphersuite. Both the server and the client can now compute the key for the symmetric encryption. The client tells the server that from now on, all communication will be encrypted, and sends an encrypted and authenticated message to the server.

The server verifies that the MAC (used for authentication) is correct, and that the message can be correctly decrypted. It then returns a message, which the client verifies as well.

The handshake is now finished, and the two hosts can communicate securely. To close the connection, a close_notify ‘alert’ is used. If an attacker tries to terminate the connection by finishing the TCP connection (injecting a FIN packet), both sides will know the connection was improperly terminated. This cannot compromise the connection, merely interrupt it.
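The negotiation described above can be observed with Python’s standard `ssl` module. The sketch below builds a client-side context and lists what it would offer in its ClientHello; no network connection is made, and the exact ciphersuites printed depend on the local OpenSSL build.

```python
import ssl

# Build a client-side TLS context; its settings approximate what a client
# advertises in its ClientHello: acceptable protocol versions and an
# ordered list of ciphersuites.
ctx = ssl.create_default_context()

print("Minimum protocol version offered:", ctx.minimum_version.name)

# Each entry is one ciphersuite the client is willing to negotiate;
# the server picks one that it also supports.
for suite in ctx.get_ciphers()[:3]:
    print(suite["name"], "-", suite["protocol"])
```

The server-side half of the negotiation is the mirror image: it intersects this offered list with its own configuration and selects one suite for the session.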

Encryption

Encryption is used to protect data from observation and potential use by converting it to an apparently meaningless form prior to transmission. The data is encrypted by one side (either the client or the server), transmitted, and then decrypted by the other side prior to being processed.

Authentication

Authentication represents a method of verifying the identity of the other party in a communications session. In e-commerce, this enables the client accessing a server to verify its identity and the server to verify the identity of the client.

There are several ways of configuring authentication. First, if an authentication method is not configured, no authentication will occur. Basic server authentication can also be enabled, which provides authentication of the server accessed by a client. A third authentication method is referred to as mutual authentication, which results in the server authenticating the client while the client authenticates the server.

The first time a browser or other client attempts to communicate with a Web server over a secure connection, the server presents the client with a set of credentials. Those credentials are in the form of a certificate.

Certificates and Certificate Authorities46

Certificates are issued and validated by trusted authorities referred to as certification authorities (CAs). A certificate represents the public-key identity of a person. It is a signed document that in effect says: “I certify that the public key in this document belongs to the entity named in this document.” One of the most widely used CAs is VeriSign.

Data Integrity

The function of data integrity is to ensure that data has not been modified. Implementing data integrity can include monitoring and modification detection of key files, regardless of whether the modification was malicious, or accidental. In a Windows environment, this can include looking for changes to the registry, changes to files’ security access permissions, changes to services, as well as changes to the contents of files.
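Modification detection of the kind described above typically works by hashing content and comparing against a recorded baseline. The sketch below, with illustrative file content, uses SHA-256 from Python’s standard library; any change to the content, malicious or accidental, changes the digest.

```python
import hashlib

def digest(data: bytes) -> str:
    """Return a SHA-256 hex digest of the given content."""
    return hashlib.sha256(data).hexdigest()

# Baseline recorded when the file was known-good (illustrative content).
baseline = digest(b"server_config_v1")

# Later checks: an identical file matches the baseline, a modified one does not.
assert digest(b"server_config_v1") == baseline   # unchanged
assert digest(b"server_config_v2") != baseline   # modified
print("integrity check passed")
```

In practice the baseline digests are stored out-of-band so an attacker who modifies a file cannot also update its recorded hash.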

SSL/TLS Features

The original design of SSL and its subsequent reincarnation as TLS were well thought out and resulted in the two protocols being used for secure e-commerce transactions. SSL represents a de facto standard, while TLS represents a formal standard promulgated by the IETF. From the very beginning, the designers of SSL were aware that not all parties would use the same client software. In addition, due to differences in hardware platform processing, clients could not be expected to embrace a single encryption algorithm, because an algorithm suitable for one hardware platform might be unsuitable for another. The same was true for servers. Thus, under SSL and TLS, the client and server at the two ends of a connection negotiate the encryption and decryption algorithms (cipher suites) during their initial handshake.

Although SSL permits both the client and the server to authenticate each other, typically only the server is authenticated in the SSL layer. Clients are primarily authenticated in the application layer, through passwords sent over an SSL-protected communications link between client and server.

Figure 2.22 illustrates the beginning of an SSL session when this author used a browser to access the Smith Barney Web site. In the background, note the lock in the upper right that is used to indicate a secure connection. Depending on the browser and version used, the lock can be at different locations on the Web page. In general, SSL- or TLS-enabled Web sites are recognizable by the lock or key icon displayed at the top or bottom of a browser window when visiting a site that supports transmission security.

Images

Figure 2.22 - Accessing a secure Web site

Limitations of SSL/TLS

A key limitation of SSL/TLS is the fact that information passed over a secure connection becomes nonsecure when the server being accessed stores the received data on a hard drive. In fact, this major limitation has allowed hackers to obtain millions of credit card numbers and other information by hacking into an organization’s server and downloading the contents of various server files. In one such case, the compromised server held account information for several brands, resulting in the credit card information of over a million persons who purchased items at over a thousand stores being compromised. Thus, for additional safety, SSL should be supplemented by the encryption of data stored on e-commerce servers.

In addition to its use for securing access to Web servers, SSL can be used to secure communications with mail servers via POP3 (Post Office Protocol Version 3), IMAP (Internet Message Access Protocol), and SMTP (Simple Mail Transfer Protocol), directory servers via LDAP (Lightweight Directory Access Protocol), CA servers, FTP servers, and many custom applications.

Other Security Protocols

Some additional security-related protocols include Secure Multimedia Internet Mail Extensions (S/MIME), used for securing e-mail; iKP, which represents a family of protocols that provides a model for secure credit card transactions; Millicent, which was developed as a method for micropayments; and Netcash and Digicash, the latter two developed for anonymous transactions.

S/MIME is an application-layer protocol that was developed to provide security for e-mail documents. It accomplishes this by securing the transmission of e-mail through store-and-forward processing and even during storage on a destination hard drive. S/MIME can be used to create signed orders and other types of e-commerce records. Currently, several popular e-mail programs support S/MIME; however, the requirement for a full Public Key Infrastructure (PKI) to deploy digital identities to users has hindered its widespread adoption.

The iKP family of protocols was designed at IBM-Zürich to provide secure credit card payments over an insecure network, such as the Internet47. Millicent was designed by Digital Equipment’s Systems Research Center at Palo Alto, California, to enable secure micropayments, enabling transactions that cost a fraction of a cent to occur on the Internet48. Because the cost associated with a typical security protocol can exceed a micropayment, Millicent addresses this economic problem by providing lightweight secure transactions more suitable for micropayment transactions. Another series of security-related protocols such as IPSec, L2TP, and SOCKS are used for constructing virtual private networks (VPNs).

Secure Remote Procedure Calls

Prior to discussing how to secure Remote Procedure Calls (RPCs), it is important to understand what they are and how they are used49. An RPC represents a technique for building client–server-based applications. An RPC can be considered similar to a function call, with calling arguments passed to the remote procedure while the calling software waits for a response from the remote procedure operating on a server. At the client, the software thread that initiated the RPC is blocked from further processing until either a response is received from the server or a timeout occurs. At the server, a routine is initiated that performs the requested service and transmits a response to the client.
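The blocking call-and-response pattern can be sketched with Python’s standard `xmlrpc` modules; the procedure name `add` and the loopback address are illustrative, and the client call blocks exactly as described above until the server replies.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# A trivial remote procedure exposed by the server.
def add(a, b):
    return a + b

# Start the RPC server on an OS-assigned local port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client thread blocks here until the server's response arrives
# (or a timeout occurs), just like a local function call.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
print(result)  # 5
server.shutdown()
```

Note that XML-RPC here stands in for any RPC mechanism; the security steps discussed next apply regardless of the specific RPC framework.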

To secure RPCs, several steps are required. First, the client software must create an association with a server. The client then invokes appropriate security services to compute a checksum of the previously created association. Finally, the client initializes two 32-bit sequence numbers that are used to establish pairwise credentials between the client and server. At the server, upon receipt of an association request, the server stores the association checksum. Next, the server creates two 32-bit sequence numbers. Although client and server sequence numbers are not transmitted, they are used to compute a variety of security checks, such as ensuring that data is transmitted and received in the same order.

The programming of the RPC permits various security options, such as defining a desired protection level and the algorithm used to protect data via an authentication service. Because some options are more CPU intensive than others, a distinction can be made between intranet and Internet RPCs. That is, in an intranet environment where the threat is substantially reduced, processing may be enhanced by reducing security. In comparison, when used over the Internet, RPCs should be configured for maximum security.

Network Layer Security and VPNs

VPN technology is based on a technique referred to as tunneling. Under VPN tunneling, a logical connection is established and maintained between two locations connected via a packet network. Over this connection, packets are formed and transmitted via a client according to a specific VPN protocol being used. The client typically places a header and possibly a trailer around each packet, which encapsulates each packet and, depending on the protocol used, may encrypt and add authentication to the packet. At its destination, the packet is stripped of its header and optional trailer, and may be authenticated and decrypted based on the protocol in use.
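Tunneling is, at its core, encapsulation. The sketch below wraps an inner packet with an illustrative header and trailer and strips them at the far end; the framing bytes are hypothetical and do not correspond to any specific VPN protocol, and real protocols would also encrypt and authenticate the encapsulated packet.

```python
# Hypothetical tunnel framing: a 4-byte magic header and a 2-byte trailer.
HEADER = b"VPN1"
TRAILER = b"\x00\x00"

def encapsulate(inner_packet: bytes) -> bytes:
    """Client side: wrap the original packet for transit through the tunnel."""
    return HEADER + inner_packet + TRAILER

def decapsulate(tunnel_packet: bytes) -> bytes:
    """Destination side: strip the header and trailer to recover the packet."""
    assert tunnel_packet.startswith(HEADER), "not a tunnel packet"
    return tunnel_packet[len(HEADER):-len(TRAILER)]

original = b"IP-header|TCP-header|payload"
assert decapsulate(encapsulate(original)) == original
print("round trip ok")
```

The key property is that the network in the middle only ever sees the outer framing; the inner packet emerges at the destination exactly as it was sent.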

The primary purpose of a VPN is to enable clients to access servers via a public packet network such as the Internet in a secure manner. Another reason for the use of VPNs is economics. That is, a VPN enables two or more locations to use the Internet as a transmission facility, enabling companies to avoid the cost of expensive leased lines or dial charges. One key application of VPNs is to link or interconnect networks in two or more distributed locations to one another via the Internet. In fact, over the years, VPNs have progressed from being client–server tunneling protocols to developing network-to-network protocol capability; the introduction of network appliances that support VPNs enables network operators to purchase off-the-shelf hardware that facilitates interconnecting networks at multiple locations via the Internet in a secure manner.

Other reasons for the use of VPNs include securing wireless transmissions from hot spots in airports and coffee shops back to a corporate server, a reduced need for third-party support, network scalability, and, sometimes, ease of use. Concerning network scalability, while the cost associated with constructing a network using dial-up and leased lines may appear reasonable at first, as the need to add more branch offices expands, so does the cost. By using a VPN, all that is required to add a location is a line connecting the office to the Internet, making the Internet’s vast collection of interconnected lines and routers available to an organization both domestically and overseas.

Figure 2.23 illustrates the use of the Internet to interconnect four geographically distributed branch offices. Note that only four connections to the Internet are required, one from each location. In comparison, 6 leased lines would be required to interconnect the offices without the use of the Internet. If the organization added two more branches, it would need six Internet connections; however, the number of leased lines would increase to 15 to provide a similar interconnectivity capability. In addition, the mesh structure of the Internet provides a high degree of alternate routing capability, which routes data around impairments such as network outages or traffic bottlenecks; this capability would be very costly to duplicate on an individual organizational basis. Thus, for some organizations, the ability to add branch connections at a nominal cost makes economics and scalability important drivers for the use of VPNs.
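The link counts quoted above follow from the full-mesh formula n(n-1)/2; a quick check using the office counts from this section:

```python
def leased_lines(n: int) -> int:
    """Leased lines needed to fully mesh n sites (one line per pair of sites)."""
    return n * (n - 1) // 2

def internet_links(n: int) -> int:
    """With a VPN over the Internet, each site needs only one connection."""
    return n

print(leased_lines(4), "vs", internet_links(4))  # 6 vs 4
print(leased_lines(6), "vs", internet_links(6))  # 15 vs 6
```

The gap widens quadratically: a 20-site organization would need 190 leased lines for a full mesh, but still only 20 Internet connections.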

Images

Figure 2.23 - Using the Internet to connect four distributed locations to one another

Types of VPN Tunneling

A VPN interconnects two or more locations via tunneling. There are two basic types of VPN tunneling: voluntary and compulsory. Both types of tunneling are commonly used in different VPN protocols. In addition, depending on the VPN protocol used, additional differences may exist.

Under voluntary tunneling, the VPN client manages the connection setup process. The client first initiates a connection to the communications carrier, which is an Internet service provider (ISP), when establishing an Internet VPN. Then, the VPN client application creates the tunnel to a VPN server over the connection.

Under compulsory tunneling, the communications carrier network provider is responsible for managing the VPN connection setup process. Thus, when the client initiates a connection to the carrier, the carrier in turn immediately initiates a VPN connection between that client and a designated VPN server. As viewed by the client, the VPN connection is set up in just one step compared to the two-step procedure required for voluntary tunnels. Compulsory VPN tunneling automatically authenticates clients and associates them with specific VPN servers by using predefined programming in the carrier network. The predefined programming is commonly referred to as a VPN Front End Processor (FEP), Network Access Server (NAS), or Point of Presence Server. Note that compulsory tunneling hides the details of VPN server connectivity from VPN clients and transfers management control over the tunnels from clients to the ISP. In return, service providers become responsible for the installation and maintenance of VPN hardware and software in their network.

VPN Tunneling Protocols

Since the early 1990s, several computer network protocols have been developed to support VPN tunnels. Some of the more popular VPN tunneling protocols include the Point-to-Point Tunneling Protocol (PPTP), Layer 2 Tunneling Protocol (L2TP), IP Security (IPSec), a combination of L2TP and IPSec referred to as L2TP/IPSec, and TCP Wrappers.

Point-to-Point Tunneling Protocol (PPTP)

Although PPTP was bundled with most versions of Windows beginning with Windows 95, its actual development resulted from a joint effort between Microsoft Corporation and several other vendors, including Ascend Communications, a router manufacturer. The initial version of PPTP for Windows was for dial-up access, with later versions supporting tunneling via the Internet. Encryption is based on the RC4 algorithm, which Microsoft refers to as Microsoft Point-to-Point Encryption (MPPE) and is not part of the PPTP specification50. Instead, it is performed by the RAS server and is not supported by all vendors.

Operation

PPTP is built on top of the Point-to-Point Protocol (PPP), which is commonly used as the login protocol for dial-up Internet access. PPTP stores data within PPP packets, then encapsulates the PPP packets within IP datagrams for transmission through an Internet-based VPN tunnel. PPTP supports data encryption and compression and uses a form of Generic Routing Encapsulation (GRE) to get data to and from its final destination. PPTP VPN tunnels are created via the following two-step process. First, the PPTP client connects to its ISP using PPP dial-up networking, typically via a modem or ISDN connection. Next, PPTP creates a TCP control connection between the VPN client and the destination VPN server to establish a tunnel; PPTP uses TCP port 1723 for these connections. PPTP also supports VPN connectivity via a LAN. If the VPN is localized to the LAN, ISP connections are not required, and PPTP tunnels can be created directly, because PPTP establishes the TCP control connection between the VPN client and the destination VPN server.

Images

Figure 2.24 - Using the Internet to create a PPTP tunnel from one location to another

A common security method implemented in routers is to employ access lists that allow IP datagrams containing a specified source and destination address and transporting TCP with a destination port of 1723. In effect, this action creates a PPTP tunnel. Figure 2.24 illustrates the use of the Internet to create a tunnel between locations A and B. Assuming for simplicity that the IP address of the client at location A is 1.2.3.4 and the IP address of the VPN server at location B is 4.3.2.1, the generic statement in an access list for the router at location B to allow PPTP datagrams from the VPN tunnel established by the client at location A is

Allow IP 1.2.3.4 4.3.2.1 TCP any 1723

This assumes that the access list format requires specifying a protocol (IP) followed by source and destination IP addresses followed by another protocol (TCP) and source and destination ports.
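How a router evaluates such a rule against a packet can be sketched as a simple tuple match; the rule format below mirrors the generic statement above and is not any vendor’s actual syntax.

```python
def matches(rule, packet):
    """Return True if a packet matches an access-list rule.

    Both rule and packet are (src_ip, dst_ip, protocol, src_port, dst_port)
    tuples; the literal "any" in a rule field acts as a wildcard.
    """
    return all(r == "any" or r == p for r, p in zip(rule, packet))

# Allow IP 1.2.3.4 4.3.2.1 TCP any 1723  (the PPTP control connection)
rule = ("1.2.3.4", "4.3.2.1", "TCP", "any", 1723)

print(matches(rule, ("1.2.3.4", "4.3.2.1", "TCP", 50000, 1723)))  # True
print(matches(rule, ("9.9.9.9", "4.3.2.1", "TCP", 50000, 1723)))  # False
```

A real access list is an ordered sequence of such rules, with the first match determining whether the datagram is permitted or denied.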

PPTP Security

PPTP supports authentication, encryption, and packet filtering. PPTP authentication uses PPP-based protocols such as the Password Authentication Protocol (PAP), the Challenge-Handshake Authentication Protocol (CHAP), and the Extensible Authentication Protocol (EAP). PPTP supports packet filtering on VPN servers. Intermediate routers and other firewalls can also be configured to selectively filter PPTP traffic.

PPTP Advantages and Disadvantages

A key advantage of PPTP is its inclusion in just about every version of Windows. Thus, Windows servers can also function as PPTP-based VPN servers without an organization bearing any additional cost.

Unfortunately, PPTP has several vulnerabilities. First, it is vulnerable to man-in-the-middle attacks. Second, and perhaps most important, PPTP supports only single-factor, password-based authentication. As a result, if a hacker steals or guesses an employee’s password, that intruder can access the company’s network. It is quite common to walk through a floor in an organization and see sticky messages with passwords posted on cubicle walls or monitors; obviously, any simple password-based system represents a risk.

Another disadvantage of PPTP is its failure to embrace a single standard for authentication and encryption. Thus, two products that both fully comply with the PPTP specification can be totally incompatible with each other if they encrypt data differently. In addition, numerous concerns have arisen over the level of security PPTP provides compared to alternative VPN protocols. As a result of questions regarding its security, PPTP has been made obsolete by Layer 2 Tunneling Protocol and IPSec.

Layer 2 Tunneling Protocol (L2TP)

While PPTP was being developed for VPN tunneling by Microsoft and Ascend Communications, Cisco was supporting the development of an alternative VPN protocol, referred to as Layer 2 Forwarding (L2F)51. L2F was primarily used in Cisco products and did not provide either encryption or authentication, relying on the protocol being tunneled to provide either or both. While L2F was specifically designed to tunnel PPP traffic, it was capable of carrying many other protocols. In an attempt to improve on L2F, the best features of it and PPTP were combined to create a new standard called the Layer 2 Tunneling Protocol (L2TP).

Similar to PPTP, L2TP takes its name from the data link layer (Layer 2) of the OSI reference model, the layer whose traffic it tunnels. In actuality, however, L2TP operates at the session layer (Layer 5) of the OSI model, using UDP port 1701.

L2TP does not itself provide encryption or authentication, relying for these capabilities on the protocol that passes within the tunnel it provides. The protocol was originally published in 1999 as proposed standard RFC 266152. A more recent version, L2TPv3, was published as proposed standard RFC 3931 in 200553. The key difference between the two versions is that L2TPv3 provides additional security features, improved encapsulation, and the ability to transport data links such as Frame Relay, Ethernet, and ATM over an IP network, whereas the original L2TP was restricted to transporting PPP.

Operation

L2TP uses the User Datagram Protocol (UDP). In doing so, the entire L2TP packet, including payload and L2TP header, is sent within a UDP datagram. Although PPP sessions are commonly transported within an L2TP tunnel, as previously mentioned, Ethernet, Frame Relay, and other types of data can be transported under L2TPv3.

Images

Figure 2.25 - Using the Internet to create a L2TP tunnel from one location to another

The two endpoints of an L2TP tunnel are called the LAC (L2TP Access Concentrator) and the LNS (L2TP Network Server). The LAC is the initiator of the tunnel, while the LNS is the server, which waits for new tunnels to be established. Once a tunnel is established, network traffic is bidirectional. When higher-level protocols are then run through an L2TP tunnel, an L2TP session is established within the tunnel for each higher-level protocol, such as PPP, Frame Relay, or Ethernet. Either the LAC or LNS may initiate sessions. The traffic for each session is isolated by L2TP, so it becomes possible to set up multiple virtual networks across a single tunnel.

The packets exchanged within an L2TP tunnel can be categorized as either control packets or data packets. L2TP provides reliability features for the control packets, but no reliability for data packets. If reliability is required for data packets, it must be provided by protocols running within each session of the L2TP tunnel.

Figure 2.25 illustrates the equipment required to provide multiple tunnels from one location to another via the Internet. In the lower-left corner, a modem bank terminates calls from the PSTN and passes them to a network access server (NAS), which is normally combined with an L2TP Access Concentrator (LAC). The L2TP Access Concentrator encapsulates PPP frames with L2TP headers and transmits them over the Internet as UDP packets. Or, as previously mentioned, the LAC can transmit over an ATM, Frame Relay, or X.25 network. At the destination, the L2TP Network Server (LNS) terminates the PPP session and passes the IP packets to the LAN. Because L2TP software can execute in a PC, the tunnel can extend from a remote user dialing the modem bank through the NAS and LAC to the LNS and destination LAN.

Today, most L2TP deployments are used to support the creation of VPNs via LAN connections over the Internet. Because such use is primarily by businesses, such PPP authentication protocols as CHAP, PAP, and EAP are employed for corporate access authentication. To support such authentication methods, L2TP creates a tunnel between the client and the corporate network. Then, the users’ identification is verified, and they can proceed as if they were directly connected to the distant network.

There are two basic types of tunneling: compulsory and voluntary. Under L2TP, compulsory tunneling is ideal for a business environment. This is because the tunnel is created from the LAC via the Internet to the LNS on a distant corporate network; remote clients neither have knowledge of the tunnel nor need L2TP client software. Instead, each remote client creates a PPP connection to the LAC and is then tunneled to the LNS. Another advantage of compulsory L2TP tunneling is that remote clients must go through the LAC to gain access to the distant corporate network. Thus, network managers can configure a single point, the LAC, to control permissions, including the authentication of remote clients.

The major problem associated with compulsory L2TP tunneling is its difficulty in supporting mobility away from a remote LAN. Thus, in an L2TP environment, an individual client-to-LNS tunneling method is required to support mobility. This method of tunneling is referred to as voluntary L2TP tunneling.

L2TP does not provide any encryption. In addition, by itself, it lacks authentication and data integrity methods because it was designed primarily as a mechanism to extend a PPP tunnel. To overcome the previously mentioned security deficiencies, it is common to combine IPSec with L2TP. Using IPSec, the L2TP tunnel can be secured, either from the LAC to LNS under compulsory tunneling or from a remote client to the LNS under voluntary tunneling.

Under L2TP, authentication occurs via PPP at the LAC or the LNS. Authentication can occur using PPP authentication protocols such as CHAP, PAP, or EAP. When the PPP connection process is encrypted by IPSec, any PPP authentication method can be used, with mutual authentication occurring if EAP or CHAPv2 is used.

The type of encryption used is determined during the establishment of the IPSec security association. Available encryption algorithms include the original 56-bit Data Encryption Standard (DES), 3DES, and certain versions of the Advanced Encryption Standard (AES). Data authentication and integrity are accomplished by the use of a hash message authentication code, such as Message Digest 5 (MD5), a hash algorithm that generates a 128-bit hash of the authenticated payload, or the Secure Hash Algorithm (SHA), which produces a 160-bit hash of the authenticated payload. Thus, the use of IPSec with L2TP can considerably strengthen the tunneling protocol.
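The hash lengths cited above are easy to confirm with Python’s standard `hmac` module; the key and payload below are illustrative.

```python
import hashlib
import hmac

key = b"negotiated-session-key"   # illustrative key from the SA negotiation
payload = b"authenticated payload"

# HMAC-MD5 yields a 128-bit (16-byte) code; HMAC-SHA1 yields 160 bits (20 bytes).
md5_mac = hmac.new(key, payload, hashlib.md5).digest()
sha1_mac = hmac.new(key, payload, hashlib.sha1).digest()

print(len(md5_mac) * 8)   # 128
print(len(sha1_mac) * 8)  # 160
```

The receiver recomputes the HMAC over the received payload with the shared key and compares; any payload modification, or a sender without the key, produces a mismatch.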

L2TP Packet Exchange

The setup of an L2TP connection results in the exchange of a series of control packets between clients and servers to establish tunnels and sessions in each direction. During the setup process, a specific tunnel and session ID is assigned. Through the use of the assigned tunnel and session ID numbers, multiple tunnels can be established on the same path, with data packets exchanged using compressed PPP frames as the payload. Because L2TP does not include encryption (unlike PPTP), it is often used in combination with IPSec to provide VPN connections from remote users to the corporate LAN.
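Tunnel and session IDs are what let multiple logical flows share one path. The sketch below demultiplexes packets by those IDs; the header is simplified to just the two 16-bit identifiers, omitting the flags, length, and sequence fields of a real L2TP header.

```python
import struct

def wrap(tunnel_id: int, session_id: int, payload: bytes) -> bytes:
    """Prefix a payload with simplified 16-bit tunnel and session IDs."""
    return struct.pack("!HH", tunnel_id, session_id) + payload

def demux(packet: bytes):
    """Recover (tunnel_id, session_id, payload) at the far end."""
    tunnel_id, session_id = struct.unpack("!HH", packet[:4])
    return tunnel_id, session_id, packet[4:]

# Two sessions multiplexed over one tunnel (tunnel ID 9):
a = wrap(9, 1, b"PPP frame for session 1")
b = wrap(9, 2, b"Frame Relay data for session 2")

print(demux(a)[:2])  # (9, 1)
print(demux(b)[:2])  # (9, 2)
```

Because each packet carries both IDs, the receiving endpoint can route it to the correct session without any per-packet negotiation.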

IPSec54

IP Security (IPSec) represents a family of security protocols, promulgated as RFCs by the IETF, that provides both authentication and encryption over the Internet55. Unlike SSL, which provides services at Layer 4 and secures two applications, IPSec operates at Layer 3 and secures everything in the network. Also, unlike SSL, which is typically built into every Web browser, IPSec requires a client installation. IPSec can provide security for both Web and non-Web applications, whereas SSL is primarily used for Web access but, with additional effort, can be used to secure such applications as file sharing and e-mail.

The primary use of IPSec is for building VPNs. IPSec secures individual packets flowing between any two computers connected to an IP network. IPSec includes the ability to establish mutual authentication between computers at the beginning of the session and supports the negotiation of encryption keys to be used during the session. In addition to securing data flows between a pair of computers, IPSec can be used to secure communications to routers, firewalls, and other devices that are IPSec compliant.

Operation

IPSec operates at the IP layer (Layer 3) of the Internet Protocol Suite. The operation of IPSec at Layer 3 makes this security protocol more flexible than SSL/TLS and higher-layer protocols. This results from the fact that IPSec can be used for protecting all the higher-level protocols. This enables applications to avoid having to be designed to use IPSec, whereas the use of TLS/SSL or other higher-layer protocols must be incorporated into the design of an application.

IPSec represents a family of security-related protocols. Each protocol was designed to perform different security-related functions. Those protocols and their functions include:

  1. Authentication Header (AH): Provides authentication for IP datagrams as well as protection against replay attacks.

  2. Encapsulating Security Payload (ESP): Provides authentication, data integrity, and confidentiality of packets transmitted. While ESP supports encryption-only and authentication-only modes of operation, note that using encryption without authentication is strongly discouraged because it is insecure.

  3. Internet Key Exchange (IKE): An IPSec protocol used to set up a Security Association (SA) by handling negotiation of the encryption and authentication keys to be used by IPSec.

Security Association

The IP security architecture uses the concept of a security association as the basis for building security functions into IP. A security association represents the bundling of the algorithms and parameters that are used to encrypt and authenticate a particular flow in one direction. Because data traffic is normally bidirectional, the flows are secured by a pair of security associations. The actual encryption and authentication algorithms can be selected from a predefined list by the IPSec administrator.

For an incoming packet, IPSec uses the Security Parameter Index (SPI), which points to a location in a Security Association Database (SADB); together with the destination address in the packet header, the SPI uniquely identifies the security association for that packet, allowing IPSec to gather the decryption and verification keys from the security association database. A similar lookup is performed for an outgoing packet.
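The SPI-plus-destination lookup can be sketched as a keyed table; the SPI values, addresses, and algorithm names below are made up for illustration.

```python
# Hypothetical security association database, keyed by (SPI, destination).
sadb = {
    (0x1001, "4.3.2.1"): {"cipher": "AES-128", "auth": "HMAC-SHA1"},
    (0x1002, "4.3.2.1"): {"cipher": "3DES",    "auth": "HMAC-MD5"},
}

def lookup_sa(spi: int, dst: str) -> dict:
    """Uniquely identify the security association for an arriving packet."""
    return sadb[(spi, dst)]

# An arriving packet carries SPI 0x1001 and is addressed to 4.3.2.1:
sa = lookup_sa(0x1001, "4.3.2.1")
print(sa["cipher"], sa["auth"])  # AES-128 HMAC-SHA1
```

Because each simplex flow has its own SA, the reverse direction of the same conversation uses a different (SPI, destination) pair and may even use different algorithms.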

Authentication Header (AH)

AH operates directly above IP, using IP protocol number 51. AH is employed to authenticate the origin of data as well as to provide for the data integrity of IP datagrams. In addition, it can optionally protect against replay attacks through the use of a sliding window technique and the discarding of old packets. AH protects the IP payload and all header fields of an IP datagram except for mutable fields that are altered in routing, such as the TTL field, which changes as the datagram passes through each router.

Figure 2.26 illustrates where an AH packet resides within an IP datagram and the fields within the header.

The Next Header is an 8-bit field that identifies the type of the next payload after the Authentication Header. The value of this field is chosen from the set of IP Protocol Numbers defined in the most recent “Assigned Numbers” RFC from the Internet Assigned Numbers Authority (IANA). For example, hex 6 is used for TCP, whereas hex 11 designates UDP. The following field, Payload Length, defines the size of the AH packet. The third field, shown padded to zero, represents a “Reserved” field that is currently not used. The fourth field, the Security Parameters Index (SPI), identifies the security parameters, which, in combination with the IP address, identify the security association (SA). The SA represents a simplex, or one-way, logical connection that provides a secure data channel between the network devices implementing this packet. The fifth field, the Sequence Number, is an increasing counter that is used to prevent replay attacks. The sixth field is the Authentication Data field, which contains the integrity check value (ICV) necessary to authenticate the packet.
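The fixed AH fields described above can be assembled with Python’s `struct` module. This is a sketch: the SPI, sequence number, and a zero-filled ICV are illustrative values, and real implementations compute the ICV over the packet contents.

```python
import struct

def build_ah_header(next_header: int, spi: int, seq: int, icv: bytes) -> bytes:
    """Assemble the fixed AH fields: Next Header, Payload Length,
    Reserved, SPI, and Sequence Number, followed by the ICV.
    Payload Length is measured in 32-bit words minus 2, per the AH spec."""
    payload_len = (12 + len(icv)) // 4 - 2
    reserved = 0
    return struct.pack("!BBHLL", next_header, payload_len, reserved, spi, seq) + icv

# Next Header 0x06 = TCP; SPI, sequence number, and ICV are illustrative.
hdr = build_ah_header(0x06, 0x1001, 1, b"\x00" * 12)

next_hdr, length, _, spi, seq = struct.unpack("!BBHLL", hdr[:12])
print(hex(next_hdr), hex(spi), seq)  # 0x6 0x1001 1
```

Parsing the header back out, as the receiver does, is the same `struct.unpack` in reverse; the SPI it recovers drives the SADB lookup discussed earlier.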

Modes of Operation

There are two “modes” of operation that are supported by AH and ESP: tunnel mode and transport mode. Transport mode provides a secure connection between two endpoints as it encapsulates IP’s payload, while tunnel mode encapsulates the entire IP packet to provide a virtual “secure hop” between two gateways. Tunnel mode is used to form a traditional VPN, where the tunnel generally creates a secure “tunneled” path across a packet network, such as the Internet or extranet.

Images

Figure 2.26 - The AH header provides authentication and data integrity

Images   Transport Mode - Transport mode is used to protect end-to-end communications between two hosts. This protection can be authentication, encryption, or both, but transport mode is not a tunneling protocol. Thus, it has nothing to do with a traditional VPN, as it simply represents a secured IP connection. In AH transport mode, the IP packet is modified only slightly to include the new AH header placed between the IP header and the protocol payload, such as TCP, UDP, or another payload. In addition, the protocol fields that link the various headers together are adjusted, which allows the original IP packet to be reconstituted at the other end. At the destination, assuming the packet passes the authentication check, the AH header is removed, and once again some protocol field values are adjusted, which results in the IP datagram reverting to its original state.

Images   Tunnel Mode - Tunnel mode provides a more familiar VPN type of functionality, where entire IP packets are encapsulated inside another and delivered to their destination. Similar to transport mode, each packet is sealed with a hash message authentication code (HMAC) that is usually created via the use of Message Digest 5 (MD5) or SHA-1 (Secure Hash Algorithm 1). The HMAC is used both to authenticate the sender and to detect any modification of data as it flows between source and destination. Under AH tunnel mode, the full IP header as well as payload data is encapsulated, which enables the source and destination addresses of the outer packet to be different from those of the original packet. This encapsulation permits the packet to flow between two intermediary devices that form the tunnel, such as IPSec-compatible routers. At the destination router, an authentication check is performed, and packets that pass the check have their outer IP and AH headers removed, resulting in the recreation of the original datagram. That datagram is then routed to its original destination IP address.
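The HMAC sealing described above can be illustrated with Python's standard hmac module. The key below is a hypothetical placeholder for the secret that IPSec peers actually negotiate via IKE:

```python
import hashlib
import hmac

key = b"shared-secret-negotiated-via-IKE"   # placeholder key for illustration
packet = b"original IP header + payload"    # the fields AH protects

# Sender computes the HMAC over the protected fields...
icv = hmac.new(key, packet, hashlib.sha1).digest()
# ...and HMAC-SHA1-96, as used by AH, carries only the first 96 bits.
truncated_icv = icv[:12]

# Receiver recomputes the value and compares in constant time;
# any modification of the packet in transit changes the result.
expected = hmac.new(key, packet, hashlib.sha1).digest()[:12]
```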

Figure 2.27 illustrates the manner by which an IP datagram is encapsulated within another IP header when AH is used in tunnel mode. Note that the wrapped or new header usually has the destination IP address of a network appliance, typically a dedicated hardware device with multiple fast processors that can rapidly compute integrity check values and perform different types of encryption, and thus support AH, ESP, and IKE.

Images

Figure 2.27 - IPSec in an AH tunnel mode

Encapsulating Security Payload (ESP)

ESP represents the portion of IPSec that provides origin authentication, data integrity, and confidentiality of packets. ESP also supports encryption-only and authentication-only configurations, but using encryption without authentication is strongly discouraged because it is insecure.

Unlike AH, ESP does not protect the IP packet header. However, in tunnel mode, where the entire original IP packet is encapsulated with a new packet header added, ESP protection is afforded to the whole inner IP packet to include the inner header, while the outer header remains unprotected because it provides the unencrypted address information necessary for routing. ESP operates directly on top of IP, using IP protocol number 50.

Similar to AH, ESP can operate in transport or tunnel mode. In transport mode, the datagram’s payload is encrypted and transported via a new IPv4 header, which is essentially the same as the old header with a few field values shifted, while source and destination IP addresses are unchanged. Thus, similar to AH, ESP in transport mode is designed for host-to-host communications. In tunnel mode, ESP is similar to AH in that the encapsulation covers the original datagram, enabling the original IP header, TCP header, and payload to be encrypted. Thus, ESP in tunnel mode would be similar to Figure 2.27, with the AH replaced by an ESP header. That header is much simpler, having just two fields: a security parameters index (SPI) and a sequence number. The SPI identifies the security parameters in combination with an IP address, while the sequence number is a monotonically increasing number used to prevent replay attacks.

Although replacing the AH with an ESP header as shown in Figure 2.27 represents the majority of ESP tunnel mode connections, it should be mentioned that a VPN supporting both encryption and authentication can be constructed by adding authentication data to ESP in tunnel mode. This option is frequently used, with the authentication data being added to the tunneled packet after the encryption of the IP header, TCP header, and payload. Another method is to use ESP+AH instead of AH+ESP. The reason ESP is not wrapped inside of AH is that most networks have routers that perform Network Address Translation (NAT); because AH authenticates the outer IP header, which NAT must rewrite, an AH+ESP tunnel is incapable of traversing a NAT device. Thus, ESP+AH is primarily used in tunnel mode to completely encapsulate and encrypt datagrams, adding authentication to protect the data and ensure its integrity as it flows across an untrusted network.

Cryptographic Algorithms

There are several cryptographic algorithms presently defined for use with IPSec. Some of the more popular algorithms include the Hash Message Authentication Code Secure Hash Algorithm (HMAC-SHA1) for data integrity protection and 3DES and AES for confidentiality. A list of algorithms is included in RFC 4835 56.

L2TP/IPSec

Due to the lack of encryption and authentication in the L2TP protocol, it is often implemented along with IPSec; the result is referred to as L2TP/IPSec and is standardized as RFC 319357. The process of setting up an L2TP/IPSec VPN is a three-step process. First, a negotiation of the IPSec security association (SA) occurs, typically performed through the use of the Internet Key Exchange (IKE). This exchange occurs over UDP port 500 and commonly uses either a shared password (so-called “pre-shared keys”), public keys, or X.509 certificates on both ends, although other keying methods exist. Next, ESP is established in transport mode, using IP protocol number 50. Once this occurs, a secure channel has been established, but no tunneling is taking place. Thus, the third step involves the negotiation and establishment of an L2TP tunnel between the SA endpoints. The actual negotiation of parameters takes place over the SA’s secure channel, within the IPSec encryption, with L2TP using UDP port 1701. This action results in L2TP packets between the endpoints being encapsulated by IPSec. Because the L2TP packet is both wrapped and hidden within the IPSec packet, no information about the content of the packet can be obtained from the encrypted packet. In addition, a side benefit is that it is not necessary to open UDP port 1701 on firewalls between the endpoints, because the inner packets are not acted upon until after the IPSec data has been decrypted, which occurs only at the endpoints of the connection.
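The resulting encapsulation order can be sketched as a simple list, outermost header first. The labels are illustrative, but the nesting shows why UDP port 1701 need not be opened on intermediate firewalls: it sits inside the ESP payload and is invisible until decryption at an endpoint.

```python
def l2tp_ipsec_layering(ppp_payload: str) -> list:
    """Return the on-the-wire nesting of an L2TP/IPSec packet,
    outermost layer first (labels are hypothetical)."""
    return [
        "IP (protocol 50)",   # outer IP header carrying ESP
        "ESP",                # encrypts everything that follows
        "UDP (port 1701)",    # visible only after ESP decryption
        "L2TP",               # the tunnel negotiated in step three
        "PPP",                # the user's point-to-point session
        ppp_payload,
    ]

layers = l2tp_ipsec_layering("user data")
```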

Authentication Using EAP

An additional benefit from the use of IPSec with L2TP is the ability to enhance authentication via the use of EAP. Created as an extension of PPP, EAP is used when PPP peers negotiate to perform this authentication method during the connection authentication process. Technically, the negotiation of EAP is referred to as an EAP method, resulting in an exchange of messages between the client (referred to as the supplicant) and the authentication server, which is commonly a RADIUS server. Once an EAP method is agreed upon, an exchange of messages will occur between the supplicant and authentication server based on requests for authentication information.

Images

Figure 2.28 - Selecting the use of EAP

Figure 2.28 illustrates the selection and operation of EAP. In this example, a client is shown using PPP initially to communicate with an access point or network access server (NAS) to obtain EAP authentication prior to obtaining access to a network. Both wired and wireless devices can operate as the EAP authenticator, with the IEEE 802.1X standard defining how EAP operates when used for authentication by 802 devices, such as wireless access points and wired Ethernet switches. In this example, the NAS is shown communicating with a Remote Authentication Dial-In User Service (RADIUS) authentication server to negotiate the specific EAP method to use, with EAP messages flowing between the client and server.

TCP Wrapper58

TCP wrappers represent a host-based networking access control list (ACL) system that can also be considered as a filtering method. Through the use of ACLs, network access to Internet Protocol servers on (UNIX-like) operating systems such as Linux or BSD can be controlled. By filtering on destination or source IP address, one can control access to hosts, subnets, and query replies59.
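TCP wrappers policies are typically expressed in the /etc/hosts.allow and /etc/hosts.deny files, with the allow file consulted first and the first matching rule winning. A minimal sketch, in which the daemon names, subnet, and domain are illustrative:

```
# /etc/hosts.allow -- checked first; first match wins
sshd: 192.168.1.0/255.255.255.0    # allow SSH only from the local subnet
in.ftpd: .example.com              # allow FTP from one trusted domain

# /etc/hosts.deny -- anything not matched above is refused
ALL: ALL
```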

The original code was written by Wietse Venema at the Eindhoven University of Technology, The Netherlands, between 1990 and 1995. As of June 1, 2001, the program was released under its own BSD-style license.

SOCKS60

SOCKS represents an abbreviation for “sockets,” which provides a reference to the Berkeley socket interface used in UNIX. The protocol was originally developed by David Koblas, a system administrator at MIPS Computer Systems.

The SOCKS protocol is designed to route packets between client-server applications via a proxy server. The protocol operates at Layer 5, the session layer of the OSI reference model, between the presentation layer and the transport layer. Clients behind a firewall connect to a SOCKS proxy server in order to access external services. The proxy server controls the client’s eligibility to access the external server and, if the client is approved, passes the request on to the destination server. SOCKS is bidirectional; thus, it can also be used in the opposite way, allowing clients outside the firewall to connect to servers inside the firewall.

Currently, the latest version of SOCKS is V.5. Under V.5, SOCKS supports several authentication methods, including EAP, one-time passwords, MD5-challenge, and token cards. Through additions to SOCKS V.5, several vendors now offer modules that work with Windows, intercepting WinSock communications requests issued by application programs and processing those requests based on a previously configured set of rules. This enables network managers to specify the use of different types of authentication and encryption for different applications or the use of fixed methods.

Comparing SOCKS and HTTP Proxies

SOCKS employs a handshake protocol to inform the proxy software about the connection that a client initiated. The SOCKS protocol supports any form of TCP or UDP socket connection. In comparison, an HTTP proxy will analyze HTTP headers to determine the address of the destination server, which restricts its support to HTTP traffic.
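The SOCKS5 handshake defined in RFC 1928 begins with a client greeting that lists supported authentication methods, followed by a connection request naming the destination. A minimal sketch of the client-side messages (the host and port are illustrative):

```python
import struct

def socks5_greeting() -> bytes:
    """Client greeting: version 5, one offered method (0x00 = no auth)."""
    return b"\x05\x01\x00"

def socks5_connect_request(host: str, port: int) -> bytes:
    """CONNECT request using a domain-name address (ATYP 0x03)."""
    name = host.encode()
    return (b"\x05\x01\x00\x03"             # VER, CMD=CONNECT, RSV, ATYP=domain
            + bytes([len(name)]) + name     # length-prefixed host name
            + struct.pack("!H", port))      # destination port, network order

req = socks5_connect_request("example.com", 80)
```

Because the request carries an opaque destination rather than protocol headers, the proxy can relay any TCP (or, in SOCKS5, UDP) connection, which is exactly the generality an HTTP proxy lacks.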

VPN Technical Considerations

Topology supported

Authentication support

Encryption supported

Scalability

Management

VPN client software

Operating system and browser support

Performance

Table 2.4 - Technical features to consider when selecting a VPN

VPN Selection

The selection of an appropriate VPN can include a variety of factors ranging from technical issues to cost, with the latter including personnel and maintenance. Table 2.4 lists eight technical features readers should examine when considering the selection of a VPN.

Topology Supported

Many organizations that need to interconnect sites via a site-to-site topology will opt for a VPN that supports compulsory tunneling. In comparison, the need to support mobile workers in a secure environment will result in a requirement for voluntary tunneling. Fortunately, most, but not all, VPN methods support both; however, the personnel cost associated with voluntary tunneling will increase as the number of remote users increases.

Authentication Supported

There are numerous authentication methods supported by different types of VPNs. Those methods can range from the simple use of passwords to digital certificates and two-factor authentication employing key fobs with numeric displays that change periodically. While the latter two methods are considerably more secure than a simple password, the security architect needs to consider the cost of certificates and key fobs as well as maintenance issues.

Encryption Supported

Some VPNs by themselves do not perform encryption, which results in the ability of a third party to easily observe the contents of tunneled packets. If the organization requires encryption, then the security architect will need to consider either the use of a VPN protocol that natively supports encryption or a protocol that can be added to a VPN to provide encryption, such as adding IPSec to L2TP.

Scalability

As organizations add staff at different locations, the need for a VPN that provides scalability assumes importance. Thus, most organizations need to consider the scalability of different types of VPNs under consideration.

Management

The ease of configuration as well as the ability to generate reports are two key areas that the security architect should examine. In addition, the type of reports provided by a VPN management system can enhance the ability of the security architect to denote potential bottlenecks and take action before users complain about a sluggish network.

VPN Client Software

Certain types of VPNs, such as L2TP used in compulsory tunneling, do not require any additional client software. In addition to obvious cost saving, this also simplifies the configuration of the VPN in a site-to-site environment. Thus, the security architect needs to consider the role of client software when selecting a VPN.

Operating System and Browser Support

Often easily overlooked, both operating system and browser support are important criteria for the selection of a VPN. While there are many VPNs that support Windows, this is a generic term and does not mean that support for a specific version of Windows is available. Similarly, if an organization has a large base of Firefox users or uses Opera or another browser, the security architect needs to carefully check the support of the VPN for the type and versions of the browsers used or anticipated to be used.

Performance

As features are added to a VPN, additional processing can be expected. While the additional processing may not be noticeable on a new computer, users with legacy platforms may receive sluggish responses. Thus, a benchmark on various types of computers may provide important information concerning the ability of VPNs to appropriately operate on different hardware platforms used by the organization.

Endpoint Security

One of the more modern security problems security architects face is controlling the endpoints that connect to a network. In a modern computer, the floppy drive has been replaced by a variety of USB and sometimes FireWire ports. Thus, instead of having to contend with deactivating floppy drives with limited storage, security architects now have to consider disabling USB ports as well as preventing zipped files from being transferred via e-mail. Concerning USB ports, it becomes possible for an employee to make off with a significant amount of corporate data at an insignificant cost. In fact, because most smart phones include an SD or micro SD slot, even an employee who simply connects his or her phone to a work computer can copy documents onto the phone’s memory card. Along with the previously mentioned threats, the availability of WinZip and similar programs makes it very easy for an employee to “zip” a number of documents and mail the zipped file as an attachment addressed to himself or herself at an alternate e-mail address or to the address of a third party. Due to these potential security breaches, many organizations use software to block the use of USB ports and automatically drop zip files from outbound e-mail.

Encryption

One of the chief means of enhancing endpoint security is encrypting traffic. Depending on the manner in which data is being transmitted from an endpoint, encryption may be built into the transmission method, such as with the use of several types of VPNs, or encryption can be added to enhance security. With the latter, the sender can select an encryption method from a variety of sources as long as the recipient uses the same encryption method and, when required, has the same key so that decryption occurs correctly. Encryption sources vary widely, from the addition of hardware-based datacryptors that can be used with both dial-up and dedicated circuits to various software add-ons, such as Pretty Good Privacy (PGP). PGP encryption uses public-key cryptography and includes a system that binds the public keys to a user name or an e-mail address, which results in both encryption and authentication being performed.

Network Security Design Considerations

This section focuses on a series of network security design considerations, including cross-domain attacks and methods to minimize such attacks, device and data flow interoperability, audits, monitoring, and remote network access.

Interoperability and Associated Risks

Interoperability in a networking environment references the ability of diverse systems to work together or interoperate. In a modern network environment where routers, firewalls, virus checkers, and other devices are employed, it becomes a relatively easy process for the security architect to overlook one or more coding or configuration settings. In doing so, the end result may be that a hole is opened in the network defense, or a legitimate opening necessary for an approved application to work properly may be closed. Thus, the use of security audits and monitoring can be an effective tool to determine risks associated with the configurations and parameter settings of various devices that need to interoperate with one another.

Cross-Domain Risks and Solutions

There are a variety of cross-domain attacks that can adversely affect the end user and his or her organization. Some of the more popular attack methods include cross-site request forgery, cross-site scripting, DNS rebinding, time of check/time of use, and wildcarding attack methods. This section focuses on each attack method as well as potential solutions to mitigate their effect.

A Cross-Site Request Forgery (CSRF) represents an attack method developed to fool a victim into loading a Web page that contains a malicious request. The request is malicious because it is sent with the identity and privileges of the victim, enabling the attacker to act as the victim. Examples of malicious actions include changing the victim’s e-mail address or home page address, or even making a purchase on the victim’s behalf. Most popular virus-checking software recognizes the potential of a CSRF attack and warns users before they arrive at a site where CSRF software is known to exist61.
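Beyond endpoint warnings, a common server-side mitigation is a per-session anti-CSRF token: a forged cross-site request cannot read the victim's page and therefore cannot supply the token. A minimal sketch, in which the secret and session identifier are illustrative:

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # hypothetical per-application key

def issue_csrf_token(session_id: str) -> str:
    """Bind a token to the session; it is embedded in legitimate forms."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def validate_csrf_token(session_id: str, token: str) -> bool:
    """A forged request, lacking the token, fails this constant-time check."""
    return hmac.compare_digest(issue_csrf_token(session_id), token)

tok = issue_csrf_token("session-abc")
```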

A second type of popular Web-based attack is referred to as a cross-site scripting attack. This type of attack exploits the trust most users place in accessing a Website. Cross-site scripting attacks commonly occur in two basic forms: when an attacker embeds a script in data pushed to the user as a result of a GET or POST request (first order), or when the script is retained in long-term storage before being activated (second order). Similar to CSRF prevention, most modern comprehensive virus-checking software provides protection against cross-site scripting attacks. In addition, under Windows Vista, Windows 7, Windows 8, Windows Server 2008, and Windows Server 2012, the operating system will prompt users to allow or deny the operation of programs, providing a mechanism to disable the program through the use of User Account Control (UAC). Figure 2.29 shows the User Account Control dialog in Windows 7 when the user attempts to load unsigned code. Figure 2.30 shows the User Account Control dialog in Windows 7 when the user attempts to load signed code.

Images

Figure 2.29 - User Account Control dialog in Windows 7 when the user attempts to load unsigned code

Images

Figure 2.30 - User Account Control dialog in Windows 7 when the user attempts to load signed code

DNS rebinding represents an attack on the insecure binding between DNS host names and network addresses. During a DNS rebinding attack, the attacker manipulates DNS records for a Web site he or she controls. At times the host name resolves to a server under the attacker’s control, while at other times it is pointed at a victim server or device, such as a router. Through a DNS rebinding attack, the attacker is able to bypass a same-origin-policy restriction because both the victim and attacker have the same host name, albeit at different points in time. This attack technique can also result in the circumvention of a firewall, as a victim server behind an organizational firewall is normally reachable by a browser operated by an employee of the organization. One solution to nullify the potential effect of DNS rebinding is to strengthen the client’s binding between a DNS host name and the network address. In addition, the use of HTTPS and verification of the Host header on inbound requests can also be used to minimize the threat of DNS rebinding.
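The Host header verification mentioned above can be sketched in a few lines; the allowed server name is illustrative:

```python
ALLOWED_HOSTS = {"intranet.example.com"}  # names this server legitimately answers to

def host_header_ok(headers: dict) -> bool:
    """Reject requests whose Host header is not one of our own names.

    Under DNS rebinding the attacker's host name resolves to our address,
    so the browser sends the attacker's name in the Host header;
    checking the header therefore defeats the rebinding."""
    host = headers.get("Host", "").split(":")[0].lower()
    return host in ALLOWED_HOSTS

host_header_ok({"Host": "intranet.example.com"})  # legitimate request
host_header_ok({"Host": "attacker.example.net"})  # rebound request, rejected
```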

Time of Check/Time of Use (TOC/TOU) represents two types of attacks that are based on changes in principals or permissions. Such attacks occur in requests where principals or permissions have changed between the time of permission checking and the time of actual use of the permissions. Most such attacks result from the failure of server software to remove cached permissions after a reconfiguration that changes client permissions. Over the years, both SUN Microsystems and Microsoft have issued several patches to their server software designed to block such attacks. However, an organization either has to enable automatic software updates or manually apply such software patches to close such security-related holes.

Another attack that warrants attention is the wildcarding attack. This attack occurs when access controls are set in error and open a security hole for unintended access. For example, if access control rules are set to *.edu, any .edu site can access the user’s resources. Wildcard mistakes can occur due to typographical errors, when organizations merge, when contractors or employees make simple mistakes, and for numerous other reasons. As access-control rules become more complex, the likelihood of configuration errors increases. Thus, the use of configuration checking of software products may be justified for some organizations that operate a variety of communications equipment that perform access control.
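The *.edu example can be reproduced with simple pattern matching; the rule sets below are hypothetical:

```python
import fnmatch

def origin_allowed(origin_host: str, rules: list) -> bool:
    """Return True if any access-control pattern matches the origin."""
    return any(fnmatch.fnmatch(origin_host, rule) for rule in rules)

# The intended rule was a single partner institution...
intended = ["registrar.partner.edu"]
# ...but a typographical error widened it to every .edu domain:
mistyped = ["*.edu"]

origin_allowed("registrar.partner.edu", intended)  # allowed, as intended
origin_allowed("anyone.else.edu", mistyped)        # allowed, unintentionally
```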

Audits and Assessments

It is important to understand that routers, firewalls, and virus checkers as well as IDS and intrusion protection systems can together have hundreds to thousands of possible settings. While it is possible to perform a manual audit of equipment used within a small organization, as the organization expands and its network complexity increases, it becomes much harder to perform auditing without the use of software. Today, several vendors provide software designed to do the following: perform auditing as well as reporting, enable the changing of equipment configurations from a central management platform, employ a third-party database management system to track both hardware and software, etc. Such software typically includes a Simple Network Management Protocol (SNMP) and Remote MONitoring (RMON) capability, enabling the software to collect information on up to tens of thousands of assets, security, and configuration settings into a configuration database for reporting, auditing, baselining, and change tracking.

Monitoring

An SNMP-compatible system consists of one or more central monitoring systems and distributed agents operating on various types of hardware and even embedded into software. Today, most hardware and software products include a built-in RMON-compatible agent, allowing a central site SNMP system to monitor network activity, change device permissions and configurations as well as to gather statistics. For example, in a Cisco router environment, each router port can be enabled for SNMP monitoring, providing the network manager with detailed information about the use of router ports as well as denoting potential or actual bottlenecks. Through the use of SNMP systems available from many vendors, software overlaying the SNMP capability can even issue projections as to when, for example, a router port can be expected to operate at 75% of utilization or drop a certain percentage of packets.
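Such a utilization projection amounts to trend extrapolation over sampled counters. A least-squares sketch follows; the sample data are hypothetical, and production SNMP managers use far richer models:

```python
def projected_crossing(samples, threshold=75.0):
    """Fit a least-squares line to (hour, utilization%) samples and
    project the hour at which the trend crosses the threshold.
    Returns None if utilization is not trending upward."""
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * u for t, u in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    if slope <= 0:
        return None
    return (threshold - intercept) / slope

# Router port utilization sampled hourly: 40%, 45%, 50%, 55%
hour = projected_crossing([(0, 40), (1, 45), (2, 50), (3, 55)])
```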

Operating Environment

In an operational environment, it is quite common for security architects to create a “protected bench network” of equipment prior to its actual use. Here, the term protected bench network references a network of devices to be used that are not connected to an operational network. This then provides the security architect with the ability to perform penetration testing to uncover any inadvertent holes in the security architecture without compromising actual live data. Here, the term penetration test represents a method of evaluating the security of the network—and, if desired, the computers attached to the network—by simulating different types of attacks and observing the ability of the network and computers to fight those attacks. The penetration test is one of several components that make up a security audit. The other major components include monitoring traffic and certain logs available on both network hardware products and computers.

It has become possible for the security architect to set up both testing and production environments using virtualization technologies62. The use of virtualization technology is a very important element in the security architect’s design for the organization’s infrastructure. Virtualization technologies are available from a number of vendors, such as VMware, Microsoft, Citrix, and Oracle. Many organizations are moving aggressively to virtualize up and down their infrastructure stacks wherever and whenever possible63.

According to the 2012 State of the Data Center report, June 2012, by Information Week, “No technology in recent years has enabled IT to do more in a fixed hardware and data center footprint than server virtualization. When asking respondents what best describes their overall data center strategy, 33% say they seek to virtualize most applications, with an additional 25% standardizing on either discrete (14%) or integrated blade/network/storage (11%) systems running virtualized servers. IT’s commitment to VMs as the new standard application platform is readily apparent when looking at the degree of workload virtualization. Half of respondents report that half or more of their production servers will be virtualized by the end of 2013.”64

There are three main areas that the security architect needs to consider regarding security in the virtualized infrastructure that they manage.

  1. Oversight - One of the grey areas that virtualization has created is that of server oversight. Who’s ultimately responsible for virtual servers is not always clearly defined. While physical servers are, as a matter of course, under the direct purview of the data center, it’s not always as straightforward for virtual servers.

  2. Maintenance - Virtual servers tend to be launched and then their image is archived, and it may or may not be recreated when patches or configuration changes take place. Placing virtualized servers under the same change management controls as physical infrastructure is necessary to ensure uniformity for configurations and patch management.

  3. Visibility - One of the risks involved with having significant virtualized infrastructure is that the network controls that are used to segment specific applications off for reasons of compliance and security often are not virtualized. This, of course, can lead to issues with HIPAA and other security regulations and IT governance compliance regimes.

Christopher Hoff sounded the alarm about the unintended consequences of virtualization and cloud computing in 2008 in a slide (slide #23) he presented in his deck titled “The Four Horsemen of the Virtualization Security Apocalypse”65. In his deck, Hoff outlined the many and varied potential vulnerabilities of virtualized systems and identified the various attack surfaces and threat avenues they face. “He categorizes seven modes of attack:

  1. Guest OS to guest

  2. Guest to host or hypervisor

  3. Guest OS to self

  4. External to host or hypervisor

  5. External to guest OS

  6. Host or hypervisor to others

  7. Hardware to hypervisor

These vulnerabilities are addressed by new virtualized security products operating in three domains: intra-VM (within the hypervisor), inter-VM (between virtualized hosts), and guest OS (running on the hypervisor). Intra-VM or -hypervisor security products address threats 1, 2, and 5. Inter-VM or -hypervisor (sometimes known as edge) products address threats 4, 5, and 6, while guest/client endpoint products primarily address threat 3, but also assist in combating threats 1, 2, and 5. Threat 7, in which the server hardware itself is compromised and used to attack the bare-metal hypervisor or host VMs, is really the domain of hardware-based protection schemes. Perhaps the best known of these are Intel’s vPro and Trusted Execution Technology (TXT).”66

In the Interagency Report 7904 (Draft), December 2012, Trusted Geolocation in the Cloud: Proof of Concept Implementation (Draft), NIST has moved to address some of the issues discussed above by outlining a use case Proof of Concept that seeks to use Intel Trusted Execution Technology (Intel TXT), which provides a mechanism to enable visibility, trust, and control in the cloud. Intel TXT is a set of enhanced hardware components designed to protect sensitive information from software-based attacks. Intel TXT features include capabilities in the microprocessor, chipset, I/O subsystems, and other platform components. When coupled with an enabled operating system, hypervisor, and enabled applications, these capabilities provide confidentiality and integrity of data in the face of increasingly hostile environments.

According to the NIST Draft, “the motivation behind this use case is to improve the security of cloud computing and accelerate the adoption of cloud computing technologies by establishing an automated hardware root of trust method for enforcing and monitoring geolocation restrictions for cloud servers. A hardware root of trust is an inherently trusted combination of hardware and firmware that maintains the integrity of the geolocation information and the platform. The hardware root of trust is seeded by the organization, with the host’s unique identifier and platform metadata stored in tamperproof hardware. This information is accessed using secure protocols to assert the integrity of the platform and confirm the location of the host”67.

Intel has also presented their vision of where virtualization, the cloud and security merge, in the white paper, Intel® Cloud Builders Guide to Cloud Design and Deployment on Intel® Platforms: Power Management & Security within the Open Source Private Cloud with Intel & OpenStack, September 201168. In the white paper, Intel seeks to describe the concept of a trusted compute pool (TCP), which is a collection of physical platforms known to be trustworthy, using Intel® Trusted Execution Technology (Intel® TXT) available with Intel® Xeon® processors.

According to Intel, in order to minimize security risks, it is essential to protect and validate the integrity of the infrastructure on an ongoing basis. One approach is to establish a “root of trust” – where each server must have a component that will reliably behave in the expected manner, and contain a minimum set of functions enabling a description of the platform characteristics, and its trustworthiness.

The value of Intel TXT is in the establishment of this root of trust, which provides the necessary underpinnings for reliable evaluation of the computing platform and the platform’s level of protection. This root is optimally compact, extremely difficult to defeat or subvert, and allows for flexibility and extensibility to measure platform components during the boot and launch of the environment, including the BIOS, operating system loader, and virtual machine managers (VMM). Given the malicious threats prevalent in today’s environment and the stringent security requirements many organizations employ, a system cannot blindly trust its execution environment. The security architect needs to keep up with the most current developments in the area of virtualization security in order to assess the organization’s security posture against the latest threats and vulnerabilities at all times.

Remote Access

Before setting up a new application for remote access, the security architect should test it with respect to various network security issues. Such testing can include configuring authentication and encryption methods as well as planning the firewall and router configuration changes needed for the application to function.
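As a sketch of that planning step, the rules a new application requires can be diffed against the firewall’s current ruleset to produce the change request, along with any existing openings the application does not use. The (protocol, port) rule model below is purely illustrative:

```python
def plan_firewall_changes(current_rules, required_rules):
    """Compare the firewall's current permitted rules against the rules a new
    application requires. Each rule is a (protocol, port) tuple, e.g. ("tcp", 443).

    Returns (to_open, to_review):
      to_open   - rules the application needs that are not yet permitted
      to_review - currently permitted rules the application does not use
    """
    current = set(current_rules)
    required = set(required_rules)
    to_open = sorted(required - current)
    to_review = sorted(current - required)
    return to_open, to_review
```

For example, if the firewall currently permits ("tcp", 443) and ("tcp", 22) and the application requires ("tcp", 443) and ("udp", 500), the plan is to open ("udp", 500) and review whether ("tcp", 22) is still needed.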

Monitoring

The security architect can monitor both network traffic and the logs maintained by communications devices and computers to validate the architectural design of the network and to ensure that any potential holes that could present a security risk are closed. For example, most modern computer operating systems, including Microsoft Windows, Sun’s Solaris, and various versions of Linux, support audit event logging, which provides a variety of valuable information. In the area of network equipment, routers, firewalls, virus checkers, intrusion detection systems (IDS), and intrusion prevention systems have a similar logging capability. In addition, such devices can be programmed to generate alerts upon the occurrence of predefined conditions.
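As an illustration of turning such logs into alerts, the sketch below counts failed-login events per source address and flags sources that reach a threshold. The log format is an assumption modeled on OpenSSH-style messages, not a universal standard:

```python
import re
from collections import Counter

# Assumed log format: OpenSSH-style "Failed password ... from <ip> ..." lines.
FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def failed_login_alerts(log_lines, threshold=3):
    """Count failed-login events per source IP and return the sorted list of
    IPs that meet or exceed the alert threshold."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(1)] += 1
    return sorted(ip for ip, n in counts.items() if n >= threshold)
```

In practice the threshold, time window, and parsing rules would be tuned to the organization’s own devices and log formats.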

Design Validation

When designing or modifying a network, it is extremely important to ensure that the design will work as expected. The process used to verify that the network will operate correctly is referred to as design validation. In a security environment, there are several methods that can be employed to ensure that the network design will perform correctly. Those methods include penetration testing, vulnerability assessments, and network monitoring.

Penetration Testing

Penetration testing is a critical method used to validate the security associated with a network. This type of testing can literally involve throwing the proverbial kitchen sink at a network, trying every known type of malicious software in an attempt to break into a system. Because most security architects do not have the resources to assemble such a collection of software, third-party products are commonly used. These products perform an analysis of the network, including checking for the latest software releases designed to plug security holes as well as examining router, firewall, and even DNS configurations. Any security issues found are reported, usually with an assessment of the criticality of each issue.
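One basic building block of such tools can be illustrated with a minimal TCP connect scan, which simply reports which ports on a host accept a connection. This is a sketch only, and should be run solely against systems the architect is authorized to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connection to each port on host; return the list of
    ports that accept the connection (i.e., appear open)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Commercial scanners add service fingerprinting, vulnerability lookups, and reporting on top of this simple probe.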

Vulnerability Assessment

Penetration testing can be considered the predecessor to a vulnerability assessment. That is, until the security architect determines what weaknesses exist in the network, it will not be possible to assess the vulnerability that results from those weaknesses. For example, penetration testing could produce a report showing that a firewall was misconfigured, allowing all employees instead of just engineers to access the Internet at lunchtime, and that a server operating system is missing a patch for a well-known vulnerability. Here, the second finding would be more critical to fix for most organizations, and they would prioritize their efforts by placing the server patch at the top of their list of fixes.

Monitoring and Network Attacks

Because nothing is static in the world of communications, security architects need to monitor network traffic continuously. Doing so may provide the opportunity to recognize that the network is being scanned for open ports, that a hacker is attempting to run a password checker against a server, or that some other security-related issue is occurring. Sometimes a reconfiguration of a firewall to block a hacker from further attacks may suffice, while the recognition of an apparently new type of attack may require the security architect to contact a security organization for assistance.
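One simple heuristic for recognizing a port scan in monitored traffic is a single source touching an unusually large number of distinct destination ports. The sketch below applies that heuristic to a stream of connection records; the event format and threshold are purely illustrative, and real IDS products use far richer detection logic:

```python
from collections import defaultdict

def detect_scanners(events, port_threshold=10):
    """events: iterable of (src_ip, dst_port) connection records.
    Flag sources that touch at least port_threshold distinct destination
    ports, a crude indicator of port-scanning behavior."""
    ports_by_src = defaultdict(set)
    for src, port in events:
        ports_by_src[src].add(port)
    return sorted(src for src, ports in ports_by_src.items()
                  if len(ports) >= port_threshold)
```

A production detector would also bound the observation window in time and whitelist known scanners such as the organization’s own vulnerability-assessment hosts.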

Risk-Based Architecture

In concluding the examination of communications and network security, the next section will discuss the network as an enterprise where data flows on an end-to-end basis. In attempting to minimize network attacks, the security architect needs to minimize the risk of an attack.

The security architect can minimize the risk of an attack by identifying risk elements, risk metrics, and network controls and assessing the vulnerability level of the network. In addition, the security architect needs to verify that equipment such as routers and firewalls are correctly configured and both clients and servers are operating with the latest patches. By performing these functions as well as staying abreast of the latest security vulnerabilities and corrections, the security architect can develop and maintain a network architecture that minimizes threats to the enterprise.

Secure Sourcing Strategy

According to Gartner, the definition of IT Outsourcing is:

“The use of external service providers to effectively deliver IT-enabled business process, application service and infrastructure solutions for business outcomes.

Outsourcing, which also includes utility services, software as a service and cloud-enabled outsourcing, helps clients to develop the right sourcing strategies and vision, select the right IT service providers, structure the best possible contracts, and govern deals for sustainable win-win relationships with external providers.

Outsourcing can enable enterprises to reduce costs, accelerate time to market, and take advantage of external expertise, assets and/or intellectual property.”69

There is also the concept of rightsourcing, which is selecting the best way to procure a service and deciding whether an organization is best served by performing a business requirement in-house (insourcing) or contracting it out to a third-party service provider (outsourcing). Rightsourcing literally means “choosing the correct source.”

The security architect needs to be concerned with all aspects of the security posture of the organization. The need to identify risks and threats is as important as the need to identify solutions to mitigate them, as well as to improve the overall security architecture whenever possible. One area that is sometimes overlooked by the security architect is the need to know who they are doing business with, or more correctly, who the organization is doing business with. Given the continued drive towards virtualized computing environments, as well as the continued focus on the cloud that many organizations have, it becomes harder and harder for the security architect and the organizations that they represent to truly know who is on the other end of the service, or connection, that they are negotiating for.

Outsourcing strategies are rarely the same from one company to the next, as each organization has different needs to address through rightsourcing. The security architect needs to be able to ask a series of questions that will allow them to gain the insight necessary to pursue the right sourcing strategy for the organization. The following questions should be the starting point for the security architect’s discussion:

  1. Can the function be performed according to a set of rules and procedures?

  2. Is the application slated to be retired or moved to a SaaS provider?

  3. Is the application or infrastructure unstable?

  4. Is there a documented history for the change control and change management for the application or infrastructure?

  5. Does the organization have strong vendor management skills?

These questions will allow the security architect to begin to develop an accurate picture of the pros and cons involved in a potential outsourcing of the application, service, or infrastructure in question, versus keeping it in house. In addition, making sure that an outsourcing strategy is designed to address the following key issues is also important to the success of the solution and to its smooth integration into the security architecture:

Images   The organizational culture.

Images   Making sure to use a competitive bid process (not using a sole sourcing solution).

Images   Being wary of a focus on cost as the primary factor driving the decision to award a contract.

Images   Avoiding disclosing too much information to a potential vendor vis-à-vis performance metrics and IT costs.

Images   The financial health of the potential outsource partner (how does the organization operationalize risk, and take into account the negative impact that poor financial health may have on an outsourced service or application with regards to availability and disaster recovery).

Images   Internal competition from IT for the service dollars that may be outsourced.

Images   The balance and risks associated with a public vs. a private cloud outsourcing solution.

Images   The public cloud provider’s liability limits for the outsourced solution.

If the security architect decides that outsourcing will be the best approach for the organization, then the following issues need to be considered with regards to the outsource partner:

Images   Access to confidential data – How much access will the service provider have to the organization’s data?

Images   Compliance with regulations – Where will the organization’s data reside, and what are the legal and regulatory issues that have to be addressed based on geolocation of the data?

Images   Constant churn and suitability or clearance of service provider staff – What standards are in place on the service provider’s side to ensure stability and continuity of operations and data security?

Images   Lack of a comprehensive security program – The organization needs to create a “culture of security” and ensure that all users are educated with regards to responsible use and due care for all data that is hosted externally.

The security architect will need to use a multi-step approach, as noted below, to ensure that the organization is as prepared as it can be for the outsourcing to occur successfully:

  1. Refine the enterprise IT architecture to improve security through:

    1. Data classification and masking

    2. Role classification

    3. Definition of enterprise security standards

  2. Carry out a detailed pre-assessment of each provider and its delivery site that includes these steps:

    1. Review service provider security policies for corporate information and for physical and facility security to ensure that all key risks are covered.

    2. Ensure that network security controls exist and the delivery site is certified, per industry-standard security policies, to internationally recognized security compliance standards, including ISO 27001, BS 7799, and others as applicable.

    3. Check that an SAS 70 Type I and Type II assessment (or an equivalent third-party audit) has been done for the delivery location70. (Now an SSAE 16; see footnote below.)

    4. Carry out or review a detailed risk assessment for individual delivery locations followed by an onsite audit of each delivery site before “signoff.”

  3. Set up a regular audit and assessment program. This program should include:

    1. A review and audit of the remote service provider’s security policies, which must occur at least once a year.

    2. An on-site review of the specific site and area used to conduct client business, which should be conducted biannually or as project requirements and risks dictate.

  4. Build security obligations into the outsourcing contract. Clients should include all security-related controls in the contract. The contract should provide for:

    1. Non-disclosure agreements (NDA).

    2. Personal background checks.

    3. The understanding that service provider staff cannot work for a direct competitor until a specified amount of time elapses.

    4. Security assessments.

    5. Definitions of security breach and related liabilities.

    6. If required, the contract should insist that the service provider carry insurance to cover liabilities arising from a security breach.

    7. Contract termination provisions, which a client can invoke as a last resort if a material security breach arises.

    8. Unilateral data portability. The service provider must release the organization’s information on demand and in a specified format and media.
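The data classification and masking called for in step 1 above can be illustrated with a simple per-field transformation applied before records are released to a provider. The field names and masking rule below are hypothetical, chosen only to show the pattern:

```python
def mask_record(record, rules):
    """Apply per-field masking rules to a record before it leaves the
    organization. rules maps a field name to a masking function; fields
    without a rule pass through unchanged."""
    return {k: rules.get(k, lambda v: v)(v) for k, v in record.items()}

def mask_ssn(value):
    # Illustrative rule: keep only the last four digits,
    # e.g. "123-45-6789" becomes "***-**-6789".
    return "***-**-" + value[-4:]
```

A real program would drive such rules from the organization’s data classification scheme, masking or tokenizing each field according to its sensitivity level.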

The security architect will also need to ensure that there is a strong focus on IT governance within the organization to help shape and guide the outsourcing strategy. The current ISO standard originated as AS 8015-2005, which was published in January 2005. AS 8015-2005, the Australian Standard for Corporate Governance of Information and Communication Technology, was submitted for fast-track ISO adoption and was published, largely unchanged, as ISO/IEC 38500:2008, Corporate Governance of Information Technology, in May 200871.

The standard for the governance of IT provides a framework through which “Directors”, as well as those to whom they turn for advice or to whom they delegate responsibility for managing the operations of the organization (such as senior managers, technical specialists, vendors, and service providers), can understand their obligations and work more effectively to maximize the return and minimize the cost of using ICT in their organizations.

There are many other standards, by country, that may also be of interest to the security architect, such as King I, II, and III from South Africa. The revised Code of Governance Principles and Report on Governance for South Africa (King III) were released on 1 September 2009, with an effective date of 1 March 2010. The chapters of King III are72:

Images   Ethical leadership and corporate citizenship

Images   Boards and directors

Images   Audit committees

Images   The governance of risk

Images   The governance of information technology

Images   Compliance with laws, rules, codes and standards

Images   Internal audit

Images   Governing stakeholder relationships

Images   Integrated reporting and disclosure

The Organization for Economic Co-operation and Development (OECD) has published the OECD Principles of Corporate Governance, 200473. Following their publication, corporate governance has been adopted as one of twelve core best-practice standards by the international financial community. The World Bank is the assessor for the application of the OECD Principles of Corporate Governance. Its assessments are part of the World Bank and International Monetary Fund (IMF) program on Reports on the Observance of Standards and Codes (ROSC).

The goal of the ROSC initiative is to identify weaknesses that may contribute to a country’s economic and financial vulnerability. Each Corporate Governance ROSC assessment benchmarks a country’s legal and regulatory framework, practices and compliance of listed firms, and enforcement capacity vis-à-vis the OECD Principles.

The assessments are standardized and systematic, and include policy recommendations and a model country action plan. In response, many countries have initiated legal, regulatory, and institutional corporate governance reforms.

The assessments focus on the corporate governance of companies listed on stock exchanges. At the request of policymakers, the World Bank can also carry out special policy reviews that focus on specific sectors, in particular banks and state-owned enterprises. Assessments can be updated to measure progress over time.

By the end of October 2010, 75 assessments had been completed in 59 countries around the world74.

It is important for the security architect to become familiar with the appropriate standards and specific compliance requirements, based on geography and business vertical or mission, that may impact the organization and its drive to securely outsource.

Summary   Images

The Communications and Network Security domain requires the security architect to bring together the knowledge and theory from all of the other domains and examine the practical implications of networking designs from the ground up. The security architect needs to apply the theoretical insights and knowledge gained from the BIA and the risk assessments done for the organization in order to create secure designs that both address and support the strategic goals of the organization while mitigating as many risks as possible. At the same time, the design needs to be functional for end users, allowing them to be productive and carry out their required activities on a daily basis with minimal impact on performance and productivity. This domain, more than any other, highlights the need for the security architect to draw on all of the knowledge available from the other domains, to understand where the gaps in their own knowledge base may lie with regard to certain areas, and to seek out the appropriate third party or industry-specific actor as required in order to create a secure and viable security architecture.

Images   Review Questions

1. Compare the frequency range of a person’s voice to the size of the passband in a voice communications channel obtained over the telephone. Which of the following accounts for the difference between the two?

  1. The telephone company uses Gaussian filters to remove frequencies below 300 Hz and above 3300 Hz because the primary information of a voice conversation occurs in the passband.

  2. The telephone company uses low-pass and high-pass filters to remove frequencies below 300 Hz and above 3300 Hz because the primary information of a voice conversation occurs in the passband.

  3. The telephone company uses packet filters to remove frequencies below 500 Hz and above 4400 Hz because the primary information of a voice conversation occurs in the passband.

  4. The telephone company uses low-pass and high-pass filters to remove frequencies below 500 Hz and above 4400 Hz because the primary information of a voice conversation occurs in the passband.

2. What is the data rate of a PCM-encoded voice conversation?

  1. 128 kbps

  2. 64 kbps

  3. 256 kbps

  4. 512 kbps

3. How many digitized voice channels can be transported on a T1 line?

  1. Up to 48

  2. Up to 12

  3. Up to 60

  4. Up to 24

4. How many T1 lines can be transported on a T3 circuit?

  1. 12

  2. 18

  3. 24

  4. 36

5. The three advantages accruing from the use of a packet network in comparison to the use of the switched telephone network are a potential lower cost of use, a lower error rate as packet network nodes perform error checking and correction, and

  1. the ability of packet networks to automatically reserve resources.

  2. the greater security of packet networks.

  3. the ability of packet networks to automatically reroute data calls.

  4. packet networks establish a direct link between sender and receiver.

6. Five VoIP architecture concerns include

  1. the end-to-end delay associated with packets carrying digitized voice, jitter, the method of voice digitization used, the packet loss rate, and security.

  2. the end-to-end delay associated with packets carrying digitized voice, jitter, attenuation, the packet loss rate, and security.

  3. the end-to-end delay associated with packets carrying digitized voice, jitter, the amount of fiber in the network, the packet loss rate, and security.

  4. the end-to-end delay associated with packets carrying digitized voice, jitter, the method of voice digitization used, attenuation, and security.

7. What is the major difference between encrypting analog and digitized voice conversations?

  1. Analog voice is encrypted by shifting portions of frequency, making the conversation unintelligible.

  2. Digitized voice is generated by the matrix addition of a fixed key to each digitized bit of the voice conversation.

  3. Analog voice is encrypted by shifting portions of amplitude to make the conversation unintelligible.

  4. Digitized voice is encrypted by the modulo-2 addition of a fixed key to each digitized bit of the voice conversation.

8. In communications, what is the purpose of authentication?

  1. Establishing a link between parties in a conversation or transaction.

  2. Ensuring that data received has not been altered.

  3. Securing wireless transmission.

  4. Verifying the other party in a conversation or transaction.

9. What is the purpose of integrity?

  1. Integrity is a process that ensures data received has not been altered.

  2. Integrity is a process that ensures a person stands by his beliefs.

  3. Integrity is a process that ensures that the amount of data sent equals the amount of data received.

  4. Integrity is a process that ensures data received has been encrypted.

10. The key purpose of the Session Initiation Protocol (SIP) is to

  1. define the protocol required to establish and tear down communications, including voice and video calls flowing over a packet network.

  2. define the signaling required to establish and tear down communications, including voice and video calls flowing over a PSTN.

  3. define the protocol required to establish and tear down communications, including voice and video calls flowing over a circuit-switched network.

  4. define the signaling required to establish and tear down communications, including voice and video calls flowing over a packet network.

11. Briefly describe the H.323 protocol.

  1. It represents an umbrella recommendation from the ITU that covers a variety of standards for audio, video, and data communications across circuit-switched networks.

  2. It provides port-based authentication, requiring a wireless device to be authenticated prior to its gaining access to a LAN and its resources.

  3. It defines the protocol required to establish and tear down communications, including voice and video calls flowing over a packet network.

  4. It represents an umbrella recommendation from the ITU that covers a variety of standards for audio, video, and data communications across packet-based networks and, more specifically, IP-based networks.

12. What is the difference between RTP and RTCP?

  1. RTP defines a standardized port for delivering audio and video over the Internet, while the RTCP provides out-of-band control information for an RTP port.

  2. RTP defines the protocol required to establish and tear down communications, including voice and video calls flowing over a packet network, while the RTCP provides out-of-band control information for an RTP port.

  3. RTP defines a standardized packet format for delivering audio and video over the Internet, while the RTCP provides out-of-band control information for an RTP flow.

  4. RTP defines a standardized port for delivering audio and video over the Internet, while the RTCP defines the protocol required to establish and tear down communications, including voice and video calls flowing over a packet network.

13. List the components defined by the H.323 standard.

  1. Terminal, gateway, gatekeeper, multipoint control unit (MCU), multipoint controller, multipoint processor, and H.323 proxy

  2. Path, gateway, gatekeeper, multipoint control unit (MCU), multipoint controller, multipoint processor, and H.323 proxy

  3. Terminal, gateway, gatekeeper, multipoint control unit (MCU), multipoint transmitter, multipoint receiver, and H.323 proxy

  4. Protocol, terminal, gatekeeper, multipoint control unit (MCU), multipoint controller, multipoint processor, and H.323 proxy

14. What are some of the major functions performed by a security modem?

  1. Allows remote access to occur from trusted locations, may encrypt data, and may support Caller ID to verify the calling telephone number.

  2. Allows remote access to occur from any location, may encrypt data, and may support Caller ID to verify the calling telephone number.

  3. Allows remote access to occur from a mobile location, may encrypt data, and may support Caller ID to verify the calling telephone number.

  4. Allows remote access to occur from trusted locations, may encrypt data, and may identify the calling telephone number.

15. The major difference between a router and firewall lies in three areas:

  1. The transfer of packets based on routing tables, the degree of packet inspection, and ensuring that the header data is correct.

  2. The transfer of packets based on absolute addresses, the degree of packet inspection, and acting as an intermediate device by hiding the address of clients from users on the Internet.

  3. The transfer of packets based on routing tables, the degree of packet inspection, and acting as an intermediate device by hiding the address of clients from users on the Internet.

  4. The transfer of packets based on routing tables, the degree of packet inspection, and creating a DMZ behind Internet-facing applications.

16. What is the purpose of an intrusion detection system (IDS)?

  1. To hide the address of clients from users on the Internet.

  2. To detect unwanted attempts to access, manipulate, and even disable networking hardware and computers connected to a network.

  3. To detect and respond to predefined events.

  4. To prevent unauthorized access to controlled areas within a site or a building.

17. What are the two methods that can be used for wireless LAN communications?

  1. Peer-to-peer and infrastructure

  2. Peer-to-peer and cloud

  3. Cloud and infrastructure

  4. Peer-to-peer and remote

18. What is the benefit of WPA over WEP for enhancing wireless LAN security?

  1. WPA permits the equivalent of wired network privacy and includes the use of TKIP to enhance data encryption.

  2. WPA implements a large portion of the IEEE 802.11i and includes the use of TKIP to enhance data encryption.

  3. WPA implements a large portion of the IEEE 802.11i and includes the use of IKE to enhance data encryption.

  4. WPA implements IEEE 802.11a and g and includes the use of IKE to enhance data encryption.

19. What is the purpose of the IEEE 802.1X standard?

  1. To provide port-based authentication.

  2. To provide port-based authorization.

  3. To detect and respond to predefined events.

  4. To secure wireless transmission.

 

1   See the following for the full Verizon 2012 Data Breach Investigations Report: http://www.verizonbusiness.com/resources/reports/rp_data-breach-investigations-report-2012_en_xg.pdf

1   See the following for background information on low-pass and high-pass filtering:

Low-pass: http://www.allaboutcircuits.com/vol_2/chpt_8/2.html

High-pass: http://www.allaboutcircuits.com/vol_2/chpt_8/3.html

2   See the following for an overview of FDM and how it works: http://zone.ni.com/devzone/cda/ph/p/id/269

3   See the following for an overview article on Alec Reeves, the creator of PCM. The article profiles the work Reeves did to invent PCM and puts PCM into historical context, as well as describing the details of how the technology works. http://www.todaysengineer.org/2012/Jun/history.asp

4   See the following for a complete overview of the T-Carrier solution: http://www.dcbnet.com/notes/9611t1.html

5   See the following for some examples of more recent activity and investments around the building of digital highway infrastructure around the world:

http://www.unpan.org/PublicAdministrationNews/tabid/115/mctl/ArticleView/ModuleID/1467/articleId/34306/default.aspx

http://www.eso.org/public/announcements/ann12082/

http://www.booz.com/media/uploads/Digital_Highways_Role_of_Government.pdf

http://www.fas.org/sgp/crs/misc/RL30719.pdf

http://www.broadbandcommission.org/

http://ec.europa.eu/digital-agenda/en

6   See the following videos for overviews of TDM and how it works:

http://www.youtube.com/watch?v=Fjw_nj5UU64

http://www.youtube.com/watch?v=o8VBV6v2Tcs

7   See the following for an overview comparison between packet switched and circuit switched networks: http://voip.about.com/od/voipbasics/a/switchingtypes.htm

8   See the following for a thorough discussion and overview of X.25 networking: http://www.farsite.com/X.25/X.25_info/X.25.htm

9   See the following for an overview of Jitter and Jitter Buffers: http://www.voiptroubleshooter.com/indepth/jittersources.html

10   See the following for an overview of Modulo encryption: https://sites.google.com/site/dtcsinformation/encryption/modulo-encryption

11   See the following as examples of issues associated with the move from traditional communication platforms to fiber optics:

  1. http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB326/doc06.pdf

  2. http://archive.newsmax.com/archives/articles/2001/2/22/213115.shtml

12   See the following for overview information about H.323: http://www.packetizer.com/ipmc/h323/

13   See the following to access copies of all previous versions of the H.323 standards: http://www.packetizer.com/ipmc/h323/standards.html

14   See the following to download a copy for the current version of H.225.0 in force: http://www.itu.int/rec/T-REC-H.225.0-200912-I/en

15   See the following to download a copy for the current version of Q.931 in force: http://www.itu.int/rec/T-REC-Q.931/en

16   See the following to download a copy for the current version of H.245 in force: http://www.itu.int/rec/T-REC-H.245/en

17   See the following for the RTP RFC 3550: http://www.ietf.org/rfc/rfc3550.txt

See the following for updates to RFC 3550:

  1. Support for Reduced-Size Real-Time Transport Control Protocol (RTCP): Opportunities and Consequences: http://tools.ietf.org/html/rfc5506

  2. Multiplexing RTP Data and Control Packets on a Single Port: http://tools.ietf.org/html/rfc5761

  3. Rapid Synchronisation of RTP Flows: http://tools.ietf.org/html/rfc6051

  4. Guidelines for Choosing RTP Control Protocol (RTCP) Canonical Names (CNAMEs): http://tools.ietf.org/html/rfc6222

18    See the following for the web page of the IETF SIP Working group: http://datatracker.ietf.org/wg/sip/charter/

See the following for the SIP RFC 3261: http://tools.ietf.org/html/rfc3261

See the following for updates to RFC 3261:

  1. Session Initiation Protocol (SIP)-Specific Event Notification: http://tools.ietf.org/html/rfc3265

  2. S/MIME Advanced Encryption Standard (AES) Requirement for the Session Initiation Protocol (SIP): http://tools.ietf.org/html/rfc3853

  3. Actions Addressing Identified Issues with the Session Initiation Protocol’s (SIP) Non-INVITE Transaction: http://tools.ietf.org/html/rfc4320

  4. Connected Identity in the Session Initiation Protocol (SIP): http://tools.ietf.org/html/rfc4916

  5. Subscriptions to Request-Contained Resource Lists in the Session Initiation Protocol (SIP): http://tools.ietf.org/html/rfc5367

  6. Addressing an Amplification Vulnerability in Session Initiation Protocol (SIP) Forking Proxies: http://tools.ietf.org/html/rfc5393

  7. Message Body Handling in the Session Initiation Protocol (SIP): http://tools.ietf.org/html/rfc5621

  8. Managing Client-Initiated Connections in the Session Initiation Protocol (SIP): http://tools.ietf.org/html/rfc5626

  9. The Use of the SIPS URI Scheme in the Session Initiation Protocol (SIP): http://tools.ietf.org/html/rfc5630

  10. Change Process for the Session Initiation Protocol (SIP) and the Real-time Applications and Infrastructure Area: http://tools.ietf.org/html/rfc5727

  11. Domain Certificates in the Session Initiation Protocol (SIP): http://tools.ietf.org/html/rfc5922

  12. Essential Correction for IPv6 ABNF and URI Comparison in RFC 3261: http://tools.ietf.org/html/rfc5954

  13. Correct Transaction Handling for 2xx Responses to Session Initiation Protocol (SIP) INVITE Requests: http://tools.ietf.org/html/rfc6026

  14. Re-INVITE and Target-Refresh Request Handling in the Session Initiation Protocol (SIP): http://tools.ietf.org/html/rfc6141

  15. Session Initiation Protocol (SIP) Event Notification Extension for Notification Rate Control: http://tools.ietf.org/html/rfc6446

19   See the following for overviews of SS7:

  1. http://www.aws.cit.ie/personnel/dpesch/notes/msc_sw/ss7_protocol_overview.pdf

  2. http://www.syrus.ru/files/docs/control/tech/Introduction%20to%20SS7.pdf

  3. http://www.informit.com/library/library.aspx?b=Signaling_System_No_7

20   The G3 Protocol standard was originally published in the following work: International Telegraph and Telephone Consultative Committee (CCITT), Red Book, October, 1984.

The Red Book is not available online directly, but can be searched in its entirety here: http://catalog.hathitrust.org/Record/000592639

21   See the following to download copies of the current versions in force:

  1. T.4 Standard:   http://www.itu.int/rec/T-REC-T.4/en

  2. T.30 Standard:   http://www.itu.int/rec/T-REC-T.30/en

22   See the following for an overview of TEMPEST solutions: http://www.fas.org/irp/program/security/tempest.htm

See the following for a document from December of 1990, “Engineering and Design - Electromagnetic Pulse (EMP) and Tempest Protection for Facilities Proponent: CEMP-ET”, detailing the specifications and requirements to build out a secure facility with TEMPEST shielding: http://www.fas.org/nuke/intro/nuke/emp/toc.htm

23   See the following for the Address Allocation for Private Internets RFC: http://tools.ietf.org/html/rfc1918
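RFC 1918 reserves three address blocks for private internets: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. As a quick illustration (not part of the RFC itself), Python's standard ipaddress module can test whether an address falls within a private range:

```python
import ipaddress

# RFC 1918 private blocks: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
for text in ["10.1.2.3", "172.16.0.1", "192.168.1.1", "8.8.8.8"]:
    addr = ipaddress.ip_address(text)
    print(text, "private" if addr.is_private else "public")
```

Note that is_private also covers a few other reserved ranges (such as loopback), but for the addresses above it matches the RFC 1918 allocations exactly.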

24   See the following for RFC 2460: http://tools.ietf.org/html/rfc2460

25   See the following for RFC 5952, A Recommendation for IPv6 Address Text Representation: http://tools.ietf.org/html/rfc5952
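RFC 5952 recommends a canonical text form for IPv6 addresses: lowercase hexadecimal, leading zeros dropped, and the longest run of zero groups compressed to "::". As an illustration, Python's standard ipaddress module produces this canonical form:

```python
import ipaddress

# A deliberately verbose IPv6 address...
verbose = "2001:0DB8:0000:0000:0000:0000:0000:0001"

# ...and its RFC 5952 canonical representation
canonical = ipaddress.ip_address(verbose).compressed
print(canonical)  # 2001:db8::1
```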

26   See the following for RFC 5156: http://tools.ietf.org/html/rfc5156

27   See the following for RFC 4193: http://tools.ietf.org/html/rfc4193

28   Secure modem solutions protect network assets from attacks by unauthorized users and malicious hackers. Secure modem systems and products are designed with rigorous security controls that meet the requirements of mission-critical network applications, such as service provisioning, banking and financial communications, transportation, and government agencies. These solutions give security architects a suite of protection options for networks large and small, including authentication, encryption, and disaster recovery, and they protect network assets by blocking disruption of service due to data theft, data corruption, illegal intrusion, shutdowns, line failure, and component failure.

See the following for the General DataComm web site: http://www.gdc.com/products/prod_m_1.shtml

29   Modems are also used for “Out of Band Management” (OBM) of systems. Traditional OBM solutions using non-secure modems are unsuitable for any application involving sensitive information or mission integrity. Traditional OBM solutions typically exhibit the following weaknesses:

- There is no proper authentication, authorization, or audit trail.

- A simple password is often used to block access to a cluster of remote devices.

- The user names, passwords, and stored telephone numbers are often widely published and distributed.

30   See the following for security guidance from NIST that is broadened to include securing the PBX through which modems will be connected. NIST developed a PBX vulnerability analysis that can be used to understand these systems. NIST SP 800-24 PBX Vulnerability Analysis: http://csrc.nist.gov/publications/nistpubs/800-24/sp800-24pbx.pdf

Also see the following from AT&T. A checklist for securing a PBX system is provided by AT&T’s “ATT Security Statement” document. AT&T PBX Security Checklist: http://images.bimedia.net/documents/ATT_Security_Statement.doc

31   The Department of Homeland Security in the United States has developed the following guide: Recommended Practice for Securing Control System Modems, January 2008: http://www.us-cert.gov/control_systems/practices/documents/SecuringModems.pdf

32   See the following for the official announcement press release from the FCC: http://www.fcc.gov/document/fcc-chairman-announces-technology-transitions-policy-task-force

33   See the following for the SANS Security Resources Information Security Policy Templates web site: http://www.sans.org/security-resources/policies/

34   For a dated, but very interesting review of the historical literature surrounding early firewall design and architecture up through 2001, see the following: http://www.cs.unm.edu/~treport/tr/02-12/firewall.pdf

35   See the following for the PCI DSS v2.0 Standard: https://www.pcisecuritystandards.org/security_standards/documents.php

36   The most recent guidance on IDS/IPS solutions can be found in NIST Special Publication 800-94 Revision 1 (Draft): Guide to Intrusion Detection and Prevention Systems (IDPS) (Draft), July 2012: http://csrc.nist.gov/publications/drafts/800-94-rev1/draft_sp800-94-rev1.pdf

37   

Netflow:

See the following for RFC 3954, Cisco Systems NetFlow Services Export Version 9: http://www.ietf.org/rfc/rfc3954.txt

The Cisco website for NetFlow Version 9: http://www.cisco.com/en/US/products/ps6645/products_ios_protocol_option_home.html

sFlow:

The sFlow website:

http://www.sflow.org/

IPFIX:

See the following for RFC 5101, Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of IP Traffic Flow Information: http://tools.ietf.org/html/rfc5101

38   For information on establishing an effective incident response capability, see NIST Special Publication (SP) 800-61 Revision 2, Computer Security Incident Handling Guide, August 2012: http://csrc.nist.gov/publications/nistpubs/800-61rev2/SP800-61rev2.pdf

39   See the following for an overview of the various security issues uncovered with WEP: http://www.isaac.cs.berkeley.edu/isaac/wep-faq.html

40   While TKIP has not been broken outright, it has known vulnerabilities, such as susceptibility to dictionary-based attacks against short keys (eight characters), and some very clever packet-insertion attacks that manipulate a flaw in the packet integrity protocol. TKIP and WEP are being disallowed in new Wi-Fi-certified devices through a staged elimination that began in 2011 and is scheduled to be completed by 2014.

41   See the following to download a copy of the current 802.1X-2010 Standard: http://standards.ieee.org/getieee802/download/802.1X-2010.pdf

42   By encrypting HTTP communication with SSL, a client can establish a secure and private communication channel with a web server. Using HTTPS, the security architect can provide essential protection for passing authentication credentials and prevent the disclosure of sensitive information.

While the end-to-end secure encrypted channel provided by HTTPS enables important security and privacy protection, the protocol is often abused. The root of the problem is that most firewalls are unable to inspect HTTPS communication because the application-layer data is encrypted with SSL. Knowing this, attackers frequently leverage HTTPS to deliver malicious payloads to a user, confident that even the most intelligent application-layer firewalls are completely blind to HTTPS and must simply relay HTTPS communication between hosts. End users also frequently leverage HTTPS to bypass access controls enforced by their corporate firewalls and proxy servers, using it to connect to public proxies and to tunnel non-HTTP protocols through the firewall that might otherwise be blocked by policy.

HTTPS inspection allows a firewall to terminate outbound HTTPS sessions at the firewall. Essentially, it provides a true proxy for HTTPS instead of simply tunneling HTTPS communication blindly. This is accomplished by acting as a trusted man-in-the-middle. When a request is made of the firewall for an HTTPS-protected resource, the firewall establishes a new connection to the destination server and retrieves its SSL certificate. The firewall then copies the information from that certificate, creates its own certificate using those details, and provides it to the client. As long as the client trusts the firewall’s root certificate, the process is completely transparent to the end user.

By enabling forward (outbound) HTTPS inspection, the firewall can provide complete protection for all web-based protocols. With the firewall terminating outbound SSL sessions, it can decrypt and inspect HTTPS communication, allowing for the enforcement of HTTP policy, more accurate application of URL filtering, and inspection of files transferred over HTTPS.
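The certificate re-issuance step described above can be sketched as follows. This is an illustrative model only, not a real firewall API: the clone_certificate_details() helper and its field names are hypothetical, and real products perform the signing with an on-box subordinate CA rather than plain dictionaries.

```python
# Hypothetical sketch of the certificate re-issuance step performed by an
# HTTPS-inspecting firewall. Certificates are modeled as plain dictionaries.

def clone_certificate_details(server_cert, firewall_issuer):
    """Copy the identifying fields of the destination server's certificate
    into a new certificate that the firewall signs with its own root CA."""
    return {
        "subject": server_cert["subject"],            # what the client expects to see
        "subjectAltName": server_cert["subjectAltName"],
        "notBefore": server_cert["notBefore"],
        "notAfter": server_cert["notAfter"],
        "issuer": firewall_issuer,                    # re-signed by the firewall's CA
    }

# Certificate details retrieved from the real destination server
server_cert = {
    "subject": "CN=www.example.com",
    "subjectAltName": ["www.example.com"],
    "notBefore": "2024-01-01",
    "notAfter": "2025-01-01",
}

proxy_cert = clone_certificate_details(server_cert, "CN=Corporate Firewall Root CA")
print(proxy_cert["subject"], "issued by", proxy_cert["issuer"])
```

As long as the client’s trust store contains the firewall’s root CA, the substituted certificate validates for the requested host name and the interception remains transparent.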

43   See the following for the Gartner Research Magic Quadrant for Content-Aware Data Loss Prevention Study, ID Number G00213871, Publication Date, 10 August 2011: http://www.aptsecure.com/wp-content/uploads/2011/08/Gartner_DLP_MQ_2011.pdf

44   There is a trend in the BYOD security arena that security architects need to be aware of. As of late 2012 and into 2013, there are apps making their way onto smartphones and mobile platforms that present a significant potential challenge to the security architect’s ability to successfully integrate Mobile Device Management (MDM) into the overall security architecture.

The deployment and use of applications that allow users to send messages, to each other and to others outside the organization, that are deleted by default, as well as applications whose voice, text, and audio messages delete themselves after a period of time, present some very interesting and unique challenges.

Snapchat allows users to text self-destructing photos in real time, and it notifies the sender if the recipient takes a screen capture of the content prior to its deletion from the system.

A similar app called Wickr takes the concept to the next level. Wickr lets users share more than just photos: they can send encrypted multimedia messages that self-destruct after a set amount of time. Wickr encrypts everything and also scrubs content from the file system, making it hard for anybody to know what was sent, or whether anything was sent at all. Wickr employs military-grade encryption of text, picture, audio, and video messages, relying on both the Advanced Encryption Standard (AES) symmetric block cipher implemented with random 256-bit keys and the asymmetric RSA-4096 algorithm. Wickr also deletes all metadata from pictures, video, and audio files, such as device information, location, and any personal information captured during the creation of those files, via a process that works continuously to wipe the areas of main memory and device storage recently used to display text or multimedia content. The Wickr app does not require an email address to be tied to an account, allowing users to be as private and discreet as needed.

Snapchat can be found here:   http://www.snapchat.com/

Wickr can be found here:   https://www.mywickr.com/en/index.php

45   See the following for the TREND MICRO whitepaper, The Real Face of KOOBFACE: The Largest Web 2.0 Botnet Explained: http://www.trendmicro.com/cloud-content/us/pdfs/security-intelligence/white-papers/wp_the-real-face-of-koobface.pdf

46   See the following for an example of one of the potential issues associated with the use of Certificates and Certificate Authorities based on incorrect issuance of certificates by a CA to a third party:

  1. http://googleonlinesecurity.blogspot.com/2013/01/enhancing-digital-certificate-security.html

  2. http://www.networkworld.com/community/blog/chrome-firefox-ie-block-fraudulent-digital-certificate?source=NWWNLE_nlt_security_2013-01-07

  3. http://www.networkworld.com/news/2013/010313-google-finds-unauthorized-certificate-for-265479.html?source=NWWNLE_nlt_security_2013-01-04

  4. http://technet.microsoft.com/en-us/security/advisory/2798897

  5. http://blog.mozilla.org/security/2013/01/03/revoking-trust-in-two-turktrust-certficates/

  6. https://freedom-to-tinker.com/blog/sjs/turktrust-certificate-authority-errors-demonstrate-the-risk-of-subordinate-certificates/

47   See the following for overview information on the iKP family of protocols: http://www.zurich.ibm.com/security/past-projects/ecommerce/iKP_overview.html

48   See the following for overview information on Millicent: http://sellitontheweb.com/blog/millicent-micropayment-product-review/

49   See the following for RPC: Remote Procedure Call Protocol Specification Version 2, RFC 5531: http://tools.ietf.org/html/rfc5531

50   See the following for the Microsoft Point-To-Point Encryption (MPPE) Protocol RFC: http://www.ietf.org/rfc/rfc3078.txt

51   See the following for the Cisco Layer Two Forwarding (Protocol) “L2F” RFC: http://tools.ietf.org/html/rfc2341

52    See the following for the Layer Two Tunneling Protocol “L2TP” RFC 2661: https://tools.ietf.org/html/rfc2661

53   See the following for the Layer Two Tunneling Protocol Version 3 (L2TPv3) RFC 3931: https://tools.ietf.org/html/rfc3931

54   See the following for a thorough overview of IPSec: http://www.unixwiz.net/techtips/iguide-ipsec.html

55   See the following list for the RFCs pertaining to IPsec:

  1. Security Architecture for the Internet Protocol (IPsec): http://tools.ietf.org/html/rfc4301

  2. AH: Authentication Header: http://tools.ietf.org/html/rfc4302

  3. Use of HMAC-MD5-96 within ESP and AH: http://tools.ietf.org/html/rfc2403

  4. Use of HMAC-SHA-1-96 within ESP and AH: http://tools.ietf.org/html/rfc2404

  5. HMAC: Keyed-Hashing for Message Authentication: http://tools.ietf.org/html/rfc2104

  6. The ESP DES-CBC Cipher Algorithm With Explicit IV: http://tools.ietf.org/html/rfc2405

  7. ESP: Encapsulating Security Payload: http://tools.ietf.org/html/rfc4303

  8. The Internet IP Security Domain of Interpretation for ISAKMP: http://tools.ietf.org/html/rfc2407

  9. Internet Security Association and Key Management Protocol (ISAKMP): http://tools.ietf.org/html/rfc2408

  10. Internet Key Exchange (IKEv2) Protocol: http://tools.ietf.org/html/rfc4306

  11. The NULL Encryption Algorithm and Its Use With IPsec: http://tools.ietf.org/html/rfc2410

  12. IP Security Document Roadmap: http://tools.ietf.org/html/rfc2411

  13. The OAKLEY Key Determination Protocol: http://tools.ietf.org/html/rfc2412

  14. Use of IPsec Transport Mode for Dynamic Routing: http://tools.ietf.org/html/rfc3884

  15. Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH): http://tools.ietf.org/html/rfc4835

  16. Securing L2TP Using IPSEC: http://tools.ietf.org/html/rfc3193

56   See the following for RFC 4835: http://tools.ietf.org/html/rfc4835

57   See the following for RFC 3193: http://tools.ietf.org/html/rfc3193

58   See the following for a collection of work by Wietse Venema, including the original paper on TCP Wrappers, presented at The 3rd UNIX Security Symposium in Baltimore, September 1992: ftp://ftp.porcupine.org/pub/security/index.html

59   See the following for an overview of TCP Wrappers functionality and configuration: http://www.freebsd.org/doc/handbook/tcpwrappers.html
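TCP Wrappers access control is driven by two plain-text files, /etc/hosts.allow and /etc/hosts.deny, each holding rules of the general form daemon_list : client_list. A minimal default-deny configuration might look like the following sketch (the daemon name and subnet shown are illustrative):

```
# /etc/hosts.allow -- permit SSH only from the internal subnet
sshd : 192.168.1.0/255.255.255.0

# /etc/hosts.deny -- refuse everything not explicitly allowed
ALL : ALL
```

Rules in hosts.allow are consulted first, so the blanket deny only applies to connections no allow rule has matched.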

60   See the following for the SOCKS Protocol Version 5 RFC: http://bit.ly/19yERRq

61   See the following for the Dynamic Cross-Site Request Forgery: A Per-request Approach to Session Riding paper by Shawn Moyer and Nathan Hamiel, presented at Defcon 17: http://voices.washingtonpost.com/securityfix/Moyer-Hamiel-DC17-Dynamic-CSRF.pdf

62   See the following for the NIST Special Publication 800-125, January 2011, Guide to Security for Full Virtualization Technologies: http://csrc.nist.gov/publications/nistpubs/800-125/SP800-125-final.pdf

63   Christofer Hoff, chief security architect at Juniper, has been exploring the frontiers of computer and network security for a couple of decades now; see the following for his blog: http://www.rationalsurvivability.com/blog/

64   See the following for the full report (Report ID: R5000612): http://reports.informationweek.com/

65   See the following for the full PDF, The Four Horsemen of the Virtualization Security Apocalypse: http://www.packetfilter.com/presentations/FHOTVA-SecTor.pdf

66   See the following for the Next-Generation VM Security, Kurt Marko, InformationWeek Report, June 2012 (ReportID:S5200612): http://reports.informationweek.com/

67   See the following for NIST draft 7904: http://csrc.nist.gov/publications/drafts/ir7904/draft_nistir_7904.pdf

68   See the following for the Intel white paper: http://www.intel.com/content/dam/www/public/us/en/documents/guides/cloud-builders-xeon-5500-5600-openstack-power-mgmt-security-guide.pdf

69   See the following for the Gartner IT Glossary: http://www.gartner.com/it-glossary/it-outsourcing/

70   Statement on Auditing Standards No. 70 (SAS 70) is an internationally recognized auditing standard developed by the American Institute of Certified Public Accountants (AICPA) in 1992. It is used to report on the “processing of transactions by service organizations”. A SAS 70 Type I is known as a “report on controls placed in operation”, while a SAS 70 Type II is known as a “report on controls placed in operation” and “tests of operating effectiveness”.

To adapt to globally accepted changes in accounting principles, certain amendments were required to be made to SAS 70. This led to the introduction of SSAE 16 by the Auditing Standards Board (ASB) of the American Institute of Certified Public Accountants (AICPA). These changes helped align companies with the new international service organization reporting standard, ISAE 3402.

SSAE 16 stands for Statement on Standards for Attestation Engagements No. 16. It is the new attestation and auditing standard, addressing engagements conducted by service auditors at service organizations for reporting on the design and operating effectiveness of controls. It requires companies to report a description of the system along with an assertion from management; these are the two major changes from SAS 70. For reporting periods ending on or after 15 June 2011, it has become the new standard for control reporting at service organizations.

Germany

The German standard report in this section is called IDW PS 951. It is similar to a SAS 70 Type II. IDW PS 951 is released by the Institut der Wirtschaftsprüfer.

United Kingdom

A SAS 70 is similar to the United Kingdom guidance provided by the Audit and Assurance Faculty of the Institute of Chartered Accountants in England and Wales. The technical release is titled AAF 01/06, which supersedes the earlier FRAG 21/94 guidance.

Canada

In Canada, a similar report known as a Section 5970 report may be issued by a service organization auditor. It usually gives two separate audit opinions on the controls in place. Furthermore, it may also give an opinion on the operating effectiveness over a period. These reports tend to be quite long, with descriptions of the controls in place.

India

Similar to the SAS 70 Report in the United States of America, reporting requirements are defined in India’s Audit and Assurance Standards 24 “Audit Consideration Relating to Entities Using Service Organizations”. The AAS 24 is issued by the Institute of Chartered Accountants of India, and is operative for all audits relating to periods beginning on or after 1 April 2003.

71   See the following for the ISO/IEC 38500:2008 Corporate Governance of Information Technology Standard: http://www.iso.org/iso/catalogue_detail?csnumber=51639

72   See the following for overview information and the full text of King III: https://www.saica.co.za/TechnicalInformation/LegalandGovernance/King/tabid/626/language/en-ZA/Default.aspx

73   See the following for the OECD principles: http://www.oecd.org/corporate/corporateaffairs/corporategovernanceprinciples/31557724.pdf

74   See the following web site for a list of participating countries, and their ROSC reports: http://www.worldbank.org/ifa/rosc_cg.html
