CHAPTER 18


Preventive Tools and Techniques


The aphorism “An ounce of prevention is worth a pound of cure” suggests that preventive effort may bring larger benefits than fixing a broken or compromised system. Prevention is clearly better than cure when there are demonstrable benefits to choosing a preventive effort and the implementation is cost justified. For example, it is frequently more cost and time effective to implement steps that prevent a computer virus infection than to spend time recovering from (curing) the attack.

In information assurance management, this aphorism is not always true. Following a risk assessment (refer to Chapter 11), an organization may decide to choose neither prevention nor cure. This is feasible since an organization may decide to avoid or transfer the risk.

This chapter discusses the tools and techniques for cases in which an organization chooses to prevent undesirable impact. Recall that preventive mechanisms are not entirely technical. Some of the choices are managerial or at least a combination of technical and managerial approaches. For example, to reduce the risk of an insider threat stealing corporate information, an organization may choose to implement credit and suitability background checks prior to hiring and throughout an employee’s tenure. (Refer to Chapter 13.)

Preventive Information Assurance Tools

In the global information environment, communications and network security seem to dominate management concerns. Some solutions may raise privacy concerns in some legal systems. For example, in the United States, monitoring of employees by companies is acceptable in most jurisdictions, while monitoring by the government is more closely regulated. It is advisable to notify everyone connected to a network that monitoring may be used. Notify individuals through warning banners at login and through signed acceptable use and behavior agreements. The following sections discuss tools used to establish preventive controls.

Content Filters

Content filters control end users’ access to portions of the Internet. These tools allow network administrators to selectively block access to certain types of web sites based on predefined local policy. Content filters may be used to improve employees’ productivity and to strengthen an organization’s information assurance profile by reducing user access to web sites that have no organizational value, improper content, or malicious code. Content filters not only block access to web sites but also are capable of monitoring activity and generating reports on usage. This feature is useful for following trends in employee network usage or for detecting suspicious employee behavior. Content filters may also control bandwidth use; for example, video streaming sites can be blocked to conserve bandwidth. Content filters are implemented in several industries with differing levels of success and acceptance, and in some areas the practice has been controversial. Typically, the benefit of blocking malicious web sites outweighs the social cost of restricting browsing. The most successful implementations of content filters include a process through which users can request that blocked web sites be analyzed to determine whether they should be opened for use.
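The block-and-log behavior described above can be sketched in a few lines of Python. The category map and blocked categories below are hypothetical stand-ins for the vendor-maintained category databases real products use:

```python
# A minimal sketch of policy-based URL filtering with usage logging.
from urllib.parse import urlparse

BLOCKED_CATEGORIES = {"malware", "streaming-video"}  # hypothetical local policy
CATEGORY_MAP = {  # hypothetical category lookup table
    "malware.example.net": "malware",
    "video.example.com": "streaming-video",
    "docs.example.org": "reference",
}

def check_url(url, log):
    """Return True if access is allowed; record every decision for reporting."""
    host = urlparse(url).hostname or ""
    category = CATEGORY_MAP.get(host, "uncategorized")
    allowed = category not in BLOCKED_CATEGORIES
    log.append((host, category, "allow" if allowed else "block"))
    return allowed

log = []
print(check_url("https://docs.example.org/policy", log))  # True: allowed
print(check_url("https://video.example.com/clip", log))   # False: blocked
```

The log list is what makes the trend reports and unblock-request reviews mentioned above possible: every allow and block decision is retained with its category.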

Cryptographic Protocols and Tools

Cryptography is a technique for hiding information by transforming it so that only authorized individuals can access it in its original form. All others are denied access because they cannot decrypt the information. Cryptographic tools also provide confidentiality, integrity, and nonrepudiation protection as defined by the MSR model discussed earlier. Encryption techniques for hosts range from encryption of the entire hard disk to database encryption, selective folder (group of files) encryption, and individual file encryption.


Specially designed secure network protocols are used to secure data traveling over networks such as the Internet. Examples of protocols that implement network services include Secure Sockets Layer (SSL), Transport Layer Security (TLS), and IP Security (IPSec) protocols. SSL and TLS are preferred information security protocols in web environments, while IPSec protocols are preferred for implementing virtual private networks (VPNs).
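As a sketch of how an application opts into one of these secure protocols, Python’s standard `ssl` module can build a client-side TLS context with certificate verification enabled. The TLS 1.2 floor shown is an assumed policy choice, not something prescribed by the text:

```python
import ssl

# Build a client-side TLS context with secure defaults: certificate
# verification and hostname checking are enabled automatically.
context = ssl.create_default_context()

# Refuse legacy protocol versions; TLS 1.2 is a common policy floor.
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

A socket wrapped with this context would refuse to complete a handshake with a server presenting an untrusted or mismatched certificate, which is the preventive point of using these protocols.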

As noted in Chapter 16, if the employee is allowed to encrypt data, then the key must be controlled by management or held in an appropriate key escrow system.

Firewalls

Firewalls act as a primary control for information assurance technology. They may be implemented as hardware, software, or a combination of both, and they exist at the host (desktop, user) level and at the network or server level. They enforce access control policies for network segments. They are not a panacea; they do not solve all problems. Access control policies implemented in the firewall are important for controlling information traffic and movement from publicly accessible networks to private networks. A firewall manages traffic movement from source to destination using filtering rules to verify, permit, or block data movement. Network protocol types can also be specified in these rules.
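The filtering rules just described can be sketched as an ordered rule table with first-match semantics and a final default-deny rule. The addresses, ports, and policy below are hypothetical:

```python
# First-match packet filtering sketch over (action, protocol, port, source net).
from ipaddress import ip_address, ip_network

RULES = [  # evaluated top-down, first match wins; hypothetical policy
    ("permit", "tcp", 443, ip_network("0.0.0.0/0")),   # HTTPS from anywhere
    ("permit", "tcp", 25,  ip_network("10.0.0.0/8")),  # mail from internal hosts only
    ("deny",   "any", None, ip_network("0.0.0.0/0")),  # explicit default deny
]

def filter_packet(protocol, dst_port, src_ip):
    src = ip_address(src_ip)
    for action, proto, port, net in RULES:
        if proto not in ("any", protocol):
            continue
        if port is not None and port != dst_port:
            continue
        if src in net:
            return action
    return "deny"  # fail closed if no rule matches

print(filter_packet("tcp", 443, "203.0.113.7"))  # permit
print(filter_packet("tcp", 25, "203.0.113.7"))   # deny: external mail blocked
```

The explicit default-deny rule reflects the common fail-closed design choice: traffic not specifically permitted by policy is blocked.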

Firewalls are widely used throughout organizations; however, recently there has been an increase in the usage of personal firewalls. Note that there are limitations in firewalls because they can only inspect and filter the traffic that flows through them; they cannot protect from internal threats unless appropriately implemented. Additionally, firewalls are often multifunction devices that may also contain solutions for remote users to connect to an organization’s intranet and web content filtering.

Network Intrusion Prevention System

A network intrusion prevention system (NIPS) inspects network traffic based on organizational information assurance policy and configuration. It may reduce the exploitation of a network with its capability to manage network packets and identify attacks. Such a system may use application content, behavior, and context, rather than relying solely on IP addresses or ports, to formulate access control decisions.

There are two types of network intrusion prevention systems: content based and anomaly based. A content-based NIPS detects attacks by checking the contents of network packets for distinctive sequences called signatures. An anomaly-based NIPS may be used to prevent denial-of-service (DoS) attacks by monitoring and learning normal network behavior; it uses a statistical approach to determine whether network behavior is deviating from normal traffic. Of course, the real trick is to define normal.
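The statistical approach of an anomaly-based NIPS can be sketched as a learned baseline of traffic rates with a deviation threshold. The sample values and the three-sigma threshold are hypothetical illustrations, not values from the text:

```python
import statistics

# Learn mean/stdev of packets-per-second during a baseline window,
# then flag traffic that deviates more than 3 sigma from normal.
baseline = [980, 1010, 995, 1020, 1005, 990, 1000, 1015]  # hypothetical pps samples
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(pps, threshold=3.0):
    """Return True when the observed rate deviates too far from the baseline."""
    return abs(pps - mean) > threshold * stdev

print(is_anomalous(1003))   # False: within normal variation
print(is_anomalous(25000))  # True: flood-like spike
```

Defining “normal” is exactly the hard part the text notes: a baseline learned during unusual traffic conditions, or a threshold set too loosely or tightly, produces false negatives or disruptive false positives.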

NIPS can be problematic if they are configured incorrectly or if they are unable to recognize legitimate changes to an organization’s network. As noted, they are preventive devices and can often automatically shut off or redirect traffic based on the rules provided to them. This can cause unplanned outages and confusion if NIPS configuration and modification are not part of an organization’s change management and configuration management processes. Organizations may choose to use network intrusion detection systems (NIDSs) instead of NIPS because of the disruptions NIPS may create.

Proxy Servers

Proxy servers act as intermediaries between clients and the Internet by allowing clients to make indirect connections to other network services through them. Proxy servers can be configured to require authentication of the end user, restrict communication to a defined set of protocols, apply access control restrictions, and carry out auditing and logging. Care should be used with proxy servers since they can be used to disguise sources of traffic (anonymization). The simplest form of a proxy server is called a gateway. Proxy servers can also be used to cache web content.
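A proxy’s roles (authentication, access control, logging, and caching) can be sketched together in one routine. The users, hosts, and policy below are hypothetical:

```python
# Caching forward-proxy sketch: authenticate the client, apply an
# allow-list, log every decision, and serve repeated fetches from cache.
AUTHORIZED_USERS = {"alice", "bob"}        # hypothetical user directory
ALLOWED_HOSTS = {"intranet.example.com"}   # hypothetical access policy
cache, audit_log = {}, []

def proxy_fetch(user, host, path, origin_fetch):
    if user not in AUTHORIZED_USERS:
        audit_log.append((user, host, "denied: authentication"))
        return None
    if host not in ALLOWED_HOSTS:
        audit_log.append((user, host, "denied: policy"))
        return None
    key = (host, path)
    if key not in cache:                   # fetch from the origin once, then reuse
        cache[key] = origin_fetch(host, path)
    audit_log.append((user, host, "served"))
    return cache[key]

calls = []
def fake_origin(host, path):               # stand-in for a real HTTP fetch
    calls.append(path)
    return f"<page {path}>"

proxy_fetch("alice", "intranet.example.com", "/home", fake_origin)
proxy_fetch("bob", "intranet.example.com", "/home", fake_origin)
print(len(calls))  # 1 -- the second request was served from cache
```

Because every request passes through `proxy_fetch`, the audit log captures denied and served requests alike, which is what makes proxies useful for both control and monitoring.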

Public Key Infrastructure

The use of public key infrastructure (PKI) is growing worldwide. PKI enables a secure method for exchanging confidential information over unsecured networks and is the de facto standard for implementing trust online. PKI provides a secure electronic business environment. With the rapid growth of e-commerce, e-business, and e-government applications, the adoption of PKI has increased in recent years. A PKI implementation combines software, hardware, policy, and procedure to support business needs.

PKI uses technology known as public key cryptography (also known as asymmetric cryptography). Public key cryptography scales better (works well on large systems) than private key cryptography (also known as symmetric cryptography). The use of public key cryptography reduces the key distribution problem associated with symmetric systems. By associating a unique private key and unique public key with each participant, public key cryptography provides a way for secure protocols to link actions to individuals. It enables the digital signature and nonrepudiation information assurance services noted in the MSR model.

A common way of associating public keys with their owners is to use digital certificates. In PKI, user credentials take the form of a digital certificate; think of it as an electronic passport. Digital certificates may contain names, e-mail addresses, the dates the certificates were issued, and the names of the certificate authorities that issued them, among other things. A digital certificate is an electronic message that links a public key to the name of its owner in a secure way by using a trusted third party, known as a certificate authority (CA), that guarantees the relationship. Authentication using PKI technology means proving one’s identity by demonstrating knowledge of the associated private key (that is, proving one is the owner of that private key).
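The CA’s guarantee can be illustrated with a toy verification routine. Real certificates use asymmetric signatures over X.509 structures; the keyed hash (HMAC) below is only a stand-in so the binding-and-tamper logic fits in a few lines, and all names and keys are hypothetical:

```python
import hashlib, hmac

# Illustrative chain-of-trust logic. Real PKI uses asymmetric signatures
# (e.g., RSA or ECDSA); an HMAC keyed with the issuer's secret stands in
# here purely to demonstrate binding a name to a public key.
def sign(issuer_key, subject, subject_pubkey):
    payload = f"{subject}|{subject_pubkey}".encode()
    return hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()

root_key = b"root-ca-secret"  # hypothetical root CA signing key
cert = {
    "subject": "www.example.com",
    "public_key": "pub-key-bytes",
    "issuer": "Example Root CA",
}
cert["signature"] = sign(root_key, cert["subject"], cert["public_key"])

def verify(cert, issuer_key):
    expected = sign(issuer_key, cert["subject"], cert["public_key"])
    return hmac.compare_digest(expected, cert["signature"])

print(verify(cert, root_key))        # True: the name-to-key binding holds
cert["public_key"] = "attacker-key"  # tamper with the binding
print(verify(cert, root_key))        # False: signature no longer matches
```

The point of the sketch is that any change to the subject name or public key invalidates the issuer’s signature, which is what lets a relying party trust the binding without contacting the CA for every check.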

Figure 18-1 illustrates the components of a PKI and how they work together. Most PKI implementations include a CA component, a registration authority (RA) component, and a directory component, e.g., lightweight directory access protocol (LDAP).


Figure 18-1 Components of PKI

Since the CA signs certificates and revocation information, it is the most important and essential component. It is customary to establish a root CA, since the root CA is the server that forms the foundation of the PKI. The root CA has a self-signed certificate, and its public key is published in multiple publicly available directories. In some situations, the public key is issued (made public) only where necessary. The root CA signs the certificates of the other signing CAs, which in turn sign subscriber certificates.

Certificates may be published and stored in public directories such as an LDAP server. The function of the directory component in a PKI is to publish certificates and certificate revocation information. Protocols such as LDAP retrieve certificate information from the directory. In addition to LDAP, there are other ways to make certificates available to applications. For example, SSL/TLS allows the server to send an X.509 certificate (binding a name to a public key value) to the client and to request a certificate from the client. IPSec uses the Internet Key Exchange (IKE) protocol to exchange certificates as part of the key exchange procedure.

One of the most egregious attacks related to PKI occurred in the spring of 2014. This attack was called Heartbleed, and it targeted OpenSSL (Open Secure Sockets Layer), a widely used implementation of the SSL/TLS protocols. OpenSSL is used by two-thirds of the secure web traffic on the Web; the protocol is used whenever a web browser needs an HTTPS connection for activities such as secure banking and other sensitive transactions. Because of the way OpenSSL was written, a simple flaw allowed an attacker to arbitrarily read the memory of a server far beyond what the attacker should have had access to. This meant that if a user’s password or other sensitive information was sitting in memory at the time an attacker “bled” it out, it was subject to compromise. Numerous web site owners and vendors had to contact their customers and request that they change their passwords.

Virtual Private Networks

A virtual private network is a secure network that uses a public network (usually the Internet) to allow users to interconnect. It uses cryptographic means (encryption) to provide secure communications on public networks. VPN protocols include IPSec, SSL, Point-to-Point Tunneling Protocol (PPTP), and others. A VPN provides cost-effective solutions to organizations spread over wide areas, and various out-of-the-box VPN solutions are readily available. Organizations should be vigilant and do pre-implementation research about key management and the types and strengths of encryption algorithms used before employing any VPN technology. The strongest VPN solutions use multifactor authentication and have their cryptography certified by independent parties such as the U.S. National Institute of Standards and Technology’s Cryptographic Module Validation Program (CMVP).

Preventive Information Assurance Controls


Network and computing environments constantly change. Organizations need to ensure that proper mechanisms exist to complement the use of technology. Although a firewall protects network assets, organizational systems will be at a higher risk of compromise if patch management is implemented inappropriately. The full suite of preventive information assurance mechanisms that can be used follows.

Backups

A backup is a copy of information assets: data, software, or hardware. It is an essential preventive process for information assurance; it mitigates risks and helps to ensure business continuity. A backup makes restoration (restitution) possible when needed, ensuring that data can be recovered when needed, software can be recovered during application corruption, and hardware is replaceable during disaster.

An organization should have a policy on what to back up (data, software, and hardware), when to back up (depending on the frequency of changes that occur), and how to back up (the process of backup). The backup process should be fully supported by the baseline process.

There are also different types of backup, such as full backup, differential backup, incremental backup, and mirror backup, which can be conducted at different times. The frequency and types of backups performed should be determined by each organization, depending on its risk tolerance and objectives.
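The difference between a full and an incremental backup can be sketched by how files are selected for copying. The paths and timestamps below are hypothetical:

```python
# Sketch of backup selection: a full backup copies everything, while an
# incremental backup copies only files changed since the last backup run.
files = {  # hypothetical path -> last-modified timestamp (epoch seconds)
    "/data/orders.db": 1700000500,
    "/data/policy.pdf": 1700000100,
    "/data/logo.png": 1699990000,
}

def select_for_backup(files, kind, last_backup_time):
    if kind == "full":
        return sorted(files)
    if kind == "incremental":
        return sorted(p for p, mtime in files.items() if mtime > last_backup_time)
    raise ValueError(f"unknown backup type: {kind}")

print(select_for_backup(files, "full", 1700000000))         # all three files
print(select_for_backup(files, "incremental", 1700000000))  # only files changed since
```

A differential backup would use the same comparison but measure against the last *full* backup rather than the most recent backup of any kind, trading storage for simpler restoration.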

Backing up systems is important, but more important is the correct restoration of the backup. It is thus critical that restoration and integrity tests are performed frequently. Another good practice is to document each restoration step. Refer to Chapter 25 for more information on backup and related matters.

Change Management and Configuration Management

If an organization is to remain competitive, it should be prepared to change continuously since the environment is not static. Change comes from a variety of sources. The following are sources of change drivers that should be addressed and managed effectively in the business and IT environment:

      • Alliances and partnerships

      • Business market demands

      • Competitive markets

      • Operational issues

      • Regulations changes

Change management is a disciplined process that organizations apply to ensure standardized methods and procedures are employed when implementing changes to their organization, information systems, and IT environments. The change management processes sustain and improve organizational operations while minimizing risks involved in making changes. It ensures all changes (permanent, temporary, new, or modified) to the IT infrastructure are assessed, approved, implemented, and reviewed in a controlled manner. This activity may eliminate or minimize disruptions to business or mission.

Configuration management controls hardware, software, and their associated documentation. Organizations should track all changes to configuration items throughout the life cycle of the components and system with tracking records. Configuration management is closely related to asset management (refer also to Chapter 10); it represents the detailed configuration information for each identified asset. Configuration is also closely related to contingency planning (refer to Chapters 24 and 25) because restored systems must comply with configuration baseline standards.

Change management and configuration management work hand-in-hand. Organizations should not implement configuration management without having a change management process in place. The result will be wasted resources.

The change management system defines and controls a configuration. For example, maintaining accurate configuration information for all the constituent parts of the IT service and infrastructure involves identifying, recording, and tracking all IT components. In addition, it includes versions, constituent components, and relationships of configurations. This tracking ensures that all necessary steps are taken so that changes to the IT components do not adversely affect system performance, reliability, or availability. Accountability and nonrepudiation are also included in change management since all changes must be authorized and assigned to a change agent. The assigned change agent is responsible for the implementation of the change and accountable for the results. Individuals making changes without authorization should be warned and later disciplined if the process is not followed.
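The authorized, tracked lifecycle described above can be modeled as a change record that carries an accountable change agent and an audit trail. The state names and identifiers below are hypothetical illustrations of such a workflow:

```python
# Sketch of a controlled change lifecycle: every change has an accountable
# agent and must pass through each state in order, leaving an audit trail.
STATES = ["submitted", "assessed", "approved", "implemented", "reviewed"]

class ChangeRecord:
    def __init__(self, change_id, description, agent):
        self.change_id = change_id
        self.description = description
        self.agent = agent                   # accountable for the result
        self.state = "submitted"
        self.history = [("submitted", agent)]

    def advance(self, actor):
        """Move to the next state, recording who performed the transition."""
        nxt = STATES[STATES.index(self.state) + 1]
        self.history.append((nxt, actor))    # audit trail: nonrepudiation support
        self.state = nxt

cr = ChangeRecord("CHG-1042", "Apply firewall rule update", agent="j.doe")
for actor in ["cab", "cab", "j.doe", "qa"]:  # assessment, approval, implementation, review
    cr.advance(actor)
print(cr.state)         # reviewed
print(len(cr.history))  # 5 entries: the complete audit trail
```

Because each transition records its actor, the history supports the accountability and nonrepudiation goals the text mentions: unauthorized changes are those that appear in the environment without a corresponding record.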

Implementation of change and configuration management ensures that a clear and complete picture of the IT environment is always available, thus together serving as a strong preventive control to counter risk. Guide both change and configuration management with a properly documented set of policies. Refer also to Chapter 12 for details on policy.

IT Support

During day-to-day operations, the IT support or help-desk employees encounter myriad problems. Information technology support should be able to identify the nature of the problem and determine whether the problem should be raised to a higher level.

Well-trained IT support technicians should respond to IT security problems, inform appropriate individuals, and prevent an actual security incident or breach. IT support employees should pay particular attention when IT security complaints come repeatedly from the same user or system. If the problem is an information assurance problem, then the manager of this area should take the necessary policy-based action. On the other hand, the problem may represent an awareness, training, and education (AT&E) failure. Maintenance of logs and appropriate SPC activities may narrow down the problem.

Media Controls and Documentation

Ensuring information confidentiality, integrity, and availability is limited not only to server-based information. Organizations must safeguard all media, including tapes, disks, and printouts. They are equally important and should be properly secured. Remember, the requirement for protection follows the data, not the media. Operational controls addressing media protection may include the following:

      • Environmental protection against problems relating to fires, air conditioning, and humidity

      • Logging of usage (for example, users should check in and check out the media)

      • Maintenance of the media including overwriting or erasing of data and disposal of media

      • Prevention of unauthorized access

      • Proper labeling of media providing information such as the owner’s name, date of creation, version, and classification

      • Storage considerations, such as off-site locations or in locked server rooms

Documentation is an effective means of mitigating information assurance risks. The availability of documentation is vital; it allows assigned personnel to understand the architecture of information system functions and associated controls. Documentation should be written so that individuals have a good idea of the things to be done or avoided. It reduces the probability that information assurance is compromised. Information assurance documentation should be customized to user needs. Keep documentation current to ensure business processes are operated as expected and agreed upon. The certification and accreditation (C&A) process requires thorough documentation and is one of the best approaches to ensuring an organization has comprehensive documentation for its systems and practices. You can find more information regarding C&A in Chapter 14.

Patch Management

Patch management requires performing planned and timely system patches to maintain operational efficiency and effectiveness, mitigate information security vulnerabilities, and maintain the stability of IT systems. It is part of configuration management and, from this perspective, can also be viewed as part of change management. This is so important that in 2013 the European Union Agency for Network and Information Security (ENISA) published a report regarding patching in industrial control systems and other critical network components. A successful and effective patch management program combines well-defined processes, effective software, and training into a strategic program for assessing, obtaining, testing, and deploying patches. Common practices for effective patch management include the following:

      Standardizing patch management policies, procedures, and tools Employees have to be made aware of the availability of a patch management policy. This ensures that patch management requirements are understood by all. Organizations without standardized policies and procedures in place may allow each subgroup within an entity to implement patch management differently or not at all. A good patch management policy should contain provisions for patch deployment, describing how and when new patches should be applied to the organization and an acceptable “discovery-to-patch” timeframe. Examine tools to assist, facilitate, and automate the patch management process.

      Establishing dedicated resources One of the most important items in a patch management process is to ensure that roles and responsibilities are identified and defined for those involved in maintaining an organization’s systems and applications. Their task would be to ensure that these systems and applications are updated with the current released patches. This group of people will also be looking into related information assurance issues. Some organizations may establish a dedicated patch management team. Others may assign responsibility based on related duties.

      Monitoring and identifying relevant vulnerabilities and patches Currently, vulnerabilities and patches appear on a daily basis. Organizations must identify and monitor vulnerabilities proactively. Associate the vulnerabilities with their respective patches using various tools and services available in the market. Use free services only after vetting the quality, reliability, and integrity. Ensure software in use is supported by the vendor and the vendor is contractually required to address security issues discovered.

      Identifying risk in applying a patch Apart from considering the criticality of a vulnerability, an organization should consider the importance of the system in question to operations and the risk of applying a patch. The organization has the option not to follow the vendor’s advice. This is to ensure that the patch management process does not disrupt the systems’ operations. Mission owners and business owners should always have representatives available to test patches. They should test functionality in a test environment and provide feedback before going live. Organizations should also consider a phased patching approach: if the patch fails or causes undesired results, systems of less importance are impacted first. Organizations should also consider what compensating controls are available for a particular vulnerability. If a vulnerability can be blocked completely at a firewall, the organization would be wise to consider blocking it while taking extra time to test and deploy the patch.

      Testing a patch before installing Implementing the patch management process helps assure the information in the IT infrastructure; however, organizations should first assess patches in a test environment. This is to determine the impact of installing the patch and to make certain that it does not disrupt IT operations. Such testing will help determine whether a patch functions as intended and does not have an adverse effect on the existing system. If a test environment is not available, the organization should consider a phased rollout. Doing so ensures a patch won’t disable an entire organization. Test environments should mirror production environments as closely as possible. While this may not always be possible because of cost and complexity, organizations must be aware of the testing limitations of nonparity systems. Organizations should receive patches only from known and trusted sources and should demand that patches be digitally signed and hashed for verification prior to deployment.
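The “discovery-to-patch” timeframe mentioned earlier can be tracked mechanically. This sketch flags patches that exceeded a hypothetical 30-day window; the CVE identifiers, dates, and SLA value are placeholders:

```python
from datetime import date

# Sketch of monitoring a policy-defined "discovery-to-patch" window.
SLA_DAYS = 30  # hypothetical policy value

patches = [  # hypothetical patch-tracking records
    {"cve": "CVE-0000-0001", "discovered": date(2024, 1, 2), "deployed": date(2024, 1, 20)},
    {"cve": "CVE-0000-0002", "discovered": date(2024, 1, 5), "deployed": date(2024, 3, 1)},
]

def sla_breaches(patches, sla_days=SLA_DAYS):
    """Return the identifiers of patches deployed after the allowed window."""
    return [p["cve"] for p in patches
            if (p["deployed"] - p["discovered"]).days > sla_days]

print(sla_breaches(patches))  # only the second record exceeded the window
```

Feeding such a report into the change management process gives management a concrete measure of whether the patch policy is being followed, rather than relying on anecdote.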

Further Reading

      • Aiello, B. “How to Implement CM and Traceability in a Practical Way.” September 2013. www.cmcrossroads.com/article/how-implement-cm-and-traceability-practical-way.

      • Do’s and Don’ts for Effective Configuration Management, TechTarget. http://blogs.pinkelephant.com/images/uploads/pinklink/Dos_Donts_For_Effective_Configuration_Management.pdf.

      • Friedlob, T., et al. An Auditor’s Introduction to Encryption. A monograph published by the Institute of Internal Auditors, 1998.

      • G Data Development. “Patch Management Best Practices.” G Data TechPaper #0271, G Data, Germany, 2013. www.cpni.gov.uk/Documents/Publications/2006/2006029-GPG_Patch_management.pdf.

      • Good Practice Guide Patch Management. NISCC National Infrastructure Security Co-ordination Center, 2006. www.docstoc.com/docs/7277421/Good-Practice-Guide-Patch-Management.

      • NIST FIPS 140 Series. http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140val-all.htm.

      • NIST FIPS 140-1. http://csrc.nist.gov/publications/fips/fips1401.htm.

      • NIST FIPS 140-2. http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf.

      • Pauna, Adrian, and K. Moulinos. “Window of Exposure…A Real Problem for SCADA Systems?” ENISA, December 2013. www.enisa.europa.eu/activities/Resilience-and-CIIP/critical-infrastructure-and-services/scada-industrial-control-systems/window-of-exposure-a-real-problem-for-scada-systems.

      • Schmidt, Howard A. Patrolling Cyberspace: Lessons Learned from a Lifetime in Data Security. Larstan Publishing, December 15, 2006.

      • Conklin, Wm. Arthur, et al. Introduction to Principles of Computer Security: Security+ and Beyond. McGraw-Hill Education, March 2004.

      • Schou, Corey D., and D.P. Shoemaker. Information Assurance for the Enterprise: A Roadmap to Information Security. McGraw-Hill Education, 2007.

      • Security Tools to Administer Windows Server 2012. Microsoft, October 2012 http://technet.microsoft.com/en-us/library/jj730960.aspx.

      • Stamp, M. Information Security Principles and Practice. Wiley-Interscience, 2005.

      • Tipton, Harold F., and S. Hernandez, ed. Official (ISC)2 Guide to the CISSP CBK 3rd edition. ((ISC)2) Press, 2012.

      • U.S. General Accounting Office. “Report to the Ranking Minority Member, Subcommittee on 21st Century Competitiveness, Committee on Education and the Workforce, House of Representatives, EMPLOYEE PRIVACY – Computer-Use, Monitoring Practices, and Policies of Selected Companies.” www.gao.gov/new.items/d02717.pdf. GAO-02-717, 2002.

      • Wen, J., D. Schwieger, and P. Gershuny. “Internet Usage Monitoring in the Workplace: Its Legal Challenges and Implementation Strategies.” Information Systems Management Archive, January 2007, Volume 24, Issue 2, pp. 185–196.

Critical Thinking Exercises

        1. An organization is changing the way it works. For the past ten years, the organization has operated out of a downtown office, and all employees were expected to report onsite for work. Because of the increased costs of real estate, the executive management has identified substantial savings if all employees worked remotely from their homes and the organization maintained only a small office for meetings and executives downtown. The organization has never allowed outside access to its networks and has never allowed equipment off-premises prior to this change. Now employees are being issued laptops, tablets, and smartphones to do their work. What preventive information assurance controls and tools should the organization be concerned with as part of this change?

        2. In addition to a near 100 percent remote working situation, the organization decides it is also going to outsource several business functions to “cloud” Software as a Service (SaaS) providers. One function the organization wants to move first is e-mail. The organization has a statutory requirement to ensure all e-mail is encrypted with a U.S. FIPS 140-2 validated encryption process. What precautions should the organization take prior to committing to an e-mail cloud provider?
