Chapter 4. Protection of Information Assets

Key concepts you will need to understand:

  • ✓ The processes of design, implementation, and monitoring of security (gap analysis, baselines, tool selection)

  • ✓ Encryption techniques (DES, RSA)

  • ✓ Public key infrastructure (PKI) components (certification authorities, registration authorities)

  • ✓ Digital signature techniques

  • ✓ Physical security practices

  • ✓ Techniques to identify, authenticate, and restrict users to authorized functions and data (dynamic passwords, challenge/response, menus, profiles)

  • ✓ Security software (single sign-on, intrusion-detection systems [IDS], automated permission, network address translation)

  • ✓ Security testing and assessment tools (penetration testing, vulnerability scanning)

  • ✓ Network and Internet security (SSL, SET, VPN, tunneling)

  • ✓ Voice communications security

  • ✓ Attack/fraud methods and techniques (hacking, spoofing, Trojan horses, denial of service, spamming)

  • ✓ Sources of information regarding threats, standards, evaluation criteria, and practices in regard to information security

  • ✓ Security monitoring, detection, and escalation processes and techniques (audit trails, intrusion detection, computer emergency response team)

  • ✓ Viruses and detection

  • ✓ Environmental protection practices and devices (fire suppression, cooling systems)

Techniques you will need to master:

  • ✓ Evaluate the design, implementation, and monitoring of logical access controls to ensure the integrity, confidentiality, and availability of information assets

  • ✓ Evaluate network infrastructure security to ensure integrity, confidentiality, availability, and authorized use of the network and the information transmitted

  • ✓ Evaluate the design, implementation, and monitoring of environmental controls to prevent and/or minimize potential loss

  • ✓ Evaluate the design, implementation, and monitoring of physical access controls to ensure that the level of protection for assets and facilities is sufficient to meet the organization’s business objectives

The IT organization is responsible for ensuring the protection of information assets through effective policy, controls, and standardized procedures and control testing. The security controls implemented within the organization will probably use a defense-in-depth strategy. Defense-in-depth strategies provide layered protection for the organization’s information systems and data: because multiple layers of controls protect each asset, the failure of any single control does not by itself expose the asset, which reduces the overall risk of a successful attack. These controls ensure the confidentiality, integrity, and availability of the systems and data, and help prevent financial losses to the organization.

The organization should have a formalized security function that is responsible for classifying assets and the risks associated with those assets, and mitigating risk through the implementation of security controls. The combination of security controls ensures that the organization’s information technology assets and data are protected against both internal and external threats.

The security function protects the IT infrastructure through the use of physical, logical, environmental, and administrative (that is, policies, guidelines, standards, and procedures) controls. Physical controls guard access to facilities, computers, and telecommunications equipment, and ensure that only authorized users have access to facilities and equipment. Physical security controls can include security guards, biometric devices (retina scanners, hand geometry, fingerprint scanners), keys and locks, and electronic card readers. Physical access controls should be monitored and reviewed periodically to ensure their effectiveness. Physical security controls can be defeated through social engineering, in which unauthorized persons gain access to the facility by posing as someone they are not; as stated earlier, social engineering involves playing psychological tricks on authorized users to gain access to the system. Such tricks might include “shoulder surfing,” or looking over the shoulder of authorized users to identify key codes that access the building; claiming to have “lost” badges or key cards and persuading an authorized user to permit access; or piggybacking behind an authorized user who has a valid key card.

Logical security controls are more complex to implement and maintain. Access controls are security features that control how users and systems communicate or interact with other users and systems. Furthermore, logical controls are the hardware and software tools that are used to restrict access to resources such as the following:

  • System access

  • Network architecture

  • Network access

  • Encryption and protocols

  • Systems auditing

Authorization according to the principle of least privilege (need to know) should be applied, meaning that authorized users should have access to only the applications and data they need to perform authorized tasks. In addition, the IT organization should regularly log and monitor logical access to the systems and data. Policies and procedures also should include segregation of duties and access and transaction logs.

Environmental security controls are designed to mitigate the risk associated with naturally occurring events such as storms, earthquakes, hurricanes, tornadoes, and floods. The controls might vary according to the type of event, but the process of classification, mitigation, and monitoring is similar in nature to that of physical and logical security controls.

It is important to remember that unauthorized users can gain access to applications and data from both inside and outside the organization. Unauthorized users might include the following:

  • Internal employees

  • Contracted employees

  • Suppliers or vendors

  • Cleaning and maintenance contractors

  • Partners

  • Remote users

  • Entities with access to external information systems (such as the general public)

To ensure the effectiveness of the security program and its associated controls, regular penetration tests should be performed. These tests might include breaking into access points through persuasion or brute force, or gaining admission as a visitor and trying to access areas for which someone is not authorized. The combination of regular review, monitoring, and testing of physical, logical, and environmental security controls will identify weaknesses and areas for improvement. In addition to monitoring, the IT organization should define incident response and reporting procedures to react to disruptive events when they occur. The incident response procedures should provide detailed procedures for the identification, notification, evidence collection, continued protection, and reporting of such disruptive events.

Understanding and Evaluating Controls Design, Implementation, and Monitoring

Per ISACA, key elements and roles/responsibilities of security management lead to the successful protection of information systems and assets, reducing losses to the organization:

  • Senior management commitment and support—A successful security-management program requires the full support of senior management.

  • Policies and procedures—Policies and procedures should be created and implemented in alignment with the organizational strategy, and a clear definition of sensitive and critical assets should be created. The confidentiality, integrity, and availability of these assets should be protected through proper risk management and mitigation, including specific guidelines, practices, and procedures.

  • Organization—The organization should have both general and specific responsibilities defined for the protection of information assets, as well as clear communication and definition of security roles and responsibilities.

  • Security awareness and education—All employees (internal and external) and third parties should receive appropriate and regular training, as well as updates on the importance of security in organizational policies and procedures. This includes security requirements, legal responsibilities, legal controls, and training on the correct use of information technology resources and organizational data.

  • Monitoring and compliance—The IT organization should implement monitoring and compliance controls that allow for the continuous assessment of the effectiveness of the organization’s security programs.

  • Incident handling and response—A formal incident handling and response capability should be established and should include planning and preparation, detection, initiation, response, recovery, closure, post-incident review, and defined key roles and responsibilities.

In addition, the organization should define security management roles and responsibilities. The following roles and their responsibilities should be considered:

  • Process owners—Ensure that appropriate security measures, consistent with organizational policy, are maintained

  • Users—Follow procedures set out in the organization’s security policy

  • Information owners—Are ultimately accountable for how assets and resources are protected and, therefore, make security decisions, such as determining data-classification levels for information assets so that appropriate levels of control are provided for their confidentiality, integrity, and availability. Executive management, such as the board of directors, is an example of an information owner.

  • IS security committee—A formalized IS security committee should be formed to incorporate the input of users, executive management, security administration, IS personnel, and legal counsel

  • Security specialists/advisors—Assist with the design, implementation, management, and review of the organization’s security policy, standards, and procedures

  • IT developers—Implement information security

  • IS auditors—Provide independent assurance to management of the appropriateness and effectiveness of information security objectives

Logical Access Controls

As described earlier, logical access controls are security features that control how users and systems communicate and interact with other users or systems. These are often the primary safeguards for systems software and data. Three main components of access control exist:

  • Access is the flow of information between a subject and an object.

  • A subject is the requestor of access to a data object.

  • An object is an entity that contains information.

A subject’s access rights should be based on the level of trust a company has in a subject and the subject’s need to know (principle of least privilege). As a rule, access-control mechanisms should default to “no access,” to provide intentional (explicit) access and to ensure that security holes do not go unnoticed.

The access-control model is a framework that dictates how subjects can access objects and defines three types of access:

  • Discretionary—Access to data objects is granted to the subjects at the data owner’s discretion.

  • Mandatory—Access to an object is dependent upon security labels.

  • Nondiscretionary—A central authority decides on access to certain objects based upon the organization’s security policy.

In implementing mandatory access control (MAC), every subject and object has a sensitivity label (security label). A mandatory access system is commonly used within the federal government to define access to objects. If a document is assigned a label of top secret, all subjects requesting access to the document must hold a clearance of top secret or above to view it. Subjects holding a lower security label (such as secret or confidential) are denied access to the object. In mandatory access control, all subjects and objects have security labels, and the decision for access is determined by the operating or security system. Mandatory access control is used in organizations where confidentiality is of the utmost concern.

Nondiscretionary access control can use different mechanisms based on the needs of the organization. The first is role-based access, in which access to an object is based on the role of the user in the company. For example, all data entry operators need create access to a particular database, so create access is granted to the data entry operator role rather than to each individual. This type of access is commonly used in environments with high turnover because the access rights apply to a subject’s role, not to the subject.

Task-based access control is determined by which tasks are assigned to a user. In this scenario, a user is assigned a task and given access to the information system to perform that task. When the task is complete, the access is revoked; if a new task is assigned, the access is granted for the new task.

Lattice-based access is determined by the sensitivity or security label assigned to the user’s role. This scenario provides for an upper and lower bound of access capabilities for every subject and object relationship. Consider, for example, that the role of our user is assigned an access level of secret. That user may view all objects that are public (lower bound) and secret (upper bound), as well as those that are confidential (which falls between public and secret). This user’s role would not be able to view top-secret documents because they exceed the upper bound of the lattice. Figure 4.1 depicts this access.


Figure 4.1. Lattice-based access control.
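
To make the bounds check concrete, the following minimal Python sketch implements the lattice from the example above. The level ordering follows the chapter’s example; the function name and structure are illustrative, not a standard API.

  # Ordered sensitivity levels forming a simple lattice.
  LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top secret": 3}

  def can_read(subject_level, object_label):
      # A subject may read any object at or below its own level (the upper bound).
      return LEVELS[object_label] <= LEVELS[subject_level]

  # The user from the example holds a secret clearance:
  assert can_read("secret", "public")          # lower bound: allowed
  assert can_read("secret", "confidential")    # in between: allowed
  assert can_read("secret", "secret")          # upper bound: allowed
  assert not can_read("secret", "top secret")  # exceeds the upper bound: denied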

Another method of access control is rule-based access. The previous discussion of firewalls in Chapter 3, “Technical Infrastructure and Operational Practices and Infrastructure,” demonstrated the use of rule-based access implemented through access control lists (ACLs). Rule-based access is generally used between networks or applications. It involves a set of rules from which incoming requests can be matched and either accepted or rejected. Rule-based controls are considered nondiscretionary access controls because the administrator of the system sets the controls rather than the information users.
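
The following Python sketch shows rule-based matching as a firewall might apply it. The rule set and addresses are hypothetical, and unmatched requests fall through to a default of “no access,” consistent with the default-deny principle noted earlier.

  # Hypothetical ordered rule set: (source address prefix, destination port, action).
  RULES = [
      ("10.1.0.",    80,  "accept"),   # internal web traffic
      ("10.1.0.",    443, "accept"),   # internal HTTPS
      ("192.168.5.", 22,  "reject"),   # block SSH from this subnet
  ]

  def evaluate(src_ip, dst_port):
      # First matching rule wins; anything unmatched is rejected (default deny).
      for prefix, port, action in RULES:
          if src_ip.startswith(prefix) and dst_port == port:
              return action
      return "reject"

  print(evaluate("10.1.0.7", 443))    # accept
  print(evaluate("172.16.0.9", 80))   # reject: no rule matches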


Restricted interfaces control access to both functions and data within applications through the use of restricted menus or shells, and they are commonly implemented as database views. The database view should be configured so that only the data for which the user is authorized is presented on the screen. A good example of a restricted interface is an automated teller machine (ATM). When you access your bank account via an ATM, you can perform only certain functions (such as withdrawing funds or checking an account balance), and all of those functions are restricted so that transactions apply only to your account.

An access-control matrix is a single table used to cross-reference rights that have been assigned by subject (subject capabilities) with access rights that are assigned by object (access-control list). The matrix is essentially a combination of a capabilities table and access control list(s). The capability table specifies the rights a subject possesses pertaining to specific objects, bound by subject; the capability corresponds to the subject’s row in the access-control matrix. The access-control list (ACL) is a list of subjects that are authorized to access a specific object; the rights are bound by object, and the ACL corresponds to a column of the access-control matrix. Figure 4.2 outlines a simple access-control matrix for a single database and a group of users. It is important to keep in mind that real ACLs are generally more granular than the figure suggests.


Figure 4.2. Access control list.

In Figure 4.2, John, who is a data entry operator, is responsible for address updates within the test database. He is allowed access to read and update records but does not have access to create new records. Jane is responsible for entering new customers in the database and, therefore, has the capability to read and create new records. Neither John nor Jane can delete records within the database.
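
The matrix in Figure 4.2 can be expressed as a simple nested mapping, sketched below in Python: each subject’s entry is its capability list, and the set of rights recorded per object forms that object’s ACL. The names and rights mirror the John and Jane example; the structure is illustrative only.

  # Rows are subjects (capability lists); the rights per object form its ACL.
  MATRIX = {
      "John": {"test_db": {"read", "update"}},   # address updates only
      "Jane": {"test_db": {"read", "create"}},   # new customer records
  }

  def is_authorized(subject, obj, right):
      return right in MATRIX.get(subject, {}).get(obj, set())

  assert is_authorized("John", "test_db", "update")
  assert not is_authorized("John", "test_db", "create")
  assert not is_authorized("Jane", "test_db", "delete")   # neither may delete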

The administration of access control can be either centralized or decentralized and should support the policy, procedures, and standards for the organization. In a centralized access control administration system, a single entity or system is responsible for granting access to all users. In decentralized or distributed administration, the access is given by individuals who are closer to the resources.

As an IS auditor, you will most likely see a combination of access-control and administration methods. It is important to understand what type of access methods and administration are being used within the organization, to determine whether they are providing the necessary control over information resources. In gaining an understanding of the methods used, you will be able to determine the access paths to computerized information. An access path is the logical route an end user or system takes to get to the information resource. A normal access path can include several software and hardware components, which might implement access controls differently. Per ISACA, the IS auditor should evaluate each component for proper implementation and proper physical and logical security. Logical access controls should also be reviewed to ensure that access is granted on a least-privilege basis per the organization’s data owners.

Techniques for Identification and Authentication

In gaining access to information resources, the system must know who you are (identification) and verify that you are who you say you are (authentication). As a user gaining access, you provide a claimed identity (credentials), and the system authenticates those credentials before you have authorization to utilize the requested object. The most common form of identification is a login ID, in conjunction with a password (authentication), which is used to validate your (subject’s) identity. When you provide your credentials (login ID and password), the system can check you (subject) against the system or network you are trying to access (object) and verify that you are allowed access (authorization). The IT organization also should have a method of logging user actions while accessing objects within the information system, to establish accountability (linking individuals to their activities).

Note

Access to an information resource involves three steps, in this order:

  1. Identification

  2. Authentication

  3. Authorization

The most common form of authentication includes the use of passwords, but authentication can take three forms:

  • Something you know—A password.

  • Something you have—A token, ATM bank card, or smart card.

  • Something you are—Unique personal physical characteristic(s) (biometrics). These include fingerprints, retina scans, iris scans, hand geometry, and palm scans.

These forms of authentication can be used together. Using two or more together is known as strong or multifactor authentication; using exactly two is two-factor authentication. Two-factor authentication is commonly used with ATMs. To access your account at an ATM, you need two of the three forms of authentication: when you walk up to the ATM, you insert your ATM card (something you have), and the ATM prompts you for your PIN (something you know). In this instance, you have used two-factor authentication to access your bank account.


As stated earlier, passwords are the most common form of authentication. Unfortunately, they are also the weakest. Passwords should be implemented in such a way that they are easily remembered but hard to guess. If passwords are initially allocated by an administrator or owner, they should be randomly generated and assigned on an individual basis. If user account and password information is shared between users, all individual accountability for actions performed under the authority of the shared username is lost. This is especially critical in a transaction-based environment, such as within financial institutions.

In addition to using randomly generated passwords, administrators should implement alert thresholds within systems to detect and act upon failed login events. An alert threshold ensures that if a password is entered incorrectly a predefined number of times, the login ID associated with the password is automatically disabled, either for a specific period of time or permanently. As an IS auditor, you will typically see such a threshold set to 3 incorrect password attempts, after which the account is disabled either for a specific period of time (such as 30 minutes) or permanently, in which case the user must contact the security administrator to reactivate it. Terminating access after three unsuccessful logon attempts is a common best practice for preventing unauthorized dial-up access.
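
A minimal sketch of such a threshold follows, using the values from the text (three attempts, a 30-minute lockout). The in-memory storage and function names are hypothetical; a real system would persist this state and log each event.

  import time

  THRESHOLD = 3                 # incorrect attempts before the ID is disabled
  LOCKOUT_SECONDS = 30 * 60     # disable for 30 minutes

  failed = {}   # login ID -> consecutive failure count
  locked = {}   # login ID -> lockout expiry timestamp

  def record_failure(login_id):
      failed[login_id] = failed.get(login_id, 0) + 1
      if failed[login_id] >= THRESHOLD:
          locked[login_id] = time.time() + LOCKOUT_SECONDS

  def is_locked(login_id):
      return time.time() < locked.get(login_id, 0)

  for _ in range(3):
      record_failure("jsmith")
  print(is_locked("jsmith"))    # True: disabled until the window expires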

In generating user accounts and passwords, the administrator should have policies regarding password length, how often passwords are required to be changed, and the password lockout policies. As an example, administrators might create user accounts that automatically expire on a predetermined date. This is an effective control for granting temporary access to vendors and external support personnel. Administrators also should ensure that all passwords created are known only to the user. Users should have the authorization to create and change their own passwords.

As a common form of authentication, passwords can be subject to attacks (either internal or external). A common form of password attack is the dictionary attack, in which an individual uses a dictionary of common words and a program to guess passwords. The dictionaries and programs are widely available on the Internet and are easy to use. The attack program tries each of the words from the dictionary in sequence to guess the password of the login ID being attacked. Security administrators can mitigate the risk associated with dictionary attacks by enforcing password complexity, along with failed-logon lockout policies, minimum password length, and periodic password changes. When enforcing password complexity, administrators should extend the required length of passwords (six or more characters) and require numeric characters, uppercase and lowercase letters, and special characters.
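
The complexity rules just described can be expressed as a short validation routine. This sketch uses the six-character minimum from the text; the exact policy values are illustrative.

  import re

  def is_complex(password, min_length=6):
      # Length plus all four character classes, per the policy described above.
      return (len(password) >= min_length
              and re.search(r"[a-z]", password) is not None
              and re.search(r"[A-Z]", password) is not None
              and re.search(r"[0-9]", password) is not None
              and re.search(r"[^A-Za-z0-9]", password) is not None)

  print(is_complex("password"))    # False: a single character class
  print(is_complex("pA55w0rd!"))   # True: mixed case, digits, special character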

Different types of passwords exist, depending on the implementation. In some systems, the passwords are user created; others use cognitive passwords. A cognitive password uses fact- or opinion-based information to verify an individual’s identity. Cognitive passwords are commonly used today as security questions associated with an account, in case the user has forgotten the password. During the creation of the user account, a system that uses cognitive passwords might ask one or more security questions: What is your mother’s maiden name? What is the name of your favorite pet? What elementary school did you attend? The user chooses a question and provides the answer, which is stored in the system. If the user forgets the password, the system asks the security question. If it is answered correctly, the system resets the password or sends the existing password via email.

Another type of password is a one-time, or dynamic, password. One-time passwords provide maximum security because a new password is required for each login. Conversely, a static password is the same for each login. One-time passwords are usually used in conjunction with a token device, which is essentially a password generator. The token can be either synchronous or asynchronous. When using a synchronous token, the generation of the password can be timed (the password changes every n seconds or minutes) or event driven (the password is generated on demand with a button). The use of token-based authentication generally incorporates something you know (password) combined with something you have (token) to authenticate. A token device that uses asynchronous authentication uses a challenge-response mechanism to authenticate. In this scenario, the system displays a challenge to the user, which the user then enters into the token device. The token device returns a different value. This value then is entered into the system as the response to be authenticated.
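
The time-synchronized variant can be sketched as follows: both the token and the server derive the password from a shared secret and the current time step, so the value changes every interval. Real tokens (and the TOTP standard, RFC 6238) add truncation rules and clock-drift handling; the secret and step size here are illustrative.

  import hashlib, hmac, time

  SECRET = b"shared-token-seed"   # provisioned into both the token and the server
  STEP = 60                       # the password changes every 60 seconds

  def one_time_password(secret, at=None):
      # Both sides compute an HMAC over the current time step.
      counter = int((time.time() if at is None else at) // STEP)
      digest = hmac.new(secret, str(counter).encode(), hashlib.sha1).hexdigest()
      return digest[:6]           # short code displayed on the token

  print(one_time_password(SECRET))   # token and server derive the same value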

Passwords are used to authenticate users to provide access and authorization. They are the mechanism that allows subjects to access objects within the system. To provide authorization to objects, those objects need to have defined owners that classify the objects or data. Establishing data ownership is an important first step for properly implementing data classification. The data owners are ultimately responsible and accountable for access control of data. Data owners should require written authorization for users to gain access to objects or data. Security administrators should work with the data owners to identify and implement access rules stipulating which users or group of users are authorized to access data or files, along with the level of authorized access (read or update).

Information systems security policies are used as the framework for developing logical access controls. Information systems security policy should be developed and approved by the top management and then should be implemented utilizing access-control lists, password management, and systems configuration files. In addition, data owners might use file encryption to protect confidential data residing on a PC. As stated earlier, authorization of access to objects or data is based on least privilege (need to know) and should incorporate proper segregation of duties. As an example, if a programmer has update access to a live system, IS auditors are more concerned with the programmer’s capability to initiate or modify transactions and the capability to access production than the programmer’s capability to authorize transactions.


Network Infrastructure Security

As an IS auditor performing detailed network assessments and access control reviews, you first must determine the points of entry to the system and then must review the associated controls. Per ISACA, the following are controls over the communication network:

  • Network control functions should be performed by technically qualified operators.

  • Network control functions should be separated, and the duties should be rotated on a regular basis, when possible.

  • Network-control software must restrict operators from performing certain functions (such as the capability to amend or delete operator activity logs).

  • Operations management should periodically review audit trails, to detect any unauthorized network operations activities.

  • Network operations standards and protocols should be documented, made available to the operators, and periodically reviewed to ensure compliance.

  • Network access by the system engineers should be closely monitored and reviewed to detect unauthorized access to the network.

  • Analysis should be performed to ensure workload balance, fast response time, and system efficiency.

  • The communications software should maintain a terminal identification file, to check the authentication of a terminal when it tries to send or receive messages.

  • When appropriate, data encryption should be used to protect messages from disclosure during transmission.


As stated in Chapter 3, the firewall is a secured network gateway. The firewall protects the organization’s resources from unauthorized users (internal or external). As an example, firewalls are used to prevent unauthorized users (usually external) from gaining access to an organization’s computer systems through the Internet gateway. A firewall can also be used as an interface to connect authorized users to private trusted network resources. Chapter 3 discussed the implementation of a firewall that works closely with a router to filter all network packets, to determine whether to forward them toward their destination. The router can be configured with outbound traffic filtering that drops outbound packets containing source addresses from other than the user’s organization. Firewalls and filtering routers can be configured to limit services not allowed by policy and can help prevent misuse of the organization’s systems. An example of misuse associated with outbound packets is a distributed denial-of-service (DDoS) attack. In this type of attack, unauthorized persons gain access to an organization’s systems and install a denial-of-service (DoS) program that is used to launch an attack against other computers: a large number of compromised hosts run DoS server programs that await commands from a central client (the unauthorized user). The central DDoS client then sends a message to all the DDoS server programs instructing them to send as much traffic as they can to the target system. In this scenario, the DDoS program distributes the work of flooding the target among all available DoS servers, creating a distributed denial of service. An application gateway firewall can be configured to block application traffic such as FTP from entering the organization’s network.


A screened-subnet firewall can be used to create a demilitarized zone (DMZ). This type of firewall utilizes a bastion host that is sandwiched between two packet-filtering routers and is the most secure firewall system. This type of firewall system supports both network and application-level security, while defining a separate demilitarized zone network.


Employees of the organization, as well as partners and vendors, can connect through a dial-up system to get access to organizational resources. One of the methods for authenticating such users is a callback system, which verifies that users are who they say they are by calling back a predefined number to establish the connection. An authorized user first dials in to a remote server; the server then disconnects and dials back to the user’s machine, using a telephone number from its database selected according to the supplied user ID and password. It should be noted, however, that callback security can easily be defeated through simple call forwarding.


Encryption Techniques

The use of encryption enables companies to digitally protect one of their most valuable assets: information. The organization’s information system contains and processes intellectual property, including organizational strategy, customer lists, and financial data. In fact, the majority of this information, as well as the transactions associated with it, is stored digitally. This environment requires companies to use encryption to protect the confidentiality and the integrity of information. Organizations should utilize encryption services to ensure reliable authentication of messages, the integrity of documents, and the confidentiality of information that is transmitted and received.

Cryptography is the art and science of hiding the meaning of communication from unintended recipients by encrypting plain text into cipher text. The process of encryption and decryption is performed by a cryptosystem that uses mathematical functions (algorithms) and a special password called a key.

Encryption is used to protect data while in transit over networks, protect data stored on systems, deter and detect accidental or intentional alterations of data, and verify the authenticity of a transaction or document. In other words, encryption provides confidentiality, authenticity, and nonrepudiation. Nonrepudiation provides proof of the origin of data and protects the sender against a false denial by the recipient that the data has been received, or to protect the recipient against false denial by the sender that the data has been sent.

The strength of a cryptosystem lies in the attributes of its key components. The first component is the algorithm, a mathematical function that performs encryption and decryption. The second component is the key that is used in conjunction with the algorithm; each key makes the encryption/decryption process unique. To decrypt a message that has been encrypted, the receiver must use the correct key; if an incorrect key is used, the message is unreadable. The predetermined key length is important in reducing the possibility of a brute-force attack against an encrypted message. The longer the key, the more difficult it is to break the encryption, because of the amount of computation required to try all possible key combinations (the work factor). Cryptanalysis is the science of studying and breaking the secrecy of encryption algorithms and their necessary pieces. The work factor involved in brute-forcing encrypted messages depends significantly on the computing power of the machines performing the attack.


As an example, the Data Encryption Standard (DES) was selected as an official cipher (method of encrypting information) under the Federal Information Processing Standards (FIPS) for the United States in 1976. When introduced, DES used a 56-bit key length. It is now considered insecure for many applications because DES keys have been broken in less than 24 hours. A 24-hour time frame to break a cryptographic key is considered a very low work factor. In 1998, the Electronic Frontier Foundation (EFF) spent approximately $250,000 to create a DES cracker and show that DES was breakable. The machine brute-forced a DES key in a little more than two days, proving that the work factor involved was small and that DES was, therefore, insecure. Fortunately, a version of DES named Triple DES (3DES) uses a 168-bit key (three 56-bit keys) and provides greater security than its predecessor. As a point of interest, you should note that the U.S. federal government has ended support for the DES cryptosystem in favor of the newer Advanced Encryption Standard (AES).
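
The effect of key length on work factor is easy to see with rough arithmetic. Assuming, purely for illustration, hardware that tests one hundred billion keys per second:

  keys_per_second = 100e9          # assumed brute-force rate, for illustration

  des_seconds = 2 ** 56 / keys_per_second
  aes_seconds = 2 ** 128 / keys_per_second

  print(des_seconds / 86400)              # ~8 days to try every 56-bit DES key
  print(aes_seconds / 86400 / 365.25)     # ~1e20 years for a 128-bit AES key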

The cryptographic algorithms use either symmetric keys or asymmetric keys. Symmetric keys are also known as secret keys or shared secret keys because both parties in a transaction use the same key for encryption and decryption. The ability of users to keep the key secret is one of the weaknesses in a symmetric key system. If a key is compromised, all messages using this key can be decrypted. In addition, the secure delivery of keys poses a problem when adding new devices or users to a symmetric key system. Acceptable methods of delivery can include placing the key on a floppy and hand delivering it or delivering the key through the use of a secure courier or via postal mail. Protecting the exchange of symmetric shared keys through the use of asymmetric or hybrid cryptosystems is another option that is described in more detail later in this chapter.

A variety of symmetric encryption algorithms exists, as shown in Table 4.1.

Table 4.1. Symmetric Encryption Algorithms

  Algorithm                                        Notes
  Data Encryption Standard (DES)                   Low work factor (has been broken); provides confidentiality but not nonrepudiation
  Advanced Encryption Standard (AES)               High work factor; provides confidentiality but not nonrepudiation
  International Data Encryption Algorithm (IDEA)   High work factor; provides confidentiality but not nonrepudiation
  Rivest Cipher 5 (RC5)                            High work factor; provides confidentiality but not nonrepudiation

Symmetric keys are fast because the algorithms are not burdened with providing authentication services, and they are difficult to break if they use a large key size. However, symmetric keys are more difficult to distribute securely.

Figure 4.3 shows the symmetric key process. Both the sending and receiving parties use the same key.


Figure 4.3. Symmetric encryption process.

Symmetric encryption’s security is based on how well users protect the shared secret key. If that key is compromised, all messages encrypted with it can be decrypted by an unauthorized third party. The advantage of symmetric encryption is speed.
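
As a sketch, symmetric encryption and decryption with one shared key might look like the following, using the third-party Python cryptography package (its Fernet recipe wraps AES). The message text is illustrative.

  from cryptography.fernet import Fernet   # pip install cryptography

  key = Fernet.generate_key()   # the shared secret both parties must protect
  cipher = Fernet(key)

  token = cipher.encrypt(b"wire transfer: account 1234, amount 500")
  print(cipher.decrypt(token))  # succeeds only with the same shared key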

Asymmetric Encryption (Public-Key Cryptography)

In using symmetric key encryption, a single shared secret key is used between parties. In asymmetric encryption, otherwise known as public-key cryptography, each party has a respective key pair. These asymmetric keys are mathematically related and are known as public and private keys. When messages are encrypted by one key, the other key is required for decryption. Public keys can be shared and are known to everyone, hence the definition public. Private keys are known only to the owner of the key. These keys make up the key pair in public key encryption.

Before public-key cryptography can be used, the sender and the recipient need to exchange their public keys. If a sender wants to prove that a message came from him, he encrypts the message using his private key (known only to him), and the recipient decrypts the message using the sender’s public key (known to everyone). Because the keys are mathematically linked, the recipient is assured that the message truly came from the original sender. This is known as authentication, or authenticity, because the sender is the only party who holds the private key that can encrypt content so that it decrypts with the sender’s public key. Keep in mind that anyone who has the sender’s public key can decrypt the message at this point, so this initial encryption does not provide confidentiality. If the sender wants the message to be confidential, he should then re-encrypt the message using the recipient’s public key. This requires the recipient to use his own private key (known only to him) to initially decrypt the message and then to use the sender’s public key to decrypt the remainder. In this scenario, the sender is assured that only the recipient can decrypt the message (protecting confidentiality), and the recipient is assured that the message came from the sender (proof of authenticity). This type of data encryption provides both message confidentiality and authentication.


The following is a review of basic asymmetric encryption flow:

  1. A clear-text message is encrypted by the sender with the sender’s private key, to ensure authenticity only.

  2. The message is re-encrypted with the recipient’s public key, to ensure confidentiality.

  3. The message is initially decrypted by the recipient using the recipient’s own private key, rendering a message that remains encrypted with the sender’s private key.

  4. The message is then decrypted by the recipient using the sender’s public key. If this is successful, the receiver can be sure that the message truly came from the original sender.

Figure 4.4 outlines asymmetric encryption to ensure both authenticity and confidentiality.


Figure 4.4. Asymmetric encryption process.
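
The confidentiality half of this flow can be sketched with the same third-party cryptography package: the message is encrypted with the recipient’s public key, so only the recipient’s private key can recover it. (In practice, libraries provide the authenticity half through signing rather than raw private-key encryption, as covered under digital signatures later in this chapter.) The message text is illustrative.

  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding, rsa

  recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  recipient_public = recipient_private.public_key()   # shared with everyone

  oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  ciphertext = recipient_public.encrypt(b"for the recipient only", oaep)
  print(recipient_private.decrypt(ciphertext, oaep))  # only this key succeeds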

The advantages of an asymmetric key encryption system are the ease of secure key distribution and the capability to provide authenticity, confidentiality, and nonrepudiation. The disadvantages of asymmetric encryption systems are the increase in overhead processing and, therefore, cost.


A variety of asymmetric encryption algorithms are used, as shown in Table 4.2.

Table 4.2. Asymmetric Encryption Algorithms

  Algorithm                           Use                             Notes
  Rivest, Shamir, Adleman (RSA)       Encryption, digital signatures  Security comes from the difficulty of factoring the product of two large prime numbers.
  Elliptic Curve Cryptosystem (ECC)   Encryption, digital signatures  Rich mathematical structures are used for efficiency; ECC can provide the same level of protection as RSA with a smaller key size.
  Digital Signature Algorithm (DSA)   Digital signatures              Security comes from the difficulty of computing discrete logarithms in a finite field.


We have examined both symmetric and asymmetric cryptography, and each has advantages and disadvantages. Symmetric cryptography is fast, but if the shared secret key is compromised, the encrypted messages might be compromised, and there are challenges in distributing the shared secret keys securely. Asymmetric cryptography provides authenticity, confidentiality, and nonrepudiation but requires higher overhead because processing is slower. Combining the two methods in a hybrid approach lets us use public-key cryptography to securely exchange a symmetric session key and then use the faster symmetric key to encrypt the bulk of the communication.

Public and private key cryptography use algorithms and keys to encrypt messages. In private (shared) key cryptography, there are significant challenges in distributing keys securely. In public key cryptography, the challenge lies in ensuring that the owner of the public key is who he says he is, and in providing trusted notification when a key becomes invalid because of compromise.

Public Key Infrastructure (PKI)

A public key infrastructure (PKI) incorporates public key cryptography, security policies, and standards that enable key maintenance (including user identification, distribution, and revocation) through the use of certificates. The goal of PKI is to answer the question “How do I know this key is truly your public key?” PKI provides access control, authentication, confidentiality, nonrepudiation, and integrity for the exchange of messages through use of Certificate Authorities (CA) and digital certificates. PKI uses a combination of public-key cryptography and digital certificates to provide some of the strongest overall control over data confidentiality, reliability, and integrity for Internet transactions.

The CA maintains, issues, and revokes public key certificates, which ensure an individual’s identity. If a user (Randy) receives a message from Waylon that contains Waylon’s public key, he can request authentication of Waylon’s key from the CA. When the CA has responded that this is Waylon’s public key, Randy can communicate with Waylon, knowing that he is who he says he is. The other advantage of the CA is the maintenance of a certificate revocation list (CRL), which lists all certificates that have been revoked. Certificates can be revoked if the private key has been compromised or the certificate has expired. As an example, imagine that Waylon found that his private key had been compromised and had a list of 150 people to whom he had distributed his public key. He would need to contact all 150 and tell them to discard the existing public key they had for him, and then distribute a new public key to everyone he communicates with. In using PKI, Waylon could instead contact the CA, provide a new public key (establishing a new certificate), and place the old public key on the CRL. This is a more efficient way to deal with key distribution because a central authority provides key maintenance services.


The certificates used by the CAs incorporate identity information, certificate serial numbers, certificate version numbers, algorithm information, lifetime dates, and the signature of the issuing authority (CA). The most widely used certificate types are the Version 3 X.509 certificates. The X.509 certificates are commonly used in secure web transactions via Secure Sockets Layer (SSL).

An X.509 certificate includes the lifetime dates, who the certificate is issued to, who the certificate is issued by, and the communication and encryption protocols that are used. Digital certificates are considered the most reliable sender-authentication control. In this way, PKI provides nonrepudiation services for e-commerce transactions: an e-commerce hosting organization uses asymmetric encryption along with digital certificates to verify the authenticity of the organization and its transaction communications for its customers.
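
As an illustration, the fields of a live certificate can be inspected with Python’s standard ssl module and the third-party cryptography package; example.com stands in for any TLS-enabled host.

  import ssl
  from cryptography import x509   # pip install cryptography

  pem = ssl.get_server_certificate(("example.com", 443))
  cert = x509.load_pem_x509_certificate(pem.encode())

  print(cert.subject)             # who the certificate is issued to
  print(cert.issuer)              # who issued it (the CA)
  print(cert.not_valid_before)    # lifetime dates
  print(cert.not_valid_after)
  print(cert.serial_number)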

A certificate authority (CA) can delegate the processes of establishing a link between the requesting entity and its public key to a Registration Authority (RA). An RA performs certification and registration duties to offload some of the work from the CA. The RA can confirm individual identities, distribute keys, and perform maintenance functions, but it cannot issue certificates. The CA still manages the digital certificate life cycle, to ensure that adequate security and controls exist.

Digital Signature Techniques

Digital signatures provide integrity in addition to message source authentication because the digital signature of a signed message changes every time a single bit of the document changes. This ensures that a signed document cannot be altered without being detected. Depending on the mechanism chosen to implement a digital signature, the mechanism might be capable of ensuring data confidentiality or even timeliness, but this is not guaranteed.

A digital signature is a cryptographic method that ensures data integrity, authentication of the message, and nonrepudiation. The primary purpose of digital signatures is to provide authentication and integrity of data. In common electronic transactions, a digital signature is created by the sender to prove message integrity and authenticity by first using a hashing algorithm to produce a hash value, or message digest, from the entire message contents. The sender provides a mechanism to authenticate the message contents by encrypting the message digest using the sender’s own private key. If the recipient can decrypt the message digest using the sender’s public key, which has been validated by a third-party Certificate Authority, the recipient can rest assured that the message digest was indeed created by the original sender. Upon receiving the data and decrypting the message digest, the recipient can independently create a message digest from the data using the same publicly available hashing algorithm, for comparison and integrity validation.

The following is the flow of a digital signature:

  1. The sender and recipient exchange public keys:

    • These public keys are validated via a third-party Certificate Authority (CA).

    • A Registration Authority (sometimes separate from the CA) manages the certificate application and procurement procedures.

  2. The sender uses a digital signature hashing algorithm to compute a hash value of the entire message (called a message digest).

  3. The sender “signs” the message digest by encrypting it with the sender’s private key.

  4. The recipient validates authenticity of the message digest by decrypting it with the sender’s validated public key.

  5. The recipient then validates the message integrity by computing a message digest of the message and compares the message digest value to the recently decrypted message digest provided by the sender.
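
Steps 2 through 5 can be sketched with the third-party cryptography package, which hashes and signs in a single call; verify() raises an exception if the message or signature has been altered. The message text is illustrative.

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding, rsa

  sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  sender_public = sender_private.public_key()   # exchanged in step 1

  message = b"ship 400 units to the Omaha warehouse"
  pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH)

  signature = sender_private.sign(message, pss, hashes.SHA256())   # steps 2-3

  try:
      sender_public.verify(signature, message, pss, hashes.SHA256())  # steps 4-5
      print("authentic and unaltered")
  except InvalidSignature:
      print("message or signature has been tampered with")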


A key distinction between encryption and hashing algorithms is that hashing algorithms are irreversible: they are publicly known one-way functions that are never used in reverse. A message digest is the result of running a one-way hash that creates a fingerprint of the message. The sender runs a hash against the message to produce the message digest, and the receiver runs the same hash to produce a second message digest; if the two digests differ, the message has been altered. The sender can use a digital signature to provide message authentication, integrity, and nonrepudiation by first creating a message digest from the entire message using an irreversible hashing algorithm and then “signing” the message digest by encrypting it with the sender’s private key. Confidentiality is added by then re-encrypting the message with the recipient’s public key.
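
The digest comparison is simple to demonstrate with Python’s standard hashlib: changing a single character of an (illustrative) message produces a completely different fingerprint, and nothing in the module can run the hash in reverse.

  import hashlib

  original = b"pay vendor invoice #1077 for $1,500.00"
  altered  = b"pay vendor invoice #1077 for $9,500.00"

  print(hashlib.sha256(original).hexdigest())
  print(hashlib.sha256(altered).hexdigest())   # one changed digit, entirely different digest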


Within the Digital Signature Standard (DSS), the RSA and Digital Signature Algorithm (DSA) are the most popular algorithms.

Each component of cryptography provides a separate function. An encrypted message provides confidentiality. If the message contains a digital signature, the signature provides assurance of the message’s authenticity and integrity. For example, a digital signature that encrypts a message digest with the sender’s private key provides strong assurance of message authenticity and integrity.

Network and Internet Security

As an IS auditor, you need to understand network connectivity, security, and encryption mechanisms on the organization’s network. The use of layered security (also known as defense-in-depth) reduces the risks associated with the theft of or damage to computer systems, data, or the organization’s network. Proper security policies and procedures, combined with strong internal and external access-control mechanisms, reduce risk to the organization and ensure the confidentiality, integrity, and availability of services and data.

As stated earlier in Chapter 3, firewalls can be used to protect the organization’s assets against both internal and external threats. Firewalls can be used as perimeter security between the organization and the Internet, to protect critical systems and data from external hackers or internally from untrusted users (internal hackers).

Per ISACA, organizations that have implemented firewalls face these problems:

  • A false sense of security, with management feeling that no further security checks or controls are needed on the internal network (even though the majority of incidents are caused by insiders, who are not controlled by firewalls).

  • Users might circumvent firewalls by using modems to connect directly to Internet service providers. Management should ensure that the use of modems when a firewall exists is strictly controlled or prohibited altogether.

  • Misconfigured firewalls might allow unknown and dangerous services to pass through freely.

  • What constitutes a firewall might be misunderstood (companies claiming to have a firewall might have merely a screening router).

  • Monitoring activities might not occur on a regular basis (for example, log settings might not be appropriately applied and reviewed).

  • Firewall policies might not be maintained regularly.

An initial step in creating a proper firewall policy is identifying network applications, such as mail, web, or FTP servers to be externally accessed. When reviewing a firewall, an IS auditor should be primarily concerned with proper firewall configuration, which supports enforcement of the security policy.

Working in concert with firewalls are the methods of access and encryption of data and user sessions on the network. A vast majority of organizations have users who are geographically dispersed, who work from home, or who travel as part of their job (road warriors). In addition, organizations allow vendors, suppliers, or support personnel access to their internal network. Virtual private networks (VPNs) provide a secure and economical method for WAN connectivity: remote access is provided across public networks, with the traffic encapsulated and encrypted.

VPNs use a combination of tunneling encapsulation and encryption to ensure communication security. The protocols used to provide secure connectivity might vary by vendor and implementation. A tunneling protocol creates a virtual path through public and private networks. Network protocols such as IPSec often encrypt and encapsulate data at the OSI network layer.


PPTP is a protocol that provides encapsulation between a client and a server. PPTP works at the data link layer of the OSI model and provides encryption and encapsulation over the private link. Because it is designed to work from a client to a server, it sets up a single connection and transmits only over IP networks. In negotiating a PPTP connection, the client initiates a connection to a network either by using dial-in services or by coming across the Internet. A weakness of PPTP is that the initial negotiation of IP address, username, and password is sent in clear text (not encrypted); only after the connection is established is the remainder of the communication encapsulated and encrypted. This weakness might allow unauthorized parties to use a network sniffer to view the initial negotiation passed in the clear.


IPSec works at the network layer and protects and authenticates packets through two modes: transport and tunnel. IPSec transport mode encrypts only the data portion of each packet, not the header. For more robust security, tunnel mode encrypts both the original header and the data portion of the packet. IPSec supports only IP networks and can handle multiple connections at the same time.

In addition to protocols associated with establishing private links, tunneling, and encrypting data, protocols are used to facilitate secure web and client/server communication. The Secure Sockets Layer (SSL) protocol provides confidentiality through symmetric encryption such as the Data Encryption Standard (DES) and is an application/session-layer protocol used for communication between web browsers and servers. When a session is established, SSL achieves secure authentication and integrity of data through the use of a public key infrastructure (PKI). The services provided by SSL ensure confidentiality, integrity, authenticity, and nonrepudiation. SSL is most commonly used in e-commerce transactions to provide security for all transactions within the HTTP session.
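
A protected session of this kind can be sketched with Python’s standard ssl module (modern TLS, the successor to SSL); the default context validates the server’s certificate against trusted CAs before any application data is exchanged. example.com is a placeholder host.

  import socket
  import ssl

  context = ssl.create_default_context()   # trusted CAs and hostname checking

  with socket.create_connection(("example.com", 443)) as raw:
      with context.wrap_socket(raw, server_hostname="example.com") as tls:
          print(tls.version())                 # negotiated protocol, e.g., TLSv1.3
          print(tls.cipher())                  # negotiated cipher suite
          print(tls.getpeercert()["subject"])  # authenticated server identity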

The complexity associated with the implementation of encryption and secure transmission protocols requires the IT organization to pay careful attention to ensure that the protocols are being configured, implemented, and tested properly. In addition, careful attention should be paid to the secrecy and length of keys, as well as the randomness of key generation.

Security Software

Intrusion-detection systems (IDS) are used to gather evidence of system or network attacks. An IDS can be either signature based or statistical anomaly based. Generally, statistical anomaly–based IDSs are more likely to generate false alarms. A network-based IDS works in concert with the routers and firewalls by monitoring network usage to detect anomalies at different levels within the network.

The first type of IDS is a network-based IDS. Network IDSs are generally placed between the firewall and the internal network, and on every sensitive network segment to monitor traffic looking for attack patterns or suspicious activity. If these patterns are recognized, the IDS alerts administrators or, in later generations of IDS, protects the network by denying access to the attacking addresses or dropping all packets associated with the attack. Host-based IDSs operate on a host and can monitor system resources (CPU, memory, and file system) to identify attack patterns.

The latest generation of IDSs can detect either misuse or anomalies by gathering and analyzing information and comparing it to large databases of attack signatures. In this case, the specific attack or misuse must have already occurred and been documented, so this type of IDS is only as good as its database of attack signatures. If the IDS uses anomaly detection, the administrator should identify and document (within the IDS) a baseline; when the IDS detects patterns that fall outside the baseline (anomalies), it performs a defined action (alerting, stopping traffic, or shutting down applications or network devices). If the IDS is not baselined or configured correctly, it can generate false positives. A false positive occurs when the system detects and alerts on activity that is not actually an attack. If a system generates a high number of false positives, the risk is that the alerts will be ignored or that the particular rule associated with the alert will be turned off completely. A passive IDS detects potential anomalies, logs the information, and alerts administrators. A reactive IDS takes direct action to protect assets on the network; these actions can include dropping packets from the attacking IP address, reprogramming the firewall to block the offending traffic (or all traffic), or shutting down devices or applications that are under attack.
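
A toy sketch of the statistical approach follows: learn a baseline of normal traffic, then flag observations more than three standard deviations above it. The numbers are invented, and tightening the threshold trades missed detections for the false positives discussed above.

  import statistics

  baseline = [120, 115, 130, 125, 118, 122, 127]   # normal requests per minute
  mean = statistics.mean(baseline)
  stdev = statistics.stdev(baseline)

  def is_anomalous(observed, sigmas=3.0):
      return observed > mean + sigmas * stdev

  print(is_anomalous(128))   # False: within the baseline
  print(is_anomalous(900))   # True: alert, or block traffic in a reactive IDS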


The firewall and IDS work together to achieve network security. The firewall can be viewed as a preventative measure because the firewall limits the access between networks to prevent intrusion but does not signal an attack. An IDS evaluates a suspected intrusion after it has taken place and sends an alert.

Single sign-on (SSO) systems are used to centralize authentication and authorization access within an information system. With specialization of applications, the average user accesses multiple applications while performing his duties. Some applications might allow users to authenticate once and access multiple applications (usually from the same vendor), but most do not. SSO allows users to authenticate once, usually with a single login ID and password, and get authorization to work on multiple applications. SSO can apply to one network or can span multiple networks and applications; when it spans organizational boundaries, it is sometimes referred to as federated identity management. When implementing a single sign-on system, the organization must ensure that the authentication systems are redundant and secure.

Single sign-on authentication systems are prone to the vulnerability of having a single point of failure for authentication. In addition, if all the users internal and external to the organization and their authorization rights are located in one system, the impact of compromised authentication and subsequent unauthorized access is magnified. If one user within a single sign-on system or directory is compromised, all users, passwords, and access rights might have been compromised.
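One way to picture centralized authentication is as a token service: the user authenticates once, receives a signed assertion, and each participating application verifies the signature instead of storing its own credentials. The sketch below uses an HMAC-signed token; the names, lifetime, and shared-secret arrangement are assumptions for illustration, not a description of any particular SSO product.

```python
# Toy single sign-on token: the central authenticator signs a claim once,
# and each participating application verifies the signature instead of
# re-authenticating the user. Names and secret handling are illustrative.
import hashlib
import hmac
import time

SHARED_SECRET = b"demo-only-secret"  # hypothetical key shared with apps

def issue_token(user_id: str) -> str:
    """Central authentication service: sign 'user|expiry' after login."""
    payload = f"{user_id}|{int(time.time()) + 3600}"  # valid for one hour
    sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str) -> bool:
    """Any participating application: accept the token without a password."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    user_id, expiry = payload.split("|")  # user_id available for authorization
    return hmac.compare_digest(sig, expected) and time.time() < int(expiry)

token = issue_token("jsmith")      # one login...
print(verify_token(token))         # ...honored by application A: True
print(verify_token(token + "x"))   # tampered token rejected: False
```

The sketch also makes the single-point-of-failure risk described above tangible: anyone who obtains SHARED_SECRET can mint valid tokens for every participating application.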

Voice Communications Security

Most people use the phone in day-to-day business and do not think about the security required within the telecommunications network. In fact, for many years, both telecommunications companies and organizations focused on availability and gave little consideration to integrity and confidentiality. When someone places a phone call from home or work, the call moves through any number of telephone switches before reaching its destination. These switches connect businesses within cities, cities to states, and countries to countries.

One of the systems in use in most businesses is the Private Branch Exchange (PBX). The PBX is similar to the telecommunications company switches, in that it routes calls within the company and passes calls to the external telecommunication provider. Organizations might have a variety of devices connected to the PBX, including telephones, modems (remote-access and vendor-maintenance lines), and computer systems. The lack of proper controls within the PBX and associated devices increases both unauthorized access vulnerabilities and outages (availability) in the organization’s voice telecommunications network.

These vulnerabilities include the following:

  • Theft of service—An example is toll fraud, in which attackers gain access to the PBX to make “free” phone calls.

  • Disclosure of information—Organizational data is disclosed without authorization, through either malicious acts or error. Telephone conversations might be intentionally or unintentionally overheard by unauthorized individuals, or access might be gained to telephone routing or address data.

  • Information modification—Organizational data contained within the PBX or a system connected to the PBX might be altered through deletion or modification. An unauthorized person might alter billing information or modify system information to gain access to additional services.

  • Unauthorized access—Unauthorized users gain access to system resources or privileges.

  • Denial of service—Unauthorized persons intentionally or unintentionally prevent the system from functioning as intended.

  • Traffic analysis—Unauthorized persons observe information about calls and make informed guesses based on source and destination numbers or call frequency. As an example, an unauthorized person might see a high volume of calls between the CEO of an organization and a competitor’s CEO or legal department, and infer that the organizations are looking to merge.

To reduce the risk associated with these vulnerabilities, administrators should remove all default passwords from the PBX system and ensure that access control within the system applies the rule of least privilege. All modems associated with maintenance should be disabled unless they are needed, and modems that employees use for remote access should employ additional hardware or software for access control. All phone numbers that are not in use should be disabled, and users who need to access voice mail should have a password policy that requires the use of strong passwords and periodic password changes. In addition, administrators should enable logging on the system and review both the PBX access and telephone call logs periodically.

Environmental Protection Practices and Devices

Environmental controls mitigate the risks associated with naturally occurring events. The most common of these are power sags, spikes, surges, and reduced voltage, but they also include tornadoes, hurricanes, earthquakes, floods, and other types of weather conditions. Per ISACA, power failures can be grouped into four distinct categories, based on the duration and relative severity of the failure (a short classification sketch follows this list):

  • Total failure—A complete loss of electrical power, which might affect anything from a single building up to an entire geographic area. This is often caused by weather conditions (such as a storm or earthquake) or the incapability of an electrical utility company to meet user demands (such as during summer months).

  • Severely reduced voltage (brownout)—The failure of an electrical utility company to supply power within an acceptable range (108–125 volts AC in the United States). Such failure places a strain on electrical equipment and could limit its operational life or even cause permanent damage.

  • Sags, spikes, and surges—Temporary and rapid decreases (sags) or increases (spikes and surges) in voltage levels. These anomalies can cause loss of data, data corruption, network transmission errors, or even physical damage to hardware devices such as hard disks or memory chips.

  • Electromagnetic interference (EMI)—Interference caused by electrical storms or noisy electrical equipment (such as motors, fluorescent lighting, or radio transmitters). This interference could cause computer systems to hang or crash, and could result in damage similar to that caused by sags, spikes, and surges.
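To make these categories concrete, the following sketch classifies a single voltage reading against the nominal U.S. range quoted above. The 108–125 VAC band comes from the text; treating every out-of-band reading as a sag/brownout or spike/surge is a simplifying assumption, because in practice the classification also depends on the duration of the event (and EMI is not a voltage-level event at all, so it is omitted).

```python
# Illustrative classifier for the power-failure categories above.
# The 108-125 VAC band comes from the text; everything else is a
# simplifying assumption for the example (real classification also
# considers event duration, and EMI is not a voltage-level event).
def classify_power_event(volts: float) -> str:
    if volts == 0:
        return "total failure"
    if volts < 108:
        return "sag (momentary) or brownout (sustained)"
    if volts > 125:
        return "spike or surge"
    return "normal (within 108-125 VAC)"

for reading in (0, 95, 117, 140):
    print(f"{reading:>3} V -> {classify_power_event(reading)}")
```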

To reduce the risks associated with power sags, spikes, and surges, the organization should deploy surge protectors for all electrical equipment. The additional implementation of an uninterruptible power supply (UPS) can provide enough power to either shut down systems gracefully in the event of a power failure or provide enough power to keep mission-critical systems operating until power returns. A UPS can be either implemented on a system-by-system basis (portable) or deployed as part of the overall IT infrastructure. A UPS contains batteries that continue to charge as the system has power and provides battery backup power in case of a failure. Generally, smaller portable UPS systems provide between 30 minutes and 3 hours of power; larger systems (a permanent UPS) can provide power for multiple days.

The organization can provide a complete power system, which would include the UPS, a power conditioning system (PCS), and a generator. The PCS is used to prevent sags, spikes, and surges from reaching the electrical equipment by conditioning the incoming power to reduce voltage deviations and provide steady-state voltage regulation. The PCS ensures that all power falls within acceptable levels for the electrical devices it is serving. The organization might employ a generator in concert with the UPS. In most cases, the generator and UPS are controlled by the same system, allowing the generator to power up when the battery power in the UPS falls below a certain threshold.

In addition to the issues surrounding electrical power, organizations must deploy environmental controls for the overall health of the hardware and software, as well as preventative, detective, and corrective measures in case of an emergency. Within the design of the IT infrastructure, the organization must determine the best place for the core servers and network devices. This location is sometimes referred to as the LAN room or computer room. It should be implemented with climate controls, fire-suppression systems, and power-control systems. The computer room should be located in a place that is not threatened by electromagnetic interference (EMI) or the possibility of flooding.

Electrical equipment must operate in climate-controlled facilities that ensure proper temperature and humidity levels. Relative humidity should be between 40% and 60%, and the temperature should be between 70°F and 74°F. Both extremely low and extremely high temperatures can cause electrical component damage. High humidity can cause corrosion in electrical components, reducing their overall efficiency or permanently damaging the equipment; low humidity can introduce static electricity, which can short out electrical components. Proper ventilation should be employed to maintain clean air free of contaminants. A positive pressurization system ensures that air flows out of instead of into the computer room. If you have ever entered a building and opened the door to feel the air pushing out toward you, you have entered a building that is positively pressurized. This pressurization ensures that contaminants from the outside do not flow into the room or building. Water detectors should be placed near drains in the computer room to detect water leaks and sound audible alarms.
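These ranges translate directly into a monitoring check. The sketch below flags hypothetical sensor readings that fall outside 40–60% relative humidity or 70–74°F; the function and the readings are illustrative, not part of any monitoring product.

```python
# Check computer-room climate readings against the ranges in the text:
# 40-60% relative humidity and 70-74 degrees Fahrenheit.
def climate_alerts(temp_f: float, humidity_pct: float) -> list[str]:
    alerts = []
    if not 70 <= temp_f <= 74:
        alerts.append(f"temperature {temp_f}F outside 70-74F")
    if humidity_pct < 40:
        alerts.append(f"humidity {humidity_pct}% below 40% (static risk)")
    elif humidity_pct > 60:
        alerts.append(f"humidity {humidity_pct}% above 60% (corrosion risk)")
    return alerts

# Hypothetical sensor readings.
print(climate_alerts(72, 45))   # [] -- within range
print(climate_alerts(78, 35))   # both alerts raised
```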

One of the most serious threats facing both computing equipment and people is fire. A variety of systems are available to prevent, detect, and suppress fire.

A number of fire-detection systems are activated by heat, smoke, or flame. These systems should provide an audible signal and should be linked to a monitoring system that can contact the fire department.

  • Smoke detectors—Placed both above and below the ceiling tiles. They use optical detectors that sense the change in light intensity when smoke is present.

  • Heat-activated detectors—Detect a rise in temperature. They can be configured to sound an alarm when the temperature exceeds a certain level.

  • Flame-activated detectors—Sense infrared energy or the light patterns associated with the pulsating flames of a fire.

Fire-suppression systems can be either automatic (chemical or water) or manual (fire extinguishers) and are designed to suppress fire using different methods. Table 4.3 outlines suppression agents and their method of extinguishing different types of fires.

Table 4.3. Fire-Suppression Agents

Suppression Agent    Used to Control                Method of Extinguishing
Water                Common combustibles            Reducing temperatures
CO2                  Liquid and electrical fires    Removing fuel and oxygen
Soda acid            Liquid and electrical fires    Removing fuel and oxygen
Gas                  Chemical fires                 Interfering with the chemical reaction necessary for fire

The following are automatic fire suppression systems:

  • Water sprinklers—These are effective in fire suppression, but they will damage electrical equipment.

  • Water dry pipe—A dry-pipe sprinkler system suppresses fire via water that is released from a main valve and delivered through a system of dry pipes that fill with water when the fire alarm activates the water pumps. Because the pipes remain empty until the alarm activates, a dry-pipe system reduces the risk of accidental water leakage.

    Water-based suppression systems are an acceptable means of fire suppression, but they should be combined with an automatic power shut-off system.

  • Halon—Pressurized halon gas is released, which interferes with the chemical reaction of a fire. Halon damages the ozone layer and, therefore, has been banned; replacement chemicals include FM-200, NAF SIII, and NAF PIII.

  • CO2—Carbon dioxide replaces oxygen. Although it is environmentally acceptable, it cannot be used in sites that are staffed because it is a threat to human life.

The threat of a fire can be mitigated through the use of detection and suppression systems, but personnel also should be properly trained on how to react in case of a fire. This should include the use of manual fire alarms, fire extinguishers, and evacuation procedures.

Physical Access

Physical security supports confidentiality, integrity, and availability by ensuring that the organization is protected from unauthorized persons accessing the physical facility. The type of physical security controls depends on the risk associated with the asset.

In auditing a facility, the IS auditor should ensure that there are physical access restrictions governing employees, visitors, partners/vendors, and unauthorized persons (intruders). All facilities associated with the organization, including off-site computing and storage facilities, should be reviewed. An organization’s facilities are similar to those of a city, in that there are different physical access controls based on the assets being protected. In a city, the physical access controls for a corner store, for instance, might include a lock and key, whereas a bank might employ physical security guards, lock and key, and additional stronger internal controls, such as a vault. This type of layered security can include administrative controls such as access policies, visitor logging, and controlled visitor access.

  • Access policies—Individuals internal and external to the organization are identified, along with the areas of the facility to which they are allowed access.

  • Visitor logging—Visitors provide identification and are signed into the facility; they indicate the purpose of their visit, their name, and their company.

  • Controlled visitor access—Individuals must be escorted by an employee while in the facility.


In addition to administrative controls, the facility might employ biometric access controls, physical intrusion detection (alarms, motion sensors, glass-break alarms, and so on), and electronic surveillance (cameras, electronic logging, and so on). Technical controls often provide the capability to create audit logs that show access attempts into the facility. Audit logs should include the point of entry, the date and time of access, the ID used during access, and both successful and unsuccessful access attempts. These logs should be reviewed periodically to ensure that only authorized persons are gaining access to the facility, and the review should note any modifications of access rights.

Similar to the review of the organization’s network, the IS auditor should review facilities to determine paths of physical entry and should evaluate those paths for the proper level of security. Access paths include external doors, glass windows, suspended ceilings (plenum space), and maintenance access panels and ventilation systems.

Physical Security Practices

The previous section discussed authentication methods that are used in gaining access to IT systems. Authentication can be in the form of something you know (passwords), something you have (a smart card), or something you are, that is, unique personal characteristics (fingerprints, retina patterns, iris patterns, hand geometry, and palm patterns). The “something you are” form of authentication is referred to as biometrics and involves authenticating an individual’s identity by a unique personal attribute. When implementing biometric systems, individuals provide a sample of a personal attribute (known as enrollment), such as a fingerprint, which is used for comparison when access is requested. Although biometrics provides only single-factor authentication, many consider it to be an excellent method for user authentication and an excellent physical access control.

A biometric system by itself is advanced and very sensitive. This sensitivity can make biometrics prone to error. These errors fall into two categories:

  • False Rejection Rate (FRR) Type I error—The rate at which authorized users are incorrectly rejected by the system.

  • False Acceptance Rate (FAR) Type II error—The rate at which unauthorized users are incorrectly accepted by the system.

Most biometric systems have sensitivity levels associated with them. When the sensitivity level is increased, the rate of rejection errors (FRR) increases: authorized users are rejected. When the sensitivity level is decreased, the rate of acceptance errors (FAR) increases: unauthorized users are accepted. Biometric devices use a comparison metric called the Equal Error Rate (EER), which is the rate at which the FAR and FRR are equal or cross over. In general, the lower the EER, the more accurate and reliable the biometric device. Organizations with a higher need for confidentiality are more concerned with a biometric access control’s False Acceptance Rate (FAR) than with its False Rejection Rate (FRR) or Equal Error Rate (EER).

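The crossover between the two error rates can be seen numerically. The sketch below sweeps a decision threshold across hypothetical match scores for genuine and impostor attempts, computes the FAR and FRR at each setting, and reports the threshold where the two rates are closest (the approximate EER). All scores are invented for illustration.

```python
# Illustrative FAR/FRR sweep for a biometric matcher. Scores are invented:
# higher score = better match. A user is accepted when score >= threshold.
genuine  = [0.91, 0.85, 0.88, 0.79, 0.95, 0.83, 0.90, 0.76]  # authorized users
impostor = [0.40, 0.55, 0.62, 0.48, 0.71, 0.35, 0.58, 0.66]  # unauthorized

def far(threshold):  # False Acceptance Rate: impostors accepted (Type II)
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(threshold):  # False Rejection Rate: genuine users rejected (Type I)
    return sum(s < threshold for s in genuine) / len(genuine)

# The EER is where FAR and FRR cross: raising the threshold (more
# sensitive) raises FRR, and lowering it raises FAR.
best = min((abs(far(t) - frr(t)), t) for t in [x / 100 for x in range(30, 100)])
t = best[1]
print(f"approx. EER at threshold {t:.2f}: FAR={far(t):.2f}, FRR={frr(t):.2f}")
```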

Intrusion Methods and Techniques

Most organizations today have opened their systems or a portion of their systems to partners, vendors, and the general public. The explosive growth of the Internet has enabled organizations to provide information, sell goods and services, exchange and update information, and transmit data between geographically dispersed offices. This “openness” provides the perfect opportunity for hackers or intruders to gain unauthorized access to private networks and data.

The terms hacker and cracker are commonly used today to describe individuals who use either social engineering or technical skills to gain unauthorized access to networks. In the not-too-distant past, a hacker was someone who was interested in the way things worked (such as computers and programs) and used skills to find out very detailed information on what made them tick. These individuals, called hackers, were not malicious, but curious. Today the term hacker refers to an individual who is trying to gain unauthorized access to or compromise the integrity and availability of computer systems and data. For the purposes of this book, we replace the term hacker with the term intruder because it is more appropriate. Intruders can be either internal or external to the organization and might try to gain access to systems with the intent of causing harm to the systems or data, invading others’ privacy, or stealing proprietary information. The IS auditor should understand both the internal and external risks to ensure that proper security controls are in place to protect the organization’s assets.

Passive and Active Attacks

Intruders have access to detailed instructions, tools, and methods via the Internet. Intruders use this collection of information and programs to gain a better understanding of an organization’s computer systems and network topology, and to circumvent access controls. Attack types include both passive and active attacks, and can be either internal or external to the organization’s network. Passive attacks are generally used to probe network devices and applications, in an attempt to learn more about the vulnerabilities of those systems. An intruder might utilize scanning tools, eavesdropping, and traffic analysis to create a profile of the network (a brief scanning sketch follows this list):

  • Scanning—This attack uses automated tools to scan systems and network devices, to determine systems that are on the network and network ports (services) that are listening on those systems.

  • Eavesdropping—In this attack, also known as sniffing or packet analysis, the intruder uses automated tools to collect packets on the network. These packets can be reassembled into messages and can include email, names and passwords, and system information.

  • Traffic analysis—In traffic analysis, an intruder uses tools capable of monitoring network traffic to determine traffic volume, patterns, and start and end points. This analysis gives intruders a better understanding of the communication points and potential vulnerabilities.
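As a minimal illustration of the scanning technique, the following sketch performs a simple TCP connect scan of a few well-known ports. The target address is hypothetical, and a scan like this should only ever be run against systems you are authorized to test; production scanners such as Nmap are far more capable.

```python
# Minimal TCP connect scan: reports which of a few well-known ports accept
# connections. Target address is hypothetical; scan only with authorization.
import socket

TARGET = "10.0.0.5"  # hypothetical host you are authorized to test

for port in (21, 22, 23, 25, 80, 443):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.5)
    # connect_ex() returns 0 when the connection succeeds (port listening).
    if sock.connect_ex((TARGET, port)) == 0:
        print(f"port {port}/tcp open")
    sock.close()
```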


    Active attacks involve using programs to either bypass access controls or negatively impact the availability of network devices and services. Active attacks include brute-force attack, masquerading, packet replay, message modification, unauthorized access through the Internet or web-based services, denial of service, dial-in penetration attacks, email bombing and spamming, and email spoofing:

  • Brute-force attack—An intruder uses automated tools and electronic dictionaries to try to guess user and system passwords. These automated tools try thousands of words or character combinations per hour in an attempt to gain unauthorized access to the system.

  • Denial of service—Any method an intruder uses to hinder or prevent the delivery of information services to authorized users is considered a denial-of-service (DoS) attack. As an example, an intruder inundates (floods) the system with requests. In the process of responding to a high volume of requests, the system is rendered useless to authorized users. These types of attacks generally intend to exhaust all available CPU or memory.

    The “ping of death” is a common denial-of-service (DoS) attack that entails sending a ping packet larger than 65,535 bytes (the maximum allowed IP packet size) with the “no fragmentation” flag set. When the system receives the oversize packet, it can freeze, reboot, or crash.

  • Spamming—Spam is common on the Internet today; spamming, or email bombing, is the sending of messages in bulk. Spamming can be used to overload individual email boxes on servers, filling up hard drives and causing system freezes and crashes.

When an intruder gains access to the system, he might tamper with existing programs to add a Trojan horse. A Trojan horse is a program that masquerades as another program or is even embedded within a program. Trojan horse programs or code can delete files, shut down the systems, or send system and network information to an email or Internet address. Trojan horse programs are a common form of Internet attack.

In addition to active and passive attacks, intruders might use social engineering to gain information that opens access to physical facilities and network systems. Social engineering is the use of psychological tricks on authorized users to gain access; for example, an intruder might telephone an authorized user while posing as help-desk personnel to coerce the user into divulging his password. In short, it is the art of using social “con” skills to obtain passwords without the use of computer tools or programs.


Viruses

A virus is a computer program that infects systems by inserting copies of itself into executable code on a computer system. In addition to damaging computer systems through reconfiguration and file deletion, viruses are self-replicating, similar to a biological virus. When executed, a virus spreads itself across computer systems. A worm is another type of computer program that is often incorrectly called a virus. The difference is that a virus relies on the host (infected) system for further propagation because it inserts itself into applications or programs to replicate and perform its functions, whereas worms are malicious programs that can run independently and propagate without the aid of a carrier program such as email. Worms can delete files, fill up the hard drive and memory, or consume valuable network bandwidth.

Viruses come in many shapes and sizes. As an example, the polymorphic virus has the capability of changing its own code, enabling it to have many different variants. The capability of a polymorphic virus to change its signature pattern enables it to replicate and makes it more difficult for antivirus systems to detect it. Another type of malicious code is a logic bomb, which is a program or string of code that executes when a sequence of events or a prespecified time or date occurs. A stealth virus is a virus that hides itself by intercepting disk access requests.

Adopting and communicating a comprehensive antivirus policy is a fundamental step in preventing virus attacks. Antivirus software is considered a preventive control. Antivirus software products are applications that detect, prevent, and sometimes remove virus files located within a computing system. IS auditors should look for the existence of antivirus programs on all systems within the organization. In addition, users within the IT infrastructure should understand the risks of downloading programs, code, and ActiveX and Java applets from unknown sources. The primary risk associated with virus programs is their ability to replicate across a variety of platforms very quickly.

Integrity checkers are programs that detect changes to systems, applications, and data. Integrity checkers compute a binary number, called a cyclic redundancy check (CRC), for each selected program. When initially installed, an integrity checker scans the system and places these results in a database file. Before the execution of each program, the checker recomputes the CRC and compares it to the value stored in the database. If the values do not match, the program is not executed because the integrity checker has determined that the application file might have been modified. Similar to antivirus programs, integrity checkers can be used to detect and prevent the use of virus-infected programs.
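A minimal sketch of this mechanism, using Python's zlib.crc32, appears below. The baseline file name is an assumption made for the example, and real products generally prefer cryptographic hashes (such as SHA-256) to CRCs, because a CRC can be deliberately forged by an attacker.

```python
# Toy integrity checker: record CRC values for files, then re-verify them.
# The database name is illustrative. Real tools favor cryptographic hashes
# (e.g. SHA-256) because a CRC can be deliberately forged.
import json
import zlib

def crc_of(path: str) -> int:
    with open(path, "rb") as f:
        return zlib.crc32(f.read())

def build_baseline(paths, db="integrity.json"):
    """At install time: record a CRC for each monitored file."""
    with open(db, "w") as f:
        json.dump({p: crc_of(p) for p in paths}, f)

def verify(db="integrity.json"):
    """Before execution: recompute each CRC and compare to the baseline."""
    with open(db) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        status = "OK" if crc_of(path) == expected else "MODIFIED -- do not run"
        print(f"{path}: {status}")

# Demo uses this script itself; real deployments baseline critical binaries.
build_baseline([__file__])
verify()
```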

Security Testing and Assessment Tools

To ensure that the organization’s security controls are functioning properly, both the IT organization and the IS auditor should use the same techniques that hackers use in an attempt to bypass access controls.

A vulnerability assessment is used to determine potential risks to the organization’s systems and data. Penetration testing is used to test controls implemented as countermeasures to vulnerabilities. Penetration tests performed by the organization are sometimes called intrusion tests or ethical hacking. The penetration test team uses public sources to gain information on an organization’s network, systems, and data. Known as discovery, this includes passive scanning techniques to discover the perimeter systems’ OS and applications that are listening for network connections (ports). It might also include the review of public websites, partner websites, and news groups, to discover information on applications and network connectivity. One example of discovery is the use of newsgroups. System administrators often post questions to newsgroups on the Internet to solve problems they are having with applications or network devices. An intruder can search newsgroups using the domain name of the organization to find potential vulnerabilities.

When the discovery process is complete, the penetration test team should develop a list of potential vulnerabilities on the network. The team should then systematically attempt to bypass access controls by guessing passwords (using automated password-cracking tools and dictionaries), searching for back doors into the system, or exploiting known vulnerabilities based on the type of servers and applications within the organization. Penetration testing is intended to use the same techniques and tools intruders use. It can be performed against both internal (applications) and external (firewalls) devices and should be performed only by qualified and authorized individuals. The penetration team should develop a penetration test plan and use caution when testing production systems. The plan should include methods by which vulnerabilities will be identified, documented, and communicated at the conclusion of the testing period.


The IT organization should implement regular vulnerability scanning in addition to penetration testing. Similar to virus-protection programs, vulnerability scanners combined with firewall and IDS logs ensure that the IT infrastructure is protected against both new and existing vulnerabilities. Vulnerability scanning is implemented using automated tools that periodically scan network devices looking for known vulnerabilities. These tools maintain a vulnerability database that is periodically updated as new vulnerabilities are discovered. The vulnerability scans produce reports and generally categorize vulnerabilities into three categories of risk (high, medium, low). The more sophisticated scanning tools provide a list of the vulnerabilities found on the network by device or application, as well as the remediation of that risk. One of the more popular tools used for vulnerability scanning is Nessus (www.nessus.org), an open-source scanner that maintains a vulnerability database (which can be updated via the Internet). An example of a Nessus vulnerability report is shown here (this example does not include the entire report):

Scan Details

Hosts that were alive and responding during test: 9
Number of security holes found: 54
Number of security warnings found: 113

Host List

Host(s): 10.163.156.10—Security hole(s) found

Security Issues and Fixes: 10.163.156.10

Warning—echo (7/tcp)

The echo port is open. This port is not of any use nowadays and could be a source of problems because it can be used along with other ports to perform a denial of service. You should really disable this service.

Risk factor: Low
Solution: Comment out ’echo’ in /etc/inetd.conf
CVE: CVE-1999-0103
Nessus ID: 10061

Informational—echo (7/tcp)

An echo server is running on this port.

Nessus ID: 10330

Vulnerability—telnet (23/tcp)

The Telnet server does not return an expected number of replies when it receives a long sequence of “Are You There” commands. This probably means that it overflows one of its internal buffers and crashes. It is likely that an attacker could abuse this bug to gain control over the remote host’s superuser. For more information, see www.team-teso.net/advisories/teso-advisory-011.tar.gz.

Solution: Comment out the telnet line in /etc/inetd.conf.
Risk factor: High
CVE: CVE-2001-0554
BID: 3064
Nessus ID: 10709

Vulnerability—ssh (22/tcp)

You are running a version of OpenSSH that is older than 3.0.2. Versions older than 3.0.2 are vulnerable to an environment variable export that can allow a local user to execute a command with root privileges. This problem affects only versions earlier than 3.0.2, and only when the UseLogin feature is enabled (it is usually disabled by default).

Solution: Upgrade to OpenSSH 3.0.2 or apply the patch for older versions (available at ftp://ftp.openbsd.org/pub/OpenBSD/OpenSSH).
Risk factor: High (if UseLogin is enabled, and locally)
CVE: CVE-2001-0872
BID: 3614
Nessus ID: 10823

The Nessus report shows the machine address, the vulnerability (port/service), a text description of the vulnerability, the solution, and the Common Vulnerabilities and Exposures (CVE) ID. As public vulnerabilities are discovered, they are maintained in databases to provide naming and documentation standards. One such free public database is maintained by the MITRE Corporation (http://cve.mitre.org) and can be used to review known vulnerabilities and their remediation.

In addition to vulnerability testing, the organization can employ tools that are designed to entice and trap intruders. Honey pots are computer systems that are expressly set up to attract and trap individuals who attempt to penetrate other individuals’ computer systems. Honey pots generally are placed in a publicly accessible area of the network and contain known vulnerabilities. The concept of a honey pot is to learn from an intruder’s actions by monitoring the methods and techniques used in the attempt to gain access to a system.
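In its simplest form, a honey pot is nothing more than a listener that accepts connections and records everything about them. The sketch below is a toy single-port example; real honey pots emulate complete services and are isolated so that a compromised honey pot cannot become a stepping stone into the production network.

```python
# Toy honey pot: listen on an attractive port, log every connection attempt
# and whatever the intruder sends. Illustrative only; stop with Ctrl+C.
import datetime
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 2323))   # pose as a telnet-like service
listener.listen(5)
print("honey pot listening on port 2323")

while True:
    conn, (addr, port) = listener.accept()
    conn.settimeout(5)
    try:
        data = conn.recv(1024)
    except socket.timeout:
        data = b""
    # Record who connected, when, and what they sent, for later analysis.
    print(f"{datetime.datetime.now().isoformat()} {addr}:{port} sent {data!r}")
    conn.close()
```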


The most significant vulnerability in any organization is the user. The use of appropriate access controls can sometimes be inconvenient or cumbersome for the user population. To ensure that the organization’s security controls are effective, a comprehensive security program should be implemented. The security program should include these components:

  • Continuous user awareness training

  • Continuous monitoring and auditing of IT processes and management

  • Enforcement of acceptable use policies and information security controls

Sources of Information on Information Security

Security professionals use a variety of sources to improve their knowledge of defense and mitigation strategies and to stay up-to-date on known vulnerabilities or intrusion techniques. The following list contains some of the publicly available sources of information:

  • The CERT Coordination Center (www.cert.org)—Established in 1988, the CERT Coordination Center (CERT/CC) is a center of Internet security expertise. It is located at the Software Engineering Institute, a federally funded research and development center operated by Carnegie Mellon University.

  • The Forum of Incident Response and Security Teams (FIRST) (www.first.org)—FIRST brings together a variety of computer security incident response teams from government, commercial, and educational organizations. FIRST aims to foster cooperation and coordination in incident prevention, to stimulate rapid reaction to incidents, and to promote information sharing among members and the community at large.

  • The SANS Institute (www.sans.org)—SANS (SysAdmin, Audit, Network, Security) develops, maintains, and makes available at no cost the largest collection of research documents about various aspects of information security. It operates the Internet’s early warning system, the Internet Storm Center. The SANS Institute was established in 1989 as a cooperative research and education organization. At the heart of SANS are the many security practitioners in government agencies, corporations, and universities around the world who invest hundreds of hours each year in research and teaching to help the entire information security community.

  • The Computer Crime and Intellectual Property Section (CCIPS) (www.cybercrime.gov)—CCIPS is a department of the Criminal Division of the U.S. Department of Justice. It provides information on topics such as computer crime, intellectual property crime, cybercrime documents, and cyberethics.

In addition, there are a number of security portals:

  • Insecure (www.insecure.org)—Insecure.org is the home of Nmap (security scanning tool) and provides information on security tools, techniques, and news.

  • Information Systems Security (http://infosyssec.com)—Infosyssec was originally created by students for students, to help locate and consolidate resources on the Internet that would assist them in their study of information system security. It has become a favorite bookmark of information security professionals.

Security Monitoring, Detection, and Escalation Processes and Techniques

The IT organization should have clear policies and procedures for incident response, including how disruptive incidents are detected, corrected or restored, and managed. Both policies and procedures should outline how specific incidents are to be handled and how the systems, applications, and data involved in the incident can be restored to normal operation. The main goal of an incident-response plan is to restore systems damaged during the incident and to prevent any further damage. The incident-response plan should define a central authority (incident response team) and the procedures for training employees to understand an incident. The incident response team should ensure the following:

  • Systems involved in the incident are segregated from the network so they do not cause further damage.

  • Appropriate procedures for notification and escalation are followed.

  • Evidence associated with the incident is preserved.

  • Documented procedures to recover systems, applications, and data are followed.

An intrusion-detection system (IDS) should be part of the security infrastructure of the organization, to monitor the organization’s systems and data to detect misuse. The IDS can be either network based or host based, and it operates continuously to alert administrators when it detects a threat. Both types of IDSs can use knowledge-based (signature-based) or behavior-based (statistical, neural) programs to detect network attacks. A network-based IDS can be placed between the Internet and the firewall, to detect attack attempts. A host-based IDS should be configured to run on a specific host and monitor the resources associated with the host system. Host-based IDSs can be used to monitor file systems, memory, CPU, and network traffic to the host system. Both network- and host-based IDSs use sensors to collect data for review.

An IDS can be signature based, statistical based, or a neural network. A signature-based IDS monitors and detects known intrusion patterns. A signature-based IDS has a database of signature files (known attack types) to which it compares incoming data from the sensors. If it detects a match, it alerts administrators. A statistical-based IDS compares data from sensors against an established baseline (created by the administrator). If the data from the sensors exceeds the thresholds in the baseline, it alerts an administrator. As an example, security administrators can monitor and review unsuccessful logon attempts to detect potential intrusion attempts. Neural networks monitor patterns of activity or traffic on a network. This self-learning process enables the IDS to create a database (baseline) of activity for comparison to future activity.
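Conceptually, a signature match is a pattern comparison between observed data and a database of known attack byte patterns. The sketch below shows the idea with a few invented signatures; production engines use many thousands of signatures and far more efficient matching algorithms.

```python
# Toy signature-based detector: compare traffic payloads against a small
# database of known attack byte patterns. Signatures are invented examples.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection attempt",
    b"\x90\x90\x90\x90": "NOP sled (possible buffer overflow)",
}

def match_signatures(payload: bytes):
    """Return the names of all known attack signatures found in a payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

packet = b"GET /../../etc/passwd HTTP/1.1"
for alert in match_signatures(packet):
    print("ALERT:", alert)
```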

The correct implementation of an IDS is critical. If the type of IDS or the configuration of the IDS creates a large number of alerts that are not intrusions (false positives), the administrators might disregard alerts or turn off the rule(s) associated with the alert. The opposite might occur if the type of IDS does not fit the needs of the organization or is misconfigured, and intrusion activity might not be detected. The IT organization must continue to adjust the rules and signatures associated with the IDS, to ensure optimum performance.

The Processes of Design, Implementation, and Monitoring of Security

The IS auditor does not generally review the effectiveness and utilization of assets during a security audit. Security audits primarily focus on the evaluation of the policies and procedures that ensure the confidentiality, integrity, and availability of data. During an audit of security, the IS auditor normally reviews access to assets and validates the physical and environmental controls to the extent necessary to satisfy the audit requirements. The IS auditor also should review logical access policies and compare them to job profiles, to ensure that excessive access has not been granted, and evaluate asset safeguards and procedures to prevent unauthorized access to assets.

Note


Network performance-monitoring tools are used to measure and ensure proper network capacity management and availability of services. Proper implementation and incident-handling procedures ensure network connectivity and the availability of network services.

The IT organization should have policies and procedures outlining proper patch-management practices. The application of patches reduces known vulnerabilities in operating systems and applications, but systems administrators should always assess the impact of a patch before installation. System administrators should evaluate patches promptly as they become available and should understand the effect each will have within their environment. Any patch-management methodology should also include extensive testing of a patch’s effects before it is deployed.

The data owners, who are responsible for the use and reporting of information under their control, should provide written authorization for users to gain access to that information. The data owner should periodically review and evaluate authorized (granted) access to ensure that these authorizations are still valid.
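Part of this periodic review can be automated by comparing the access a system actually grants against the access the data owner has authorized in writing. The following sketch performs that comparison over two hypothetical dictionaries; in practice, the "actual" side would be extracted from ACLs or account-management reports.

```python
# Compare actual access (e.g. pulled from ACLs) against owner-authorized
# access. Both dictionaries are hypothetical examples.
authorized = {"jsmith": {"read"}, "akumar": {"read", "write"}}
actual     = {"jsmith": {"read", "write"}, "akumar": {"read", "write"},
              "tchen": {"read"}}

for user, rights in actual.items():
    approved = authorized.get(user, set())
    excess = rights - approved
    if user not in authorized:
        print(f"{user}: no written authorization on file -- revoke and investigate")
    elif excess:
        print(f"{user}: excessive rights {sorted(excess)} -- flag for data owner")
```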

Note


Intrusion-detection systems (IDS) are used to identify intrusion attempts on a network. However, an IDS should be implemented in concert with firewalls and routers because it detects intrusion attempts rather than preventing attacks.

Per ISACA, the IS auditor should review the following when auditing security management, logical access issues, and exposures.

Review Written Policies, Procedures, and Standards

Policies and procedures provide the framework and guidelines for maintaining proper operation and control. The IS auditor should review the policies and procedures to determine whether they set the tone for proper security and provide a means for assigning responsibility for maintaining a secure computer processing environment.

Logical Access Security Policy

These policies should encourage limiting logical access on a need-to-know basis and should reasonably address the identified exposures.

Formal Security Awareness and Training

Promoting security awareness is a preventive control. Through this process, employees become aware of their responsibility for maintaining good physical and logical security.

Per ISACA, assimilation of the framework and intent of a written security policy by the users of the systems is critical to the successful implementation and maintenance of the security policy. You might have a good password system, but if users keep their passwords written on their desks, the password system is of little value. Management support and commitment are no doubt important, but for successful implementation and maintenance of a security policy, user education on the critical nature of security is of paramount importance. Stringent implementation, monitoring, and enforcement of rules by the security officer through access-control software, along with provision for punitive action when security rules are violated, are also required.

Data Ownership

Data ownership refers to the classification of data elements and the allocation of responsibility for ensuring that they are kept confidential, complete, and accurate. The key point of ownership is that by assigning responsibility for protecting the organization’s data to a particular person, you establish accountability for appropriate protection of confidentiality, integrity, and availability of the data.

Security Administrators

Security administrators are responsible for providing adequate physical and logical security for the IS programs, data, and equipment.

Access Standards

The IS auditor should review access standards to ensure that they meet organizational objectives for separating duties and preventing fraud or error, and that they meet policy requirements for minimizing the risk of unauthorized access.

Auditing Logical Access

When evaluating logical access controls, the IS auditor should proceed in the following order:

  • Obtain a general understanding of the security risks facing information processing, through a review of relevant documentation, inquiry, observation, risk assessment, and evaluation techniques

  • Document and evaluate controls over potential access paths to the system, to assess their adequacy, efficiency, and effectiveness, by reviewing appropriate hardware and software security features and identifying any deficiencies

  • Test controls over access paths, to determine whether they are functioning and effective, by applying appropriate audit techniques

  • Evaluate the access control environment, to determine whether the control objectives are achieved, by analyzing test results and other audit evidence

  • Evaluate the security environment, to assess its adequacy, by reviewing written policies, observing practices and procedures, and comparing them to appropriate security standards or practices and procedures used by other organizations

Exam Prep Questions

1.

Which of the following controls is MOST effective for protecting software and access to sensitive data?

A.

Security policies

B.

Biometric physical access control for the server room

C.

Fault tolerance with complete systems and data redundancy

D.

Logical access controls for the operating systems, applications, and data

A1:

Answer: D. Logical access controls are often the primary safeguards for authorized access to systems software and data. All the other controls complement logical access control to applications and data.

2.

Which of the following would an IS auditor review to BEST determine user access to systems or data?

A.

Access-control lists (ACLs)

B.

User account management

C.

Systems logs

D.

Applications logs

A2:

Answer: A. IS auditors should review access-control lists (ACLs) to determine user permissions that have been granted for a particular resource.

3.

Which of the following is ultimately accountable for protecting and securing sensitive data?

A.

Data users

B.

Security administrators

C.

Data owners

D.

Data custodians

A3:

Answer: C. Data owners, such as corporate officers, are ultimately responsible and accountable for access control of data. Although security administrators are indeed responsible for securing data, they do so at the direction of the data owners. A security administrator is an example of a data custodian. Data users access and utilize the data for authorized tasks.

4.

A review of logical access controls is performed primarily to:

A.

Ensure that organizational security policies conform to the logical access design and architecture

B.

Ensure that the technical implementation of access controls is performed as intended by security administration

C.

Ensure that the technical implementation of access controls is performed as intended by the data owners

D.

Understand how access control has been implemented

A4:

Answer: C. Logical access controls should be reviewed to ensure that access is granted on a least-privilege basis, per the organization’s data owners. Logical access design and architecture should conform to policies, not vice versa. Understanding how access control has been implemented is an essential element of a logical access controls review, but the ultimate purpose of the review is to make sure that access controls adequately support and protect the organizational needs of the data owners.

5.

Authorization is BEST characterized as:

A.

Providing access to a resource according to the principle of least privilege

B.

A user providing an identity and a password

C.

Authenticating a user’s identity with a password

D.

Certifying a user’s authority

A5:

Answer: A. Authorization is the process of providing a user with access to a resource based upon the specific needs of the user to perform an authorized task. This process relies upon a verified understanding of the user’s identity. Therefore, a user must provide a claim of identity, which is then verified through an authentication process. Following the authentication process, access can be authorized according to the principle of least privilege.

6.

Data classification must begin with:

A.

Determining specific data sensitivity according to organizational and legal requirements for data confidentiality and integrity

B.

Determining data ownership

C.

A review of organizational security policies

D.

A review of logical access controls

A6:

Answer: B. Data classification is a process that allows an organization to implement appropriate controls according to data sensitivity. Before data sensitivity can be determined by the data owners, data ownership must be established. Logical access controls and organizational security policies are controlled and driven by the data owners.

7.

Which of the following firewalls can be configured to MOST reliably control FTP traffic between the organization’s network and the Internet?

A.

Packet-filtering firewall

B.

Application-layer gateway or a stateful inspection firewall

C.

A router configured as a firewall with access-control lists

D.

Circuit-level firewall

A7:

Answer: B. FTP is a network protocol that operates at the application layer of the OSI model. Of the choices available, only an application-layer gateway or a stateful inspection firewall can reliably filter all the way up to the application layer. The remaining answers are examples of firewalls that cannot reliably filter at the application layer.

8.

An IS auditor wants to ensure that the organization’s network is adequately protected from network-based intrusion via the Internet and the World Wide Web. A firewall that is properly configured as a gateway to the Internet protects against such intrusion by:

A.

Preventing external users from accessing the network via internal rogue modems

B.

Preventing unauthorized access to the Internet by internal users

C.

Preventing unauthorized access to the network by external users via ad-hoc wireless networking

D.

Preventing unauthorized access by external users to the internal network via the firewalled gateway

A8:

Answer: D. Firewalls are used to prevent unauthorized access to the internal network from the Internet. Firewalls provide little protection from users who do not need to access the network via the firewall, such as via internal rogue modems or via peer-to-peer ad-hoc wireless network connections. Preventing unauthorized access to the Internet by internal users is the opposite of the goal stated in the question.

9.

Various cryptosystems offer differing levels of compromise between services provided versus computational speed and potential throughput. Which of the following cryptosystems would provide services including confidentiality, authentication, and nonrepudiation at the cost of throughput performance?

A.

Symmetric encryption

B.

Asymmetric encryption

C.

Shared-key cryptography

D.

Digital signatures

A9:

Answer: B. Through the use of key pairs, asymmetric encryption algorithms can provide confidentiality and authentication. By providing authentication, nonrepudiation is also supported. Symmetric encryption, also known as shared-key cryptography, uses only a single shared key. Because the key is shared, there is no sole ownership of the key, which precludes its use as an authentication tool. Digital signatures verify authenticity and data integrity but do not provide confidentiality.

10.

The organization desires to ensure integrity, authenticity, and nonrepudiation of emails for sensitive communications between security administration and network administration personnel through the use of digitally signed emails. Which of the following is a valid step in signing an email with keys from a digital certificate?

A.

The sender encrypts the email using the sender’s public key.

B.

The sender creates a message digest of the email and attachments using the sender’s private key.

C.

The sender creates a message digest of the email and attachments using a common hashing algorithm, such as SHA-1.

D.

The sender encrypts the message digest using the sender’s public key.

A10:

Answer: C. A digital signature provides the recipient with a mechanism for validating the integrity of the email and its attachments: the sender creates a message digest by applying a common hashing algorithm, such as MD5 or SHA-1. The message digest is then “signed” by encrypting it with the sender’s private key. The recipient uses the sender’s public key to decrypt the message digest and then computes a digest of the received email and attachments using the same hashing algorithm. If the decrypted message digest matches the digest created independently by the recipient, the recipient can rest assured that the message has not been tampered with since transmission by the sender.
