Chapter 1. Basic Cryptography

This chapter details the basic building blocks and fundamental issues you need to understand before moving on to more complex security technologies. Cryptography is a basis for secure communications; it is, therefore, important that you understand three basic cryptographic functions: symmetric encryption, asymmetric encryption, and one-way hash functions. Most current authentication, integrity, and confidentiality technologies derive from these three cryptographic functions. This chapter also introduces digital signatures as a practical example of how you can combine asymmetric encryption with one-way hash algorithms to provide data authentication and integrity.

Authentication, authorization, and key management issues are critical for you to understand because the compromise of either identity or secret keys is the most common form of security compromise. Authentication technologies are introduced in Chapter 2, “Security Technologies,” but this chapter explores the methods of authentication, the establishment of trust domains for defining authorization boundaries, and the importance of the uniqueness of namespace.

A cryptographic key is a digital object that you can use to encrypt, decrypt, and sign information. Some keys are kept private, whereas others are shared and must be distributed in a secure manner. The area of key management has seen much progress in recent years, largely because it is now possible to distribute keys securely and scalably in an automated fashion. The central issues in key management are creating and distributing keys securely. This chapter introduces some common mechanisms used to securely create and distribute secret and public keys. The controversial area of key escrow, in which a third party has access to a confidential cryptographic key, is explored to raise your awareness of what the controversy is all about and what role key escrow may play in a secure enterprise infrastructure.

Cryptography

Cryptography is the science of writing or reading coded messages; it is the basic building block that enables the mechanisms of authentication, integrity, and confidentiality. Authentication establishes the identity of either the sender or the receiver of information, or both. In some communication instances, it is not always a requirement to have mutual authentication of both parties. Integrity ensures that the data has not been altered in transit, and confidentiality ensures that no one except the sender and receiver of the data can actually understand the data.

Usually, cryptographic mechanisms use both an algorithm (a mathematical function) and a secret value known as a key. Most algorithms undergo years of scrutiny by the world's best cryptographers, who validate the strength of the algorithm. The algorithms are widely known and available; it is the key that is kept secret and provides the required security. The key is analogous to the combination to a lock. Although the concept of a combination lock is well known, you can't open a combination lock easily without knowing the combination. In addition, the more numbers a given combination has, the more work must be done to guess the combination—the same is true for cryptographic keys. The more bits that are in a key, the less susceptible a key is to being compromised by a third party.

The number of bits required in a key to ensure secure encryption in a given environment can be controversial. The longer the keyspace—the range of possible values of the key—the more difficult it is to learn (often referred to as breaking) the key in a brute-force attack. In a brute-force attack, you apply all combinations of a key to the algorithm until you succeed in deciphering the message. Table 1-1 shows the number of keys that must be tried to exhaust all possibilities, given a specified key length.

Table 1-1. Brute-Force Attack Combinations

  Key Length (in bits)    Number of Combinations
  40                      2^40 = 1,099,511,627,776
  56                      2^56 ≈ 7.205759403793 × 10^16
  64                      2^64 ≈ 1.844674407371 × 10^19
  112                     2^112 ≈ 5.192296858535 × 10^33
  128                     2^128 ≈ 3.402823669209 × 10^38
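The table's figures can be reproduced directly; each entry is simply 2 raised to the key length:

```python
# Number of keys a brute-force attacker must try, per key length.
for bits in (40, 56, 64, 112, 128):
    print(f"{bits:>3}-bit key: 2^{bits} = {2**bits:,}")
```

Note how each additional bit doubles the attacker's work: moving from 56 to 128 bits multiplies the keyspace by 2^72.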

A natural inclination is to use the longest key available, which makes the key more difficult to discover. However, the longer the key, the more computationally expensive the encryption and decryption process can be. The goal is to make breaking a key “cost” more than the worth of the information the key is protecting.

Three types of cryptographic functions enable authentication, integrity, and confidentiality: symmetric key encryption, asymmetric key encryption, and one-way hash functions.

Symmetric Key Encryption

Symmetric encryption, often referred to as secret key encryption, uses a common key and the same cryptographic algorithm to scramble and unscramble a message. Figure 1-1 shows two users, Alice and Bob, who want to communicate securely with each other. Both Alice and Bob have to agree on the same cryptographic algorithm to use for encrypting and decrypting data. They also have to agree on a common key—the secret key—to use with their chosen encryption/decryption algorithm.

Figure 1-1. Secret Key Encryption

A simplistic secret key algorithm is the Caesar Cipher. The Caesar Cipher replaces each letter in the original message with the letter of the alphabet n places further down the alphabet. The algorithm shifts the letters to the right or left (depending on whether you are encrypting or decrypting). Figure 1-2 shows Alice and Bob communicating with a Caesar Cipher where the key, n, is three letters. For example, the letter A is replaced with the letter D (the letter of the alphabet three places away). The steps of the Caesar Cipher are as follows:

  1. Alice and Bob agree to use the Caesar Cipher to communicate and pick n = 3 as the secret key.

  2. Alice uses the Caesar Cipher to encrypt a confidential message to Bob and mails the message.

  3. When he receives Alice's mail, Bob decrypts the message and reads the confidential message.

Figure 1-2. Encryption and Decryption Using the Caesar Cipher Algorithm

Anyone intercepting the message without knowing the secret key is unable to read it. However, you can see that if anyone intercepts the encrypted message and knows the algorithm (for example, shift letters to the right or left), it is fairly easy to succeed in a brute-force attack. Assuming the use of a 26-letter alphabet, the interceptor has to try at most 25 keys to determine the correct key.
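The cipher and the brute-force attack just described can be sketched in a few lines of Python (a minimal sketch for a 26-letter uppercase alphabet):

```python
import string

def caesar(text: str, n: int) -> str:
    """Shift each letter n places down the alphabet; a negative n
    shifts back, so caesar(c, -n) decrypts caesar(m, n)."""
    k = n % 26
    shifted = string.ascii_uppercase[k:] + string.ascii_uppercase[:k]
    return text.upper().translate(
        str.maketrans(string.ascii_uppercase, shifted))

ciphertext = caesar("ATTACK AT DAWN", 3)    # Alice encrypts with n = 3
print(ciphertext)                           # DWWDFN DW GDZQ
assert caesar(ciphertext, -3) == "ATTACK AT DAWN"   # Bob decrypts

# An interceptor who knows the algorithm needs at most 25 tries:
for key in range(1, 26):
    print(key, caesar(ciphertext, -key))
```

One of the 25 candidate decryptions is obviously readable English, which is why such a small keyspace offers no real security.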

Some secret key algorithms operate on fixed-length message blocks. Therefore, it is necessary to break up larger messages into n-bit blocks and somehow chain them together. The chaining mechanisms can also offer additional protection from tampering with the transmitted data.

Four common modes exist, each defining a method of combining the plaintext (the unencrypted message), the secret key, and the ciphertext (the encrypted text) to generate the stream of ciphertext that is actually transmitted to the recipient:

  • Electronic CodeBook (ECB)

  • Cipher Block Chaining (CBC)

  • Cipher FeedBack (CFB)

  • Output FeedBack (OFB)

The ECB chaining mechanism encodes each n-bit block independently—but uses the same key. An avid snooper interested only in changes in information and not the exact content can easily exploit this weakness. For example, consider someone snooping a certain employee's automatic payroll transactions to a bank. Assuming that the amount is the same for each paycheck, each ECB-encoded ciphertext message appears the same. If the ciphertext changes, the snooper could conclude that the payroll recipient received a raise and perhaps was promoted.

The remaining three modes (CBC, CFB, and OFB) have inherent properties that add an element of randomness to the encrypted messages. If you send the same plaintext block through one of these three modes, you get back different ciphertext blocks each time. This is accomplished by using different encryption keys or an initialization vector (IV). An IV is a block of random data used as the first n-bit block to begin the chaining process. The IV is implementation specific but can be taken from a time stamp or some other random bit of data. If a snooper is listening to the encrypted traffic on the wire and you send the same message 10 times, each time encrypted with a different key or IV, the message looks different each time, and the snooper gains virtually no information.
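The ECB weakness and the effect of an IV can be demonstrated with a deliberately weak stand-in block cipher (a plain XOR with the key, chosen only so the sketch runs without a crypto library; it is in no way secure):

```python
import os

BLOCK = 8  # bytes; real block ciphers use 8 (DES) or 16 (AES)

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    """Stand-in for a block cipher: XOR with the key (NOT secure)."""
    return bytes(b ^ k for b, k in zip(block, key))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"SECRET!!"
msg = b"PAYCHECKPAYCHECK"          # two identical 8-byte blocks
blocks = [msg[i:i + BLOCK] for i in range(0, len(msg), BLOCK)]

# ECB: identical plaintext blocks yield identical ciphertext blocks,
# which is exactly what the payroll snooper exploits.
ecb = [toy_encrypt(b, key) for b in blocks]
assert ecb[0] == ecb[1]

# CBC: each plaintext block is first XORed with the previous
# ciphertext block (the random IV for the first block), so identical
# plaintext blocks now produce different ciphertext.
iv = os.urandom(BLOCK)
prev, cbc = iv, []
for b in blocks:
    c = toy_encrypt(xor(b, prev), key)
    cbc.append(c)
    prev = c
assert cbc[0] != cbc[1]
```

Running the CBC branch twice with fresh IVs also produces two entirely different ciphertexts for the same message, which is the property the text describes.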

Most secret key algorithms use one of these four modes to provide additional security for the transmitted data. Some of the more common secret key algorithms used today include the following:

  • Data Encryption Standard (DES)

  • 3DES (read “triple DES”)

  • Rivest Cipher 4 (RC-4)

  • International Data Encryption Algorithm (IDEA)

  • Advanced Encryption Standard (AES)

DES

DES is the most widely used encryption scheme today. In 1972, the National Institute of Standards and Technology (NIST, called the National Bureau of Standards at the time) asked for public proposals for an algorithm that would provide strong cryptographic means to protect nonclassified information. In 1974, IBM submitted the Lucifer algorithm, which appeared to meet most of NIST's design requirements. NIST evaluated the proposal with help from the National Security Agency (NSA). Due to the general distrust of NSA activities, there was quite a bit of skepticism regarding the analysis of Lucifer, especially when it came to the key length, which was reduced from the originally proposed 128 bits down to 56 bits, weakening it significantly.

The NSA was also accused of changing the algorithm to provide a “backdoor” in it that would allow agents to decrypt any information without having to know the encryption key. This fear proved unjustified, however, and no such backdoor has ever been found.

The modified Lucifer algorithm was adopted by NIST as a federal standard in November 1976 and became known as the Data Encryption Standard (DES).

DES operates on 64-bit message blocks. The algorithm uses a series of steps to transform 64 input bits into 64 output bits. In its standard form, the algorithm uses 64-bit keys—of which 56 bits are chosen randomly. The remaining 8 bits are parity bits (one for each 7-bit block of the 56-bit random value). DES is widely used in many commercial applications today, and can be used in all four modes: ECB, CBC, CFB, and OFB. Generally, however, DES operates in either the CBC mode or the CFB mode.
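The 56-random-bits-plus-8-parity-bits key layout can be sketched in Python; placing the parity bit in the low-order position of each byte follows the usual DES convention, though this sketch is illustrative rather than a standards-exact implementation:

```python
import os

def add_parity(bits56: int) -> bytes:
    """Expand 56 random bits into a 64-bit DES-style key: each byte
    carries 7 key bits plus one parity bit, set so that every byte
    has an odd number of 1 bits."""
    key = bytearray()
    for i in range(8):
        seven = (bits56 >> (49 - 7 * i)) & 0x7F   # next 7-bit group
        byte = seven << 1                          # parity bit in the LSB
        if bin(byte).count("1") % 2 == 0:          # force odd parity
            byte |= 1
        key.append(byte)
    return bytes(key)

k = add_parity(int.from_bytes(os.urandom(7), "big"))
assert len(k) == 8                                      # 64-bit key
assert all(bin(b).count("1") % 2 == 1 for b in k)       # odd parity
```

The parity bits contribute nothing to the keyspace, which is why DES's effective strength is 2^56 rather than 2^64.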

In 1998, the Electronic Frontier Foundation, using a specially developed computer called the DES Cracker, managed to break DES in fewer than 3 days. The cost was less than $250,000, and the encryption chip that powered the DES Cracker was capable of processing 88 billion keys per second. It has also been shown that for a cost of a million dollars a dedicated hardware device can be built that can search all possible DES keys in about 3.5 hours. Because of the relative ease of breaking DES encryption, it is being phased out of use. NIST has deprecated DES in favor of AES.

3DES

Triple DES (3DES) is an alternative to DES that preserves the existing investment in software but makes a brute-force attack more difficult. It has the advantage of proven reliability and a longer key length that eliminates many of the shortcut attacks that can be used to reduce the amount of time it takes to break DES. 3DES takes a 64-bit block of data and performs the operations encrypt, decrypt, and encrypt. 3DES can use one, two, or three different keys. The advantage of using one key is that, with the exception of the additional processing time required, 3DES with one key is the same as standard DES (for backward compatibility). 3DES in ECB mode is the most commonly used mode of operation. At least two keys must be used if 3DES is to be more secure than DES. Triple DES was endorsed by NIST as a temporary standard to be used until the AES specification was finalized (described later).
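The encrypt-decrypt-encrypt pattern, and why a single key collapses to plain DES, can be shown with a toy invertible function standing in for DES (modular addition here, purely for illustration):

```python
def enc(block: int, key: int) -> int:
    """Toy invertible 'cipher' standing in for DES (NOT secure)."""
    return (block + key) % 2**64

def dec(block: int, key: int) -> int:
    return (block - key) % 2**64

def triple_ede(block: int, k1: int, k2: int, k3: int) -> int:
    """3DES pattern: Encrypt with k1, Decrypt with k2, Encrypt with k3."""
    return enc(dec(enc(block, k1), k2), k3)

m = 0x0123456789ABCDEF
k = 42
# With k1 == k2 == k3, the middle decrypt cancels the first encrypt,
# leaving a single encryption: backward compatible with plain DES.
assert triple_ede(m, k, k, k) == enc(m, k)
```

The decrypt step in the middle exists precisely for this backward compatibility; with two or three distinct keys, the cancellation no longer occurs and the effective key length grows.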

RC-4

RC-4 is a proprietary algorithm invented by Ron Rivest and marketed by RSA Data Security. It is often used with a 128-bit key, although its key size can vary. The algorithm is unpatented but was protected as a trade secret—although it was leaked to the Internet in September 1994. Historically, because the U.S. government at one time allowed encryption algorithms to be exported only when using secret key lengths of 40 bits or less, some implementations use a very short key length for compatibility with other 40-bit systems.
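Since the 1994 leak, RC-4's internals have been widely published; the following pure-Python sketch of its key-scheduling and output-generation loops reproduces a commonly cited test vector:

```python
def rc4_keystream(key: bytes):
    """RC4: key-scheduling (KSA) followed by the pseudo-random
    generation algorithm (PRGA), yielding one keystream byte at a time."""
    S = list(range(256))
    j = 0
    for i in range(256):                              # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    while True:                                       # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4(key: bytes, data: bytes) -> bytes:
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)

c = rc4(b"Key", b"Plaintext")
print(c.hex().upper())                   # BBF316E8D940AF0AD3
assert rc4(b"Key", c) == b"Plaintext"    # same operation decrypts
```

Because RC-4 is a stream cipher (plaintext XOR keystream), encryption and decryption are the same operation with the same key.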

IDEA

IDEA was developed to replace DES. It also operates on 64-bit message blocks but uses a 128-bit key. As with DES, IDEA can operate in all four modes: ECB, CBC, CFB, and OFB. IDEA was designed to be efficient in both hardware and software implementations. However, IDEA has a major shortcoming in that it is not available in the public domain. It is a patented algorithm and requires a license for commercial use.

AES

In 1997, NIST abandoned its official endorsement of DES and began work on a replacement, to be called the Advanced Encryption Standard (AES). In November 2001, the AES standard was published as Federal Information Processing Standards Publication 197 (FIPS 197). It specifies the Rijndael algorithm, which was developed by two cryptographers from Belgium: Dr. Joan Daemen and Dr. Vincent Rijmen. The Rijndael algorithm is a symmetric block cipher that processes data blocks of 128 bits using three different key lengths: 128, 192, and 256 bits. In decimal terms, this means that there are approximately the following:

  • 3.4 × 10^38 possible 128-bit keys

  • 6.2 × 10^57 possible 192-bit keys

  • 1.1 × 10^77 possible 256-bit keys

Rijndael was designed to handle additional block sizes and key lengths, but these were not adopted in the AES standard. Rijndael's symmetric and parallel structure gives implementers a lot of flexibility, and the algorithm can be implemented efficiently in both hardware and software across a wide range of computing environments. Its very low memory requirements make it well suited for mobile and wireless environments, in which it also demonstrates excellent performance. Rijndael's operations are among the easiest to defend against power and timing attacks: attacks in which an adversary measures a device's power consumption or radiation emission, or how long a particular encryption operation takes, to help break the cryptosystem or recover private keys. The AES algorithm is gaining wide adoption across many security implementations.

NOTE

References to specific algorithms are given to get you familiar with which algorithms pertain to which basic encryption concepts. Because most of the crypto analytical and performance comparisons are useful more for implementers of the technology, they are not deeply explored here. Appendix A, “Sources of Technical Information,” provides references for more in-depth studies.

Secret key encryption is most often used for data confidentiality because most symmetric key algorithms have been designed to be implemented in hardware and have been optimized for encrypting large amounts of data at one time. Challenges with secret key encryption include the following:

  • Changing the secret keys frequently to avoid the risk of compromising the keys

  • Securely generating the secret keys

  • Securely distributing the secret keys

A commonly used mechanism to derive and exchange secret keys securely is the Diffie-Hellman algorithm. This algorithm is explained later in this chapter in the “Key Management” section.
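As a preview, the Diffie-Hellman exchange can be sketched with Python's built-in modular exponentiation; the prime and generator below are toy-sized illustrative values, whereas real deployments use standardized groups of 2048 bits or more:

```python
import secrets

p = 2**64 - 59   # a 64-bit prime modulus (toy size, illustration only)
g = 5            # generator (illustrative choice)

a = secrets.randbelow(p - 2) + 1      # Alice's private value
b = secrets.randbelow(p - 2) + 1      # Bob's private value

A = pow(g, a, p)   # Alice sends g^a mod p over the open network
B = pow(g, b, p)   # Bob sends g^b mod p over the open network

# Each side combines its own private value with the other's public
# value; both arrive at the same shared secret, g^(a*b) mod p.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret from those values is the discrete logarithm problem, which is computationally infeasible for properly sized groups.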

Asymmetric Encryption

Asymmetric encryption is often referred to as public key encryption. It can use either the same algorithm, or different but complementary algorithms, to scramble and unscramble data. Two different, but related, key values are required: a public key and a private key. If Alice and Bob want to communicate using public key encryption, both need a public key and private key pair. (See Figure 1-3.) Alice has to create her public key/private key pair, and Bob has to create his own public key/private key pair. When communicating with each other securely, Alice and Bob use different keys to encrypt and decrypt data.

Figure 1-3. Public Key Encryption

Some of the more common uses of public key algorithms include the following:

  • Data integrity

  • Data confidentiality

  • Sender nonrepudiation

  • Sender authentication

Data confidentiality and sender authentication can both be achieved using a public key algorithm. Figure 1-4 shows how data integrity and confidentiality are provided using public key encryption.

Figure 1-4. Ensuring Data Integrity and Confidentiality with Public Key Encryption

The following steps have to take place if Alice and Bob are to have confidential data exchange:

  1. Both Alice and Bob create their individual public/private key pairs.

  2. Alice and Bob exchange their public keys.

  3. Alice writes a message to Bob and uses Bob's public key to encrypt her message. Then, she sends the encrypted data to Bob over the Internet.

  4. Bob uses his private key to decrypt the message.

  5. Bob writes a reply, encrypts the reply with Alice's public key, and sends the encrypted reply over the Internet to Alice.

  6. Alice uses her private key to decrypt the reply.

Data confidentiality is ensured when Alice sends the initial message because only Bob can decrypt it with his private key. Data integrity is also preserved because, to read and then modify the message, a malicious attacker would first need Bob's private key to decrypt it. Data integrity and confidentiality are ensured for the reply as well, because only Alice has access to her private key and is therefore the only one who can decrypt the reply.

However, this exchange is not very reassuring because it is easy for a third party to pretend to be Alice and send a message to Bob encrypted with Bob's public key. The public key is, after all, widely available. Verification that it was Alice who sent the initial message is important. Figure 1-5 shows how public key cryptography resolves this problem and provides for sender authentication and nonrepudiation.

Figure 1-5. Sender Authentication and Nonrepudiation Using Public Key Encryption

The following steps have to take place if Alice and Bob are to have an authenticated data exchange:

  1. Both Alice and Bob create their public/private key pairs.

  2. Alice and Bob exchange their public keys.

  3. Alice writes a message for Bob, uses her private key to encrypt the message, and then sends the encrypted data over the Internet to Bob.

  4. Bob uses Alice's public key to decrypt the message.

  5. Bob writes a reply, encrypts the reply with his private key, and sends the encrypted reply over the Internet to Alice.

  6. Alice uses Bob's public key to decrypt the reply.

An authenticated exchange is ensured because only Bob and Alice have access to their respective private keys. Bob and Alice meet the requirement of nonrepudiation—they cannot later deny sending the given message if their keys have not been compromised. This, of course, lends itself to a hot debate on how honest Bob and Alice are; they can deny sending messages by just stating that their private keys have been compromised.

To use public key cryptography to perform an authenticated exchange as well as ensure data integrity and confidentiality, double encryption needs to occur. Alice first encrypts her confidential message to Bob with Bob's public key and then encrypts the result again with her own private key. Anyone can strip off the outer layer with Alice's public key, which proves that Alice sent the message, but only Bob can decrypt the embedded ciphertext with his private key.
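The private/public relationship underlying both exchanges can be sketched with textbook RSA and deliberately tiny primes; real keys are 2048 bits or more and use padding schemes, so this is purely illustrative:

```python
# Textbook RSA key pair from primes p = 61, q = 53 (illustration only).
n = 61 * 53      # 3233, the public modulus
e = 17           # public exponent
d = 2753         # private exponent: (e * d) mod lcm(60, 52) == 1

m = 65           # message, represented as a number smaller than n

# Confidentiality: encrypt with Bob's PUBLIC key; only Bob's
# private key recovers the message.
c = pow(m, e, n)
assert pow(c, d, n) == m

# Authentication: "encrypt" (sign) with the PRIVATE key; anyone
# holding the public key can recover the message, proving that
# only the private-key holder could have produced it.
s = pow(m, d, n)
assert pow(s, e, n) == m
```

Both directions rely on the same fact, that exponentiation by e and by d are inverse operations modulo n; which key is kept private determines whether the result is confidentiality or authentication.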

NOTE

A crucial aspect of asymmetric encryption is that the private key must be kept private. If the private key is compromised, an attacker can impersonate you and send and receive messages as you.

The mechanisms used to generate these public/private key pairs are complex, but they result in the generation of two very large random numbers, one of which becomes the public key and the other becomes the private key. Because these numbers as well as their product must adhere to stringent mathematical criteria to preserve the uniqueness of each public/private key pair, generating these numbers is fairly processor intensive.

NOTE

Key pairs are not guaranteed to be unique by any mathematical criteria. However, the math ensures that no weak keys are generated.

Public key encryption algorithms are rarely used for data confidentiality because of their performance constraints. Instead, public key encryption algorithms are typically used in applications involving authentication using digital signatures and key management.

Some of the more common public key algorithms are the Ron Rivest, Adi Shamir, and Leonard Adleman (RSA) algorithm and the El Gamal algorithm.

Hash Functions

A hash function takes an input message of arbitrary length and outputs fixed-length code. The fixed-length output is called the hash, or the message digest, of the original input message. If an algorithm is to be considered cryptographically suitable (that is, secure) for a hash function, it must exhibit the following properties:

  • The function must be consistent; that is, the same input must always create the same output.

  • The function must be one way (that is, irreversible); if you are given the output, it must be extremely difficult, if not impossible, to ascertain the input message.

  • The output of the function must be random—or give the appearance of randomness—to prevent guessing of the original message.

  • The output of the function must be unique; that is, it should be nearly impossible to find two messages that produce the same message digest.
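These properties are easy to observe with Python's hashlib, using SHA-256 as the example function:

```python
import hashlib

def h(msg: bytes) -> str:
    return hashlib.sha256(msg).hexdigest()

# Consistent: the same input always produces the same digest.
assert h(b"hello") == h(b"hello")

# Fixed length: the digest size is independent of the input size.
assert len(h(b"a")) == len(h(b"a" * 100_000)) == 64   # 256 bits in hex

# Appears random: a tiny input change yields an unrelated digest.
print(h(b"hello"))
print(h(b"hellp"))
```

The one-way and collision-resistance properties cannot be demonstrated by running code; they rest on the absence of any known efficient algorithm for inverting the function or finding two inputs with the same digest.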

One-way hash functions are typically used to provide a fingerprint of a message or file. Much like a human fingerprint, a hash fingerprint is unique and thereby proves the integrity and authenticity of the message.

Consider the example shown in Figure 1-6; Alice and Bob are using a one-way hash function to verify that no one has tampered with the contents of the message during transit.

Figure 1-6. Using a One-Way Hash Function for Data Integrity

The following steps have to take place if Alice and Bob are to keep the integrity of their data:

  1. Alice writes a message and uses the message as input to a one-way hash function.

  2. The result of the hash function is appended as the fingerprint to the message sent to Bob.

  3. Bob separates the message and the appended fingerprint and uses the message as input to the same one-way hash function that Alice used.

  4. Bob compares the resulting hash with the received fingerprint; if they match, he can be assured that the message was not tampered with.

The problem with this simplistic approach is that the fingerprint itself could be tampered with and is subject to the man-in-the-middle attack. The man-in-the-middle attack refers to an entity listening to a believed secure communication and impersonating either the sender or receiver. A variant to make hash functions more secure is to use a keyed hash function, where the input to the algorithm is a shared secret key as well as the original message. To more effectively use hash functions as fingerprints, however, you can combine them with public key technology to provide digital signatures, which are discussed in the next section, “Digital Signatures.”
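Python's standard hmac module implements exactly this kind of keyed hash function; a minimal sketch:

```python
import hashlib
import hmac

secret = b"shared-secret"             # known only to Alice and Bob
message = b"transfer $100 to Bob"

# Alice appends a keyed fingerprint instead of a plain hash.
tag = hmac.new(secret, message, hashlib.sha256).digest()

# A man in the middle can recompute a plain hash over an altered
# message, but cannot forge the keyed tag without the secret.
forged = hmac.new(b"wrong-key", message, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged)

# Bob, who holds the secret, recomputes and verifies the tag.
check = hmac.new(secret, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, check)
```

The constant-time comparison (`hmac.compare_digest`) is used instead of `==` so that verification timing leaks nothing about the tag.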

Common hash functions include the following:

  • Message Digest 4 (MD4) algorithm

  • Message Digest 5 (MD5) algorithm

  • Secure Hash Algorithm (SHA)

MD4 and MD5 were designed by Ron Rivest of MIT. SHA was developed by NIST. MD5 and SHA are the hash functions used most often in current security product implementations—both are based on MD4. MD5 processes its input in 512-bit blocks and produces a 128-bit message digest. SHA also processes its input in 512-bit blocks but produces a 160-bit message digest. SHA is more processor intensive and may run a little more slowly than MD5.
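The stated digest and block sizes can be confirmed directly from hashlib's metadata (SHA-1 here is the 160-bit SHA variant the text describes):

```python
import hashlib

# MD5: 128-bit digest; SHA-1: 160-bit digest; both consume their
# input internally in 512-bit blocks.
assert hashlib.md5(b"x").digest_size * 8 == 128
assert hashlib.sha1(b"x").digest_size * 8 == 160
assert hashlib.md5(b"x").block_size * 8 == 512
assert hashlib.sha1(b"x").block_size * 8 == 512
```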

Digital Signatures

A digital signature is an encrypted message digest appended to a document. It is sometimes also referred to as a digital fingerprint. Digital signatures enable you to confirm the identity of the sender and the integrity of the document. Digital signatures are based on a combination of public key encryption and one-way secure hash function algorithms. Figure 1-7 shows an example of how to create a digital signature.

Figure 1-7. Creating a Digital Signature

The following steps must be followed for Bob to create a digital signature:

  1. Bob creates a public/private key pair.

  2. Bob gives his public key to Alice.

  3. Bob writes a message for Alice and uses the document as input to a one-way hash function.

  4. Bob encrypts the output of the hash algorithm, the message digest, with his private key, resulting in the digital signature.

The combination of the document and the digital signature is the message that Bob sends to Alice. Figure 1-8 shows the verification of the digital signature.

Figure 1-8. Verifying a Digital Signature

On the receiving side, Alice follows these steps to verify that the message is indeed from Bob—that is, to verify the digital signature:

  1. Alice separates the received message into the original document and the digital signature.

  2. Alice uses Bob's public key to decrypt the digital signature, which results in the original message digest.

  3. Alice takes the original document and uses it as input to the same hash function Bob used, which results in a message digest.

  4. Alice compares both of the message digests to see whether they match.

If Alice's calculation of the message digest matches Bob's decrypted message digest, the integrity of the document as well as the authentication of the sender are proven.
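Bob's signing steps and Alice's verification steps can be sketched by combining hashlib with the textbook-RSA idea; the tiny key and the modular reduction of the digest are illustrative simplifications (real implementations pad a full-size digest):

```python
import hashlib

n, e, d = 3233, 17, 2753   # textbook RSA pair from 61 * 53 (toy size)

def digest(doc: bytes) -> int:
    # Reduce the SHA-256 digest mod n so the toy key can sign it;
    # real RSA signs a padded, full-size digest instead.
    return int.from_bytes(hashlib.sha256(doc).digest(), "big") % n

document = b"I, Bob, agree to the contract."

# Bob (Figure 1-7): hash the document, then encrypt the message
# digest with his PRIVATE key to produce the digital signature.
signature = pow(digest(document), d, n)

# Alice (Figure 1-8): decrypt the signature with Bob's PUBLIC key
# and compare the result with her own hash of the received document.
assert pow(signature, e, n) == digest(document)
```

If the document is altered in transit, Alice's recomputed digest no longer matches the one recovered from the signature, so both integrity and sender authentication fail together.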

NOTE

The initial public key exchange must be performed in a trusted manner to preserve security. This is critical and is the fundamental reason for the need for digital certificates. A digital certificate is a message that is digitally signed with the private key of a trusted third party stating that a specific public key belongs to someone or something with a specified name and set of attributes. If the initial public key exchange wasn't performed in a trusted manner, someone could easily impersonate a given entity.

Digital signatures do not provide confidentiality of the message contents. However, it is frequently more imperative to produce proof of the originator of a message than to conceal the contents of a message. It is plausible that you could want authentication and integrity of messages without confidentiality, as in the case where routing updates are passed in a core network. The routing contents may not be confidential, but it's important to verify that the originator of the routing update is a trusted source. An additional example of the importance of authenticating the originator of a message is in online commerce and banking transactions, where proof of origin is imperative before acting on any transactions.

Some of the more common public key digital signature algorithms are the RSA algorithm and the Digital Signature Standard (DSS) algorithm. DSS was proposed by NIST and is based on the El Gamal public key algorithm. Compared to RSA, DSS is faster for key generation and has about the same performance for generating signatures but is much slower for signature verification.

Authentication and Authorization

Because authentication and authorization are critical parts of secure communications, they must be emphasized. Authentication establishes the identity of the sender and/or the receiver of information. Any integrity check or confidential information is often meaningless if the identity of the sending or receiving party is not properly established.

Authorization is usually tightly coupled to authentication in most network resource access requirements. Authorization establishes what you are allowed to do after you've identified yourself. (It is also called access control, capabilities, and permissions.) It can be argued that authorization does not always require a priori authentication, but in this book, authentication and authorization are tightly coupled; authorization usually follows any authentication procedure.

Issues related to authentication and authorization include the robustness of the methods used in verifying an entity's identity, the establishment of trusted domains to define authorization boundaries, and the requirement of uniqueness in namespace.

Methods of Authentication

All methods of authentication require you to specify who or what you are and to relay appropriate credentials to prove that you are who you say you are. These credentials generally take the form of something you know, something you have, or something you are. What you know may be a password. What you have could be a smart card. What you are pertains to the field of biometrics, in which sophisticated equipment is used to scan a person's fingerprint or eye or to recognize a person's voice to provide authentication.

Authentication technologies are discussed in detail in Chapter 2. Here, the important element is to recognize that different mechanisms provide authentication services with varying degrees of certainty. Choosing the proper authentication technology largely depends on the location of the entities being authenticated and the degree of trust placed in the particular facets of the network.

Trust Models

Trust is the firm belief or confidence in the honesty, integrity, reliability, justice, and so on of another person or thing. Authorization is what you are allowed to do once your identity is established. All secure systems must have a framework for an organizational policy for authorization—this framework is called a trust model.

If something is difficult to obtain in a dishonest manner or is difficult to forge, we have inherent trust in that system. An example is the title to a car: This document is used as proof of ownership of a car because it is difficult to forge. It is this proof that authorizes a person to resell his or her car with the relative certainty that the car is actually his or hers to sell.

In the network world, trust models can be very complex. Suppose that we have a large corporation with a number of different affiliated departments—the research department, the marketing department, and the payroll department. These individual departments could structure their networks autonomously but with a spirit of cooperation. Each department sets up a trusted intermediary, which is the entity that keeps all the authentication and authorization information for the employees in that department. (See Figure 1-9.)

Figure 1-9. Trusted Intermediaries for Individual Corporate Departments

When an executive member of the research department wants to access a document off one of the research servers, the research department's authentication/authorization server authenticates the executive member. Now if that same executive wants to access salary information for his employees, there must be a mechanism for authenticated and authorized access to the payroll department. Instead of each department server having separate account information for every user (this arrangement could become an administrative nightmare with large numbers of users), it may be necessary to create groups with inherited trust. For example, you can create an executive group on the payroll server that permits any executive member of the company to access payroll information.

Delegation of trust refers to giving someone or something permission to act on your behalf. If the executive from the research department went on vacation and left someone else in charge, this individual could have permission to act on the executive's behalf to carry out a salary modification. When the executive returns, the authorization must be revoked because it was granted only on a temporary basis.

The difficulty in many trust models is deciding whom to trust. Weighting risk factors (the amount of damage that can be done if trust is inappropriately placed) and having adequate mechanisms to deal with misplaced trust should be a part of every corporate security policy. Creating a security policy is discussed in more detail in Chapter 7, “Design and Implementation of the Corporation Security Policy.”

NOTE

It is important to recognize that trust does not mean implicit trust. You should have a trust model that works with high probability, but you must verify the trust relationships and put checks in place to confirm that information has not been compromised. As Ronald Reagan once said, “Trust, but verify.”

Namespace

Trust domains define authorization boundaries. For each trust domain, it is important that a unique identifier exists to identify what is being acted on. At first, creating unique identifiers may seem trivial, but as the size and numbers of trust domains grow, the problem can get very complex.

Take the simple scenario of a typical enterprise network. Company A is a small startup enterprise, and employees use their first names as login IDs. Company B decides to acquire Company A. Now there are a number of employees who have the same login ID. As you know, the IDs must be unique to preserve authentication and authorization rights. Typically, companies have a naming convention for this situation: Login IDs consist of the user's first initial and last name; any duplicates use the first two initials and last name, or the first three initials and last name, and so on.
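The naming convention described above is easy to automate. This is a toy sketch; a real provisioning system would add name normalization and policy handling:

```python
# Toy sketch of the login-ID convention: first initial plus last name,
# falling back to additional initials from the first name on a collision.

def make_login_id(first: str, last: str, taken: set) -> str:
    """Return a unique login ID, adding initials as needed to avoid duplicates."""
    first, last = first.lower(), last.lower()
    for n in range(1, len(first) + 1):
        candidate = first[:n] + last
        if candidate not in taken:
            taken.add(candidate)
            return candidate
    raise ValueError("naming convention exhausted; extend the policy")

ids = set()
print(make_login_id("John", "Smith", ids))   # jsmith
print(make_login_id("Jane", "Smith", ids))   # jasmith (jsmith is taken)
```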

Many large corporations create standard naming conventions for all entities that may require authentication (for example, employees and any and all network infrastructure devices). The concept of an object identifier has been used in the industry to veer away from the common notion that an entity has to have a specific name. The object identifier also can be an employee badge number, an employee social security number, a device IP address, a MAC address, or a telephone number. These object identifiers must be unique within given trust domains.

Key Management

Key management is a difficult problem in secure communications, mainly because of social rather than technical factors. Cryptographically secure ways of creating and distributing keys have been developed and are fairly robust. However, the weakest link in any secure system is that humans are responsible for keeping secret and private keys confidential. Keeping these keys in a secure place and not writing them down or telling other people what they are is a socially difficult task—especially in the face of greed and anger. Some people find it quite difficult not to divulge a secret in exchange for a million dollars or to get back at a seemingly unfair employer. Other people do not take secure procedures seriously—sometimes considering them just a nuisance—and are careless in keeping keys private. The human factor will always be an issue that necessitates sufficient checks to ensure that keys have not been compromised.

Creating and Distributing Secret Keys

For a small number of communicating entities, it is not unreasonable to create a key and manually deliver it. In most wide-scale corporations, however, this mechanism is awkward and outdated. Because secret key encryption is often used in applications requiring confidentiality, it is reasonable to assume that there may exist a secret key per session, a session being any single communication data transfer between two entities. For a large network with hundreds of communication hosts, each holding numerous sessions per hour, assigning and transferring secret keys is a large problem. Key distribution is often performed through centralized key distribution centers (KDCs) or through public key algorithms that establish secret keys in a secure, distributed fashion.
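To see why per-pair secret keys do not scale, note that n communicating hosts need n(n − 1)/2 distinct pairwise keys. A quick sketch:

```python
# Without a KDC or a key-agreement algorithm, every pair of hosts needs its
# own long-term secret key, so key count grows quadratically with host count.

def pairwise_keys(n: int) -> int:
    """Number of distinct keys needed so every pair of n hosts shares one."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_keys(n))
# 10 hosts need 45 keys, 100 need 4950, and 1000 need 499500
```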

The centralized key distribution model relies on a trusted third party, the KDC, which issues the session keys to the communicating entities. (See Figure 1-10.)


Figure 1-10. Distributing Keys through a Key Distribution Center

The centralized distribution model requires that all communicating entities have a shared secret key with which they can communicate with the KDC confidentially. The problem of how to distribute this shared secret key to each of the communicating nodes still exists, but it is much more scalable. The KDC is manually configured with every shared key, and each communicating node has its corresponding shared key configured. Keys can be distributed physically to employees when they get an employee badge or to devices from the IS department as part of the initial system setup.

NOTE

If a device is given the key, anyone using that device may be authorized to access certain network resources. In this age of mobile hosts, it is good practice to authenticate both the device and the user of the device when granting access to network resources.

A common method used to create secret session keys in a distributed manner is the Diffie-Hellman algorithm. The Diffie-Hellman algorithm provides a way for two parties to establish a shared secret key that only those two parties know—even though they are communicating over an insecure channel. This secret key is then used to encrypt data using their favorite secret key encryption algorithm. Figure 1-11 shows how the Diffie-Hellman algorithm works.


Figure 1-11. Establishing Secret Keys Using the Diffie-Hellman Algorithm

The following steps are used in the Diffie-Hellman algorithm:

  1. Alice initiates the exchange and transmits two large numbers (p and q) to Bob.

  2. Alice chooses a random large integer XA and computes the following equation:

    • YA = (q^XA) mod p

  3. Bob chooses a random large integer XB and computes this equation:

    • YB = (q^XB) mod p

  4. Alice sends YA to Bob. Bob sends YB to Alice.

  5. Alice computes the following equation:

    • Z = (YB)^XA mod p

  6. Bob computes this equation:

    • Z' = (YA)^XB mod p

The resulting shared secret key is as follows:

  • Z = Z' = q^(XA × XB) mod p

The security of Diffie-Hellman rests on the difficulty of the discrete logarithm problem:

  • Any eavesdropper has to compute a discrete logarithm to recover XA and XB. (That is, the eavesdropper has to figure out XA from seeing q^XA or figure out XB from seeing q^XB.) No efficient algorithm is known for this computation when p is large.

  • The parameters must be chosen carefully: p should be a large prime (numbers on the order of 100 to 200 digits can be considered large), (p – 1)/2 should also be prime, and q should be a generator modulo p.

For the reader interested in a more detailed mathematical explanation, see the following sidebar.
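The exchange above can be sketched in a few lines of Python. The parameters here are deliberately tiny toy values chosen for readability; a real implementation would use a standardized group with a prime of hundreds of digits:

```python
# Toy Diffie-Hellman exchange following the numbered steps above.
# p and q are far too small to be secure; they only make the math visible.
import secrets

p = 23          # small prime, illustration only
q = 5           # generator (the text's q; often written g elsewhere)

xa = secrets.randbelow(p - 2) + 1   # Alice's secret large integer XA
xb = secrets.randbelow(p - 2) + 1   # Bob's secret large integer XB

ya = pow(q, xa, p)                  # YA = q^XA mod p, sent to Bob in the clear
yb = pow(q, xb, p)                  # YB = q^XB mod p, sent to Alice in the clear

z_alice = pow(yb, xa, p)            # Z  = YB^XA mod p, computed by Alice
z_bob = pow(ya, xb, p)              # Z' = YA^XB mod p, computed by Bob

assert z_alice == z_bob             # both equal q^(XA × XB) mod p
print("shared secret:", z_alice)
```

Only YA and YB cross the wire; the shared secret is never transmitted.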

The Diffie-Hellman exchange is subject to a man-in-the-middle attack because the exchanges themselves are not authenticated. In this attack, an opponent, Cruella, intercepts Alice's public value and sends her own public value to Bob. When Bob transmits his public value, Cruella substitutes it with her own and sends it to Alice. Cruella and Alice thus agree on one shared key, and Cruella and Bob agree on another shared key. After this exchange, Cruella just decrypts any messages sent out by Alice or Bob, and then reads and possibly modifies them before re-encrypting with the appropriate key and transmitting them to the correct party. To circumvent this problem and ensure authentication and integrity in the exchange, the two parties involved in the Diffie-Hellman exchange can authenticate themselves to each other through the use of digital signatures and public key certificates.
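The attack can be illustrated by extending the toy exchange: because neither public value is authenticated, Cruella can substitute her own. The private values below are arbitrary numbers chosen for illustration.

```python
# Sketch of the man-in-the-middle attack on unauthenticated Diffie-Hellman:
# Cruella runs one exchange with Alice and a separate one with Bob.

p, q = 23, 5                    # toy parameters, illustration only

xa, xb, xc = 6, 15, 9           # Alice's, Bob's, and Cruella's private values

ya = pow(q, xa, p)              # Alice sends YA; Cruella intercepts it
yb = pow(q, xb, p)              # Bob sends YB; Cruella intercepts it
yc = pow(q, xc, p)              # Cruella's substituted public value

key_alice = pow(yc, xa, p)      # Alice unknowingly shares a key with Cruella
key_bob = pow(yc, xb, p)        # Bob unknowingly shares a different key with Cruella

# Cruella can compute both keys and so decrypt, read, and re-encrypt
# every message relayed between Alice and Bob.
assert key_alice == pow(ya, xc, p)
assert key_bob == pow(yb, xc, p)
```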

Creating and Distributing Public Keys

For public key algorithms, creating the public/private key pairs is complex. The pairs adhere to stringent rules defined by the various public key algorithms to ensure the uniqueness of each public/private key pair. Uniqueness is “statistically” guaranteed; that is, the odds of two identical keys being generated independently are astronomically small. The complexity associated with generating public/private key pairs lies in creating sets of parameters that meet the needs of the algorithm (for example, large prime numbers for RSA and many other algorithms).

NOTE

It is ideal for the end user (the person or thing being identified by the key) to generate the key pair themselves. The private key should never leave the end user's possession. In corporate environments where this may not be practical or where key escrow is required, different rules apply. However, all technical solutions should attempt self-generation as the first goal of a design architecture so that the private key is known only to the entity creating the key pair.

The problem is how you can distribute the public keys in a secure manner and how you can trust the entity that gives you the key. For a small number of communicating parties, it may be manageable to call each other or to meet face to face and give out your public key.

It should never be taken on faith that a public key belongs to someone. Many organizations today have so-called key-signing parties, at which people get together in the same room and exchange their respective public keys. Someone in the room may know for sure that a person is who he says he is; otherwise, it may be necessary to provide proof of identity with a driver's license or passport.

Key-signing parties are necessary when no trusted third party exists. A more scalable approach is to use digital certificates to distribute public keys. Digital certificates require the use of a trusted third party—the certificate authority.

NOTE

Complex key distribution methods are not needed in a public key cryptosystem merely to ensure data confidentiality. The strength of the encryption does not depend on the authenticity of the party advertising the public key: a public key cryptosystem encrypts a message just as strongly with a public key that is widely broadcast as with one obtained from a trusted certificate authority. Key distribution in public key systems is a problem only if you want true authentication of the party claiming to be associated with the public key.

Digital Certificates

A digital certificate is a digitally signed message that is typically used to attest to the validity of a public key of an entity. Certificates require a common format and are largely based on the ITU-T X.509 standard today. Figure 1-12 shows an example of a digital certificate format using the X.509 standard.


Figure 1-12. The X.509 Certificate Format

The general format of an X.509 certificate includes the following elements:

  • Version number

  • Serial number of the certificate

  • Signature algorithm information (the algorithm the issuer used to sign the certificate)

  • Issuer of the certificate

  • Validity period (valid from/to dates)

  • Subject of the certificate

  • Public key and public key algorithm information of the subject

  • Digital signature of the issuing authority

Digital certificates are a way to prove the validity of an entity's public key and may well be the future mechanism to provide single login capabilities in today's corporate networks. However, this technology is still in its infancy as far as deployment is concerned. Much of the format of certificates has been defined, but there is still the need to ensure that certificates are valid, manageable, and have consistent semantic interpretation.

Certificate Authorities

The certificate authority (CA) is the trusted third party that vouches for the validity of the certificate. It is up to the CA to enroll certificates, distribute certificates, and finally to remove (revoke) certificates when the information they contain becomes invalid. Figure 1-13 shows how Bob can obtain Alice's public key in a trusted manner using a CA.


Figure 1-13. Obtaining a Digital Certificate Through a Certificate Authority

Assume that Alice has a valid certificate stored in the CA and that Bob has securely obtained the CA's public key. The steps that Bob follows to obtain Alice's public key in a reliable manner are as follows:

  1. Bob requests Alice's digital certificate from the CA.

  2. The CA sends Alice's certificate, which is signed by the CA's private key.

  3. Bob receives the certificate and verifies the CA's signature.

  4. Because Alice's certificate contains her public key, Bob now has a “notarized” version of Alice's public key.
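The four steps above can be sketched as follows. A real CA signs certificates with an asymmetric private key, and Bob verifies the signature with the CA's public key; in this self-contained toy, an HMAC over the certificate fields stands in for that signature, and all names and field values are invented for illustration.

```python
# Toy sketch of certificate issuance and verification. An HMAC stands in
# for the CA's digital signature so the example needs only the standard
# library; a real CA uses an asymmetric signing key. All names are invented.
import hashlib
import hmac
import json

CA_KEY = b"ca-secret"           # stand-in for the CA's signing key

def ca_issue(subject: str, public_key: str) -> dict:
    """The CA binds a subject name to a public key and 'signs' the result."""
    cert = {"subject": subject, "public_key": public_key, "issuer": "ToyCA"}
    body = json.dumps(cert, sort_keys=True).encode()
    cert["signature"] = hmac.new(CA_KEY, body, hashlib.sha256).hexdigest()
    return cert

def verify(cert: dict) -> bool:
    """Bob recomputes the 'signature' over the fields and compares."""
    body = {k: v for k, v in cert.items() if k != "signature"}
    expected = hmac.new(CA_KEY, json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

alice_cert = ca_issue("Alice", "alice-public-key-bytes")
assert verify(alice_cert)                 # Bob checks the CA's signature...
alice_cert["public_key"] = "evil-key"
assert not verify(alice_cert)             # ...and any tampering is detected
```

The point of the sketch is the binding: Bob trusts Alice's public key only because the signature over the whole certificate checks out.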

This scheme relies on the CA's public key being distributed to users in a secure way. Most likely, this occurs using an out-of-band mechanism. There is still much debate over who should maintain CAs on the Internet. Many organizations (including financial institutions, government agencies, and application vendors) have expressed interest in offering certificate services. In all cases, it's a decision based on trust. Some corporations may want to control their own certificate infrastructure, and others may choose to outsource the control to a trusted third party.

The issues still being finalized in the industry include how to perform efficient certificate enrollment, how to revoke certificates, and how to handle cross-certifications in CA hierarchies. These issues are discussed in more detail in Chapter 2 in the section “Public Key Infrastructure and Distribution Models.”

Key Escrow

Key escrow is the notion of putting a confidential secret key or private key in the care of a third party until certain conditions are fulfilled. This, in itself, is not a bad idea because it is easy to forget a private key, or the key may become garbled if the system it is stored on goes berserk. The controversy revolves around which keys should be in escrow and who becomes the trusted third party who has access to confidential keys while still protecting the privacy of the owners of the keys.

By far the most controversial key escrow issue surrounds whether cryptosystems should be developed to have a backdoor for wire-tapping purposes. The U.S. government, for one, would like secret keys and private keys to be made available to law enforcement and government officials for wire-tapping purposes. Many leading security and cryptography experts have found flaws in cryptographic systems that support key recovery. All the current algorithms operate on the premise that the private and secret keys cannot be compromised (unless they are written down or conveyed). Key recovery goes against all these assumptions.

The Business Case

In a corporate environment, many business needs for key escrow exist. It would not seem unreasonable for a corporation to keep in escrow keys used to encrypt and decrypt corporate secrets. The corporation must make a business decision about which kinds of traffic require encryption and which information is critical to be able to retrieve. Typically, the encryption/decryption is performed at the application level; the keys used can be offered to trusted key escrow personnel. In all cases, the business keeps all parts of a key and the cryptosystem private within the business. No external escrow agent is needed.

The Political Angle

Government policy is still being defined for key escrow. A technical solution initially proposed by NIST and the NSA during the Bush Sr. administration was a new tamper-proof encryption chip called the Clipper chip. The algorithm it used contained a superkey—essentially a law enforcement agency field in the key. Each Clipper chip is unique and has a key field tied to the chip's serial number. The FBI, supposedly only with a court-ordered warrant, could use the superkey to open up your message. Matt Blaze, principal research scientist at AT&T Laboratories, and others showed that the Clipper chip is not secure; the Clipper proposals have mostly been cast aside.

Now the government is back to brute-force escrow: You give your private key or keys to the escrow agency. The government has “compromised” by allowing in its proposal that the escrow agency can be a private business that has been “certified” by the U.S. government.

The Clinton administration continued to pursue a policy of key recovery both inside the United States and abroad. An extensive study on the risks of key recovery mechanisms was conducted by a group of leading computer scientists and cryptographers. This report attempted to outline the technical risks, costs, and implications of deploying systems that provide government access to encryption keys. You can find this report at http://www.cdt.org/crypto/risks98/.

You can find updates on the U.S. government's position on key escrow at http://www.cdt.org/crypto/ and http://www.epic.org.

The Human Element

Aside from the political and business problems with government key escrow (who wants to buy a cryptosystem for which you know someone else has the keys?), there is the critical human element to key escrow. Assume that there is a government key escrow system in which all keys are escrowed with a very few “trusted” agents. Further, assume that a large amount of commerce, trade, banking, currency transfer, and so on is performed on these escrowed cryptosystems.

The equivalent of a huge pot of gold is now concentrated in a few, well-known places: the escrow agencies. All you have to do is get a few escrowed keys, tap in to some secure banking or currency transfer sessions, and you can quickly become a very wealthy thief. There is no need to spoof an encrypted session or spoof a wire transaction to put a lot of money in an offshore bank account. In the world of finance and banking, having prior knowledge of significant events coupled with fully legitimate investments or trading moves in the open market can make you extremely wealthy without having to resort to anything more than mere eavesdropping on what are thought to be “secure” channels.

Greed and anger are the issues that most severely weaken a cryptosystem. If a large amount of wealth is tied up in one place (the key escrow system), a foreign government or economic terrorist would conceivably offer a large amount of money to escrow agency employees. In the example of a compromised escrowed key being used to get rich in open markets with insider knowledge, an unscrupulous person could offer an escrow agent a million dollars as well as a percentage of the gains. In this way, the more keys the employee reveals, the more money he makes. Greed can be a major factor in causing the entire escrowed key system to crumble. It is more because of human reasons than technical or legal ones that escrowed encryption is largely not workable.

Summary

This chapter has explored many fundamental security concepts. The intent was to provide you with a precursory understanding of three basic cryptographic functions: symmetric encryption, asymmetric encryption, and one-way hash functions. You also learned how these cryptographic functions can be used to enable security services such as authentication, integrity, and confidentiality.

In many systems today, end-user or device authentication can be established using public key technology, where the public keys are distributed in some secure manner, possibly through the use of digital certificates. Digital signatures are created through the use of hash functions and ensure the integrity of the data within the certificate. Data confidentiality is typically achieved through the use of some secret key algorithm where the Diffie-Hellman exchange is used to derive the shared secret key used between the two communicating parties.

Authentication and authorization issues were discussed. It is important to recognize that different mechanisms provide authentication services with varying degrees of certainty. Choosing the proper authentication technology largely depends on the location of the entities being authenticated and the degree of trust placed in the particular facets of the network.

Also, the issues of key management systems were explored. How to effectively generate and distribute cryptographic keys will probably change over time and will require a periodic assessment. The human factor will always be an issue that necessitates sufficient checks to ensure that keys have not been compromised.

Review Questions

The following questions provide you with an opportunity to test your knowledge of the topics covered in this chapter. You can find the answers to these questions in Appendix E, “Answers to Review Questions.”

1:

Cryptography is the basic building block that enables which of the following?

  1. Authentication

  2. Integrity

  3. Confidentiality

  4. All of the above

2:

True or false: The best security algorithms are designed in secret.

3:

What is a brute-force attack?

4:

Name three fundamental cryptographic functions.

5:

What is the inherent weakness in the ECB chaining mechanism?

6:

Which of the following algorithms is specified as the symmetric key encryption AES standard?

  1. 3DES

  2. RC4

  3. Rijndael

  4. IDEA

7:

What are three challenges with secret key encryption?

8:

What is another term for asymmetric encryption?

9:

Name three common uses of asymmetric encryption algorithms.

10:

True or false: The crucial aspect of asymmetric encryption is that the public key needs to be kept confidential.

11:

Which of the following properties is not suitable for a hash function?

  1. It must be random—or give the appearance of randomness.

  2. It must be unique.

  3. It must be reversible.

  4. It must be consistent.

12:

What is a digital signature?

13:

How do keyed hash functions and digital signatures differ?

14:

A centralized key distribution model relies on what entity to issue keys?

15:

Which algorithm is commonly used to create secret session keys in a distributed manner?

16:

Name five elements commonly found in an X.509 certificate.
