
Domain 3

Cryptography

THE CRYPTOGRAPHY DOMAIN requires security architects to understand cryptographic methodologies and the use of cryptography to protect an organization’s data storage and communications from compromise or misuse. This includes awareness of the threats to an organization’s cryptographic infrastructure. The security architect must understand the importance of choosing, implementing, and monitoring cryptographic products and adoption of corporate cryptographic standards and policy. This may include oversight of digital signatures and PKI implementations and a secure manner of addressing the issues and risks associated with management of cryptographic keys.

TOPICS

Images   Identify Requirements (e.g., confidentiality, integrity, non-repudiation)

Images   Determine Usage (i.e., in transit, at rest)

Images   Identify Cryptographic Design Considerations and Constraints

Images   Vetting of proprietary cryptography

Images   Computational overhead

Images   Useful life

Images   Design testable cryptographic system

Images   Define Key Management Lifecycle (e.g., creation, distribution, escrow, recovery)

Images   Design integrated cryptographic solutions (e.g., Public Key Infrastructure (PKI), API selection, identity system integration)

OBJECTIVES

Key areas of knowledge include:

Images   The application and use of cryptographic solutions

Images   Interoperability of devices

Images   Strength of cryptographic algorithms

Images   Cryptographic methodologies and methods

Images   Addressing key management issues

Images   Public Key Infrastructure

Images   Application-level encryption

Images   Design validation

Images   Defining cryptanalysis methods and threats

Images   Cryptanalytic attacks

Cryptographic Principles

Cryptography provides the bedrock for a multitude of security controls. Its wide variety of applications offers plenty of opportunity for security controls that provide an overall benefit; that same breadth, however, also means more chances for a cryptographic control to become the weakest link in a chain. If cryptography is to be used effectively, the security architect must fully understand the methodology and principles behind it.

Applications of Cryptography

Benefits

While cryptography may not directly benefit the availability of information, the encryption of data is the most straightforward means of protecting its confidentiality. Hash functions such as MD5, SHA-256, and the newer SHA-3 are cryptography’s workhorses for integrity, protecting against unauthorized modification of data1. Public key certificates and digital signatures are but two examples of cryptography providing a means of authentication. This can include user authentication, data authentication, and data origin authentication—which is verification that a message received from a sender also originated from that sender. By binding a public key to its owner using a Public Key Infrastructure (PKI), a non-repudiation service can also be provided2. Non-repudiation protects against either the sender or the receiver of a message denying that the message was sent or received, and can be used to prove to a third party that a particular event did or did not occur.

All the major benefits of cryptography derive from four fundamental goals:

Confidentiality means the secrecy and privacy of information must be protected from unauthorized disclosure or access. Personal information, intellectual property, diplomatic and military communications, and credit card numbers are a few examples of such data. Protection methods can utilize public-key/private-key pairs (asymmetric encryption) or secret-keys (symmetric encryption).

Integrity is concerned with guaranteeing data is not accidentally or maliciously changed. Integrity also relates to ensuring that the methods used for processing information perform with accuracy and completeness. One-way hash functions, while not necessarily the most effective, are the most common means used to ensure integrity.

Authentication is the broad goal of verifying that data is of undisputed origin, and includes verifying the positive identity of users or other entities such as network devices. Passwords, PINs, and tokens can be used for entity authentication, while digital signatures are used to provide data origin authentication.

Non-repudiation involves preventing denial by one of the entities involved in a communication of having participated in all or part of the communication. It is also used to prove to a third party that some kind of event or action did or did not occur. PKI certificates, where a digital signature binds together a public key with an individual’s identity during a valid time period, can be used to provide a measure of cryptographic non-repudiation.
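The integrity goal above can be illustrated with a few lines of Python: a one-way hash produces a completely different digest for even a small change to the data. This is a minimal sketch of the primitive only, not a complete integrity protocol (no key is involved, so it does not by itself authenticate the data's origin).

```python
import hashlib

# A one-way hash for integrity: any change to the data yields a
# different digest, so unauthorized modification is detectable.
original = b"Wire $10,000 to account 12345"
tampered = b"Wire $10,000 to account 99999"

digest_original = hashlib.sha256(original).hexdigest()
digest_tampered = hashlib.sha256(tampered).hexdigest()

print(digest_original == hashlib.sha256(original).hexdigest())  # True: unchanged data verifies
print(digest_original == digest_tampered)                       # False: modification detected
```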

Uses

The need to use cryptography depends in part on the criticality of the data being protected. While financial transaction data, such as credit card information, or personal and privacy-related data could have a strong requirement for confidentiality provided by encryption, inventory data or public reference files in a central data store may have low confidentiality needs. The rationale for spending money on encryption controls depends on the data protection required. At the same time, technological improvements are lowering the costs of hardware and software encryption and making the controls provided by encryption ubiquitous. For example, at-rest encryption for data stored within portable devices and in-flight encryption for remote access VPNs have become common.

Cryptography remains at the heart of many logical information security controls. Cryptography is used in security controls that protect data during transmission over a network (data in flight), data residing in a storage medium (data at rest), data being processed within an application, user authentication, and device authentication 3.

Cryptography is not limited to uses in access control and telecommunications security. Business continuity planning lends itself to cryptographic uses for protecting data transferred to a hot site. A recovery site service provider may need to be one of the trusted parties having access to encryption keys for storage data, for instance.

Cryptography can depend on physical security as well. One example is the physical security of the master key in a media encryption system. Storage of the key may leverage physical security by splitting it into key shares (known as split knowledge; see section titled “Key Management”), with portions of the encryption key stored on separate smart cards in different safes, thereby limiting physical access to the master encryption key. To expound on this example further, the same key can reside within a hardware component of a cryptographic system, requiring specialized physical protection of the computing device itself. The US National Institute of Standards and Technology (NIST) FIPS 140-2 defines standards for both hardware and software components of cryptographic modules. The specialized physical protection of cryptographic modules required by FIPS 140-2 may include tamper-proof enclosures and means of destroying or zeroizing keys upon physical opening 4.

In addition, many ISO standards provide guidance for the security architect in this area. ISO/IEC 18033-2:2006 specifies encryption systems (ciphers) for the purpose of data confidentiality. It specifies the functional interface of such a scheme and, in addition, specifies a number of particular schemes that appear to be secure against chosen ciphertext attack; the different schemes offer different trade-offs between security properties and efficiency 5. ISO/IEC 11770-1:2010 defines a general model of key management that is independent of the use of any particular cryptographic algorithm 6. It addresses both the automated and manual aspects of key management, including outlines of data elements and sequences of operations that are used to obtain key management services 7. Examples of the use of key management mechanisms are included in the ISO 11568 series, which specifies the principles for the management of keys used in cryptosystems implemented within the retail-banking environment 8. If non-repudiation is required for key management, ISO/IEC 13888 is applicable 9. The fundamental problem is to establish keying material whose origin, integrity, timeliness, and (in the case of secret keys) confidentiality can be guaranteed to both direct and indirect users. Key management includes functions such as the generation, storage, distribution, deletion, and archiving of keying material in accordance with a security policy (ISO 7498-2) 10.

The core areas that benefit from cryptography deal with keeping data confidential, maintaining its integrity, guaranteeing authenticity not only of data but of those accessing the data as well as those from whom the data originates, and of ensuring non-repudiation for data originators.

Message Encryption

Secure communication of messages is a traditional use of cryptography. Military communications have employed secrecy techniques since at least the time of the Greco-Persian wars, when messages were hidden within wax tablets. Commercial messaging systems transmitting across untrusted networks also require encryption for privacy of messages. Corporate e-mail traffic may contain various types of sensitive information, including financial data, personal information, intellectual property, or trade secrets. In addition to needing confidentiality for messages, e-mail can require authentication of the message’s origin, integrity of the message content, and non-repudiation of the message being sent or received.

Messaging security standards include:

Images   Secure Multi-Purpose Internet Mail Extensions (S/MIME): This extension of the MIME standards that specify e-mail formatting and encapsulation adds encryption of message content. S/MIME also uses a hashing algorithm for message integrity, public key certificates for message authentication, and digital signatures to provide non-repudiation of origin11.

Images   Privacy-Enhanced Mail (PEM): An early Internet Engineering Task Force (IETF)-proposed standard for securing e-mail using public-key cryptography with trusted distribution of public keys via PKI, PEM was never widely used for securing e-mail12.

Images   Only PEM’s definition of header field format (PEM format) has found use as a common means of representing digital certificates in ASCII form.

Images   Pretty Good Privacy (PGP): Originally developed by Phil Zimmermann in 1991, PGP is a cryptosystem utilizing symmetric key, asymmetric key, and message digest algorithms. When applied to securing e-mail, PGP provides message authentication by binding a public key to an e-mail address where the public key is distributed to a community of users who trust each other, commonly known as a web of trust. PGP with e-mail also provides message encryption, uses a hashing algorithm for message integrity, and digital signatures for non-repudiation13.

Secure IP Communication

TCP/IP is a standard communication protocol for information systems today. Various cryptographic protections are provided for data traveling over IP networks by the IPSec suite of open standards developed by the Internet Engineering Task Force (IETF)14. The IPSec set of standard protocols provides cryptographic security services at Layer 3, the Network layer of the OSI model.

IPSec includes two protocols: Authentication Header (AH) and Encapsulating Security Payload (ESP). The cryptographic benefits they provide are:

Images   AH: Authentication Header provides data origin authentication and data integrity but does not provide confidentiality for the IP payload and header that it protects.

Images   ESP: Encapsulating Security Payload provides data origin authentication and data integrity, and also offers confidentiality for the IP payload it protects.

IPSec operates in one of two modes:

Images   Transport mode: In transport mode, only the IP payload is protected by the AH or ESP protections. Transport mode is used for end-to-end security between two systems, such as between a client and a server.

Images   Tunnel mode: In tunnel mode, both the IP payload and the header are protected, and a combination of AH and ESP protections can be used. Tunnel mode sets up a virtual tunnel where multiple intermediaries may exist and is used for protecting traffic between hosts and network devices such as gateways or firewalls, routers, and VPN appliances.

Secure TCP/IP communication is not limited to IPSec. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are additional cryptographic protocols that provide communications security for TCP/IP15. TLS/SSL provides confidentiality, integrity, and authentication for securing data traveling over IP networks. Authentication in TLS/SSL is most commonly server authentication, where an HTTP server proves to a client such as a browser that it is authentic, but TLS/SSL can also provide mutual or server-to-server authentication. TLS/SSL is often used to provide secure HTTP (HTTPS), and is also used to secure data communicated over other application-level protocols, such as File Transfer Protocol (FTP), Lightweight Directory Access Protocol (LDAP), and Simple Mail Transfer Protocol (SMTP).
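Python's standard library exposes TLS directly, which makes the client side of the handshake easy to sketch. The snippet below shows a default-hardened context (certificate validation and hostname checking enabled) and a small illustrative helper, `fetch_https_banner`, which is an assumed name for this example and is not invoked here because it requires network access.

```python
import socket
import ssl

# A default TLS client context validates the server's certificate chain
# and checks the hostname, providing the server-authentication,
# confidentiality, and integrity guarantees described above.
context = ssl.create_default_context()

def fetch_https_banner(host: str, port: int = 443) -> str:
    """Open a TLS-protected TCP connection and return the negotiated version."""
    with socket.create_connection((host, port), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            return tls_sock.version()  # e.g., 'TLSv1.3'

# The context refuses unauthenticated servers by default:
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

Downgrading either setting (for example, `verify_mode = ssl.CERT_NONE`) removes the authentication benefit and leaves only an encrypted channel to an unverified peer.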

Remote Access

Cryptographic controls are used when remote access is necessary. Examples include integrity protection to prevent man-in-the-middle spoofing and hijacking attacks, and vendor remote network access to a customer’s data center, where authentication and confidentiality of the network access are important. Likewise, remote access by telecommuting employees commonly uses virtual private networks (VPNs), which provide encryption and user authentication. Often, remote access means crossing boundaries where untrusted networks are present. In such cases, the need for confidentiality increases.

A VPN provides confidentiality by encrypting IP traffic and offering authentication between VPN endpoints. Because VPNs are often based on IPSec or SSL, the security benefits of the underlying protocols are provided. VPNs are implemented in the following architectures:

Images   Remote Access VPN: A remote access VPN provides security for remote users connecting to a central location via IP.

Images   Site-to-Site VPN: A site-to-site VPN provides communications security for separate locations in an organization that can connect over IP.

Images   Extranet VPN: An extranet or trading partner VPN provides an organization with communications security when one or more separate organizations are connecting to that organization over IP.

Point-to-Point Protocol (PPP) is another means of establishing remote connectivity. PPP, operating at the data link layer of the OSI model, was designed to be used with network layer protocols such as IP or IPX. By default, PPP does not provide any security or rely on any cryptographic controls. However, PPP does include an optional authentication phase and an optional encryption feature, PPP Encryption Control Protocol (ECP) 16.

A common protocol for remote access that involves cryptographic controls is Secure Shell (SSH), which operates at the application layer of the OSI model. SSH can be used in a client-server model for remote administration of servers, and in combination with other protocols such as Secure File Transfer Protocol (SFTP) or Secure Copy (SCP). SSH encrypts the data it transfers, and provides authentication using password- or public-key based methods. SSH also uses a keyed hash for integrity protection.

Secure Wireless Communication

Wireless networks are commonly used for enhancing user mobility and extending or even replacing wired IP networks. Their transmission is easily intercepted, so confidentiality is a must. Wireless transmissions can be more susceptible to man-in-the-middle attack than wired communication, so authentication is very important.

The most commonly used family of standards for Wireless Local Area Networks (WLANs) is the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family17. 802.11 originally relied on the Wired Equivalent Privacy (WEP) security method to provide confidentiality and integrity. WEP has been proven insecure because of the way it implements its RC4 stream cipher algorithm; thus, WLANs using WEP are often vulnerable to eavesdropping and unauthorized access.

As a result, the IEEE introduced a range of new security features designed to overcome the shortcomings of WEP in the IEEE 802.11i amendment. 802.11i introduces the concept of a Robust Security Network (RSN), an element of the protocol that allows a variety of encryption algorithms and techniques to be used for providing confidentiality and authentication18. Before 802.11i was ratified, the Wi-Fi Alliance, a global nonprofit industry association, created a protocol and certification program for wireless network components known as Wi-Fi Protected Access (WPA). WPA, based on a draft of IEEE 802.11i, implements the RC4 stream cipher more securely for more effective confidentiality and authentication. The biggest difference between WPA and the draft is that WPA does not require support for the strong Advanced Encryption Standard (AES) encryption algorithm. WPA thus accommodates many existing IEEE 802.11 hardware components that cannot support the computationally intensive AES encryption.

At the same time the IEEE 802.11i amendment was ratified, the Wi-Fi Alliance introduced WPA2, its term for interoperable equipment that is capable of supporting IEEE 802.11i requirements. WPA2 certification is based on the mandatory elements of the IEEE 802.11i standard, but there are some differences. WPA2 extends its certification program to include interoperability with a set of common Extensible Authentication Protocol (EAP) methods. For example, WPA2 adds EAP-TLS, which is not a component of the 802.11i standard. WPA2 also excludes support for ad hoc networks, an 802.11i feature that allows peer-to-peer network device communication.

A short-range wireless protocol commonly used by many types of business and consumer devices such as mobile phones, smart phones, personal computer peripherals, cameras, and video game consoles is Bluetooth. The Bluetooth specification was developed, and is managed, by the Bluetooth Special Interest Group, a privately held trade association19. By creating wireless Personal Area Networks (PANs), Bluetooth enables ad hoc communication between multiple wireless devices. Bluetooth optionally encrypts, but does not provide integrity protection for the transmitted data. It is possible to easily modify a transmitted Bluetooth packet without being detected because only a simple cyclic redundancy check (CRC) is appended to each packet, and no message authentication code is used. Another security weakness with Bluetooth involves device pairing, the initial exchange of keying material that occurs when two Bluetooth-enabled devices agree to communicate with one another. In version 2.0 and earlier of the Bluetooth specification, pairing is performed over a nonencrypted channel, allowing a passive eavesdropper to compute the link key used for encryption. Version 2.1 introduced the use of Elliptic Curve Diffie–Hellman (ECDH) public key cryptography, which can be utilized by Bluetooth device developers for protection against a passive eavesdropping attack. The Bluetooth specification defines its own stream cipher called E0. Several weaknesses have been identified in Bluetooth’s E0 stream cipher, which is not a Federal Information Processing Standards (FIPS)-approved algorithm and can be considered nonstandard [SP800-121]20 21.
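The weakness of relying on a CRC rather than a message authentication code, as described above for Bluetooth, can be demonstrated in a few lines. A CRC is a public function, so an attacker who alters a packet simply recomputes it and the receiver's check still passes; a keyed MAC (HMAC-SHA256 here, chosen for illustration rather than taken from the Bluetooth specification) cannot be forged without the key. The key value below is illustrative.

```python
import hashlib
import hmac
import zlib

key = b"link key known only to the paired devices"  # illustrative value
packet = b"unlock front door"
forged = b"unlock back door!"

# CRC: the attacker tampers with the packet and recomputes the checksum,
# so the receiver's CRC verification still succeeds.
forged_crc = zlib.crc32(forged)
receiver_crc_ok = (zlib.crc32(forged) == forged_crc)

# MAC: without the shared key, the attacker cannot produce a valid tag
# for the forged packet, so the receiver's check fails.
forged_tag = hmac.new(b"attacker guess", forged, hashlib.sha256).digest()
receiver_mac_ok = hmac.compare_digest(
    forged_tag, hmac.new(key, forged, hashlib.sha256).digest())

print(receiver_crc_ok)  # True: CRC accepts the tampered packet
print(receiver_mac_ok)  # False: MAC rejects it
```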

Version 3.0 + High Speed (HS) of the Bluetooth Core Specification was adopted by the Bluetooth SIG on 21 April 2009. The Bluetooth SIG completed the Bluetooth Core Specification version 4.0, which was adopted as of 30 June 2010. It includes the Classic Bluetooth, Bluetooth high speed, and Bluetooth low energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, while Classic Bluetooth consists of legacy Bluetooth protocols. General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES encryption.

The Security Manager (SM) is responsible for device pairing and key distribution. The Security Manager Protocol (SMP) defines how a device’s SM communicates with its counterpart on the other device. The SM also provides additional cryptographic functions that may be used by other components of the stack.

Other Types of Secure Communication

Secure communication is not limited to IP networks. Plain Old Telephone Service (POTS), including voice as well as data, needs encryption for ensuring confidentiality. Encrypted telephones are no longer the domain of military communications. Portable/wireless telephone headsets that include encrypted transmission and reception are available in office supply stores for commercial use. Sensitive data, including personally identifiable information, trade secrets, and intellectual property, are routinely shared over telephone networks with limited protections.

While Storage Area Networks (SANs) utilizing protocols such as FICON and Fibre Channel (FC) are thought to be less exposed and thus need less protection than the common TCP/IP networks, cryptographic controls are still necessary. A service provider hosting multiple clients in a data center may use encryption for privacy of data within a SAN. This can be done using Fibre Channel Security Protocol (FC-SP), a security framework that includes protocols to enhance FC security in several areas, including authentication of Fibre Channel devices, cryptographically secure key exchange, and cryptographically secure communication between FC devices22.

Depending on the criticality of the data, radio frequency communications of all types can require some measure of protection. Communication satellites, for instance, will require encryption, which may be in the form of hardware modules for securing telemetry, tracking, and control. Radio Frequency Identification (RFID) sensors and tags used for tracking and identification purposes can benefit from short transmission encryption to guarantee that the information they deliver is confidential, authentic, and unchanged.

Identification and Authentication

Cryptography is used for identification as well as user, device, and data origin authentication. An early use of cryptographic identification for distinguishing friendly aircraft was developed during WWII with the Identification, Friend or Foe (IFF) system using coded radar signals to trigger a transponder on the aircraft. Modern military IFF transponders encrypt challenge and response messages, and include the use of key codes to prevent unauthorized use.

Similar to IFF, RFID relies on use of a transponder, or an RFID tag, to identify physical assets such as warehouse inventory when queried with a reader. RFID tags are finding their way into a wide range of applications such as libraries, transportation toll collection, and passports. Use of cryptography with RFID may become a necessity for privacy or to ensure authenticity.

Securely identifying physical items can prevent counterfeiting of bank notes, pharmaceuticals, computer parts, and a host of other products. While use of bar codes, holographic labels, and watermarks or signets is common, these methods often involve use of a simple code versus a cryptographic key and algorithm. Newer methods of applying cryptographic means include use of digital certificates, embedded encryption processing chips, and Hardware Security Modules (HSMs) to securely identify components. One such application could involve use of a cryptographic component identification system to protect automotive components from theft, counterfeiting, or manipulation.

Securely identifying persons is necessary for user authentication and for access to information resources and processing systems. Authentication systems based on a user entering a password or a PIN are widely deployed and provide a low-cost but easily compromised means of authenticating users. A common method for user authentication involves comparing the results of a one-way hash operation performed on the password a user enters with the hash value stored in the authentication system.
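The hash-comparison scheme described above can be sketched in Python. This is a minimal illustration with assumed function names (`enroll`, `verify`); it also adds a per-user salt and the slow PBKDF2 derivation that modern systems layer on top of the basic scheme to resist precomputed-table and brute-force attacks.

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store only a salt and a derived hash -- never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash of the entered password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("wrong guess", salt, stored))                   # False
```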

User authentication can also be done with a software token. A software token can involve a user presenting a secret key during authentication, or may involve a system based on public key cryptography. Symmetric encryption is used in Kerberos, the MIT-developed authentication protocol commonly used for providing users single-sign on access to computing resources23.

Using cryptographic hardware tokens for user authentication can provide increased security at a higher cost. Hardware tokens combined with passwords are commonly used for providing two-factor authentication. Hardware tokens may be able to generate and store private keys, support use of one-time passwords, and often contain cryptographic processing capabilities along with tamper resistance. Examples of hardware-token-based technologies include smart cards, Universal Serial Bus (USB) tokens, and special-purpose interfaces such as the NSA-developed Crypto Ignition Key (CIK) used in the STU-III family of secure telephones [CIK].

Authentication protocols used by Point-to-Point Protocol (PPP) include Password Authentication Protocol (PAP) and Challenge-Handshake Authentication Protocol (CHAP). PAP is a weak authentication method, transmitting a cleartext password and static identifier that does not protect against replay attack. CHAP transmits a hash that is computed based on a random challenge value and shared secret, providing replay protection and a stronger level of authentication. Another protocol originally developed to provide authentication services for PPP and commonly found in wireless network communication is Extensible Authentication Protocol (EAP). EAP is an authentication framework that supports a number of authentication mechanisms such as pre-shared keys, digital certificates, Kerberos, and others24. These authentication mechanisms are implemented in a number of ways called EAP methods, for example, EAP-MD5 and EAP-TLS.
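The CHAP computation mentioned above is simple enough to show directly: per RFC 1994, the response is the MD5 hash of the message identifier, the shared secret, and the random challenge concatenated together. The secret never crosses the wire, and a fresh challenge on each attempt defeats replay. The secret value below is illustrative.

```python
import hashlib
import os

secret = b"shared-secret"  # configured on both peers (illustrative value)

def chap_response(identifier: int, challenge: bytes, secret: bytes) -> bytes:
    """RFC 1994 CHAP response: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)                    # authenticator sends this
response = chap_response(1, challenge, secret)  # peer replies with this

# The authenticator recomputes the hash with its own copy of the secret:
print(response == chap_response(1, challenge, secret))           # True
print(response == chap_response(1, challenge, b"wrong secret"))  # False
```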

Storage Encryption

Storage encryption is typically known as encryption at rest. It includes file-level, file-system-level, and full-disk encryption. Storage encryption provides confidentiality of data, but it also requires authentication: a SAN may store its data in encrypted form, but the authorized hosts and devices that can access the data must be identified for the encryption to be effective. Cryptographic controls also provide integrity for storage media; for instance, Content Addressable Storage (CAS) provides file integrity via cryptographic methods.

Storage media encryption can be an excellent means of ensuring that removable media is protected during transit. Portable tape media, USB devices or “thumb drives,” and notebook computers must be encrypted if they contain data that is not public. While secure data erasure remains the primary means of destroying data on disk drives that are decommissioned or that fail, disk encryption provides an additional degree of protection in those situations.

Storage encryption relies heavily on proper encryption key management. Magnetic tape media must be encrypted when the criticality of the data warrants it, and especially when the tape is physically moved, such as via third-party carriers. Key management comes into play when encrypted tapes must be read by a third party, which requires access to the symmetric key that was used to encrypt the data. It should also be noted that using a unique key for each tape volume provides greater protection than using a common key for a set of tapes, should an encryption key be compromised. Solutions being developed for managing keys for disk and tape encryption should take into account the OASIS Key Management Interoperability Protocol (KMIP)25. KMIP is an architecture specification for managing keys that defines a low-level protocol that can be used to request and deliver keys between any key manager and any encryption system.
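One common design for the "unique key per tape volume" advice above is to derive each volume key from a master key and the volume's identifier with a keyed hash, so the key manager stores only the master key. This is an HKDF-style sketch under that assumption, not the KMIP protocol itself; the label string and function name are illustrative.

```python
import hashlib
import hmac
import os

master_key = os.urandom(32)  # held in a key manager or HSM, never on the tape

def volume_key(master: bytes, volume_id: str) -> bytes:
    """Derive a per-volume key from the master key and the volume identifier."""
    return hmac.new(master, b"tape-volume:" + volume_id.encode(),
                    hashlib.sha256).digest()

k1 = volume_key(master_key, "VOL000123")
k2 = volume_key(master_key, "VOL000124")

print(k1 != k2)                                   # True: compromise of one key does not expose the others
print(k1 == volume_key(master_key, "VOL000123"))  # True: reproducible whenever the volume is read
```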

Other standards produced by the IEEE Standards Association are applicable to disk and tape encryption products. The approved standard IEEE P1619 addresses data storage on disk drives, and the approved standard IEEE P1619.1 is for data encryption on tape drives. An additional standard in progress is P1619.2 for encrypting whole disk sectors [P1619]26.

While tape and disk encryption may be provided by software methods, appliances that perform encryption at hardware speed offer better performance, often with increased cost. Such specialized encryption appliances may continue to have a place; however, the trend is for tape, disk and network devices to provide the encryption functionality within the device itself.

Electronic Commerce (E-Commerce)

E-commerce consists of two or more parties using information technology infrastructures to execute financial transactions, and often involves the buying and selling of goods or services over networks such as the World Wide Web. Examples include consumers accessing online services over the World Wide Web or trading partners processing orders via an extranet. E-commerce includes the integration of Web-based IT applications in activities that do not directly involve buying and selling, such as in advertising, sharing production capacity information, or servicing warranties.

E-commerce business models are normally defined as:

Images   Business to Business (B2B)

Images   Business to Consumer (B2C)

Images   Consumer to Consumer (C2C)

A common infrastructure supporting e-commerce includes the following elements:

Images   Client: may be a web browser or customer’s back-office network.

Images   Front-end systems: may consist of one or more Web servers connected to the Internet and to back-end systems.

Images   Back-end systems: may include application servers and databases necessary for supplying information to front-end systems (such as product information) and for receiving data from them (such as payment information).

For transactions to occur, there must be a level of trust between the trading parties and a level of assurance in the security of the transacting environment. The security requirements of the expansive and technologically entrenched use of e-commerce are supported by all the basic goals of cryptography: confidentiality, integrity, authentication, and non-repudiation. Cryptography also supports a number of detailed security requirements of e-commerce, including:

Images   Auditing: Accountability in financial transactions depends on secure audit logging records, and cryptographic methods such as hash functions and digital signing can ensure that records are not modified.

Images   Authorization: E-commerce depends on being able to authorize transactions based on pre-associated policies for a given user or other entity that is successfully authenticated. The access control mechanisms involved in authorization may involve cryptographic components such as digital certificates.

Images   Privacy: Web-based transactions may include personal information. If this information is to be stored, cryptography can provide access control and secure storage.
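The auditing requirement above, that hash functions can ensure records are not modified, is often met with a hash chain: each record's digest covers the previous record's digest, so altering any earlier entry invalidates every later one. The sketch below uses illustrative function names; a production system would additionally sign the chain head with a digital signature.

```python
import hashlib
import json

def append_record(chain: list, record: dict) -> None:
    """Append a record whose digest covers the previous record's digest."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    chain.append({
        "record": record,
        "digest": hashlib.sha256((prev + payload).encode()).hexdigest(),
    })

def chain_is_valid(chain: list) -> bool:
    """Recompute every digest; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        if entry["digest"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = entry["digest"]
    return True

log = []
append_record(log, {"user": "alice", "action": "transfer", "amount": 100})
append_record(log, {"user": "bob", "action": "login"})
print(chain_is_valid(log))              # True
log[0]["record"]["amount"] = 1_000_000  # tamper with an earlier record
print(chain_is_valid(log))              # False
```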

B2B e-commerce still widely uses Electronic Data Interchange (EDI), the decades-old set of standards for exchanging data between trading partners27. EDI transmissions often occur using a value-added network (VAN), which acts as a gateway and clearinghouse for supporting the transmission. EDI can also be transmitted by a variety of methods including modems, FTP, e-mail, and HTTP. EDI transmission methods must include appropriate security methods, such as encryption for confidentiality or digital signing for authentication. One specification for protecting EDI transmitted over the Internet is Applicability Statement 2 (AS2), found in RFC 4130 28. AS2 specifies use of existing security methods including Secure/Multipurpose Internet Mail Extensions (S/MIME), Cryptographic Message Syntax (CMS), and cryptographic hash algorithms in order to provide confidentiality, data authentication, and non-repudiation for EDI.

B2B, B2C, and C2C e-commerce often require Web Services Security (WS-Security) as part of the server-to-server protection mechanism involved in IP communications between front-end and back-end systems. WS-Security is an OASIS standard that builds a security layer to the Simple Object Access Protocol (SOAP). SOAP is used for exchanging XML-based messages over HTTP. WS-Security allows SOAP messages to be signed and encrypted, and adds Kerberos tickets or X.509 certificates as tokens for authentication [WS-Security].

Software Code Signing

While WS-Security uses digital signatures to ensure that an XML-based message is not altered during server-to-server transactions, client browser to Web server based transactions may require downloading a piece of code such as a Java applet, browser plug-in, or Microsoft ActiveX control. To ensure integrity of the code, digital certificates and cryptographic hash functions may be used, such as with the Microsoft Authenticode protocol.

One-way hash functions such as MD5 or SHA-1 are also commonly used to ensure integrity when software such as source code or executables is distributed. While a hash function alone can provide integrity, it cannot guarantee the software's authenticity unless the recipient knows the hash value is the same one supplied by the software provider. For this reason, the software's authenticity is often protected by publishing the hash value separately from the software itself.

Interoperability

Cryptographic interoperability means that the suite of cryptographic algorithms in use conforms to industry and governmental standards, so that products from different vendors and organizations can work together.

One example of a cryptographic interoperability objective is the United States National Security Agency (NSA) Suite B cryptography [NSA Suite B Cryptography] 29. NSA Suite B is a subset of cryptographic algorithms approved by NIST including those for hashing, digital signatures, and key exchange.

Suite B includes the following:

Images   Encryption: Advanced Encryption Standard (AES)—FIPS 197 (with key sizes of 128 and 256 bits) 30

Images   Digital Signature: Elliptic Curve Digital Signature Algorithm (ECDSA) - FIPS PUB 186-3 (using the curves with 256 and 384-bit prime moduli) 31

Images   Key Exchange: The Ephemeral Unified Model and the One-Pass Diffie Hellman (referred to as ECDH) - NIST Special Publication 800-56A (using the curves with 256 and 384-bit prime moduli) 32

Images   Hashing: Secure Hash Algorithm (SHA) - FIPS PUB 180-4 (using SHA-256 and SHA-384) 33

The goals of Suite B are to provide a common set of cryptographic algorithms that the commercial industry can use for creating products that are compatible in the United States as well as internationally34.

Methods of Cryptography

A cryptosystem contains the algorithm used as well as the key, and can include the plaintext and ciphertext. Because the algorithm or cipher is a mathematical function that produces a predictable result, using a key provides the ability to control the algorithm and limit predictability of the ciphertext. These elements are reflected in Figure 3.1.

A simple way to represent an encryption function is the following, where ciphertext “C” results from message “M” being encrypted by an algorithm with key k, denoted by “Ek”:

Ek(M) = C

Symmetric Cryptosystems

Suppose the same key is used to decrypt the ciphertext. In the following decryption function, an algorithm together with the same key in our previous function, denoted by Dk, produces the original message M when the previous ciphertext is decrypted:

Dk(C) = M

These are the elements of a symmetric cryptosystem, the primary element being use of the same secret key (see Figure 3.2).
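To make the notation concrete, the following Python sketch uses a toy XOR cipher (in no way a secure algorithm) to show the same secret key serving as both Ek and Dk:

```python
# Toy illustration of a symmetric cryptosystem: the same secret key
# both encrypts and decrypts. XOR is used only to make the property
# visible; it is NOT a secure cipher.

def ek(key: bytes, message: bytes) -> bytes:
    """Encryption function Ek(M) = C."""
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(message))

def dk(key: bytes, ciphertext: bytes) -> bytes:
    """Decryption function Dk(C) = M -- same key, same operation."""
    return ek(key, ciphertext)  # XOR is its own inverse

key = b"secret-key"
plaintext = b"Hello this is Bob!"
ciphertext = ek(key, plaintext)

assert ciphertext != plaintext
assert dk(key, ciphertext) == plaintext   # the same key recovers M
```

Anyone holding the key can run the decryption function, which is why protecting the secret key is paramount in a symmetric cryptosystem.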

Images

Figure 3.1 - Elements of a cryptosystem

In symmetric cryptosystems, the secret key used for decryption by the message recipient was also used for encryption by the sender. Confidentiality of the secret key becomes paramount, because knowledge of the key allows an unintended individual the ability to see the message. With the secret key in hand, all the individual needs is the ciphertext and cipher to read the message. Thus, transport and protection of the secret key are important factors to consider in architectures using symmetric cryptosystems.

In a symmetric cryptosystem, management of keys can also become a problem. When a sender wishes to communicate with individual secrecy to multiple receivers, a symmetric cryptosystem requires the sender to distribute unique keys to each receiver. If the receiving parties share the same secret key, then the receiving parties would be able to access each other’s messages from the sender, which the sender does not want. In such a case, where confidentiality is required between the sender and each receiver, the number of secret keys required becomes larger than the number of individuals. This is shown in Figure 3.3, where four individuals require six keys.

Images

Figure 3.2 - Elements of a symmetric cryptosystem.

Images

Figure 3.3 - Management of secret keys problem

The number of keys is based on the number of communication channels, and grows rapidly as the group grows. It is possible to determine the number of secret keys required for a given number of individual communication channels. In order to ensure secure communication between everyone in a group of n members, the number of keys is given by the following:

Keys = [n × (n−1)]/2

Thus, while a group of 2 requires 1 secret key, a group of 100 requires 4950 keys.
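The formula is straightforward to verify in a few lines of Python; the function name is illustrative:

```python
def secret_keys_needed(n: int) -> int:
    """Pairwise secret keys for a group of n members: n*(n-1)/2 channels."""
    return n * (n - 1) // 2

print(secret_keys_needed(2))    # a group of 2 requires 1 key
print(secret_keys_needed(4))    # the four individuals of Figure 3.3 require 6
print(secret_keys_needed(100))  # a group of 100 requires 4950
```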

While taking into account the problems inherent in the management and distribution of secret keys, developers of cryptographic solutions should also be aware of the performance characteristics of symmetric algorithms. Symmetric key algorithms perform faster than asymmetric key algorithms. A few common examples of symmetric algorithms are the following:

Images   Advanced Encryption Standard (AES)

Images   Blowfish

Images   Data Encryption Standard (DES)

Images   IDEA

Images   RC2, RC4, RC5, and RC6

Images   Triple-DES (3DES)

Symmetric algorithms fall into two categories: block ciphers and stream ciphers. In block ciphers, plaintext is encrypted using the secret key in blocks of a certain size, for example, 128-bit block size. In stream ciphers, plaintext is encrypted one bit, byte, or word at a time using a rotating stream of bits from the key.

Block Cipher Modes

Symmetric key algorithms that operate as block ciphers are used in one or more modes of operation. Each block cipher mode provides a different level of security, efficiency, and fault tolerance, and some modes provide a specific protection benefit such as confidentiality or authentication.

Block ciphers operate on blocks of plaintext of a certain size (often 64 or 128 bits) to produce ciphertext in blocks of the same size. The block size affects security (larger is better) at the cost of increased complexity. Secret key size also affects security, as larger keys enlarge the keyspace an attacker must search. Most block cipher modes also require an Initialization Vector (IV), a block of bits added to ensure that identical plaintext messages encrypt to different ciphertext messages.

There are several common block cipher modes of operation [Modes]. The following offer various degrees of security, a range of performance characteristics, and different levels of implementation complexity:

Images   Electronic Code Book (ECB) Mode: The least complex mode; each block is operated on independently, and an IV is not used. Because identical plaintext blocks result in identical ciphertext, this mode is not useful for providing message confidentiality. ECB may be useful for short transmissions such as key exchange. ECB is commonly, and erroneously, implemented by vendors for bulk data encryption. This contradicts NIST guidance and puts customer data at grave risk.

Images   Cipher Block Chaining (CBC) Mode: Adds an IV and uses a chaining method such that results of the encryption of previous blocks are fed back into the encryption of the current block. This makes CBC useful for message confidentiality.

Images   Cipher Feedback (CFB), Output Feedback (OFB), and Counter (CTR) Mode: These modes are capable of producing unique ciphertext given identical plaintext blocks, and are useful for message confidentiality. Because these modes employ a block cipher as a keystream generator, they can operate as a stream cipher. This may be desirable in applications that require low latency between the arrival of plaintext and the output of the corresponding ciphertext.

The previous modes do not provide integrity protection; thus, an attacker may be able to undetectably modify the data stream. Additional security benefits are provided by the following block cipher modes:

Images   Cipher-Based Message Authentication Code (CMAC) Mode: This mode provides data integrity and data origin authentication with respect to the original message source, allowing a block cipher to operate as a Message Authentication Code (MAC). The CMAC algorithm addresses security deficiencies found in the Cipher Block Chaining MAC algorithm (CBC-MAC), which has been shown to be insecure when using messages of varying lengths such as the type found in typical IP datagrams. The CMAC algorithm thus offers an improved means of using a block cipher as a MAC.

Images   Counter with Cipher Block Chaining-Message Authentication Code (CCM) Mode: This mode can provide assurance of both confidentiality and authenticity of data by combining a counter mode with CBC-MAC.

Images   Galois/Counter Mode (GCM): This mode also can provide assurance of both confidentiality and authenticity of data, and combines the counter mode with a universal hash function. GCM is suitable for implementation in hardware for high-throughput applications.
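The contrast between ECB and CBC described above can be sketched with a toy 4-byte "block cipher" (a simple XOR with the key, standing in for a real cipher such as AES and in no way secure):

```python
import os

BLOCK = 4  # toy 4-byte block size; real ciphers use 8 or 16 bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher; XOR with the key is NOT secure.
    return xor(block, key)

def ecb_encrypt(key, plaintext):
    # ECB: every block is encrypted independently, no IV
    return b"".join(toy_block_encrypt(key, plaintext[i:i + BLOCK])
                    for i in range(0, len(plaintext), BLOCK))

def cbc_encrypt(key, iv, plaintext):
    # CBC: each plaintext block is XORed with the previous ciphertext
    # block (or the IV) before encryption -- the chaining step
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        prev = toy_block_encrypt(key, xor(plaintext[i:i + BLOCK], prev))
        out.append(prev)
    return b"".join(out)

key = b"\x13\x37\xc0\xde"
pt = b"AAAA" * 2          # two identical plaintext blocks
ecb = ecb_encrypt(key, pt)
cbc = cbc_encrypt(key, os.urandom(BLOCK), pt)

assert ecb[:BLOCK] == ecb[BLOCK:]   # ECB: identical blocks leak through
assert cbc[:BLOCK] != cbc[BLOCK:]   # CBC: chaining hides the repetition
```

The final two assertions capture why ECB is unsuitable for bulk data: repeated plaintext blocks are visible in the ciphertext, whereas chaining with an IV removes that pattern.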

The following are some of the block ciphers currently in use, or that have been popular at one time:

Images   Advanced Encryption Standard (AES) [FIPS197]: Adopted as a standard by the United States in NIST FIPS PUB 197, AES is one of the most popular block ciphers. AES supports a fixed block size of 128 bits and a key size of 128, 192, or 256 bits 35.

Images   CAST [RFC2144]: The popular CAST family of block ciphers uses a 64-bit block size and key sizes of between 40 and 128 bits. CAST-128 is a strong cryptosystem suitable for general-purpose use 36.

Images   Cellular Message Encryption Algorithm (CMEA) [CMEA]: Designed for encrypting the control channel for mobile phones in the United States, CMEA is a deeply flawed encryption algorithm. The simple CMEA block cipher employs a block size of 16–64 bits and a 64-bit key 37.

Images   Data Encryption Standard (DES) [SCHNEIER]: This once highly popular 64-bit block cipher, derived from Lucifer and modified to use a 56-bit key size, was called the Data Encryption Algorithm (DEA) in FIPS 46-1, adopted in 1977. DEA is also defined as ANSI Standard X3.92. With the availability of increasing computing power, DES with its 56-bit key size was found to be insufficient at protecting against brute force attack. DES is more commonly implemented as Triple DES (3DES or TDES, also known as TDEA), offering a simple way to enlarge the key space without throwing away the algorithm. 3DES applies the DES algorithm three times using distinct keys, allowing a key size of up to 168 bits. 3DES is specified in ANSI X9.52 and replaces DES as a FIPS-approved algorithm. 3DES is gradually being replaced by AES as an encryption standard 38.

Images   GOST 28147-89 [GOST]: GOST is a strong 64-bit block size cipher using a 256-bit key in addition to 512 bits of additional secret keying material in the form of optional Substitution-boxes (S-box). GOST 28147-89 is the name of a government standard of the former Soviet Union, where the cipher was developed. GOST is now freely available, and used in software and hardware implementations in the former Soviet republics and elsewhere39.

Images   International Data Encryption Algorithm (IDEA) [SCHNEIER]: This 64-bit block cipher with 128-bit keys is used in the popular encryption software, Pretty Good Privacy (PGP). Commercial use of IDEA requires licensing from a Swiss company. Thus far IDEA has stood up to attack from the academic community40.

Images   LOKI [SCHNEIER]: The LOKI family of ciphers originated in Australia with LOKI89 and LOKI91, which use a 64-bit block and 64-bit key. LOKI91 is a redesign of LOKI89, which was shown to be especially vulnerable to brute force attack. The LOKI97 evolution has a 128-bit block size and offers a choice of 128, 192, or 256-bit key length. LOKI97 was rejected as a candidate for the AES standard, and was shown to be susceptible to cryptanalytic attack41.

Images   Lucifer [SCHNEIER]: Some of the earliest block ciphers originated at IBM by the early 1970s with the name Lucifer. Early variants operated on a 48-bit block using a 48-bit key, and a later version used 128-bit blocks with a 128-bit key. Even with a longer key length, Lucifer has been found vulnerable to cryptanalytic attack42.

Images   RC2, RC5, RC6: The RC algorithms, invented by Ron Rivest, are proprietary and largely unrelated to one another. RC2 is a variable key-size 64-bit block cipher intended as a replacement for DES. RC2 was found vulnerable to a related-key attack [RC2]. RC5 is a fast cipher with a variable block size (32, 64, 128-bit) and employs a variable key size (0 to 2040 bits). Brute force attack against RC5 is possible using distributed computing, and the level of security provided by RC5 is dependent upon how it is implemented [RC5]. RC6 was designed as a candidate for the AES standard, and is based on RC5 with improved performance, security, and a 128-bit block size and key sizes of 128, 192, or 256-bits. RC6 is a strong cipher with excellent performance characteristics [RC6].

Images   Skipjack [SCHNEIER]: Invented by NSA, the now-declassified Skipjack algorithm uses a 64-bit block size with 80-bit key length. Skipjack was intended for implementation in tamperproof hardware using the Clipper chip as part of a now-defunct key escrow program that would allow U.S. government agency decryption of telecommunications. Skipjack is considered a strong cipher43.

Images   Tiny Encryption Algorithm (TEA): Designed at Cambridge University in England and first presented in 1994, TEA operates on 64-bit blocks and uses a 128-bit key. Corrected Block TEA (referred to as XXTEA) corrects weaknesses in the original version. Because TEA can be implemented in a few lines of code, it may be suitable for resource-constrained hardware implementations44.

Images   Twofish: A freely available 128-bit block cipher using key sizes up to 256 bits, Twofish was one of the finalists not selected for the AES standard. Continued cryptanalysis of Twofish indicates that it is secure45.

Stream Ciphers

In contrast to block ciphers, stream-based algorithms operate on a message flow (usually bits or characters) and use a keystream. Stream ciphers are applied where buffering is limited or where data must be processed as it is received. Although block ciphers can be made to function as stream ciphers and can themselves be implemented in hardware, stream ciphers are generally less complex, and have therefore traditionally been favored in hardware implementations of encryption.

Stream ciphers may be viewed as approximating the function of a one-time pad or Vernam cipher, which uses a random keystream of the same length as the plaintext. Due to the size of the keystream, a Vernam cipher is cumbersome and impractical for most applications. Traditional stream ciphers approximate the randomness of a keystream with a much smaller and more convenient key size (128 bits, for example). Encryption is accomplished by combining the plaintext with the keystream using the exclusive or (XOR) binary operation (see Figure 3.4).

Images

Figure 3.4 - High-level view of stream cipher encryption
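A minimal Python sketch of this XOR-based operation follows. The hash-based keystream generator is purely illustrative (it is not a vetted stream cipher design such as RC4 or ChaCha20); it simply shows plaintext being combined with a key-derived keystream one byte at a time:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream generator: hash of key || counter, repeated.
    Illustrative only -- real stream ciphers use dedicated designs."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def stream_xor(key: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same operation: XOR with the
    # keystream, processing the message as a flow of bytes
    ks = keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

key = b"128-bit-key-here"
msg = b"stream ciphers process data as it arrives"
ct = stream_xor(key, msg)
assert stream_xor(key, ct) == msg   # XOR with the same keystream decrypts
```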

Stream ciphers that operate in this fashion require sender and receiver to be in step during operations and are called synchronous stream ciphers for that reason. A benefit to using synchronous stream ciphers when the data transmission error rate is high is that if a digit of ciphertext is corrupted, only a single digit of plaintext is affected. However, this property is a disadvantage to security, because an attacker may be able to introduce a predictable error. Examples of synchronous stream ciphers are RC4 and HC-128.

Self-synchronizing or asynchronous stream ciphers overcome this security limitation by generating keystreams based on a set of former ciphertext bits. This allows resynchronization if ciphertext bits are corrupted, with loss of usually only a few characters. From a security standpoint, asynchronous stream ciphers are less susceptible to attack by attempting to introduce predictable error. Examples of asynchronous stream ciphers are ciphertext autokey (CTAK) and stream ciphers based on block ciphers in cipher feedback mode (CFB).

Asymmetric Cryptosystems

While the same key is used to decrypt the ciphertext in symmetric cryptosystems, asymmetric cryptosystems require use of a different key for decryption. In the following, the encryption key K1 is different from the corresponding decryption key, K2:

Encryption: EK1(M) = C

Decryption: DK2(C) = M

Two features distinguish an asymmetric, or public key, cryptosystem: a different key is used to encrypt the message than to decrypt it, and it is computationally infeasible to generate one key from the other (see Figure 3.5).

Images

Figure 3.5 - Asymmetric cryptosystem.

Asymmetric cryptosystems rely heavily on mathematical functions known as trapdoor functions. Such functions are easy to apply in one direction but extremely difficult to apply in the reverse.

Public key encryption can provide confidentiality of message data, authenticity and data origin integrity of message data, and non-repudiation. Using public key encryption for the secure distribution of a secret key to a recipient is also possible, as long as a mechanism is used to authenticate the recipient and to ensure that the public key is authentic.

To review how confidentiality can easily be produced, let us ask Bob to send a secret message to Alice. We want Alice to be the only person who can read Bob’s message. For this to occur, Alice must first generate a public/private key pair, and publish her public key for Bob to read in the local newspaper. Bob meticulously enters the public key into his asymmetric cryptosystem, and encrypts his plaintext message, “Hello this is Bob!”

When Alice uses the private key she generated earlier, she finds she can read the message. Because Alice has kept her private key secure, she is assured no one else was able to read Bob’s message. However, Alice is not sure the message is really from Bob, because the public key is available to any newspaper subscriber.

So, to see how authenticity can be produced with an asymmetric cryptosystem, Bob’s private key, not Alice’s public key, must be used to encrypt the message. Bob generates a public/private key pair and publishes the public key. Bob uses his private key to encrypt the message “This message is from Bob!” Anyone, including Alice, can now obtain a copy of Bob’s public key and decrypt the message. While there is no confidentiality of the message, using Bob’s public key provides assurance the messages decrypted with it are from Bob. Thus, asymmetric cryptosystems can also be used for signing.

Because asymmetric cryptosystems tend to perform slower than symmetric cryptosystems, Alice and Bob can use their public/private key system to exchange the secret keys of a faster symmetric cryptosystem, using it for future communications.

The idea that separate keys for encryption and decryption could be used was presented in 1976 by Whitfield Diffie and Martin Hellman [DH]. This is the basis for the Diffie–Hellman (DH) key agreement protocol, also called exponential key agreement, which is a method of exchanging secret keys over a nonsecure medium without exposing the keys. The DH protocol is based on the difficulty of calculating discrete logarithms in a finite field.

While DH provides confidentiality for key distribution, the protocol does not provide authentication of the communicating parties. Thus, a means of authentication such as digital signatures must be used to protect against a man-in-the-middle attack.

The idea of a public-key cryptosystem and its use in digital signing was presented by Ron Rivest, Adi Shamir, and Leonard Adleman in 1977 [RSA]. RSA public and private keys are functions of a pair of large prime numbers. Recovering the plaintext from RSA encryption without the key would require factoring the product of two large primes, which forms the basis for the security provided by the RSA algorithm. The primes must be generated in such a way that factoring their product is computationally infeasible; thus, proper creation of random prime numbers is one factor in how secure the cryptosystem is. Another factor is the size of the key. A key size of 1024 bits has traditionally been the minimum; 2048 or even 4096 bits may be used to provide additional security at the cost of performance.
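The mechanics can be illustrated with the classic textbook primes 61 and 53; real keys use primes hundreds of digits long, and a real implementation also requires a padding scheme such as OAEP (discussed below):

```python
# Toy RSA with deliberately tiny primes to show the mechanics only.
p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e

m = 65                     # message encoded as an integer < n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key (d, n)
```

Breaking this toy system means factoring n = 3233 back into 61 × 53, which is trivial here but infeasible when the primes are of cryptographic size.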

Cryptosystems employ cryptographic primitives, which are the basic mathematical operations on which the encryption procedure is built. Primitives by themselves do not provide security. A particular security goal is achieved by employing the cryptographic primitives in what is known as a cryptographic scheme. Cryptosystems built using RSA schemes may be used for confidentiality, signing to provide authenticity, or key exchange.

Use of a different scheme can also provide a different level of security, including resistance to attack. The RSA Cryptography Specifications Version 2.1 combines the RSA Encryption Primitive (RSAEP) and RSA Decryption Primitive (RSADP) with particular encoding methods to define schemes for providing encryption for confidentiality and for digital signatures for providing authenticity of messages. While the current specification allows for use of earlier RSA schemes for compatibility reasons, it is highly recommended that the newer schemes be used for improved security. For instance, while the version 2.1 RSA Cryptography specification allows use of the version 1.5 scheme known as “RSAES-PKCS1-v1_5” for cryptographic applications requiring backward compatibility, if possible, the newer “RSAES-OAEP” scheme based on a more secure optimal asymmetric encryption padding encoding method should be used [PKCS #1] 46. Cryptosystems must consider not only the algorithm but the scheme to use for a given application.

While RSA and DH enjoy widespread use in applications such as IPSec or the protection of AES secret keys, their continued use will require improvements to the level of security they provide. Even if newer methods of attack against the underlying problems of factoring large integers or computing discrete logarithms are not discovered, computing power continues to increase significantly over time. As a result, implementing existing attack methods using special-purpose ultra-high-speed computers poses a theoretical threat to RSA and DH. To counter the threat, key size may be increased, which also requires additional computing power. Another option is to use a different asymmetric encryption algorithm altogether.

Another popular approach to public-key cryptography, which is more computationally efficient than either RSA or DH, is elliptic curve cryptography (ECC). For instance, NIST recommendations for protecting AES 128-bit keys call for RSA and DH key sizes of 3072 bits, or an elliptic curve key size of 256 bits [NISTSP800-57-1]. Although ECC is slightly more complex than either RSA or DH, ECC has been shown to offer more security per bit of increase in key size.

ECC schemes are based on the mathematical problem of computing discrete logarithms of elliptic curves. Because the algorithm is very efficient, ECC can be very useful in applications requiring limited processing power such as in small wireless devices and mobile phones.

Other asymmetric cryptosystems include El Gamal and Cramer–Shoup. El Gamal is based on the problem of computing discrete logarithms, and makes use of the Diffie–Hellman key exchange protocol. Cramer–Shoup is an extension of El Gamal.

Asymmetric cryptosystems that have been proved insecure and should not be used are those based on the knapsack algorithm. The first of these to be developed was the Merkle–Hellman Knapsack cryptosystem [Merkle–Hellman Knapsack]. The knapsack algorithm is based on having a set of items with fixed weights and needing to know which items can be combined to obtain a given weight.

Public key cryptosystems will continue to be necessary when secret key exchange is required. Common software protocols and applications where they are used include IPSec, SSL/TLS, SSH, and PGP.

Hash Functions and Message Authentication Codes

Hash functions are cryptographic algorithms that provide message integrity by producing a condensed representation of a message, called a message digest. Message Authentication Codes (MACs) are cryptographic schemes that also provide message authenticity along with message integrity by using a secret key as part of message input.

At a minimum, the following properties are present in a hash function:

Images   Compression: The hash function H transforms a variable-length input M to a fixed-length hash value h. This is represented by

H(M) = h

Images   Ease of computation: Given a hash function H and an input M, the hash value h is easy to compute.

In addition, the following properties of cryptographic hash functions exist:

Images   Preimage resistance: Given a hash value h, it is computationally infeasible to compute the input M that produced it. This is known as the “one-way” property of hash functions.

Images   Second preimage resistance: For a given input M, it is computationally infeasible to find any second input that has the same hash value h.

Images   Collision resistance: For a hash function H, it is computationally infeasible to find any two distinct inputs that produce the same hash value.
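The compression and ease-of-computation properties are easy to observe with a standard hash function such as SHA-256:

```python
import hashlib

# Compression: inputs of any length yield a fixed-length hash value
# (SHA-256 always produces 32 bytes)
h1 = hashlib.sha256(b"short").digest()
h2 = hashlib.sha256(b"a much longer input " * 1000).digest()
assert len(h1) == len(h2) == 32

# A tiny change in M produces a completely different h, which is what
# makes the digest useful for detecting modification
assert hashlib.sha256(b"message").digest() != hashlib.sha256(b"nessage").digest()
```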

Hash functions may be built from one-way compression functions, which operate on fixed-size inputs. A compression function alone, however, is limited in its ability to provide collision resistance over arbitrary-length messages. A popular means of constructing a hash function and strengthening its collision resistance is the Merkle–Damgård technique, which involves breaking the message input into a series of smaller blocks. The compression function takes the first message block and an initial fixed value as inputs. The output is fed, along with the next message block, into the compression function. Successive outputs combine with their respective message blocks as input to the compression function in an iterative fashion, resulting in a fixed-length message digest. A simplified typical Merkle–Damgård construction is shown in Figure 3.6.

Images

Figure 3.6 - Merkle-Damgård strengthening
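A simplified sketch of the iterative construction follows, using SHA-256 as a stand-in compression function. This is illustrative only; real Merkle–Damgård designs such as MD5 and SHA-1 define their own dedicated compression function and precise padding rules:

```python
import hashlib

def merkle_damgard(message: bytes, block_size: int = 64) -> bytes:
    """Toy Merkle-Damgard construction over a stand-in compression
    function (SHA-256 of chaining value || block)."""
    # Simplified length-strengthening: append the message length,
    # then zero-pad to a whole number of blocks
    message += len(message).to_bytes(8, "big")
    if len(message) % block_size:
        message += b"\x00" * (block_size - len(message) % block_size)

    chaining = b"\x00" * 32           # fixed initial value
    for i in range(0, len(message), block_size):
        block = message[i:i + block_size]
        # each output is fed back in with the next message block
        chaining = hashlib.sha256(chaining + block).digest()
    return chaining                    # fixed-length message digest

d = merkle_damgard(b"hello world")
assert len(d) == 32                                # fixed-length output
assert merkle_damgard(b"hello world") == d         # deterministic
assert merkle_damgard(b"hello worle") != d         # input change alters digest
```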

MD5 (Message Digest algorithm 5), designed by Ron Rivest in 1991, is one such hash function based on a one-way algorithm and utilizing Merkle–Damgård construction. While MD5 has been widely used, it has been found to be prone to collision weakness and is thus insecure [Tunnels].

A common replacement recommended for MD5, and one which is also widely used, is SHA-1 (Secure Hash Algorithm), designed by the United States National Security Agency (NSA). SHA-1 is also based on a one-way function utilizing Merkle–Damgård construction, and produces a 160-bit message digest. It has been found possible to derive a collision (a pair of different inputs that produce the same hash value) with 2⁶³ hash operations, fewer than the 2⁸⁰ steps a brute force attack would require [SHA-1 Collisions]. Determining a collision in SHA-1 would still require significant computational resources, such as those provided by distributed computing.

An alternative to SHA-1 is RIPEMD-160, designed by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel and which also produces a 160-bit message digest. RIPEMD-160 also replaces RIPEMD, which has been found to be prone to collision weakness [Collisions].

Hashing functions based on one-way algorithms, along with their susceptibility to collision weakness, are summarized in Table 3.1.

Another means of creating a hash function is by using a block cipher algorithm; thus, it is possible to use AES to create a cryptographic hash function. Block ciphers operate by encrypting plaintext using a secret key to produce ciphertext. The ciphertext cannot be used by itself to recreate the plaintext, which resembles the one-way property of a hash function. However, because the block cipher's secret key and decryption algorithm would allow reconstruction of the plaintext, additional operations must be added to a block cipher to turn it into a secure cryptographic hash function.

An example of a block cipher hash function is MDC-2 (Modification Detection Code 2, sometimes called Meyer-Schilling), developed by IBM, which produces a 128-bit hash. Another example is Whirlpool, which produces a 512-bit hash; it was adopted by the International Organization for Standardization (ISO) in the ISO/IEC 10118-3:2004 standard [Dedicated Hash].

Another use of a block cipher is in a MAC, which is a key-dependent hash function. A MAC adds to the input message the secret key used by the symmetric block cipher, and the resulting output is a fixed-length string called the MAC. Adding the secret key to the message produces origin authentication, showing that the message must have been constructed by someone with knowledge of the secret key. MACs also provide integrity, because any change to the message would result in a different MAC value. The most common form of MAC algorithm based on a block cipher employs cipher block chaining, and is known as a CBC-MAC.

Images

Table 3.1 - Hashing Functions Based on One-Way Algorithms

A MAC may also be derived using a hash function, where the hash function is modified to incorporate use of a secret key to provide origin authentication and integrity. This is known as an MDx-MAC scheme, and can be based on a RIPEMD-128, RIPEMD-160, or SHA-1 hash function.

Images

Figure 3.7 - A Hashed Message Authentication Code (HMAC)

A Hashed Message Authentication Code (HMAC) is another case of a MAC derived using a hash function. In an HMAC, the underlying hash function is not modified, but is treated as a “black box.” HMAC uses any iterative hash function and adds a secret key to the input message in order to obtain origin authentication and integrity. See Figure 3.7 for a simplified view of HMAC.

HMAC is used in a variety of applications from mobile phones to network-attached storage devices and in IPSec. The construction of HMAC was published in IETF RFC 2104. The use of HMACs is standardized in NIST FIPS PUB 198 and in ISO/IEC 9797-2.
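Python's standard library exposes the RFC 2104 construction directly; the key and messages below are illustrative:

```python
import hmac
import hashlib

key = b"shared-secret-key"
msg = b"transfer $100 to account 42"

# Sender computes the HMAC tag over the message with the secret key
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Receiver, holding the same key, recomputes the tag and compares it
# in constant time -- a match proves origin authentication and integrity
check = hmac.new(key, msg, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, check)

# Any change to the message (or key) yields a different tag
bad = hmac.new(key, b"transfer $999 to account 42", hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tag, bad)
```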

Digital Signatures

MACs depend on use of a symmetric key that must be securely transmitted from the sender to the receiver. A digital signature may be thought of as a MAC that uses asymmetric cryptography, because a digital signature uses a private signing key and a public verification key. A digital signature scheme operates in the following manner:

  1. A message digest is generated using a hash function.

  2. The message digest is encrypted with the sender’s private key and attached to the cleartext message, signing it (note that a digital signature does not provide confidentiality).

  3. The attached message digest is decrypted by the receiver, using the sender’s public key. The receiver also compares this message digest with the message digest produced by hashing the cleartext message, to ensure that the message was not altered.
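The three steps above can be sketched with a toy RSA signature. The Mersenne primes are chosen here only so the modulus exceeds any SHA-256 digest value; real RSA key generation uses random secret primes, and a real scheme adds a padding method such as PSS:

```python
import hashlib

# Toy RSA key pair -- illustration only, not a real signature scheme
p = (1 << 127) - 1                   # Mersenne prime 2^127 - 1
q = (1 << 521) - 1                   # Mersenne prime 2^521 - 1
n = p * q                            # modulus larger than any SHA-256 digest
e = 65537                            # public verification exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private signing exponent

def digest_int(message: bytes) -> int:
    # Step 1: generate the message digest with a hash function
    return int.from_bytes(hashlib.sha256(message).digest(), "big")

def sign(message: bytes) -> int:
    # Step 2: encrypt the digest with the sender's PRIVATE key
    return pow(digest_int(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Step 3: decrypt with the sender's PUBLIC key and compare against
    # a freshly computed digest of the received message
    return pow(signature, e, n) == digest_int(message)

msg = b"This message is from Bob!"
sig = sign(msg)
assert verify(msg, sig)                       # authentic and unaltered
assert not verify(b"Tampered message", sig)   # any alteration is detected
```

Note that the cleartext message travels alongside the signature; as the text observes, a digital signature provides no confidentiality.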

Figure 3.8 depicts digital signing and verifying.

Images

Figure 3.8 - Digital Signing and Verifying

A digital signature can provide origin authentication, non-repudiation, and integrity. By using a public/private key pair, the message is bound to the originator who used the private key. The originator can be bound to an individual through a trusted certification authority, such as one operating within a PKI, thus assuring some measure of non-repudiation for the sender. Integrity is provided by the cryptographic hash function, which makes certain that any alteration of the message can be detected.

A digital signature scheme contains the following elements:

Images   Cryptographic hash function: Hashing is done by the sending and receiving parties to determine integrity of the message.

Images   Key generation algorithm: Key generation produces a private key for signing and a public key for distribution to parties who will verify the digital signature.

Images   Signing algorithm: Signing produces a digital signature output using the private key and message.

Images   Verification algorithm: Verification uses the public key and digital signature to determine authenticity of the message.

Public key cryptosystems that are used to implement digital signature schemes include ECC, El Gamal, DSA, and RSA.

Standards exist that specify various schemes for digital signature algorithms. The Digital Signature Algorithm (DSA) is a NIST standard specified for use in the Digital Signature Standard (DSS); DSS is defined in FIPS PUB 186. ISO/IEC 9796 and ISO/IEC 14888 specify a portfolio of digital signature schemes. Additional international standards specifying digital signature schemes include ANSI X9.30.1, ANSI X9.62, and IEEE 1363.

Vet Proprietary Cryptography & Design Testable Cryptographic Systems

When it comes to the design and implementation of cryptographic systems, the prevailing school of thought is that a system intended for commercial use cannot be proprietary, because the inability to test it and probe it for weaknesses would be a problem. There are many issues associated with vetting proprietary cryptographic systems and with the requirements inherent in designing testable cryptographic systems. The main issue is that the algorithm(s) used must be made publicly available so that they can be tested through cryptanalysis, establishing a benchmark for the relative security of the system. There is no specific series of tests that can be performed against an algorithm to accurately evaluate its security per se; instead, the algorithm must be vetted publicly over an extended period of time through the rigorous application of cryptanalysis by experts, in order to discover any flaws or vulnerabilities inherent in the cryptography. If an algorithm is kept secret, there is no vetting of it by experts, and as a result its true strength, or potential weakness, is unknown to its users, which presents many issues for the security architect.

A practical example of this thought process can be seen in the history surrounding DES: the unusual circumstances of its creation and the uniqueness of one architectural element of its implementation. DES was created as the result of a partnership between the NSA and IBM in the 1970s. Because of the NSA's involvement, there have always been rumors of some sort of backdoor or hidden element within DES that the NSA could exploit if it needed to break the confidentiality of data encrypted with DES. Despite years of intense scrutiny of the algorithm, no proof of this has ever been found, with one interesting exception. What was found regarding DES was the unique way the algorithm implemented its S-boxes, which substitute blocks of the input as it is run through the algorithm. When the new attack methods of differential cryptanalysis were publicly described, more than a decade after DES was first released, the version built by the NSA and IBM was found to be resistant to these attack methods because of the particular design of its S-boxes. The fact that DES was able to withstand an attack that had not even been publicly invented until long after its release has served to perpetuate the myths and legends surrounding the NSA's involvement with DES, but it also illustrates another important lesson: no amount of vetting and scrutiny of an algorithm can discover all potential vulnerabilities, nor can it expose all potential defense mechanisms. The need to allow algorithms to be vetted by communities of experts is a critical success factor in the construction of testable cryptographic systems.

There have been many more recent examples of the use of proprietary, or hidden, architectural elements to carry out attacks against infrastructure all over the world. The creation, deployment, and use of malware such as Flame, Stuxnet, and Duqu, among others, should focus the attention of security architects on the Advanced Persistent Threat (APT) category of attack vectors and their potential impact47. At the heart of these specific APT attacks is the proprietary nature of their architecture and design, and their ability to operate undetected, in some cases for years. This is an extension of the same issue the security architect faces with the use of proprietary cryptographic systems and algorithms: the inability to properly vet them and understand the risks associated with operating them.

Computational Overhead & Useful Life

The computational overhead of encryption systems and algorithms is an issue that is not well understood by many security architects. We have always been told that encryption imposes computational overhead on a system, and that this overhead, or "crypto-tax," is simply the price of using encryption in the first place. The challenge that most security architects, and indeed most IT professionals, face with an issue such as computational overhead is that those who understand it rarely explain it in a way that allows it to be addressed easily. In the area of cryptography, for instance, discussions of computational overhead are highly technical affairs that involve complex mathematics and cutting-edge research by leading experts in the fields of mathematics and systems engineering and design48. A good example of one of the more approachable research papers on this subject is "A generic characterization of the overheads imposed by IPsec and associated cryptographic algorithms."49 This paper presents an assessment of the communication overheads of IPsec and evaluates the feasibility of deploying it on handheld devices for the UMTS architecture. A wide range of cryptographic algorithms is used in conjunction with IPsec, such as the Data Encryption Standard (DES), the Advanced Encryption Standard (AES), Message Digest 5 (MD5), and Secure Hash Algorithm 1 (SHA-1). The paper considers the processing and packetization overheads introduced by these algorithms and quantifies their impact in terms of communication quality (added delay for the end user) and resource consumption (additional bandwidth on the radio interface).
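Processing overhead can also be observed directly. The micro-benchmark below (an illustrative sketch, not taken from the paper) times the standard-library implementations of three digest algorithms over 1 MiB of data; absolute numbers vary by machine, but the relative cost and the digest sizes are what translate into added per-packet delay and bytes on the wire:

```python
import hashlib
import timeit

payload = b"\x00" * (1 << 20)   # 1 MiB of data to process

for name in ("md5", "sha1", "sha256"):
    func = getattr(hashlib, name)
    # Time 20 runs and report the per-run cost in milliseconds
    seconds = timeit.timeit(lambda: func(payload).digest(), number=20)
    print(f"{name}: {seconds / 20 * 1000:.2f} ms per MiB, "
          f"{func(payload).digest_size * 8}-bit digest")
```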

The security architect needs to be aware of these solutions and the research that supports them in order to make informed decisions about implementing the best possible elements within the security architecture, aligning the needs of the business with the architecture. It is hard, however, for the security architect to stay informed about this kind of research and to consume and utilize it effectively, as it is often presented in academic journals and does not always make its way out to the security community in ways that are readily available to practitioners.

The security architect also needs to stay informed regarding useful life concepts in their security architectures. The algorithms the security architect chooses to deploy all have a useful life, and just like any other metric that indicates something of importance about the health of a system under management, monitoring is the key to successful usage. Publicly available algorithms are subject to constant scrutiny and cryptanalysis so that, wherever possible, weaknesses and vulnerabilities are discovered before they can lead to compromise and data loss. One of the key outcomes of this exhaustive vetting process is an eventual end-of-life milestone for the algorithm, if it is found to be vulnerable for any number of reasons.

For instance, the announcement by NIST approving the withdrawal of DES in 2005, and the subsequent announcement just over one year later by the MIT Kerberos Team ending the life of Kerberos Version 4 (a result both of the DES issues NIST reacted to and of serious protocol flaws in the Kerberos v4 implementation), are examples of an algorithm and a protocol that reached the end of their useful life. A security architect who had deployed DES, or DES in support of a Kerberos v4 implementation, would need to change the affected elements of the systems in question to address the loss of DES and Kerberos.

Security architects need to ensure that they stay informed with regards to useful life concerns in all areas of their system architectures in order to ensure that they are not operating systems past their useful life, and in so doing, putting the users of those systems, and the data in those systems at unnecessary risk.

Key Management

Historically, much of the effort in the field of cryptography has been devoted to the development and implementation of secure algorithms and protocols, which have been put through scrutiny at all levels, and there are many publications that estimate an algorithm's strength. But it is very unlikely that a real attacker will prefer brute force to lower-hanging fruit in order to break a code. A real vulnerability can most likely be found in the key management methods. Key management in the real world is the most difficult part of cryptography [SCHNEIER]. It is much easier to find a flaw in one of the key management processes than to expend resources on sophisticated crypto attacks, and it is not easy to design and implement a well-thought-out key management system. That is why the most successful crypto attacks have exploited poor key management. Moreover, the use of strong crypto algorithms and long keys very often creates a false sense of security that results in overlooking the more mundane chores of key management.

Key management should provide the foundation for the secure generation, storage, distribution, and destruction of keys [NISTSP800-57-1]. Modern key management is usually an automated process, which helps to minimize human error in key generation, distribution, and update, and also increases the keys' secrecy. One of the principles of modern cryptography requires that keys not appear in cleartext outside the crypto module (except public keys, which are usually distributed within public key certificates).

The subject of key management is multidimensional because of different types of cryptography and the purposes they serve. Thus, asymmetric (public and private) keys are managed differently from symmetric keys. Likewise, data encryption keys are managed differently from signing keys. One of the goals of this section is to address these specifics.

Although the main purposes of cryptography have been reviewed from an application perspective earlier in this chapter, let us look at the main purpose from a key management perspective.

Purpose of the Keys and Key Types

Either as a stand-alone service or as a supporting part of another system, cryptography supports one or several security services or properties: confidentiality, authentication, integrity, non-repudiation, and authorization. The important characteristics of the cryptographic keys, such as key size and life span, are defined by the security services and type of cryptography these keys should support. One of the cryptographic principles is a preferred use of each key type for one designated purpose, although issuing and using a multipurpose key in real life is common.

Confidentiality

Confidentiality is the protection of information against unauthorized disclosure, and it is achieved by data encryption. Data to be protected may be either human-readable text or any type of binary data, including other crypto keys. Both symmetric and asymmetric cryptography can be used. An unauthorized party should not be able to deduce or obtain the decryption key. Keys for data-at-rest encryption may have a long crypto period; thus, they need to have sufficient length and to be supported by a sophisticated and robust key management system (KMS). On the other hand, keys for data-in-transit encryption may have a short life span, sometimes limited to one session. Their key length may be shorter, and thus the KMS for this purpose may need to include only a key generation and distribution mechanism.

The following key types support confidentiality:

Images   Symmetric data encryption key

Images   Symmetric key wrapping key; aka key encrypting key

Images   Public and private key transport keys

Images   Symmetric key agreement key

Images   Public and private static key agreement keys

Images   Public and private ephemeral key agreement keys

Authentication

Broadly, authentication is a way to verify the origin of information. If the information is just data or an executable, authentication would verify its integrity as well as the identity of the person or system that created the information. If authentication is part of an access control system and the information consists of user or system identity, authentication will verify that identity in order to allow authorization services to make access control decisions. Both symmetric and asymmetric cryptography techniques may be used, such as digital signature and MAC. The main idea is that possession of the key proves the authenticity of an information originator. Both data at rest and in transit may employ key-based authentication.

The following key types support authentication:

Images   Private signature key

Images   Public signature verification key

Images   Symmetric authentication key

Images   Public and private authentication keys

Data Integrity

Data integrity is a security feature that protects data against unauthorized alteration either in transmission or in storage. An unauthorized alteration may include substitution, insertion, or deletion and may be intentional or unintentional. None of these can happen unnoticed if data integrity control is in place. Both symmetric and asymmetric cryptography techniques, such as digital signature and MAC, may be used. The following key types support data integrity:

Images   Private signature key

Images   Public signature verification key

Images   Symmetric authentication key

Images   Public and private authentication keys

Non-Repudiation

Non-repudiation is concerned with providing data integrity and authentication in a special way that allows a third party to verify and prove it. It is provided by asymmetric key cryptography and a digital signature relying on a signer’s private key. This security feature requires an especially rigid control of the keys, because it should prevent a signing party from successfully denying its signature. The following keys may be found when non-repudiation is supported:

Images   Private signature key

Images   Public signature verification key

Authorization

Authorization is the component of access control that is responsible for granting a subject access to a particular resource after that subject has already proved its identity (been authenticated). The key used by a Kerberos ticket-granting service is a typical example of a key used for authorization. In more general terms, the following keys may be used for authorization:

Images   Symmetric authorization key

Images   Private authorization key

Images   Public authorization (verification) key

Cryptographic Strength and Key Size

The strength of cryptography depends on the strength of the algorithms, protocols, and keys and the strength of the security around the keys. In cryptographic applications that require key generation, key distribution, and encryption, whole suites of algorithms are used. For example, the first phases of IPSec and SSL include key negotiation and exchange, which may employ RSA cryptography. The following phases include generating a symmetric session key and using that key for encrypting data in transit with 3DES or AES. The strength of cryptography in this case is defined by the weakest link in the chain, so if in this example the key exchange employs very short public and private keys, the symmetric encryption key can be successfully intercepted and the transmitted data can be decrypted. Many factors should be considered, including the life spans of the data and keys, the volume of data to be encrypted with the same key, the key size, and the way the keys are generated. For example, if a key is generated straight from a password, the entropy in the key is significantly reduced, because a smart brute force attack can generate just the keys derived from the ASCII characters that meet password policy requirements.
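The entropy loss from password-derived keys is easy to quantify: each character drawn from the roughly 95 printable ASCII characters contributes log2(95), about 6.6 bits, so an 8-character password yields far less than a 64-bit random key. A quick check (illustrative figures only):

```python
import math

random_key_bits = 64                  # key from a true RNG: 2**64 values
password_bits = 8 * math.log2(95)     # 8 chars over 95 printable ASCII symbols

print(f"random 64-bit key: {random_key_bits} bits of entropy")
print(f"8-char password-derived key: {password_bits:.1f} bits")
assert password_bits < random_key_bits   # roughly 52.6 vs 64 bits
```

The password-derived key space here is thousands of times smaller, and password policy rules shrink it further.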

The difficulty of breaking a key by brute force grows exponentially with key length (i.e., number of bits), because each bit doubles the number of possible combinations. A long key gives better security but worse performance, so choosing an optimal key size is important. The choice should also depend on the projected key lifetime. Temporary, so-called ephemeral, keys generated for one session or one connection may be shorter. Long-life keys protecting data at rest for years should be as long as possible.

One of the important characteristics of a key is its crypto period. It is defined [NISTSP800-57-1] as the time span during which a specific key is authorized for use by legitimate entities, or during which the keys for a given system will remain in effect50. The duration of a crypto period limits the "window of opportunity" for successful cryptanalytic or other attacks, and the volume of information that can be exposed if the keys are compromised. Generally, the shorter the crypto period, the better the security, although more frequent key generation, revocation, and replacement may be costly and may create additional risk. Many factors should be taken into consideration when the crypto period for each key type is defined. NIST recommendations [NISTSP800-57-1] regarding crypto periods, which assume an environment operated with the goal of achieving better operational efficiency, are in Table 3.2.

Originator Usage Period (OUP) is a period of time in the crypto period of a symmetric key during which cryptographic protection may be applied to data.

As was mentioned earlier in this chapter, both the continuous progress in cryptanalysis techniques and the increasing computer power available for breaking keys should be considered. By Moore's law, computer speed doubles roughly every 18 months, so the key size should be selected accordingly to protect the encrypted data against a brute force attack51. If a computer is N times faster, the key size should increase by log2 N bits.

Images

Table 3.2 - Recommended Crypto Periods for the Key Types

A successful brute force attack on a symmetric key algorithm, which in the case of perfect key entropy essentially consists of an exhaustive search of all the keys, would require on average 2^N/2 = 2^(N−1) trial decryptions, where N is the size of the key in bits. In addition to the key length, there is an effective key length factor. In the case of 3DES, which nominally assumes a total of 3 × 56 = 168 bits of key material, the effective key size is 80 bits if the first and third encryption rounds are performed using the same 56-bit key, and 112 bits if all three rounds employ unique 56-bit keys. As already discussed in the example of keys derived from an ASCII or EBCDIC password, and as will be demonstrated in the example of asymmetric key strength, there is another factor that impacts strength in addition to the effective key length: the key space.
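These relationships are simple enough to verify numerically. The sketch below restates the average brute force workload, the effective-versus-nominal key length of two-key 3DES, and the Moore's-law adjustment from the previous section:

```python
import math

# Average brute force work is half the key space: 2**N / 2 = 2**(N - 1)
def avg_trials(key_bits: int) -> int:
    return 2 ** (key_bits - 1)

assert avg_trials(56) == 2 ** 55      # single DES: about 3.6e16 trials

# Effective strength can fall below the nominal key length:
# two-key 3DES carries 112 bits of key but offers roughly 80 bits of security
nominal_bits, effective_bits = 112, 80
assert avg_trials(effective_bits) < avg_trials(nominal_bits)

# A machine N times faster erodes log2(N) bits of security margin
speedup = 1024
assert math.log2(speedup) == 10       # compensate by adding 10 key bits
```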

Breaking asymmetric key cryptography may require far fewer resources than breaking symmetric key cryptography. This stems from the nature of asymmetric key cryptography. For example, an RSA private key, which is the target of an asymmetric cryptography attack, must be generated to meet certain RSA criteria. When this key and its counterpart public key are generated, they are derived from the space of large prime numbers, which means the key space for an exhaustive search is much smaller. Also, advances in research on factoring large numbers indicate that increasing the size of public/private keys is required. RSA has indicated that if an easy solution to the factoring problem is found and a further increase of the RSA key size beyond 2048 bits is required, then using the RSA algorithm may become impractical. Elliptic curve cryptography (ECC) is based on the elliptic curve discrete logarithm problem, which may take fully exponential time to solve; its strength per bit is comparable to that of symmetric encryption. That is why ECC is considered the most likely successor of the RSA algorithm in the asymmetric cryptography area.

Comparison of symmetric, RSA, and ECC asymmetric cryptography and the corresponding key lengths are shown in Table 3.3 [NISTSP800-57-1].

Images

Table 3.3 - Comparable Key Strength

Some observations regarding the comments in this table [NISTSP800-57-1]:

  1. Column 1 indicates the number of bits of security provided by the algorithms and key sizes in a particular row. Note that the bits of security are not necessarily the same as the key sizes for the algorithms in the other columns, due to attacks on those algorithms that provide computational advantages. Because some combinations of 0 and 1 for 2TDES and 3TDES keys can be predicted, it takes less computational power to guess the key value, which is equivalent to a shorter key.

  2. Column 2 identifies the symmetric key algorithms that provide the indicated level of security (at a minimum), where 2TDEA and 3TDEA are specified in [SP800-67], and AES is specified in [FIPS197]. 2TDEA is TDEA with two different keys; 3TDEA is TDEA with three different keys.

  3. Column 3 indicates the minimum size of the parameters associated with the standards that use finite field cryptography (FFC). Examples of such algorithms include DSA as defined in [FIPS186-3] for digital signatures, and Diffie–Hellman (DH) and MQV key agreement as defined in [SP800-56A]), where L is the size of the public key, and N is the size of the private key.

  4. Column 4 indicates the value for k (the size of the modulus n) for algorithms based on integer factorization cryptography (IFC). The predominant algorithm of this type is the RSA algorithm. RSA is specified in [ANSX9.31] and [RSA PKCS#1]. These specifications are referenced in [FIPS186] for digital signatures. The value of k is commonly considered to be the key size.

  5. Column 5 indicates the size of the key for algorithms based on elliptic curve cryptography (ECC)

Key Life Cycle

Key life cycle should be analyzed for each key type in a crypto system in order to build a secure, cost-effective cryptographic architecture. Four major phases should be considered:

Images   Preoperational phase: The key has not been generated yet, but preactivation processes are taking place. These may include registering a user's attributes with the key management system, installing the key policies, selecting algorithms and key parameters, and initially installing or updating the software or hardware cryptographic module with initial key material, which should be used just for testing and then replaced for production operation. Finally, the key material will be generated and optionally (depending on the application) distributed in a secure manner to other entities. For a more detailed description of these processes, see the sections "Key Creation" and "Key Distribution" of this chapter. The keys must be registered, which essentially means binding them to the subject's identity. For PKI, this is implemented in the X.509 certificate, which binds a public key with the subject's name (usually a DN), alternative name (usually an e-mail address), and some other attributes, and signs this binding with the digital signature of a trusted CA. For symmetric keys, it may be another mechanism, for example, one implemented in a Kerberos Key Distribution Center (KDC).

Images   Operational phase: Key material is ready for normal operational use (encryption, decryption, signing, etc.). In many cases, the key is stored in crypto module hardware or disk storage that meets certain requirements, for example, FIPS 140-2 [FIPS 140-2]. Key material availability is important, and backup and recovery mechanisms should be used to support this requirement. However, if a key may be recovered by means other than backup/restore, such as regenerating or rederiving it, the probability of compromising the key's backups is reduced. Even if the key is not lost, it may need to be updated or changed during the operational phase, either because the key policy's crypto period has expired or because of suspected or real key compromise. The key change may be accomplished either by rekeying, that is, replacing the old key with a completely independent new key, or by updating the old key. Rekeying is usually used when a key is compromised, and it requires key redistribution. In a key update, the new key is produced from the previous one based on a protocol known to all parties, so no key redistribution is required. According to their policies, encryption/decryption and signing/verification key pairs have their own lifetimes (crypto periods). PKI applications automatically start trying to update keys that enter the transition period after a particular time interval, which is normally a percentage of the key lifetime.

Images   Postoperational phase: Key material is not in operation, but access to the keys may still be needed. This need may be associated, for example, with decrypting a document or verifying a signature on a document after expiry of the crypto period. Keys for this purpose are stored in an archive in encrypted form, with access and integrity control. A special recovery process, usually available only to designated administrators, is in place to obtain the keys from the archive. It is a good practice to have on-site and off-site (backup) archives. Not all key types may, and should, be archived. Further destruction of the keys stored in the archive may be warranted by the applicable policies. More details about archiving, revocation, and destruction of the keys are given in a later section of this chapter.

Images   Key destruction is performed either when the key is compromised or when its crypto period and retention in the archive have expired, according to the policy.

Key Creation

A key generation process is a part of the key establishment function of the key management preoperation phase [NISTSP800-57-1].

The security of cryptography is based on the secrecy of the keys and the virtual impossibility of deducing the keys from the cipher or other sources. Another principle of cryptography, one that relates specifically to key generation, is to avoid weak keys and make the key space "flat" by deriving keys from random numbers. Theoretically, the key space of any n-bit key based on a random number generator is 2^n. In reality, in many cases it is significantly smaller, which translates into fewer resources required to break the key by specialized brute force. Older basic encryption tools generated keys from ASCII characters [SCHNEIER], which reduced the key space by at least half and also invited dictionary attacks. In 1995, two PhD students found that a release of Netscape's SSL implementation chose the key from a recognizable subspace (bound to the clock) of the total key space, which significantly simplified attacks on SSL traffic. Another key generation weakness was discovered in one open source system that used a "predictable random" number generator [TECHREV-OPENSSL]. Originating the keys only from true random numbers helps avoid this flaw. Another potential weakness of keys stems from a specific algorithm and its implementation, when knowledge of just one portion of a key is enough to decrypt a cipher and deduce the whole key. As in the previous discussion about key size, we need to look at the process of key generation in context: the type of the keys, the purpose of the keys, the crypto application, and the operational environment. It may be difficult to evaluate these factors because some of them may be proprietary. For crypto systems that support applications for the federal government, FIPS 140-2 [FIPS 140-2] and the second draft of FIPS 140-3 [FIPS 140-3] clearly define requirements for key generation52. These standards provide a good benchmark for commercial systems as well and help to avoid the crypto system design flaws described earlier.
As FIPS 140-2 defines [FIPS 140-2]:

Random Number Generators (RNGs) used for cryptographic applications typically produce a sequence of zero and one bits that may be combined into sub-sequences or blocks of random numbers. There are two basic classes: deterministic and nondeterministic. A deterministic RNG consists of an algorithm that produces a sequence of bits from an initial value called a seed. A nondeterministic RNG produces output that is dependent on some unpredictable physical source that is outside human control.

A seed key, in turn, is defined as "a secret value used to initialize a cryptographic function or operation." NIST has documented approved methods of producing random numbers [ANNEXC-FIPS 140-2]. It also specifies criteria for entering the seed during key generation, both when the keys are generated inside a crypto module and when they are generated outside it. Entering keys into the crypto module may be manual (e.g., via keyboard) or electronic (e.g., via smart cards, tokens, etc.). A seed key, if entered during key generation, may be entered in the same manner as cryptographic keys. Physical security requirements for the crypto modules that generate keys are also described in FIPS 140-2 and 140-3 (draft) and include tamper-resistance measures.
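The deterministic/nondeterministic distinction is visible in Python's standard library: random.Random is a deterministic (seeded) generator, while the secrets module draws from the operating system's entropy source. A short illustration:

```python
import random
import secrets

# Deterministic RNG: identical seeds reproduce the identical bit stream,
# which is why a guessable seed (e.g., the clock) is fatal for key generation.
gen_a = random.Random(1234)
gen_b = random.Random(1234)
assert gen_a.getrandbits(128) == gen_b.getrandbits(128)

# CSPRNG seeded from an unpredictable OS entropy source: use this for keys.
key = secrets.token_bytes(16)    # 128-bit symmetric key
assert len(key) == 16
```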

Although crypto key generation both for symmetric and asymmetric cryptography is based on RNG, the process is different for generating symmetric keys and asymmetric key pairs.

Images   For asymmetric cryptography, the key pairs are generated according to the approved algorithms and standards. A static key pair can be generated by the end entity or by a facility that securely distributes the key pairs or by both the end entity and the facility in concert. A private signing key supporting the non-repudiation property should be generated on the end entity site and never leave that site. Ephemeral asymmetric keys are usually generated for the establishment of other keys, have a short life, and may be generated by the end entities and key distribution facilities. The following is a brief description of the RSA key pair, as one of the asymmetric key pair generation processes.

Images   An RSA key pair consists of an RSA private key, which in digital signature applications is used to compute a digital signature, and an RSA public key, which in the digital signature applications is used to verify the digital signature. In encryption and decryption applications, the RSA private key is used to decrypt data and the RSA public key is used to encrypt the data. As described in [FIPS 186-3] 53:

An RSA public key consists of a modulus n, which is the product of two positive prime integers p and q (i.e., n = pq), and a public key exponent e. Thus, the RSA public key is the pair of values (n, e) and is used to verify digital signatures. The size of an RSA key pair is commonly considered to be the length of the modulus n in bits (nlen). The corresponding RSA private key consists of the same modulus n and a private key exponent d that depends on n and the public key exponent e. Thus, the RSA private key is the pair of values (n, d) and is used to generate digital signatures. Note that an alternative method for representing (n, d) using the Chinese Remainder Theorem (CRT) is allowed as specified in PKCS #1. In order to provide security for the digital signature process, the two integers p and q, and the private key exponent d shall be kept secret. The modulus n and the public key exponent e may be made known to anyone. Guidance on the protection of these values is provided in SP 800-57. This Standard specifies three choices for the length of the modulus (i.e., nlen): 1024, 2048 and 3072 bits.

A CA for signing certificates should use a modulus whose length nlen is equal to or greater than the moduli used by its subscribers. For example, if the subscribers are using an nlen = 2048, then the CA should use nlen ≥ 2048. RSA keys shall be generated with respect to a security strength S.

Parameters p and q are randomly generated and should be produced from seeds from a random or pseudorandom generator [NIST SP 800-90]. These prime numbers’ seeds should be kept secret or destroyed after the modulus n is computed.
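The modular arithmetic behind an RSA key pair can be sketched with deliberately tiny primes. This is an illustration only, never usable in practice: FIPS 186 requires approved prime-generation methods and moduli of at least 1024 bits (2048 is the common minimum today).

```python
# Toy illustration of RSA key-pair generation -- the primes are tiny so
# the arithmetic is easy to follow; real keys use 2048+ bit moduli.
import math

def toy_rsa_keypair(p: int, q: int, e: int = 65537):
    """Build an RSA key pair from two primes p and q."""
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)        # Euler's totient of n
    assert math.gcd(e, phi) == 1, "e must be coprime to phi(n)"
    d = pow(e, -1, phi)            # private exponent (Python 3.8+)
    return (n, e), (n, d)          # public key (n, e), private key (n, d)

public, private = toy_rsa_keypair(p=61, q=53, e=17)
n, e = public
_, d = private

m = 42                             # a "message" smaller than n
c = pow(m, e, n)                   # operate with the public key
assert pow(c, d, n) == m           # the private key reverses the operation
```

Note how the private exponent d is computed from p, q, and e, which is why p and q (and their seeds) must be kept secret or destroyed once n is computed.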

Images   For symmetric cryptography, the keys may be generated from a random number generation method or regenerated from the previous key during a key update procedure. Another way is to derive the key from a master key using approved FIPS 140-2 derivation functions, although such keys, too, ultimately derive from a random number. For secure key distribution purposes, split knowledge procedures can be used; in that case, different components of the key may be generated in different locations, or the key may be created at one location and then split into components. Each key component must provide no knowledge of the entire key value (e.g., each key component must appear to be generated randomly). The principle of split knowledge is that if knowledge of k out of n components (where k is less than or equal to n) is required to construct the original key, then knowledge of any k − 1 key components provides no information about the original key other than, possibly, its length. In addition, a simple concatenation of key components should not produce a key.
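Derivation of working keys from a master key is commonly done with a standardized KDF. The following is a minimal sketch of the HKDF extract-and-expand construction (RFC 5869, built on HMAC-SHA-256); the salt and `info` labels are illustrative assumptions:

```python
# Minimal HKDF (RFC 5869) sketch: derive distinct working keys from one
# master key using HMAC-SHA-256 extract-and-expand.
import hashlib
import hmac

HASH_LEN = hashlib.sha256().digest_size  # 32 bytes

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """Concentrate the input keying material into a pseudorandom key."""
    return hmac.new(salt or b"\x00" * HASH_LEN, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """Expand the pseudorandom key into `length` bytes of output keying material."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master_key = b"\x0b" * 32                       # stand-in for a master key
prk = hkdf_extract(salt=b"app-salt", ikm=master_key)
enc_key = hkdf_expand(prk, info=b"encryption", length=32)
mac_key = hkdf_expand(prk, info=b"mac", length=32)
assert enc_key != mac_key and len(enc_key) == 32
```

The `info` parameter separates key purposes, so an encryption key and a MAC key derived from the same master key are cryptographically independent.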

Key Distribution and Crypto Information in Transit

Key distribution mainly belongs to the key establishment function of the preproduction phase of the key life cycle. By the NIST definition [NISTSP800-57-1], “it is the process of transporting a key and other keying material from an entity that either owns the key or generates the key to another entity that is intended to use the key.” In many cases, it is a hard problem, which is solved differently for different types of keys. For example, data encryption keys are originated and distributed differently compared with signature verification keys, and public keys are distributed differently from secret keys. The problem is more difficult for symmetric key applications, which need to protect the keys from disclosure. In any case, key distribution should use protection mechanisms that meet the following requirements, whether the business dictates manual distribution, automated distribution, or a combination of both:

  1. Availability of the keys for a recipient after transmission by a sender (redundant channels, “store and forward” systems, and retransmission as a last resort).

  2. Integrity, which should detect modification of keys in transit. MAC, CRC, and digital signature can be used. Physical protection is required as well.

  3. Confidentiality. It may be achieved by key encryption or by splitting the key and distributing its components via separate channels (“split knowledge”). Physical protection should apply as well.

  4. Association of the keys with the intended application usage and related information may be achieved by appropriate configuration of the distribution process.
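Requirement 2 (integrity via a MAC) can be sketched as follows, assuming a pre-shared integrity key; the variable names are illustrative:

```python
# Sketch of MAC-based integrity protection for a key in transit: the
# sender appends an HMAC tag, and the receiver verifies it in constant
# time before using the key material.
import hashlib
import hmac
import secrets

mac_key = secrets.token_bytes(32)          # assumed pre-shared integrity key
wrapped_key = secrets.token_bytes(48)      # the (already encrypted) key in transit

# Sender: compute a MAC over the wrapped key.
tag = hmac.new(mac_key, wrapped_key, hashlib.sha256).digest()

# Receiver: recompute and compare in constant time.
received, received_tag = wrapped_key, tag
expected = hmac.new(mac_key, received, hashlib.sha256).digest()
assert hmac.compare_digest(received_tag, expected)

# A key modified in transit fails verification.
tampered = bytes([received[0] ^ 1]) + received[1:]
bad = hmac.new(mac_key, tampered, hashlib.sha256).digest()
assert not hmac.compare_digest(received_tag, bad)
```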

Symmetric Keys Distribution

Symmetric keys may be distributed manually or electronically, either by using a public key transport mechanism or by wrapping them with previously distributed key-encrypting keys. Keys used only for encrypting data in storage should not be distributed at all, except to backup facilities or other specially authorized entities. A description of methods of symmetric key distribution follows.

Splitting the keys. It is formally defined in FIPS 140-2 as a process by which a cryptographic key is split into multiple key components that individually share no knowledge of the original key. These components can be subsequently input into, or output from, a cryptographic module by separate entities and combined to recreate the original cryptographic key. Thus, FIPS 140-2 Level 3 requires

Images   A cryptographic module that separately authenticates the operator entering or outputting each key component.

Images   Cryptographic key components must be directly entered into or output from the cryptographic module without traveling through any enclosing or intervening systems where the key components may inadvertently be stored, combined, or otherwise processed.

Images   At least two key components must be required to reconstruct the original cryptographic key.

In practical examples, several components of the key can be stored on devices with different ports and network connections, as well as on different crypto tokens that require individual authentication by designated security officers.
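The split-knowledge procedure can be sketched for the simple n-of-n case, where XOR-based components suffice; a true k-of-n threshold (k < n) would require a secret-sharing scheme such as Shamir's.

```python
# n-of-n split knowledge via XOR: every component is needed, and any
# subset of n-1 components is uniformly random, revealing nothing.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    """Split `key` into n components; all n are required to rebuild it."""
    parts = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for p in parts:              # last = key XOR part1 XOR ... XOR part(n-1)
        last = xor_bytes(last, p)
    return parts + [last]

def combine(parts) -> bytes:
    key = parts[0]
    for p in parts[1:]:
        key = xor_bytes(key, p)
    return key

key = secrets.token_bytes(32)
components = split_key(key, 3)
assert combine(components) == key    # all three components reconstruct the key
```

Note that the components are not pieces of the key: each is full-length and random-looking, which is exactly what FIPS 140-2 requires of key components.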

Manual Key Distribution

The process should ensure that the keys are coming from an authorized source and are received by the intended recipient. Also, the entity delivering the key should be trusted by both the sender and the receiver. A key in transit should be encrypted with a key intended for key wrapping.

Electronic Distribution of Wrapped Keys (Key Transport)

It requires a preliminary distribution of key-wrapping keys. In many implementations, a public key of an asymmetric key pair is used as a key-wrapping key; therefore, a recipient in possession of the private key will be able to “unwrap” the symmetric key. If symmetric cryptography is used for wrapping the keys, those key-wrapping keys should be distributed via a separate channel of communication.
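Key transport with an asymmetric key-wrapping key can be illustrated with textbook-RSA toy numbers. This is strictly an illustration: real systems use padded schemes such as RSA-OAEP over 2048-bit or larger moduli.

```python
# Toy key-transport sketch: the sender wraps a symmetric key with the
# recipient's RSA public key; only the private-key holder can unwrap it.
n, e = 3233, 17          # recipient's toy public key (n = 61 * 53)
d = 2753                 # recipient's matching private exponent

symmetric_key = 99                       # toy symmetric key, as an integer < n
wrapped = pow(symmetric_key, e, n)       # sender wraps with the public key
unwrapped = pow(wrapped, d, n)           # recipient unwraps with the private key
assert unwrapped == symmetric_key
assert wrapped != symmetric_key          # the key never travels in the clear
```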

Public and Private Keys Distribution

As was discussed in the review of the methods of cryptography, one of the main advantages of using public and private key cryptography is easier key distribution of public keys. A party in possession of private keys can sign its message, and any receiving party who obtains the sender’s public key can verify the signature. Likewise, any sender can obtain an encryption public key of a recipient, and only the recipient in possession of the counterpart private key can decrypt the encrypted data. While confidentiality of the public key is not needed, authenticity is. That is why public keys are usually distributed wrapped in public key certificates, issued and signed by a trusted certificate authority. The certificate, along with a public key, contains the subject’s name and other attributes that indicate how the public key can be used. For easy access by relying parties, the certificates are either delivered with a signed message or made available in the directories and other publicly accessible distribution points.

Private keys are managed and distributed differently. If a signing private key must support non-repudiation, it should be generated and remain only in the possession of the owner of this key, so no distribution applies. Although generation of public/private key pairs often takes place on a subject’s node and the keys are placed in the crypto store on its hard drive or a hardware module, in some applications this is not the case. As described in the “Key Creation” section of this chapter, sometimes asymmetric key pairs, which do not have to support digital signatures with non-repudiation, may be generated on one of the servers of the public key infrastructure (either the registration or distribution server), in cooperation between the servers and the subscriber, who is the owner of the keys. In such applications, private keys should be delivered to the subscriber via a secure encrypted channel with mutual authentication. Depending on policy, private decryption keys are also put in escrow, so that data may be decrypted if a subscriber leaves the organization or loses access to the keys.

When the public/private key pair is generated on the subscriber’s site, there is no need to distribute the private key—it stays where it was generated and where it belongs.

A private key of a key pair generated on a central facility will be distributed only to the intended owner of that key, using either secure manual distribution or electronic distribution with security measures similar to those for symmetric key distribution, for example, authentication, encrypted channel, split knowledge, etc.

As was mentioned earlier, distributing static public keys does not require encrypted channels or split knowledge techniques, but it has its own specifics. A relying party, who obtains the keys either for verifying an owner’s signature or for encrypting a message for the key owner, should have a high level of assurance that:

  1. The key really belongs to the subject.

  2. The key is associated with certain attributes belonging to the subject.

  3. The key is valid.

  4. The key is allowed by its policy to be used for the intended purpose.

All of these issues are addressed by using Public Key Infrastructure (PKI) and issuing X.509 certificates containing the subject’s public keys and attributes. The certificates are digitally signed by a PKI Certificate Authority and can be distributed via open channels, manually, or via e-mail or published by LDAP, HTTP, and FTP servers. Because each subject’s certificate is signed by a CA, a relying party should treat that CA’s certificate itself as an anchor of trust. Distribution of the trusted CA’s certificate is usually done via other channels. It can be preinstalled by a software manufacturer or obtained from other distribution points. More details about asymmetric key management are provided in a separate PKI section later in this chapter.

Key Storage

The ultimate goal of key management is to prevent any unsanctioned access without impeding legitimate use of the keys by crypto applications and services and by key life cycle management processes. Key storage should meet several requirements, which may be different for different types of keys [NISTSP800-57-1]. Generally, these requirements are broken into several categories.

Keys may be maintained within a crypto module when they are in active use, or they may be stored externally under proper protection and recalled when needed. The protection of the keys in storage should provide

Images   Integrity (by CRC, MAC, digital signature, checksums, parity check, etc.): In addition to logical integrity, the keys stored in HSM may be physically protected by the storage mechanism itself. For example, a crypto store may be designed so that once the key is installed, it cannot be observed from outside. Some key-storage devices (specifically those that meet FIPS 140-2 level 3) are designed to self-destruct when threatened with key disclosure or when there is evidence that the key device is being tampered with.

Images   Confidentiality (by encryption, wrapping, and logical access control). Also, physical security is important (see earlier comments related to integrity).

Images   Association with Application and Objects (making sure that the key belongs to a designated object; e. g., encapsulating public keys with the object DN in a signed certificate or storing private signing keys in the object’s protected key store).

Images   Assurance of Domain Parameters (making sure that domain parameters used in the PKI keys exchange are correct). Domain parameters are used in conjunction with some public key algorithms such as DSA and ECDSA (called p, g, and q parameter) to generate key pairs, to create digital signatures, or when generating shared secrets that are subsequently used to derive keying material.

Images

Table 3.4 - Protection requirements by key types [NIST SP 800-57-1]

In addition to key protection, the key store should also provide availability. Keys should be available to authorized users and applications for as long as data is protected by these keys. Secure key backup and escrow are usually used for this. After the crypto period expires, the keys should be placed in an archive, which can be combined with backup storage.

Overall security requirements and business reasons may define the type of crypto store. Keys stored in the computer files may be more easily accessible than those that are stored in the hardware such as smart cards, external token devices, or PCMCIA. Usually the vendors of crypto applications such as PKI CA or application gateways, which perform signing and decrypting of SOAP messages, provide the users with the choice, so the implementer may decide to either use a hardware security module as a key store or to store the keys in the files. An application’s independence from hardware crypto device vendors is achieved by using standard RSA PKCS#11 compliant interfaces 54. A key file store is usually protected with an additional level of access control, so even a root user may not be able to access the key database if he or she does not have permissions and credentials for it.

Symmetric and private keys must be destroyed if they have been compromised or when their archive period (according to the policy) expires. Throughout the crypto period, any events in which copies of the keys are made should be documented; key destruction should therefore be applied to all copies. Specific methods used for key destruction are warranted by application and business requirements and acceptable risk. There are no specific requirements for destroying public keys. More specifically, processes for key destruction are described in NIST SP800-57 Part 2 [(“Recommendation for Key Management—Part 2: Best Practices for Key Management Organization”)] 55. Zeroization is a technical term for destroying the keys by causing the storage medium to reset to all zeroes [NIST SP800-21] 56. Automatic zeroization is required when an attempt is made to access the maintenance interface or tamper with a device meeting FIPS 140-2 level 3 requirements [FIPS 140-2]. The key management policy should describe in detail the process of zeroization in a specific key management system.
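A minimal sketch of zeroization at the application level follows. Note that in Python only a mutable buffer can be overwritten in place (immutable `bytes` objects cannot be zeroized), and guaranteed erasure in practice is delegated to the HSM or operating system.

```python
# Application-level zeroization sketch: key material held in a mutable
# buffer is overwritten with zeroes before the buffer is released.
import secrets

key = bytearray(secrets.token_bytes(32))   # key material in a mutable buffer

def zeroize(buf: bytearray) -> None:
    """Overwrite every byte of the buffer with zeroes."""
    for i in range(len(buf)):
        buf[i] = 0

zeroize(key)
assert all(b == 0 for b in key)
```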

If it is believed that an encryption key for data at rest has been compromised, the data should be reencrypted with a new key. This whole process is called key rotation, and it includes decrypting the data with the old encryption key (which is believed to be compromised) and rekeying the data with the new encryption key. With large volumes of data, data availability may be affected. That is why considerable effort is made to limit the need to rekey data. This is achieved by using a randomly generated value as an encryption key and making it directly unavailable to any user, including administrators. The application administrator may control encryption and decryption processes, but the key contents will never be disclosed.
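The rotation sequence (decrypt with the old key, re-encrypt with the new one) can be sketched with a toy hash-based stream cipher. The cipher here is for illustration only; real deployments use an authenticated cipher such as AES-GCM.

```python
# Key-rotation sketch with a toy keystream cipher (illustration only).
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: concatenated SHA-256(key || counter) blocks."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def crypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream (same operation encrypts and decrypts)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

old_key = secrets.token_bytes(32)          # believed compromised
new_key = secrets.token_bytes(32)
stored = crypt(old_key, b"customer records")

# Rotation: decrypt with the old key, re-encrypt with the new one.
plaintext = crypt(old_key, stored)
rekeyed = crypt(new_key, plaintext)
assert crypt(new_key, rekeyed) == b"customer records"
```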

Key Update

Modern key management is highly automated. For most phases of the key life cycle, manual steps are not required. Although creating a seed sometimes may require users to move a mouse or enter a long sequence of random keystrokes, users do not manually select, communicate, or transcribe real crypto keys. Modern key management makes periodic key updates in accordance with key policies easier. One example is automatic key and certificate rollover in the PKI. When a key or certificate is approaching its “valid to” date and time, an automated key update will kick off. Another example is session encryption keys. When an encrypted session starts, an initial key is negotiated and exchanged between parties using the key encryption key. When the session duration or volume of transferred data exceeds the limits, a new session is established between the same parties, and a new session key is renegotiated and used for transactions. More frequent key updates reduce the chance of a successful cryptanalytic attack and the volume of confidential data at risk. On the other hand, they may reduce system performance and increase the chance of key compromise in the case of misconfiguration. All the pros and cons should be evaluated, and keys’ life spans and the frequency of key updates should be reflected in the policy.

Change of keys in the operational phase of the key life cycle may be caused by several reasons, as was alluded to earlier: key compromise, crypto period approaching the expiration date, or just a desire to reduce the volume of data encrypted with the same key.

A new key may be produced by automatic rekeying or by an automated function updating an existing key. Rekeying produces a key that is completely independent of the key it replaces. Rekeying is applicable when an old key is compromised, when the key approaches its expiration date, or when a new session key must be established according to the requirements.

Another way of changing the keys is by applying a nonreversible function to an existing key. Updating an old key in this manner does not require new key distribution or exchange between parties, so it may be less expensive. Parties may agree to exchange the keys on a particular day and time or upon exchanging a certain volume of encrypted data or on other conditions. This method does not apply in the case of key compromise.
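A one-way key update of this kind can be sketched as follows; the domain-separation label is an illustrative assumption:

```python
# Sketch of a nonreversible key update: both parties apply the same
# one-way function to the current key, so no new key distribution is
# needed. Because a compromised old key compromises all successors,
# this method never applies after a compromise.
import hashlib

def update_key(current: bytes) -> bytes:
    """Derive the next key with a one-way hash; the old key cannot be
    recovered from the new one."""
    return hashlib.sha256(b"key-update-v1" + current).digest()

key_v1 = bytes(32)              # placeholder initial key (all zeroes)
key_v2 = update_key(key_v1)
key_v3 = update_key(key_v2)
assert key_v1 != key_v2 != key_v3
assert len(key_v3) == 32
```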

Key Revocation

Whenever a key is compromised, it cannot be trusted to provide required security, cannot serve its purpose, and should be revoked. Generally, the key is considered to be compromised if it is released to or discovered by an unauthorized entity or this event is suspected to have happened. A key may enter the compromised state from any state except the destroyed state. Although the compromised key cannot be used for protection, in some cases it still may be used to process cryptographically protected information, with reasonable caution and suspicion. For example, a digital signature may be validated if it can be proved that the data was signed before the signing key was compromised and that the signed data has been adequately protected.

Key revocation applies to both symmetric and asymmetric keys, and the process should be formally described in the Key Management Policy. In asymmetric key management systems and PKI, technically, a public key is revoked (or, most often, the public key certificate containing that key), but as a result, its counterpart private key is automatically revoked as well.

Information about key revocation may be sent as a notification to the involved parties, which would indicate that the continued use of the key is not recommended. This notification should include complete information about the revoked key, the date and time of revocation, and the reason. Based on the revocation information provided, other entities could then make a determination of how they would treat information protected by the revoked keying material.

Another method is to provide the participating entities (i.e., relying parties) with the access point for obtaining the status of the key material. For example, if a signature verification public key is revoked because an entity left the company, the information may be published in the Certificate Revocation List (CRL). But an application may still honor the signature if it was created before the certificate was published in the CRL 57. At the same time, if a signing private key was compromised in an unknown time frame, but eventually the public key certificate was revoked, the situation should be assessed more carefully. Certificate revocation in more detail will be reviewed in the section titled “Public Key Infrastructure.”
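The signing-time check described above can be sketched against a hypothetical in-memory CRL; the serial numbers and dates are made up for illustration:

```python
# Sketch: honor a signature only if its certificate was not revoked, or
# the data was provably signed before the revocation time.
from datetime import datetime, timezone

# Hypothetical CRL: revoked certificate serial numbers -> revocation time.
crl = {
    4711: datetime(2024, 3, 1, tzinfo=timezone.utc),
}

def signature_acceptable(serial: int, signed_at: datetime) -> bool:
    revoked_at = crl.get(serial)
    return revoked_at is None or signed_at < revoked_at

# Certificate not on the CRL: signature is acceptable.
assert signature_acceptable(9999, datetime(2024, 6, 1, tzinfo=timezone.utc))
# Revoked certificate, but signed before revocation: may still be honored.
assert signature_acceptable(4711, datetime(2024, 2, 1, tzinfo=timezone.utc))
# Revoked certificate, signed after revocation: reject.
assert not signature_acceptable(4711, datetime(2024, 4, 1, tzinfo=timezone.utc))
```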

A symmetric key that is used to generate a MAC may be revoked so that it is not used to generate a new MAC for new information. However, the key may be retained so that archived documents can be verified.

The recommended approach to the key revocation policy is to reflect it in the life cycle for each particular key. Thus, when a key is used in communication between just two parties, the entity revoking the key just informs another entity. On the other hand, if the key is used within an infrastructure with multiple relying parties, the revoking entity should inform the infrastructure, which should make the information about key revocation available to all relying parties.

Key Escrow

Generally, escrow is defined as something delivered to a third person (usually called an “escrow agent”) to keep, and to be returned to the delivering entity under certain proof and conditions. This “something” may be a document, money, or a key.

In cryptography applications, a key escrow system operates with two components of one key, and these two components are entrusted to two independent escrow agents. For government applications [FIPS 185], these key components will be presented by the escrow agent to a requester, which may be an entity related to the owner of the key or a law enforcement organization, upon certain conditions, authorizations, and authentication 58.

This approach is implemented in electronic surveillance of encrypted telecommunications involving specific electronic devices. The key components obtained by the requester are entered into this device and enable decryption. Neither of the escrow agents can perform decryption, because it has just one component of the key. In applications for the private sector, the key components may be kept by two officers; therefore, if a key owner entity is not available, its encrypted information may be decrypted upon directions to the escrow officers from a higher official. Two types of risk exist in this schema: (1) collusion and (2) failure of reassembling and using the key for its intended purpose.

In order to support escrow capabilities in telecommunication, the U.S. government adopted the symmetric encryption algorithm SKIPJACK and a Law Enforcement Access Field (LEAF) method, which presents one part of a key escrow system enabling decryption of encrypted telecommunications. Both the SKIPJACK and the LEAF creation method are implemented in electronic devices. The devices may be incorporated in security equipment used to encrypt (and decrypt) telecommunications data 59.

Decryption of lawfully intercepted telecommunications may be achieved through the acquisition and use of the LEAF, the decryption algorithm, and the two escrowed key components.

Backup and Recovery
Backup

According to 800-57-2 [NISTSP800-57-2], key recovery is a stage in the life cycle of keying material; mechanisms and processes that allow authorized entities to retrieve keying material from key backup or archive 60. Key backup and recovery is a part of the KMS contingency plan, which according to 800-57-1 [NISTSP800-57-1],... “is a plan that is maintained for disaster response, backup operations, and post-disaster recovery to ensure the availability of critical resources and to facilitate the continuity of operations in an emergency situation.”

As in the life cycle of any system, data can become unusable because of many reasons such as file corruption, hardware failure, configuration errors, etc. But in the case of cryptosystems management, backup should be considered only if there are no other ways (such as rekeying or key derivation) to provide continuity. These specific recommendations apply because of the risk associated with key backup, and the fact that key and other crypto information backup compromise is detrimental to the KMS operations. When planning key backup, the following questions should be answered: what key material needs to be backed up, where and how will the backup be stored, how will the backup and recovery procedures be performed, and who is responsible.

Not all keys and cryptographic information should be backed up. For instance, private signing keys should not be backed up, to avoid any questionable situation with non-repudiation. However, in the specific case of the CA’s signing key, this does not apply, because unrecoverable loss of this key would prevent new certificates from being issued until the CA was rekeyed. Special security measures apply to the backup of this key, such as storing it on a removable hardware crypto token, protected by multiple keys and passwords, that is kept in a safe under administrative control. Separation of duties in this schema prevents collusion and key compromise. Ephemeral keys and shared secrets, which are generated during key negotiations and used for one session for data in transit, do not need to be backed up. An RNG seed should not be backed up either, because it is not used directly for data encryption and is needed only for key generation.

It is important to mention that the life span of the key backup should be equal to or longer than the life span of the encrypted or signed data. Another specific feature that makes key backup and recovery different compared with similar processes for other data is the criticality of a key’s availability and, thus, backup redundancy. Both competing requirements for minimizing risk of the backed-up keys’ disclosure and redundant storage for robust key recovery should be considered.

Key Recovery

Keys may be unavailable for cryptographic operations in several situations: the key material stored as a file in the system or on a hardware device/token is corrupted; the key owner loses access to the key material (e.g., a forgotten or lost password); or the owner is not available when the organization needs access to the data. Keys may need to be recovered to enable decryption of data previously encrypted with a lost key or to verify the integrity and authenticity of previously signed data if a signature verification key is lost. The key recovery process acquires a key from backup storage and makes it available for the decryption or verification process.

Public Key Infrastructure

At this point, the security architect should have fundamental knowledge about asymmetric cryptography, public key certificates, and PKI. The following will provide additional in-depth information regarding PKI and associated subjects. Before delving into the subjects in the following subsections, several points should be noted that will influence the context of those subsections:

Images   The most important aspects of certificates, their life cycle, purpose, restrictions, and the way these certificates are supposed to be managed should be documented in the Certificate Policy (CP) and Certificate Practice Framework (CPF) [RFC 3647] 61. CPF includes one or more Certificate Practice Statements (CPSs) that address “... the practices that a certification authority employs in issuing, managing, revoking, and renewing or rekeying certificates.”

Images   There are two categories of end entities that use PKI services: subscribers and relying parties. Subscribers register with the PKI, subsequently generate or receive private and public key pairs, and receive their certificates from a certificate authority (CA). Relying parties have access to the subscribers’ public certificates, which they use for secure exchanges with subscribers. They trust the Certificate Authority, which is the heart of the PKI; hence, they rely on the PKI, which issues and supports certificates issued to subscribers.

Images   Depending on the policies that rule the PKI and the needs of relying applications, there may be one-key-pair, two-key-pair, and sometimes multi-key-pair implementations. A single-key-pair PKI application uses one public/private key pair for all application needs, chiefly for data decryption/encryption and signing/verification.

Images   Interoperability and integration of PKI with other IT infrastructure components and both client and server applications are very important issues for successful implementation and deployment. Although many relevant standards exist and are widely adopted, it should not be taken for granted that in the process of PKI enrollment, the keys will be placed in the right location on the client (subscriber) part and the certificate will be stored and published in a location that every relying party is aware of.

Key Distribution

The main reason why asymmetric cryptography and public certificates gained popularity is the manner in which they address the key distribution problem. In order to support authentication and confidentiality, each party needs to have its own symmetric keys dedicated for data exchange with one correspondent. If A is going to encrypt for B, C, and D, it would have to have individual keys for B, C, and D and send the keys to those parties in a secure manner. Each of B, C, and D would have to do the same. In sum, the two main problems of symmetric cryptography are the number of the required keys and the problem of their secure distribution. Public key cryptography solves both problems because of its nature. Keys come in pairs, and only the private key should be kept secret and should not be distributed. Its counterpart public key may be available to all parties, and hence may be published for public access on any server; that is, file server, LDAP, or Web server.
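The key-count argument can be made concrete: n parties need n(n − 1)/2 pairwise symmetric keys, but only n asymmetric key pairs.

```python
# Compare the number of keys needed for n communicating parties.
def symmetric_keys_needed(parties: int) -> int:
    """Pairwise symmetric keys: every pair of parties needs its own secret key."""
    return parties * (parties - 1) // 2

def asymmetric_keypairs_needed(parties: int) -> int:
    """Public-key cryptography: one key pair per party suffices."""
    return parties

assert symmetric_keys_needed(4) == 6        # A, B, C, D: six pairwise keys
assert asymmetric_keypairs_needed(4) == 4
assert symmetric_keys_needed(100) == 4950   # the symmetric count grows quadratically
```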

For encryption, a recipient’s public key may be used by any party wanting to encrypt data for that recipient, who is in possession of the counterpart private key. It guarantees that only this recipient in possession of that private key will be able to decrypt the data.

For digital signature verification, each recipient may obtain a public verification key of the sender and be confident in the authenticity of the signature, because only the sender holds the private key that is used to produce the digital signature being verified.

The main problem that exists with public key distribution is to guarantee the key’s integrity and binding to the identifier of the holder of the counterpart private key. This problem is solved by using X.509 public key certificates, which bind the subject name to a public key, and this binding is sealed by the signature of the PKI Certificate Authority. Because the CA signature is trusted by all parties, the integrity of the public key and its binding with the subject are trusted too.

Public key distribution, which is implemented via certificate distribution, boils down to publishing these certificates on a server accessible by relying parties or just attaching the certificate to an encrypted or signed message. Private keys should not be distributed at all.

Certificate and Key Storage

When talking about PKI certificates and the keys, we should always remember the guidance provided in CP and CPS documents. The purpose of the certificates and their keys will dictate how they should be handled and stored.

For two-key-pair applications, where the encryption key pair and the corresponding public key certificate are created by the CA, the encryption public key certificates are most often placed in the subscriber’s Directory entry and also in the PKI/CA database. Copies of the decryption private key and the encryption public key certificate will be securely sent to the subscriber and will be stored on the subscriber’s machine on the disk or HSM. Decryption private keys should never be published, but they should be backed up. In a two-key-pair PKI, the subscriber generates the signing key on its machine and securely stores the signing private key on the disk or HSM. It sends only the verification public key to the CA in a secure manner. The signing private key is not sent to the CA, and it is never backed up in the CA’s database. When the CA receives the verification public key, it generates a verification public key certificate. A copy of this certificate is stored in the CA database, and is also sent to the subscriber. Often, when the PKI subscriber sends a signed message to any recipient, it attaches the verification certificate to it. So, a relying party does not have to access the directory to retrieve this certificate, which is required for signature verification.

For one-key-pair applications, a dual-usage key pair is generated on the subscriber’s machine and stored on disk or in an HSM. A copy of the dual-usage private and public key is sent to the CA, and the private key is stored in the CA database. The CA uses the public key to generate a dual-usage public key certificate and places it in the user’s Directory entry. A copy of this certificate is stored in the CA database; it is also sent to the subscriber and stored on the subscriber’s machine, on disk or in an HSM.

In summary, there are several places where certificates and public and private keys are stored: PKI/CA database, Directory server, and subscriber’s machine. The specifics are highly dependent on PKI implementation and CPS directives.

PKI Registration

PKI consists of many components: Technical Infrastructure, Policies, Procedures, and People [PKIREGAG]. Initial registration of subscribers (either users, or organizations, or hardware, or software) for a PKI service has many facets, pertaining to almost every one of the PKI components. There are many steps between the moment when a subscriber applies for a PKI certificate and the final state, when keys have been generated and certificates have been signed and placed in the appropriate locations in the system. These steps are described either explicitly or implicitly in the PKI CPS.

Reference to the CP and CPS associated with a certificate may be presented in the X.509 v3 certificate extension called "Certificate Policies." This extension can give a relying party a great deal of information, identified by the attributes "Policy Identifier" and "Policy Qualifier" in the form of Abstract Syntax Notation One object identifiers (ASN.1 OIDs).

One type of Policy Qualifier is a reference to the CPS, which describes the practices employed by the issuer when registering the subscriber (the subject of the certificate). Here we focus on the following:

  1. How the subject proves its organizational identity.

  2. How the person, acting on behalf of the subject, authenticates in the process of requesting a certificate.

  3. How the certificate issuer can be sure that the subject named in the certificate request really possesses the private key whose public counterpart is presented in the request along with the subject’s name.

How the Subject Proves Its Organizational Identity

Authentication requirements in the process of registering with a PKI depend on the relationship between the CA and the organization, the nature of the applying End Entity (EE), and the CP, which states the purpose of the certificate. Thus, the organization may have its own internal CA or may use a commercial CA to serve all its certificate needs. It may issue low-assurance certificates, which support only internal e-mail digital signatures, or high-assurance certificates, which encrypt and authenticate high-value transactions between the organization and external financial institutions. End entities can include individuals, organizations, applications, elements of infrastructure, etc.

Organizational certificates are usually issued to the subscribing organization’s devices, services, or individuals within the organization. These certificates support authentication, encryption, data integrity, and other PKI-enabled functionality when relying parties communicate. Organizational devices and services may include:

Images   Web servers with enabled TLS, which support the server’s authentication and encryption

Images   Web services security gateways, which support SOAP messages’ authentication and signatures’ verification, encryption, and decryption

Images   Services and devices, signing content (software codes, documents, etc.) on behalf of the organization

Images   VPN gateways

Images   Devices, services, and applications supporting authentication, integrity, and encryption of Electronic Data Interchange (EDI), B2B, or B2C transactions

Images   Smart cards for end user authentication

Among procedures enforced within applying organizations (before a certificate request to an external CA is issued) are the following:

Images   An authority inside the organization should approve the certificate request.

Images   The authority should verify that the subject is who he or she claims to be.

Images   After that, an authorized person (authorized submitter) within the organization will submit a certificate application on behalf of the organization.

Images   The organizational certificate application will be submitted for authentication of the organizational identity.

Images   Depending on the purpose of the certificate, an external certificate issuer will try to authenticate the applying organization, which may include some or all of the following steps, as in the following example [VeriSignCPS]:

Images   Verify that the organization exists.

Images   Verify that the certificate applicant owns the domain name that is the subject of the certificate.

Images   Verify the certificate applicant’s employment and whether the organization authorized the applicant to represent it.

There is always a correlation between the level of assurance provided by the certificate and the strength of the process used to validate and authenticate the EE registering with the PKI and obtaining that certificate.

How a Person, Acting on Behalf of the Subject, Authenticates to Request a Certificate (Case Studies)

Individual certificates may serve different purposes: for example, e-mail signing and encryption, or user authentication when connecting to servers (Web, directory, etc.) to obtain information or to establish a VPN encryption channel. These kinds of certificates, according to their policy, may be issued to anybody who is listed as a member of a group (for example, an employee of an organization) in the group’s directory and who can authenticate. An additional authorization for an organizational person may or may not be required for PKI registration.

An individual who does not belong to any organization can register with some commercial CAs with or without direct authentication and with or without presenting personal information. As a result, an individual receives his or her general use certificates. Different cases are now briefly described.

Online Certificate Request without Explicit Authentication

As in the example of a VeriSign Class 1 certificate, a CA can issue an individual certificate (aka Digital ID) to any EE with an unambiguous name and e-mail address. In the process of submitting the certificate request to the CA, the keys are generated on the user’s computer, and the initial data for the certificate request entered by the user (user name and e-mail address) is signed with the newly generated private key. It is all sent to the CA. Soon the user receives by e-mail a PIN and the URL of a secure Web page where that PIN must be entered to complete the process of issuing the user’s certificate. As a consequence, the person’s e-mail address and the ability to log into that e-mail account serve as an indirect, minimal proof of authenticity. However, nothing prevents person A from registering in a public Internet e-mail service as person B and requesting, receiving, and using person B’s certificate.

Images

Figure 3.9 - Authentication of an Organizational Person

Authentication of an Organizational Person

The ability of the EE to authenticate in the organization’s network (e.g., e-mail, domain) or with an organization’s authentication databases may provide an acceptable level of authentication for PKI registration. Even just the person’s organizational e-mail authentication is much stronger from a PKI registration perspective than authentication with public e-mail. In this case, user authentication for PKI registration is basically delegated to e-mail or domain user authentication. In addition to corporate e-mail and domain controllers, an organization’s HR database, directory servers, or databases can be used for the user’s authentication and authorization for PKI registration. In each case, an integration of the PKI registration process and the process of user authentication with corporate resources needs to be done (see Figure 3.9).

A simplified case occurs when a certificate request is initiated by a Registration Authority upon management authorization. In this case, no initial user authentication is involved.

Individual Authentication

In the broader case, PKI registration will require a person to authenticate against any authentication databases defined in accordance with the CPS. For example, to obtain a purchasing certificate from a CA that is integrated into a B2C system, a person will have to authenticate with the financial institutions that participate in the Internet purchasing transactions. In many cases, an authentication gateway or server performs this authentication using the user’s credentials (see Figure 3.10).

Images

Figure 3.10 - Individual Authentication

Dedicated Authentication Bases

In rare cases, when a PKI CPS requires user authentication that cannot be satisfied by the existing authentication bases, a dedicated authentication database may be created to meet all CPS requirements. For example, for this purpose a prepopulated PKI Directory may be created, where each person eligible for PKI registration will be challenged with his or her password or personal data attributes. Among possible authentication schemes with dedicated or existing authentication databases may be a password with additional personal challenge-response data, such as mother’s maiden name, make and year of a first car, biometrics, and others.

Face-to-Face

The most reliable, but most expensive, method to authenticate an EE for PKI registration is face-to-face authentication. It is applied when the issued certificate will secure either high-risk, high-responsibility transactions (certificates for VPN gateways, CA and RA administrators) or transactions of high value, especially when the subscriber will authenticate and sign transactions on behalf of an organization. To obtain this type of certificate, the individual must appear in person at the dedicated corporate registration security office, present a government-issued ID or badge and other valid identification, and sign a document obligating use of the certificate only for its assigned purposes. All the procedures and the set of IDs and documents that must be presented to the authentication authority are described in the CPS.

Proof of Possession

The key PKIX-CMP messages sent by the EE in the process of initial registration include the "Initialization Request," "Certification Request," and "PKCS10 Certification Request" messages. The full structure of these messages is described in [CRMF] and [PKCS10]. Certificate request messages, among other information, include the "Public Key" and "Subject" name attributes.

The EE authenticated itself out-of-band with the registration authority (RA) during the initialization phase of initial registration. Now an additional proof is required that the EE, or the "subject," possesses the private key that is the counterpart of the "public key" in the certificate request message. This proof of binding, the so-called "Proof of Possession" (POP), is what the EE submits to the RA.

Depending on the types of requested certificates and public/private key pairs, different POP mechanisms may be implemented:

Images   For encryption certificates, the EE can simply provide the private key to the RA/CA, or the EE can be required to use its private key to decrypt data sent back by the RA/CA:

Images   In the direct method, the RA generates a challenge value, encrypts it with the public key from the request, and sends it to the EE. The EE is expected to decrypt the value and send it back.

Images   In the indirect method, the CA will issue the certificate, encrypt it with the given public encryption key, and send it to the EE. The subsequent use of the certificate by the EE will demonstrate its ability to decrypt it, and hence the possession of the private key.

Images   For signing certificates, the EE just signs a value with its private key and sends it to RA/CA.
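The signing-certificate POP above can be illustrated with a short sketch. Textbook RSA with deliberately tiny primes (p = 61, q = 53) stands in for a real key pair; the challenge value is illustrative only, and a production POP would use full-size keys and proper padding.

```python
import hashlib

# Toy "signing POP": the EE signs a value with its private key; the RA
# verifies with the public key from the certificate request. Textbook RSA
# with tiny primes stands in for a real key pair (n = 61*53 = 3233).
n, e, d = 3233, 17, 2753   # e*d = 46801 ≡ 1 (mod lcm(60, 52) = 780)

def sign(message: bytes, priv_exp: int) -> int:
    m = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(m, priv_exp, n)

def verify(message: bytes, signature: int, pub_exp: int) -> bool:
    m = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, pub_exp, n) == m

# RA sends a challenge; the EE returns it signed, proving possession
challenge = b"ra-challenge-42"
pop = sign(challenge, d)
assert verify(challenge, pop, e)              # genuine POP verifies
assert not verify(challenge, (pop + 1) % n, e)  # altered signature fails
```

The RA never sees the private key; it only confirms that whoever produced the signature holds the counterpart of the public key in the request.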

Certificate Issuance

A certificate can be looked at as an electronic equivalent of a subject’s ID document, issued for particular purposes in accordance with the organization’s policy. Technically, the sanctioned and expected usage of the certificate is represented in the X.509 certificate "Key Usage" attribute. A relying party application is capable of verifying this attribute; therefore, the certificate will be used only within the scope of its key usage. For example, encryption certificates are issued for encrypting data for a recipient whose name is in the certificate, and verification certificates are issued to verify a signature of the sender who signed the message. Very often, one certificate is issued to serve many purposes. In any case, a relying party has information to help decide how much security the certificate can support, provided that the issuance and management of this certificate is trustworthy. Two main issues relate to this question: Is the issuing CA trustworthy, and how is the information in the certificate secured?

Images   A certificate is digitally signed by an issuing CA, and it can be trusted only if the CA is trusted. A guard verifying a person’s ID first looks to see what organization or country issued that ID and whether the issuing authority is in the trusted list. In the same way, a relying party that verifies a certificate will verify whether the issuing CA is in the trusted list. Technically, the application checks whether that CA’s certificate is installed in a designated store and is not in a revocation list.

Images   Information in the certificate is secured by a digital signature of the issuing CA. Anybody can view and browse the certificate and each of its attributes, including the subscriber ("subject") name with its public key and the CA ("issuer") name with its key identifier and thumbprint. The certificate binds together all the attributes, including the most important: the subject’s name and its public key. The integrity of this binding is preserved by the signature of the trusted issuing CA. Some details of this process have already been mentioned in the previous section, "PKI Registration." Before signing that binding, the CA has to receive evidence that the subscriber public key is associated with the subscriber. One way to obtain it is a proof of possession of the private key: the subscriber, who generated the key pair, signs a message containing its public key and its common name with its private key. The message is sent to the CA directly or via the registration authority (RA) and is used as material for the public certificate. Other certificate attributes and extensions are defined by certificate templates. The format of the X.509 v3 certificate is presented in Figure 3.11.

Images   Once all the attributes are filled in, the certificate is digitally signed by the CA and sent to the subscriber or published in the Directory.

Images

Figure 3.11 - Format of the X.509 v3 certificate

Trust Models

One fundamental purpose of PKI is to represent the trust relationship between participating parties. When a verifying party verifies another participant’s certificate, it generally verifies a chain of trust between that participant’s certificate and the verifier’s anchor of trust. If that party’s certificate was issued by a CA that is directly trusted by the verifier, then that CA is an anchor of trust. Otherwise, there are one or more intermediate CAs that constitute a chain between the anchor of trust and the certificate to be verified. In both cases, the anchor of trust is based on a preconfigured certificate, in most cases provided out of band or during a configuration process. Several models, described in the following text, are available to provide chains of trust for PKI applications supporting multiple communities.

Subordinate Hierarchy

Very often, current and long-term business reasons suggest putting two or more CAs into a hierarchical trust relationship. The CA hierarchy contains one root CA at the top of the hierarchical tree and one or more levels of subordinate CAs (Figure 3.12). Each of the subordinate CAs has one immediate superior and may have one or more subordinates (the root CA is the top superior). A superior CA issues and signs CA certificates for its immediate subordinates. Only the root CA can issue and sign its own certificate. For crypto application operations, a subordinate issuing CA does not need to certify its superior. Each participant should know and trust the root CA, which establishes an anchor of trust. A relying party that trusts the root CA needs to validate a path from the root CA to the sender’s certificate.

This model is good for internal enterprise applications, but the hierarchy may be hard to implement between enterprises because it requires shared cross-enterprise certificate policies and one shared root of trust. For more details about certificate chains issued by hierarchical PKI, see the section titled "Certificate Chains" later in this chapter.

Images

Figure 3.12 - Hierarchical PKI CAs.

Cross-Certified Mesh

Cross-certified mesh is probably the most general model of trust between CAs and participating PKIs. The hierarchical model described earlier may be interpreted as a mesh with some constraints. The mesh model is good for intercommunity and dynamically changing enterprise PKI applications, especially for nonhierarchical organizations. When organizations need to establish trust relations, they cross-certify their CAs, which does not require a change of anchors of trust or other elements of the existing PKIs. This model is also good for merging previously implemented PKIs into one PKI. Cross-certification involves exchanging the participating PKIs’ public verification keys, with each participating PKI signing the received key with its internal root CA. When more than two PKIs participate in the mesh, they create a web of trust by mutually cross-certifying each other. Although no changes to the participating PKIs are required, certificate verification in this model of trust is more difficult to implement because there may be multiple certificate paths. More details are given in the section titled "Cross-Certification" later in the chapter.

Bridge CA

When a cross-certified mesh is too dynamic and grows too fast, it may not scale well: with n CAs it is supposed to include and support n(n − 1) cross-certifications, and verification paths can become ambiguous. A bridge CA model may be helpful in this case. A large and very comprehensive implementation of this model is the Federal Bridge Certification Authority (FBCA), which is well described in [FBCACP] and [FPKIATO]. As in the FBCA example, a bridge CA issues and manages cross-certificates for participating PKIs. By creating a mesh of participating root CAs, the bridge CA model allows participating parties to mutually validate each other’s certificate paths. More details are given in the section titled "Cross-Certification" later in this chapter.

Trusted List

The most well-known example of a Trusted List model is the set of publicly trusted root certificates embedded in Internet browsers. When a client system verifies another party’s certificate, it tries to chain that certificate to one of the certificates in the list of trusted roots. A fundamental difference between this model and the previously discussed hierarchical, mesh, and bridge models is that the parties to be verified have to accommodate the relying parties’ trusted roots. It moves management overhead from PKI CAs to the clients and, in an environment with many CAs, may require a large number of root certificates to be included in the list of trusted CAs.

Certificate Chains

Each CA’s certificate is located in a certificate chain, which starts at the root CA certificate and includes the certificates of all intermediate subordinate CAs. Certificates of the issuing CAs, which issue certificates to end entities, are at the end of this chain (see Figure 3.12). Several considerations should be taken into account:

Images   The validity of an issuing CA’s certificate depends on the validity and life span of the whole certificate chain. If a root or an intermediate certificate at a higher level of the hierarchy expires, validation of an end entity certificate issued by an issuing CA lower in the hierarchy should fail. If any certificate in the chain is revoked, validation should fail as well. That is why the higher CAs’ certificates and certificate revocation lists (CRLs) usually have significantly longer life spans. The decreasing life span of the certificates in the chain is presented in Figure 3.13.

Images

Figure 3.13 - Hierarchical PKI certificate life span

Images   The higher hierarchical CAs require higher security, because a compromise means revoking all the subordinate and issuing CAs on lower levels as well as all the end entities’ certificates. As a result, more certificates are compromised and require revocation. It is a common practice in PKI to take a root CA offline and lock its key-storing medium in a secured safe. It needs to be brought back online only when its root certificate and the immediate subordinate CAs’ certificates approach end of life and need to be reissued to maintain the validity of all subordinate chains, when a new immediate subordinate CA certificate should be issued, or when an existing subordinate CA is compromised and its certificate must be revoked. Another reason to bring the root CA online is to let it update and publish the CRL when the CRL approaches expiration or must be updated.

Images   The main purpose of the CA hierarchy and the use of certificate chains is to establish an anchor of trust (based on the root CA) and create a hierarchical model of trust. Distributing only a root CA certificate among all the relying parties is easier than distributing multiple CAs’ certificates. A relying party that needs to validate a certificate issued by an issuing CA in the hierarchical PKI walks up the chain until it reaches a root certificate, which is expected to be installed as a trusted CA certificate.
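The walk up the chain described in these bullets can be sketched as follows. Certificates are toy dictionaries, signature verification is elided, and the decreasing life spans mirror Figure 3.13; all names and dates are illustrative.

```python
from datetime import date

# Toy certificate store: each entry names its issuer and an expiry date.
# Life spans shrink down the hierarchy, as in Figure 3.13.
certs = {
    "Root CA":    {"issuer": "Root CA",    "expires": date(2040, 1, 1)},
    "Sub CA":     {"issuer": "Root CA",    "expires": date(2035, 1, 1)},
    "Issuing CA": {"issuer": "Sub CA",     "expires": date(2030, 1, 1)},
    "alice":      {"issuer": "Issuing CA", "expires": date(2026, 1, 1)},
}
TRUSTED_ROOTS = {"Root CA"}  # the relying party's anchor of trust

def validate_chain(name: str, today: date, revoked=frozenset()) -> bool:
    while True:
        cert = certs[name]
        if name in revoked or today > cert["expires"]:
            return False                  # any bad link fails the whole chain
        if cert["issuer"] == name:        # self-signed: reached a root
            return name in TRUSTED_ROOTS
        name = cert["issuer"]             # walk up to the superior CA

assert validate_chain("alice", date(2025, 6, 1))
assert not validate_chain("alice", date(2027, 6, 1))              # EE cert expired
assert not validate_chain("alice", date(2025, 6, 1), {"Sub CA"})  # revoked mid-chain
```

Note how revoking or expiring any higher certificate invalidates everything beneath it, which is exactly why higher CAs get longer life spans and stronger protection.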

Certificate Revocation

When the private key of a subscriber is compromised or is suspected to be compromised, when an attribute of the certificate (e.g., rank) changes, or when the trust between the CA and the subscriber changes (for example, the employee holding the certificate leaves the company), the issued certificate should be revoked immediately. The revocation process, from its origination to execution, should be described in the CPS. To inform relying parties, the revoked certificate is placed on a CRL, and the CA reissues, signs, and publishes the updated CRL. Regularly, or immediately after the revocation event, the latest CRL is published in a commonly available space (such as the directory server).

When a relying party (an application) receives a signed message, it should try to verify the signature. First, it checks that the associated certificate has not expired; second, it checks that the certificate has not been revoked. Depending on business requirements, there can be two scenarios:

Images   A relying party is required to validate the certificate with instantaneous revocation data. This is real-time validation, and the relying party does not use any cached CRL.

Images   A relying party is only required to use a valid, nonexpired CRL. A CRL is considered valid if a trusted CA has signed it and the current time is between the "thisUpdate" and "nextUpdate" CRL attributes. So any caching mechanism can be used to store the CRL on the validating site.
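The second scenario, accepting any valid nonexpired cached CRL, hinges on the thisUpdate/nextUpdate window. A minimal sketch (signature verification elided, field names and serial numbers illustrative):

```python
from datetime import datetime, timedelta

# A CRL is usable while "now" falls inside its validity window;
# outside the window, a fresh copy must be fetched from the repository.
def crl_is_valid(crl: dict, now: datetime) -> bool:
    return crl["this_update"] <= now <= crl["next_update"]

crl = {
    "this_update": datetime(2025, 6, 1, 0, 0),
    "next_update": datetime(2025, 6, 2, 0, 0),   # 24-hour validity
    "revoked_serials": {1001, 1005},
}

def certificate_revoked(serial: int, crl: dict, now: datetime) -> bool:
    if not crl_is_valid(crl, now):
        raise ValueError("stale CRL: fetch a fresh one from the repository")
    return serial in crl["revoked_serials"]

now = datetime(2025, 6, 1, 12, 0)
assert certificate_revoked(1001, crl, now)       # listed serial is revoked
assert not certificate_revoked(2002, crl, now)   # unlisted serial is assumed valid
assert not crl_is_valid(crl, now + timedelta(days=2))  # cached copy has expired
```

Real-time validation simply never consults the cache: every check fetches (or queries) current revocation data instead.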

Traditional CRL Model

A relying party checks a certificate against the latest published CRL. If the certificate is not in the CRL, it is assumed valid. Two cases are possible when the application is making this check:

Images   It does not have the current CRL in cache and has to retrieve it from a directory or other repository. This may also be the case if the real instantaneous certificate status is required. Requesting the CRL, the application will try to obtain the most current one.

Images   It does have the current CRL in cache. In this case, no instantaneous status is required, and, therefore, certificate validation can be done without retrieving the CRL from a repository.

In applications with a large number of subscribers and relying parties and with a high revocation rate, the CRL request rate can be very high, and CRLs themselves can be very long. This may introduce network and CRL-repository performance problems.

Modified CRL-Based Models

Several methods described in [CRMOD] and in [DCRL] attempt to address the aforementioned problems 65.

Overissued CRLs Reduce Peak Request Rate

Notably, when cached CRLs are acceptable, CRL requests peak sharply at the moment when all relying parties request the CRL for the first time or when their cached CRLs expire.

As described in [CRMOD], one remedy for reducing the peak of CRL requests is to issue CRLs before they expire, that is, to overissue them. Because the CRL validity time remains the same, newly overissued CRLs will have shifted "nextUpdate" expiration times. Hence, relying parties will request replacements for their expired CRLs at different times. In other words, this method spreads out CRL requests. As shown in [CRMOD], if a CRL is valid for 24 hours and is reissued every 6 hours instead of every 24, the peak request rate is reduced almost fourfold. This method can be recommended for applications that allow CRL caching and for which the expected revocation rate is low.
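The arithmetic behind this peak reduction can be sketched with a simple cohort model. The client count and the even-cohort assumption are illustrative, not figures from [CRMOD]:

```python
# Back-of-the-envelope model of CRL over-issuing: a CRL valid for
# `validity_h` hours, reissued every `reissue_h` hours, splits relying
# parties into validity_h/reissue_h cohorts whose cached copies expire
# at different times, so the peak request rate drops by roughly that factor.
def peak_request_rate(clients: int, validity_h: int, reissue_h: int) -> float:
    cohorts = validity_h // reissue_h   # distinct nextUpdate times in circulation
    return clients / cohorts

baseline = peak_request_rate(10_000, validity_h=24, reissue_h=24)
overissued = peak_request_rate(10_000, validity_h=24, reissue_h=6)
assert baseline == 10_000.0    # everyone's cache expires at once
assert overissued == 2_500.0   # requests spread over 4 cohorts: ~4x reduction
```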

Segmented CRLs Reduce CRL Size

The idea is to reduce the size of the CRL, or the portion of the CRL that a relying party needs to download, although this measure cannot reduce the peak request rate. Certificates may be allocated to different CRLs based on some criteria or at random. This method is implemented in CRL distribution points (CRLDP), as in [RFC2459]. X.509 v3 certificates have a standard extension attribute called "CRLDistributionPoints" that provides the URI of the CRL designated for the certificate if it is revoked. When a relying party needs to validate the certificate, it requests this particular CRL. The CA assigns each certificate to its particular CRL segment (CRLDP).

This method is recommended for applications that allow CRL caching and that have a high certificate revocation rate. If instantaneous revocation data is required, it is also better than the traditional CRL method or the method in the earlier section titled "Overissued CRLs Reduce Peak Request Rate."
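Segment assignment can be sketched as follows; the URI scheme and the serial-number modulus are hypothetical, standing in for whatever allocation rule the CA embeds in the CRLDistributionPoints extension at issuance:

```python
# Segmenting revocation data across CRL distribution points: the CA
# designates one segment per certificate, so a relying party downloads
# only the (much smaller) segment naming its certificate.
SEGMENTS = 4  # illustrative segment count

def crl_distribution_point(serial: int) -> str:
    # Hypothetical allocation rule: hash the serial into a segment.
    return f"http://crl.example.com/segment{serial % SEGMENTS}.crl"

# Two certificates land in different segments, splitting the download size
assert crl_distribution_point(1001) == "http://crl.example.com/segment1.crl"
assert crl_distribution_point(1002) == "http://crl.example.com/segment2.crl"
```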

Delta CRLs

As described earlier in [DCRL], the idea of delta CRLs was introduced to reduce peak bandwidth in PKI applications that allow caching but also require fresh certificate revocation information. A delta CRL provides only the certificate revocation changes since the full base CRL was issued. The base CRL is a traditional CRL containing all nonexpired revoked certificates. Its validity period is much longer than the delta CRL’s, and a relying party does not request it frequently. The delta CRL’s validity is short; hence, a relying party has to request it often. Because it contains only certificates revoked since the latest base CRL was issued, its size is expected to be small. With delta CRLs, the average request rate for the base CRL drops significantly, although the peak rate does not.

In [DCRL], Cooper describes a further modification of the delta CRL that should significantly reduce the base CRL peak request rate as well. This so-called sliding window delta CRL combines the delta CRL with the CRL over-issuing method: every time a delta CRL is issued, a full CRL is reissued as well. As described in the section titled "Overissued CRLs Reduce Peak Request Rate," this spreads out CRL requests from relying parties and reduces the peak of base CRL requests.

The sliding window delta CRL method promises very fresh CRL data combined with relatively low peak request rates and bandwidth compared with other CRL-based validation methods. Choosing an optimal window size is crucial.
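The base-plus-delta combination described above reduces to simple set operations. A sketch (CRL numbers and serials are illustrative; on-hold/release processing, which real delta CRLs support, is ignored):

```python
# A relying party's effective revocation set is the long-lived base CRL's
# entries plus any certificates revoked since that base was issued, as
# carried by the short-lived delta CRL.
base_crl = {"number": 40, "revoked": {1001, 1005, 1009}}
delta_crl = {"base_number": 40, "revoked": {1012, 1013}}

def effective_revocations(base: dict, delta: dict) -> set:
    # The delta must reference the base CRL it extends
    assert delta["base_number"] == base["number"]
    return base["revoked"] | delta["revoked"]

revoked = effective_revocations(base_crl, delta_crl)
assert 1005 in revoked       # revoked in the base CRL
assert 1012 in revoked       # revoked since the base CRL was issued
assert 2000 not in revoked   # not revoked at all
```

The base CRL is large but rarely fetched; the frequently fetched delta stays small, which is where the bandwidth saving comes from.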

Online Certificate Status Protocol

The Online Certificate Status Protocol (OCSP) is presented in [OCSP]. A relying party sends the validation server a request for the status of the certificate in question, and the server returns a signed response. The request/response formats are presented in the following text. Because OCSP request and response data chunks are significantly smaller than CRL-based ones, the bandwidth required for OCSP is lower than for the CRL models at the same validation rate. On the other hand, the need to sign every OCSP response places a higher processing load on the validation server.

The following three data structures represent OCSP transactions.

OCSP Request

Images   Protocol version

Images   Service request

Images   Target certificate identifier

Images   Optional extensions

OCSP Response

Images   Version of the response syntax

Images   Name of the responder

Images   Responses for each certificate in a request

Images   Optional extensions

Images   Signature algorithm OID

Images   Signature computed across hash of the response

Response for Each Certificate in a Request

Images   Certificate identifier

Images   Certificate status value (GOOD, REVOKED, UNKNOWN)

Images   Response validity interval

Images   Optional extensions

Unlike CRL-based models, the OCSP model is designed to provide instantaneous certificate status upon request. Also unlike CRL-based models, OCSP provides only the status of the requested certificates, and it cannot be used by offline clients; for example, wireless devices wishing to authenticate the network to which they are attaching.
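A toy OCSP exchange, loosely following the structures listed above: HMAC with a hypothetical responder secret stands in for the responder's real asymmetric signature, and most fields (versions, validity intervals, extensions) are omitted.

```python
import hashlib
import hmac

# Toy OCSP: the client asks about one certificate; the responder returns a
# signed status. HMAC stands in for the responder's real signature.
RESPONDER_KEY = b"ocsp-responder-key"   # hypothetical signing secret
REVOKED = {1005}                        # the responder's revocation data

def ocsp_respond(request: dict) -> dict:
    serial = request["target_certificate"]
    status = "REVOKED" if serial in REVOKED else "GOOD"
    body = f"{serial}:{status}"
    return {"certificate": serial, "status": status,
            "signature": hmac.new(RESPONDER_KEY, body.encode(),
                                  hashlib.sha256).hexdigest()}

def check(response: dict) -> str:
    # The relying party verifies the responder's signature before trusting
    # the status; here that means recomputing the HMAC.
    body = f"{response['certificate']}:{response['status']}"
    sig = hmac.new(RESPONDER_KEY, body.encode(), hashlib.sha256).hexdigest()
    assert hmac.compare_digest(sig, response["signature"])
    return response["status"]

assert check(ocsp_respond({"version": 1, "target_certificate": 1001})) == "GOOD"
assert check(ocsp_respond({"version": 1, "target_certificate": 1005})) == "REVOKED"
```

Each response covers one certificate and is a few hundred bytes, versus a full CRL; the cost is that the responder must sign every answer.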

Cross-Certification

Cross-certification is a way of establishing trust between entities that subscribe to different PKI certificate services and that have been issued certificates by different, unrelated CAs. In other words, it is a way of establishing third-party trust. To make this happen, the two CAs need to establish trust between each other, which is implemented via CA cross-certification. Cross-certification has many implications. A complete understanding of each CA’s Certificate Policy and Practice Statement is required, because each party needs to know how much it can trust the certificates issued by the other CA, what the other CA’s enrollment, issuance, and revocation procedures are, and what the liability is. Legal agreements and documents may be required as well.

How Applications Use Cross-Certification

If company A wants to trust company B, it should receive company B’s verification key, issue a cross-certificate containing this verification key, and sign this certificate with company A’s signing key. If an entity in company A receives a message signed by company B, it will trust the signature because A certified B’s verification key with its signature. Trust does not have to be mutual: if company B does not want to trust A, it does not have to cross-certify A, even though A cross-certified B. That is why we should also be specific about whether cross-certification establishes one-way or two-way (mutual) trust. In addition to cross-certification, the companies should provide each other access to the end users’ certificates.

Consider what happens when users of two companies that have cross-certified their CAs exchange secure messages (see Figure 3.14).

Images   An employee of company A is going to send a signed and encrypted message to an employee of company B. The sender needs to find that employee and his certificate in the search base or through a directory lookup.

Images   The employee of A verifies the cross-certificate issued by company A to make sure that company B is still trusted. This is done by verifying its signature and validity dates and by checking that the certificate is not in company A’s authority revocation list (ARL).

Images   Now the employee of A can verify that the encryption certificate of the recipient in company B is valid. This is done by checking its integrity, its validity dates, and that it does not appear in company B’s CRL. Also, the certificate should be signed by the key associated with the public key of company B’s CA, which is available in the cross-certificate.

Images

Figure 3.14 - Cross-certified sites’ exchange

Images   If all the foregoing verifications are successful, the user in company A will sign the message and encrypt it with the public key from the certificate of the user of company B.

Images   Recipient B, after decrypting the message, will try to verify the signature of sender A.

Images   The recipient validates the issuer of the verification certificate attached to the message by checking its signature against the public key of company A’s CA, which is available in the cross-certificate issued by company B. The recipient also verifies that the cross-certificate is valid and is not in company B’s ARL.

Images   The user of company B also validates the verification certificate’s integrity and validity dates and confirms that it does not appear in company A’s CRL.
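The revocation and validity checks in the steps above can be sketched as a simple checklist function. This is an illustrative model only; it uses a hypothetical, simplified certificate record rather than real X.509 parsing and signature verification:

```python
from dataclasses import dataclass
import datetime

@dataclass
class CertInfo:
    # Hypothetical, simplified view of a certificate for illustration only
    serial: int
    signature_valid: bool          # signature already checked against the issuer's public key
    not_before: datetime.date
    not_after: datetime.date

def cert_acceptable(cert: CertInfo, revoked_serials: set, today: datetime.date) -> bool:
    """Integrity, validity dates, and revocation (ARL/CRL) checks from the steps above."""
    return (cert.signature_valid
            and cert.not_before <= today <= cert.not_after
            and cert.serial not in revoked_serials)

def can_trust_message(cross_cert: CertInfo, user_cert: CertInfo,
                      arl: set, crl: set, today: datetime.date) -> bool:
    # First validate our own cross-certificate for the partner CA (against our ARL),
    # then the end user's certificate issued by that CA (against the partner's CRL).
    return (cert_acceptable(cross_cert, arl, today)
            and cert_acceptable(user_cert, crl, today))
```

Note that trust fails if either the cross-certificate or the end user’s certificate is expired or revoked; the two checks are independent, just as in the step-by-step description.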

How Cross-Certification Is Set Up

Taking into account the considerations mentioned at the beginning of this section, both mutual and unilateral cross-certification may be done either online or offline.

Online Cross-Certification

Online cross-certification requires TCP/IP connectivity between both CAs as well as between their Directories. Company A, which wants to trust company B, has to give company B special access credentials (a one-time password generated by CA A) for access to CA A. A CA administrator at company B enters these credentials and connects to CA A to complete cross-certification, securely sending CA B’s verification public key online. CA A generates and signs a cross-certificate containing the public key of B. Depending on the implementation and the PKI vendor, the process may be automated, and the new cross-certificate issued by A may be sent over to B and imported into its database and Directory.

Offline Cross-Certification

Offline cross-certification is the only method for CAs that do not have network connectivity. Nevertheless, the Directories should have connectivity for offline cross-certification just as for online cross-certification. As in the online case, assume that CA A wants to trust CA B, and CA B wants to be trusted by A. Unlike in the online case, however, the offline cross-certification process is started by B generating a PKCS#10 certificate request containing its verification public key and sending it to company A. After A verifies the request, it issues and signs the cross-certificate, stores it, and sends it to B. The administrator of the trusted CA B then imports the cross-certificate issued by CA A into its database and Directory.

Once unilateral cross-certification is complete, the process may be repeated in the opposite direction if mutual cross-certification is required for business needs.

Cross-Certificates’ Revocation

In order to break the trust between CAs, the cross-certified parties revoke the cross-certificates they issued. For example, if company A revokes the cross-certificate for company B, users of A will not be able to verify messages signed by users of B and will not be able to encrypt messages for users of B. Cross-certificates may need to be revoked for one of the following reasons:

Images   The partnership between companies A and B is terminated, and their users no longer need to use each other’s certificates.

Images   The cross-certified CA is not trusted anymore.

Images   The cross-certified CA reissued its certificate and regenerated keys; thus, a new public key should be used for the cross-certificate.

Revoked cross-certificates are added to the ARL, which is used for any certificate verification as was mentioned earlier.

How Cross-Certification with a Bridge CA Is Implemented in Practice

The preceding example described how cross-certification between two CAs works. With more than two CAs participating in this model, the mesh of CAs grows and certificate verification becomes increasingly difficult. The bridge CA model helps resolve these problems. Diagrams representing a cross-certified mesh and a bridge CA are presented in Figure 3.15. Let us use FBCA [FBCACP] as an example for the bridge CA case study:

Images   FBCA can be looked at as a nonhierarchical hub that would allow trust paths between participating PKIs to be created.

Images

Figure 3.15 - Cross-certified mesh and bridge CA models.

Images   The FBCA would issue certificates to Principal CAs, which are CAs within participating PKIs designated to cross-certify those PKIs directly with the FBCA through the exchange of cross-certificates. The number of cross-certificates needed to support n PKIs is only n * 2, compared with n * (n - 1) for a full cross-certified mesh.

Images   Each PKI participating in the FBCA should be represented to the FBCA by a single Principal CA. If a participating PKI is a hierarchical PKI, the Principal CA is typically its root CA. If a participating PKI is a mesh PKI, the Principal CA may be any designated CA within that PKI.

Images   The issued certificate will be posted in the FBCA directory. All cross-certified entities are notified by FBCA when the certificates are issued.

Images   Now, the subscribers of PKIs registered with the FBCA may exchange signed and encrypted messages. The certificate verification path will span the FBCA and the sender’s certificates.

Images   The certificate trust path between one participating PKI and all others can be discontinued by revoking its FBCA cross-certificate. A new CRL will be published in the FBCA directory.
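The scaling advantage of the bridge model can be checked with a quick calculation. A full cross-certified mesh needs a cross-certificate for every ordered pair of CAs, while the bridge needs only one pair per participating PKI, matching the n * 2 figure above:

```python
def mesh_cross_certs(n: int) -> int:
    """Full mesh: every CA issues a cross-certificate for every other CA."""
    return n * (n - 1)

def bridge_cross_certs(n: int) -> int:
    """Bridge CA: each of the n Principal CAs exchanges one pair with the bridge."""
    return 2 * n

# The mesh count grows quadratically; the bridge count grows linearly.
for n in (2, 5, 10, 50):
    print(n, mesh_cross_certs(n), bridge_cross_certs(n))
```

For two PKIs the bridge actually costs more certificates than direct cross-certification, which is why the bridge model pays off only as the number of participating PKIs grows.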

Design Validation

Review of Cryptanalytic Attacks

Attacking a cryptosystem reveals its weaknesses and flaws. Cryptanalysis can be used to improve the design of a cryptosystem and may result in the discarding of a cryptographic algorithm altogether. The security architect should understand how cryptanalytic attacks can be used in validating cryptographic design as part of an architecture.

Attack Models

The following basic types of attacks are based on having some knowledge of the algorithm used. These attacks are characterized by the degree of plaintext and ciphertext the cryptanalyst has access to:

Images   Ciphertext-only attack: A sample of ciphertext, and preferably a large volume of ciphertext encrypted with the same algorithm, is required. While this is one of the most difficult attacks to execute, a successful attack reveals plaintext, and if completely successful, the key. This type of attack can reveal flaws in the algorithm. While cryptographic algorithms used in real-world applications must be vetted for weaknesses against ciphertext-only attacks, some protocols such as WEP have been found vulnerable to this type of attack.

Images   Known-plaintext attack: Having some plaintext and the corresponding ciphertext is necessary. An objective of this type of attack is to determine the key and decipher all messages encrypted by it. This type of attack can be very practical when the corresponding plaintext is discoverable or can be deduced. Users of ZIP file archive encryption are very susceptible to this type of attack when a small portion of their archive is decrypted and becomes available for use in a known-plaintext attack.

Images   Chosen-plaintext attack: In this attack, the cryptanalyst can choose plaintexts to be encrypted and obtain the corresponding ciphertexts, allowing targeted probing of the algorithm and key.

Images   Chosen-ciphertext attack: In this attack, the cryptanalyst can choose ciphertexts to be decrypted and gains access to the resulting plaintexts.
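As a toy illustration of the known-plaintext model, consider a repeating-key XOR cipher (far weaker than anything used in practice): a short stretch of known plaintext immediately yields the key. The key, message, and assumed key length here are all illustrative:

```python
def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Repeating-key XOR; encryption and decryption are the same operation."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

secret_key = b"K3y!"                       # unknown to the attacker
ciphertext = xor_cipher(secret_key, b"TOP SECRET: move funds at dawn")

# Known-plaintext attack: the attacker knows the message starts with "TOP SECRET"
# and (for simplicity here) the key length. XORing known plaintext with the
# matching ciphertext recovers the keystream, and hence the key itself.
known = b"TOP SECRET"
recovered = bytes(p ^ c for p, c in zip(known, ciphertext))[:4]
assert recovered == secret_key
print(xor_cipher(recovered, ciphertext))   # the full plaintext is now readable
```

Real ciphers are designed so that known plaintext does not expose the key this directly, but the example shows why the attack model matters: the more plaintext-ciphertext structure an attacker can obtain, the more the algorithm must resist.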

Symmetric Attacks

Variations on the attack models can be used in a controlled environment to reveal weaknesses in a cryptosystem and analyze an algorithm’s strength. Two common attacks applied to the testing of symmetric ciphers are the techniques of differential cryptanalysis and linear cryptanalysis:

Images   Differential cryptanalysis: These techniques involve a chosen plaintext attack with the aim of recovering the secret key. The basic method of differential cryptanalysis involves investigating the differences in ciphertext produced from pairs of chosen plaintext having specific differences. While this type of attack was used for determining a theoretical weakness in DES, the amount of chosen plaintext necessary (in excess of 10^15 bytes) makes it impractical. DES was designed to resist differential cryptanalysis [Coppersmith].

Images   Linear cryptanalysis: These techniques are based on a known-plaintext attack using pairs of known block-cipher plaintext and corresponding ciphertext in order to generate a linear approximation of a portion of the key. Instead of trying to keep track of differences propagated by chosen plaintext, linear cryptanalysis seeks to keep track of Boolean information in pairs of known plaintext and corresponding ciphertext to generate a probability in the confidence level of a specific key value. This method of attack is also commonly used to test block algorithms. While a theoretical attack against DES using linear cryptanalysis exists, it is considered impractical due to the amount of known plaintext–ciphertext required (2^43 known plaintexts) [Matsui].

Asymmetric Attacks

The algorithm in asymmetric cryptosystems is often based on solving some sort of mathematical problem such as the difficulty of integer factorization. As a result, applying the attack models to asymmetric cryptosystems can involve finding improved or faster methods of solving the various mathematical problems. Designing or integrating with a particular asymmetric scheme must take into account mathematical discoveries such as more efficient methods of finding discrete logarithms or integer factorization.

Hash Function Attacks

The principle applied in determining the collision resistance of hash functions is based on the birthday problem in probability theory. This type of attack is known as the birthday attack. The term birthday refers to the probability that, in a set of randomly chosen people, some pair of them will share the same birthday.

Many cryptographic hash algorithms, such as MD5 and SHA-1, are built from one-way functions that are limited in their ability to provide collision resistance; practical collisions have in fact been demonstrated for both MD5 and SHA-1. When validating security in cryptosystems, one must consider that hash functions are applied in digital signature schemes and used as building blocks in MAC construction.
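The birthday effect is easy to demonstrate on a deliberately truncated hash: finding a collision in an n-bit digest takes only on the order of 2^(n/2) trials, not 2^n. A sketch using SHA-256 truncated to 16 bits (the truncation is artificial, purely to make collisions findable in milliseconds):

```python
import hashlib

def truncated_hash(msg: bytes, nbytes: int = 2) -> bytes:
    """SHA-256 cut down to nbytes; weak on purpose, to make collisions findable."""
    return hashlib.sha256(msg).digest()[:nbytes]

def birthday_collision(nbytes: int = 2):
    """Hash distinct messages until two share a digest.

    For an (8 * nbytes)-bit digest, the birthday bound predicts roughly
    2 ** (4 * nbytes) trials, far fewer than the 2 ** (8 * nbytes) a
    brute-force preimage search would need.
    """
    seen = {}
    i = 0
    while True:
        msg = b"msg-%d" % i
        h = truncated_hash(msg, nbytes)
        if h in seen:
            return seen[h], msg            # two different messages, same digest
        seen[h] = msg
        i += 1

m1, m2 = birthday_collision()
assert m1 != m2 and truncated_hash(m1) == truncated_hash(m2)
```

With a 16-bit digest the loop typically terminates after a few hundred trials, which is why digest lengths are chosen so that 2^(n/2) work, not 2^n, is out of reach.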

Network-Based Cryptanalytic Attacks

Cryptanalytic attacks can be facilitated by network communications such as protocols involved in the transmission of data or in operations such as key exchange. The following network-based attacks target more than just the cryptographic algorithm and exploit weaknesses in areas such as communication protocols or transmission methods:

Images   Man-in-the-Middle attack: This technique involves intercepting and forwarding a modified version of a transmission between two parties. In this type of attack, the transmission is modified before it arrives at the receiving party. Safeguarding a protocol against this attack usually involves making sure authentication occurs at each endpoint in the communication. For example, in a Web service design, it may be necessary to require mutual SSL authentication involving use of a mutually trusted certification authority.

Images   Replay attack: This attack involves capturing and retransmitting a legitimate transmission between two parties. Using this technique, impersonation or key compromise and unauthorized access to information assets may be possible. Protecting a protocol against this attack usually involves use of session tokens, time-stamping of data, or synchronization of transmission. For instance, IPSec provides an antireplay service using verification of sequence numbers, and the AH protocol in IPSec employs one-way hash functions to protect against impersonation.

Images   Traffic Analysis attacks: Observing traffic flow in encrypted communications can reveal information based on message volume or communication patterns, or show which parties are communicating. Protection against traffic analysis is a concern not only in the design of military signals intelligence systems, but in the design of commercial systems as well. For instance, SSH, when operating in interactive mode, transmits every keystroke as a packet. Packet analysis can therefore reveal information about password length. A general countermeasure to protect against traffic analysis is to use traffic padding where feasible. Another approach to protecting messages traversing untrusted networks from traffic analysis involves anonymizing the message sender. This can be done by using a chain of proxy servers where the message is encrypted separately at each proxy in order to make the source and destination of the communicating parties more difficult to determine.

Attacks against Keys

An ideal goal for cryptanalysis is to extract the secret key. Observing how keys are used is important in validating a cryptographic design. Using the same key to encrypt larger volumes of data increases the success of a cryptanalytic attack. Also, the use of an appropriate key length is important, as noted earlier in the chapter.

Testing for weak keys during generation is another basic element of validating a cryptosystem. Understanding the ability of a random number generator to introduce entropy during key generation is an important factor in this, because greater randomness makes determining the key more difficult.

Secret keys must also be protected from unauthorized access and should remain encrypted when stored. In a cryptographic system where multiple secret keys are necessary, for example, with a tape encryption appliance device, it is common to encrypt individual working keys with a top-level master key. The storage of the top-level secret key used in such a cryptosystem can be done using key shares, a technique also known as split-knowledge. This involves splitting the key into multiple pieces and granting access to each share to separate individuals. This ensures that no one individual has access to the stored master key. To ensure security of this master secret key when it is in use within the cryptosystem, logical access controls and physical controls such as tamper-proof enclosures are used. In validating cryptosystems, it is essential to check that the subsystem components and processes that protect the secret key are functioning as intended. The following attacks against keys are variations on the cryptanalytic attack models and are also important in validating cryptosystems:

Images   Meet-in-the-Middle attack: This attack applies to double encryption schemes such as 2DES and 3DES; it works by encrypting known plaintext using each possible key and comparing the results obtained “in the middle” with those from decrypting the corresponding ciphertext using each possible key. This known-plaintext attack was used against DES to show that encrypting plaintext with one DES key and then encrypting the result with a second DES key offers little more security than a single DES key, and it reduces the effective strength of 3DES to 112 bits.

Images   Related-Key attacks: These forms of attack exploit relationships between keys that become known or can be chosen, observing differences in plaintext and ciphertext when different keys are used. For instance, two keys that transform all plaintexts identically can be considered equivalent, a simple relation. Related-key attacks are particularly effective against stream ciphers because they typically combine a common secret key with some varying, nonsecret initialization vector (IV). WEP is an example of an encryption scheme shown to be insecure by a related-key attack: its RC4 keystream is derived from a key composed of the WEP secret key and an exposed IV.
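The meet-in-the-middle attack described above can be demonstrated on a toy byte cipher with only 256 possible keys. Naively, double encryption suggests 256 * 256 = 65,536 key combinations to try, but tabulating the "middle" values cuts the work to roughly 256 + 256 operations. The cipher here is invented purely for illustration and has no cryptographic strength:

```python
def enc(k: int, data: bytes) -> bytes:
    """Toy invertible byte cipher (illustrative only): c = ((b XOR k) + k) mod 256."""
    return bytes(((b ^ k) + k) % 256 for b in data)

def dec(k: int, data: bytes) -> bytes:
    return bytes(((b - k) % 256) ^ k for b in data)

def meet_in_the_middle(pt: bytes, ct: bytes):
    """Recover (k1, k2) from one known plaintext/ciphertext pair of the double cipher."""
    middle = {enc(k1, pt): k1 for k1 in range(256)}   # forward half: 256 encryptions
    for k2 in range(256):                             # backward half: 256 decryptions
        m = dec(k2, ct)
        if m in middle:                               # the two halves "meet"
            return middle[m], k2
    return None

plaintext = b"attack at dawn"
ct = enc(200, enc(42, plaintext))                     # double encryption, k1=42, k2=200
k1, k2 = meet_in_the_middle(plaintext, ct)
assert enc(k2, enc(k1, plaintext)) == ct              # recovered keys reproduce the ciphertext
```

The same trade of memory for time is what reduces 2DES from a nominal 112-bit search to roughly 2^57 operations, and 3DES to an effective 112 bits.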

Brute Force Attacks

For a cryptosystem to be considered secure, a successful brute force attack must be computationally infeasible. For symmetric key ciphers, this involves an exhaustive search of the key space in order to determine the plaintext. The result of a successful brute force attack is the secret key used for encrypting the ciphertext. Besides providing an indication of the security of the cryptosystem, a successful brute force attack involving a particular secret key would mean that any ciphertext encrypted with that key, and potentially all future derived keys, would become readable via the attack.

A brute force attack against asymmetric key ciphers involves applying computing resources to solving the underlying mathematical problem the algorithm is based on, such as in factoring large integers for RSA public-key encryption. The computational feasibility of solving a particular problem such as factoring an integer of a particular size gives an indication of the strength of the algorithm.
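A back-of-the-envelope calculation shows why exhaustive search is feasible for a 56-bit DES key but not for a 128-bit AES key. The rate of 10^12 keys per second is an arbitrary illustrative figure, not a claim about any particular hardware:

```python
def exhaust_years(key_bits: int, keys_per_second: float) -> float:
    """Years to try the whole keyspace (expected time to success is half this)."""
    seconds = 2 ** key_bits / keys_per_second
    return seconds / (365.25 * 24 * 3600)

print(f"56-bit:  {exhaust_years(56, 1e12):.4f} years")    # well under a day
print(f"128-bit: {exhaust_years(128, 1e12):.3e} years")   # astronomically long
```

Each additional key bit doubles the search, so the gap between 56 and 128 bits is a factor of 2^72, which is why key length guidance matters more than raw attacker speed.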

Side-Channel Cryptanalysis

Side-channel attacks are based on information gained from the physical implementation of the cryptosystem. These attacks mainly deal with obtaining and analyzing information that originates from the cryptosystem hardware rather than weaknesses in the cryptographic algorithm. Side-channel attacks can be based on the execution time of a cryptographic algorithm, power consumption within a cryptographic module, or electromagnetic emanations from a computer. Side-channel cryptanalysis requires substantial technical knowledge of the underlying hardware.

Susceptibility to side-channel cryptanalytic attack is an important consideration for any architecture where cryptography is applied. The following are some of these types of attacks:

Images   Timing attacks: This attack requires the ability to accurately measure the time required to perform a particular operation within a cryptosystem. Timing attacks are based on detailed hardware performance characteristics such as memory cache hits and CPU instruction time for a given key and input data (plaintext or ciphertext). By using the baseline performance characteristics for a specific piece of hardware where the cryptosystem is implemented, successful attacks against protocols and algorithms such as Diffie–Hellman, RSA, DSS, and others can be executed [Kocher].

Images   Differential Fault Analysis: This method involves introducing hardware faults into the cryptosystem in order to determine the state of internal data. This type of attack can be used to read the state of memory in order to determine a secret key. The technique can be applied to various types of semiconductor memory, including integrated circuits that are frozen and removed or to smart card memory that is read nondestructively [Samyde et al.].

Images   Differential Power Analysis: In this method, power consumption measurements in a hardware device such as a smart card are made during encryption operations while ciphertext is recorded. The attack can be used to reveal a secret key [Kocher et al.].
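The essence of a timing attack can be seen in a non-constant-time comparison: an early-exit equality check does more work, and therefore takes longer, the more leading bytes of a guess are correct, letting an attacker recover a secret one byte at a time. In this sketch the byte-comparison count stands in for wall-clock time, and the secret is a hypothetical value the attacker probes; real code should use a constant-time comparison such as Python's hmac.compare_digest:

```python
def leaky_equal(a: bytes, b: bytes):
    """Early-exit comparison; returns (equal, work) where work models elapsed time."""
    work = 0
    for x, y in zip(a, b):
        work += 1
        if x != y:
            return False, work
    return len(a) == len(b), work

SECRET = b"hunter2"   # hypothetical fixed-length secret behind the comparison

def recover_secret(length: int) -> bytes:
    guess = bytearray(length)
    for pos in range(length):
        def score(b, pos=pos):
            # Probe: recovered prefix + candidate byte + zero padding
            probe = bytes(guess[:pos]) + bytes([b]) + bytes(length - pos - 1)
            equal, work = leaky_equal(probe, SECRET)
            return (work, equal)   # more comparisons ("slower") = better candidate
        guess[pos] = max(range(256), key=score)
    return bytes(guess)

assert recover_secret(len(SECRET)) == SECRET
```

A constant-time comparison removes the correlation between input and work done, which is the general countermeasure against this whole class of attack.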

Risk-Based Cryptographic Architecture

Computing power that grows according to Moore’s law, new developments in cryptanalysis, virtually borderless enterprise network topologies, and wider than ever exposure to targeted attacks from domestic and foreign entities are all factors that constantly push cryptographic standards higher. At the same time, the main purpose of cryptography (supporting confidentiality, integrity, and availability) should be served in the most productive and cost-effective manner. We move from DES to AES and from a key size of 56 bits to 256 bits, but as was stated in the section titled “Key Management,” the strength of any crypto system or application is more than just the strength of its algorithms and key size.

More often, crypto systems are compromised because of weaknesses in key management and the processes around it. A risk-based approach to crypto system and application design should help the security architect produce a balanced and cost-effective solution that meets business requirements. Many areas of cryptographic architecture for government agencies are driven by regulatory compliance, with appropriate prescriptive guidance documented in the NIST FIPS publications referred to earlier. Designing cryptography for private sector industries, where regulatory compliance is less strict or not required at all, leaves the security architect more room for design decisions, but it is still a good idea to follow a risk-based approach. The security architect can use either qualitative or quantitative measures when assessing the risk. If quantitative risk assessment data is available, each design option may be assessed based on cost, benefits, and risk; otherwise, qualitative methods should be used. Risk is assessed to determine the impact of given losses and the probability that these losses will occur.

The process of designing a cryptographic system is similar to the process used for any other IT system. The architecture should include appropriate crypto modules, components, and methods, and should be integrated with the surrounding infrastructure and supported applications in order to address the organization’s needs and meet the requirements. Based on the requirements, several cryptographic methods may be needed. For example, both symmetric and asymmetric cryptography may be needed in a system, each performing different functions (e.g., symmetric encryption, asymmetric digital signature, and key establishment). It is important to be able to demonstrate traceability from the requirements back to the policies, goals, and risks to be mitigated. The following areas may be considered for a cryptographic system high-level design:

Images   Hardware- and software-based components

Images   Security of cryptographic modules

Images   What cryptography should be used for a network environment

Images   What approved algorithms will be used, and key lengths

Images   Key management infrastructure

Images   Integration with hosting infrastructure and supported applications

Images   Interoperation with external organizations

Images   User interface

Images   User acceptance

Images   User training

It is important for the security architect to develop confidentiality, integrity, and availability objectives. These objectives are set at a high level and should address security in general and cryptography specifically. The design should include software and hardware, procedures, physical security considerations, environmental requirements, etc.

After preliminary high-level and requirements-based architecture design is done, a preliminary risk assessment should be performed, and specific unique requirements associated with each component should be finalized. After the risk assessment has been performed, policies should be developed regarding the use of evaluated systems and cryptographic modules operating within the designed system.

Identifying Risk and Requirements by Cryptographic Areas

Risk management includes two components: assessing the risk, and selecting and implementing appropriate countermeasures. The largest areas of risk addressed by cryptography are unauthorized disclosure and data modification, that is, confidentiality and integrity. Although the risks cannot be completely eliminated, they can be reduced to an acceptable level by using cryptographic controls. The risk management process ensures that the threats and their impact are known and that cost-effective countermeasures are applied. Risk assessment includes assessment of the assets and current protection mechanisms; identification and assessment of the threats; assessment of potential losses and their likelihood, with classification by criticality and sensitivity; and, finally, identification of potential mitigating controls and cost-benefit analysis.

Most often, the type of risk assessment that is performed is a qualitative analysis, rather than a formal quantitative analysis, and the results are used in developing the system requirements and specifications. The scope of risk analysis varies depending on the sensitivity of the information and the number and types of risks that need to be addressed. The next task is to identify categories of cryptographic methods and techniques that meet the requirements and mitigate the specific risks. There may be more than one method that can mitigate each risk.

Traceability from the requirements back to the policies and associated risk assessment is important. The following table is based on the data presented in the NIST SP800-21-1 [SP800-21-1] and demonstrates logical dependencies between risk and requirements for different cryptographic areas.

Images

Images

Images

Note that the risk of each cryptographic area should be assessed for each individual system and application. If the risk is higher than an acceptable level, technical requirements should be strengthened.

Case Study

To clarify how all this information fits together, walk through the following use case and the process of defining requirements, identifying risks, and then proposing cryptographic methods that meet those requirements and mitigate the risks.

The use case involves secure communications for a device management function. Cryptography for communication between a management console and management server and between the management server and managed devices should support confidentiality, authentication, authorization, and integrity. In this scenario, the business function being performed is firewall rule changes. So the management console will be a firewall administrator’s workstation, the management server will be the system that effects firewall rulebase changes, and the managed devices will be firewalls. For simplicity’s sake, the scope will exclude firewall monitoring, audit logging, and device network configuration functions, and the design will be vendor neutral (refer to Figure 3.16).

Information flow in this use case is represented by flow “1” between the management console and management server, and by flow “2” between the management server and the managed devices. For the rulebase management scenario:

Images

Figure 3.16 - Cryptography for communication between a management console and management server and between the management server and managed devices

Images   Managed devices may receive and store very sensitive configuration and corporate information, whose unauthorized disclosure may be detrimental.

Images   Integrity of the management data in transit and in storage is crucial.

Images   Availability of these encrypted communication channels is important, but lost connectivity for a short time will not lead to major losses.

The functional requirements will be:

Images   Provide secure communication between the manager’s console and the management server.

Images   Provide secure communication between the management server and managed devices.

The following risks are defined:

Images   Unauthorized disclosure of data in transit between the console and server and server and managed devices.

Images   Unauthorized and undetected modification of data in transit between the console and server and server and managed devices.

Images   An unauthenticated or unauthorized console gets access to the server.

Images   An unauthenticated or unauthorized server gets access to a managed device.

Images   Data in transit is modified in a nonauthorized manner (man-in-the-middle attack).

Images   Unauthorized disclosure, modification, and substitution of secret/private keys.

Images   Unauthorized substitution and modification of public keys.

The intent is to apply cryptography as the means to meet the requirements and address the risks. For the use case, a security architect would recommend the following:

  1. The management server includes an internal CA, which issues certificates for every console and managed device.

  2. Key pairs are generated on each new console and device added to the system. After that, the public keys are sent to the management server’s CA for certificate issuance.

  3. TLS tunnels with mutual authentication (MTLS) between the management server and the devices and console provide the required access control, integrity, and confidentiality for data in transit.

  4. A FIPS 140-2-compliant hardware cryptographic module provides the required physical and logical security, including protection of the private and secret keys, random number generation (RNG), and AES-128 encryption for TLS sessions. Keys never appear in cleartext. These measures enforce the mandatory key management controls.

  5. Access to the consoles and to the management server with its internal CA requires two-factor authentication. All data flowing between the management server and the console and devices is encrypted via MTLS tunnels.
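Recommendation 3 (mutually authenticated TLS) can be sketched with Python's standard ssl module. The file paths are placeholders for certificates issued by the hypothetical internal CA; cipher-suite selection and FIPS-module integration are outside the scope of this sketch:

```python
import ssl

def make_server_context(certfile=None, keyfile=None, ca_bundle=None):
    """Management server side: require and verify a certificate from every peer."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED          # reject consoles/devices without a valid cert
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)   # server's own certificate and private key
    if ca_bundle:
        ctx.load_verify_locations(ca_bundle)     # the internal CA that issued client certs
    return ctx

def make_client_context(certfile=None, keyfile=None, ca_bundle=None):
    """Console/device side: present a certificate and verify the server's."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # implies CERT_REQUIRED and hostname checks
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)
    if ca_bundle:
        ctx.load_verify_locations(ca_bundle)
    return ctx

# Hypothetical usage with placeholder file names:
# make_client_context("console.pem", "console.key", "internal-ca.pem")
```

The key design point is that both sides set CERT_REQUIRED against the internal CA bundle, so an unauthenticated console cannot reach the server and an unauthenticated server cannot reach a device, addressing the access-control risks listed above.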

The recommendations we propose for our scenario fall into the following cryptographic areas. For a sound design, they should meet the following requirements:

Cryptographic Area:

Images   Cryptographic algorithms (identify FIPS-approved algorithms and other cryptographic algorithms): Encryption.

Images   Cryptographic algorithms: Digital signatures.

Images   Cryptographic key management: Specify random number generation, key generation, key establishment, key entry and output, key storage, and key destruction.

Technical and Assurance Requirement:

Images   FIPS-approved AES algorithm or three-key TDEA algorithm; conformance tests.

Images   Digital Signature Algorithm (DSA), RSA, ECDSA, digital signature generation/verification; message digest; random/pseudorandom number generation; hash function. Algorithms for generating primes p and q; private key generation; conformance tests. Cryptographic requirements addressed in overall system/product requirements.

Images   Key entry/output: Levels 1 and 2, plaintext keys permitted; Levels 3 and 4, encrypted keys or split knowledge for manual distribution.

Images   Key Destruction: Zeroize all plaintext cryptographic keys and other unprotected CSPs.

Images   Specification of the FIPS-approved key generation algorithm; documentation of the key distribution techniques.

Images   NIST-approved key generation algorithms.

Images   Use of error detection code (message authentication code).

Images   Encrypted IVs.

Images   Key naming.

Images   Key encrypting key pairs.

Images   Random number generation.

Images   Cryptographic requirements addressed in overall system/product requirements.

It is important to look at the cryptographic areas and requirements both individually and as a whole. The following are some common design flaws that might be found in this or other scenarios:

  1. Using a strong cryptographic algorithm when the RNG supporting key generation is weak does not protect against key weaknesses and potential compromise.

  2. The RNG may be very strong, but key management has a flaw and does not provide sufficient protection for private and secret keys.

  3. Remote access control to the system incorporates strong two-factor authentication, but local access and physical access control are very weak.

  4. Bulk data encryption is performed on the management server data store using the wrong block cipher mode; for example, ECB for bulk data encryption.

  5. Encryption is implemented without any integrity checking; for example, HMAC.

  6. Key exchange is performed with weak or no authentication.

  7. Faulty key distribution methods; for example, sending the key in cleartext!

  8. Symmetric key is not changed when the IV or the counter space is exhausted.

An unbalanced architecture design will produce a weak crypto system in which properly selected and designed components cannot compensate for the weak ones, which introduce additional risks.
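Design flaw 4 above (ECB for bulk data) is easy to visualize. The "ciphers" below are toys built from SHA-256 purely to illustrate the modes; the ECB toy is not even invertible, but it faithfully shows ECB's defining weakness: identical plaintext blocks always produce identical ciphertext blocks, while a counter-style (CTR) construction does not.

```python
import hashlib

BS = 4  # toy block size in bytes

def toy_ecb(key: bytes, data: bytes) -> bytes:
    """Each output block depends only on (key, block): repeats in, repeats out."""
    return b"".join(hashlib.sha256(key + data[i:i + BS]).digest()[:BS]
                    for i in range(0, len(data), BS))

def toy_ctr(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR each block with a keystream block derived from (key, nonce, counter)."""
    out = bytearray()
    for n, i in enumerate(range(0, len(data), BS)):
        ks = hashlib.sha256(key + nonce + n.to_bytes(8, "big")).digest()[:BS]
        out += bytes(a ^ b for a, b in zip(data[i:i + BS], ks))
    return bytes(out)

pt = b"AAAACCCCAAAA"                          # blocks 0 and 2 are identical
ecb = toy_ecb(b"k", pt)
assert ecb[0:BS] == ecb[2 * BS:3 * BS]        # ECB leaks the repetition
ctr = toy_ctr(b"k", b"nonce", pt)
assert ctr[0:BS] != ctr[2 * BS:3 * BS]        # CTR hides it
assert toy_ctr(b"k", b"nonce", ctr) == pt     # and, unlike the ECB toy, is invertible
```

The CTR toy also illustrates flaw 8: if the nonce and counter repeat under the same key, keystream blocks repeat, and XORing two ciphertexts then exposes the XOR of the plaintexts.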

Cryptographic Compliance Monitoring

A risk-based cryptographic architecture will employ components in a manner that meets business requirements while allowing for a business-defined acceptable level of risk. To manage this risk effectively, there must be control over the risk-impacting factors in designs of solutions employing cryptography. For instance, a digital signature scheme must employ an acceptable cryptographic hashing function, such as those specified in NIST FIPS 180-4, the Secure Hash Standard [FIPS 180-4]69. So, cryptographic compliance monitoring in the context of design of a cryptosystem would mean assessing conformity to cryptographic standards.

Security requirements for an IT solution can include confidentiality, integrity, or one of the other benefits of cryptography. The source of these requirements can be regulations or standards specified by legal frameworks, corporate governance, or policies. For instance, corporate standards requiring confidentiality of personal information may require encrypting the information. So, cryptographic compliance monitoring in the design of an IT solution would mean measuring adherence to regulations in a larger context, where cryptographic controls are used to satisfy a regulatory requirement.

Cryptographic Standards Compliance

Cryptographic standards provide assurance that a required security level can be maintained. An example is NSA Suite B, which includes FIPS-197 (AES) along with complementary algorithms for hashing, digital signatures, and key exchange that U.S. federal agencies must adhere to. Ensuring that products follow the same standards, such as NSA Suite B, also helps ensure interoperability.

The Computer Security Division at NIST coordinates test suites for many of the NIST cryptographic standards. The Cryptographic Algorithm Validation Program (CAVP)70, established by NIST and the Communications Security Establishment Canada (CSEC)71, provides validation testing via accredited third-party laboratories for a number of cryptographic algorithms. CAVP validation of an algorithm used in a cryptosystem is a prerequisite for another validation program established by NIST, the Cryptographic Module Validation Program (CMVP)72. CMVP validates adherence of cryptographic modules to Federal Information Processing Standards (FIPS) 140-2, Security Requirements for Cryptographic Modules, and other FIPS cryptography-based standards. The CAVP and CMVP programs maintain and publish validation lists for algorithms and cryptographic modules. The CAVP list provides validated implementations of cryptographic algorithms showing vendor, validation date and certification number, operational environment, and other information such as key sizes and block cipher mode. The CMVP list shows FIPS 140-2 validated modules by vendor, validation date and certification number, and other details related to FIPS 140-2.

Cryptographic standards will also appear in corporate security policies and standards such as those indicating when to use encryption or specifying the level and strength of encryption to use, including key lengths and crypto periods.

Determining if cryptographic controls meet governmental or corporate standards is a function of compliance monitoring. Determining compliance with such cryptographic standards should be performed as part of an assessment of an IT system’s design. It is important that this be completed during the requirements phase of an IT project, so that a given solution is developed to meet these standards.

Compliance with security standards for cryptography can also occur at the user level. For example, an organization may require that its personnel use encryption when sending certain types of data via e-mail. It should be noted that monitoring user-level actions such as these may be difficult, and require specialized services such as those provided by data leak protection solutions.

An additional area where compliance is associated with a cryptosystem is the notion of a compliance defect that may exist within a cryptosystem. A compliance defect may be thought of as the inability of a cryptosystem to securely perform one of its functions, and is a noncompliance in a more general sense. Don Davis defines a compliance defect in a cryptosystem as a rule of operation that is both difficult to follow and unenforceable [Davis]. According to Davis, public key cryptography has five unrealistic rules of use corresponding with the crucial moments in a key pair’s life cycle. Davis calls these compliance defects and specifies them as follows:

  1. Authenticating the user (issuance): How does a CA authenticate a distant user when issuing an initial certificate?

  2. Authenticating the CA (validation): Public key cryptography cannot secure the distribution and validation of the root CA’s public key.

  3. Certificate revocation lists (revocation): Timely and secure revocation presents enormous scaling and performance problems. As a result, public key deployment is proceeding without a revocation infrastructure.

  4. Private key management (single sign-on): The user must keep his long-lived private key in memory throughout his login session.

  5. Passphrase quality (PW-Change): There is no way to force a public key user to choose a good passphrase.

Industry- and Application-Specific Cryptographic Standards Compliance

It is important that cryptographic controls themselves be compliant with standards. It is likewise important to satisfy the requirements of standards that specify how a cryptographic benefit must be employed. For instance, the California Information Practice Act (SB1386)73 specifies that if customer information is encrypted when it is stored and transmitted, it is exempt from costly notification procedures in the event of a breach. Regulations applying to information systems that include cryptographic requirements will often specify use of a cryptographic control in a general sense, such as needing “encryption” or requiring a “key management system.”

Payment Card Industry Data Security Standard

One industry standard involving encryption is the Payment Card Industry Data Security Standard (PCI DSS), which requires protection and encryption of cardholder data. In the PCI standard, the essential requirement in protecting cardholder data is to not store it at all if possible. When it must be stored, the PCI standard specifies general attributes for cryptographic controls while enumerating the operational methods that must be employed. For instance, hash functions, cryptography, and key generation must be “strong.” In relation to cryptographic requirements, PCI focuses on operational procedures and administrative controls such as key-custodian acceptance of responsibilities.

So, auditing a system for PCI compliance will include these types of cryptographic requirements. To see the PCI DSS 2.0 requirements relating specifically to encryption, refer to the portions of the standard that describe key management as well as protection and encryption of cardholder data at the PCI Security Standards Council Web site: https://www.pcisecuritystandards.org/security_standards/index.php

Health Insurance Portability and Accountability Act

One governmental regulation that includes requirements for the benefits cryptography can provide is the Health Insurance Portability and Accountability Act of 1996 (HIPAA), enacted by the U.S. Congress. These requirements are found in the Administrative Simplification provisions of the act (HIPAA, Title II), which among other things addresses the security and privacy of health data.

Requirements relating to cryptography are found in the Final Rule on Security Standards [Final Rule]. The Final Rule is where the U.S. Department of Health and Human Services stipulates three types of security safeguards required for compliance: administrative, physical, and technical. Within the technical safeguards, the Final Rule provides a set of security standards as follows:

Images   Access Control

Images   Audit Controls

Images   Integrity

Images   Person or Entity Authentication

Images   Transmission Security

These standards support the Final Rule by providing a minimum security baseline intended to help prevent unauthorized use and disclosure of Protected Health Information (PHI). Within each standard, policies, procedures, and technologies are either required and must be adopted, or are subject to individual evaluation (known as “addressable implementation specifications”).

Encryption for data at rest falls under the Access Control standard, because encryption used with an appropriate key management scheme can be used to deny access to PHI, except for authorized individuals. The standards in the Final Rule state that encryption of data at rest is an addressable implementation specification, leaving it up to the system owner to determine whether it is required.

While the use of specific cryptographic technologies is not stipulated in the Audit Controls and the Integrity standards, meeting these standards is required by HIPAA. Thus, cryptographic hashing algorithms and digital signatures can be used as a basis for supporting the Integrity standard. The same cryptographic controls can support Audit Controls by ensuring that a change to a transaction log’s integrity can be detected.
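As a sketch of how a cryptographic hash can support the Audit Controls standard, the following hash chain makes a transaction log tamper-evident: each entry's digest covers the previous digest, so altering or deleting any earlier entry changes every digest that follows. The function name and log entries are invented for illustration; this is one minimal approach, not a mandated HIPAA mechanism.

```python
import hashlib

def chain_digests(entries, seed=b"\x00" * 32):
    """Return the chained SHA-256 digest of each log entry, where each
    digest covers the previous one, making earlier tampering detectable."""
    digests = []
    prev = seed
    for entry in entries:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        digests.append(prev)
    return digests

log = ["user=alice action=login", "user=alice action=read record=42"]
original = chain_digests(log)

# Tampering with the first entry invalidates the final digest.
tampered = ["user=mallory action=login", "user=alice action=read record=42"]
assert chain_digests(tampered)[-1] != original[-1]
```

In practice the final digest (or each entry's digest) would itself be signed or HMAC-keyed and stored separately from the log, so an attacker who can rewrite the log cannot simply recompute the chain.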

The Final Rule makes person or entity authentication a mandatory requirement without providing specifics. The Person or Entity Authentication standard does not specify use of any particular technology, allowing “covered entities to use whatever is reasonable and appropriate.”

Transmission Security is required in the Final Rule, which stipulates

The covered entity must implement technical security mechanisms to guard against unauthorized access to electronic protected health information that is transmitted over an electronic communication network.

The Transmission Security standard includes integrity controls to ensure that electronically transmitted PHI is not improperly modified. The standard also specifies encryption as a mechanism to protect electronic PHI being transmitted over an open network such as the Internet. Both of these controls, integrity and encryption, are addressable implementation specifications and hence can be optional.
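When encryption in transit is adopted as the addressable control, TLS is the common mechanism. The following Python standard-library sketch shows one illustrative client-side policy (the TLS 1.2 floor is an assumption of this sketch, not language from the Final Rule): certificate validation and hostname checking on, and a minimum protocol version enforced.

```python
import ssl

# Illustrative TLS policy for PHI in transit: validate the server
# certificate, check the hostname, and refuse anything below TLS 1.2.
# (The specific floor is this sketch's assumption, not a HIPAA mandate.)
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED
```

A socket wrapped with `context.wrap_socket(sock, server_hostname=...)` would then provide both the encryption and the integrity protection (via the TLS record MAC or AEAD mode) that the Transmission Security standard calls for.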

So, when considering HIPAA compliance, one will certainly need to address whether or not cryptographic controls are required74. Auditing a system for HIPAA compliance must take into account the basis for a decision when cryptographic controls are deemed unnecessary in meeting a HIPAA standard. In order to monitor cryptographic compliance for a system that processes, transmits, or stores data subject to HIPAA standards, these system-specific cryptography requirements must be defined.

International Privacy Laws

Not all industry or government regulations explicitly stipulate use of particular cryptographic controls such as encryption or digital signatures. Privacy laws, in particular, deal with confidentiality of data but generally do not stipulate that encryption be used as the means of control. Use of encryption to protect confidentiality is left to the detailed security requirements defined for a particular solution, based on the security risk present.

An example of a privacy law where confidentiality is required is Article 17 of the European Union Data Protection Directive [EU Data Protection] which states:

Member States shall provide that the controller must implement appropriate technical and organizational measures to protect personal data against accidental or unlawful destruction or accidental loss, alteration, unauthorized disclosure or access, in particular where the processing involves the transmission of data over a network, and against all other unlawful forms of processing75.

Audit Readiness and Compliance

Audit readiness and compliance oversight must take into account the requirements for cryptographic controls when addressing cryptographic compliance. In general, the need for cryptographic compliance with industry and government regulations depends upon the criticality of the data and the security risk. For a given solution, the business requirements, the type of data, and how the data will be accessed, stored, and transmitted all contribute to the need for compliance with such standards. The security architect should be prepared to engage in an audit and explain complex systems, such as an organization’s implementation of PKI, and why specific design decisions were chosen.

Summary   Images

There are many moving parts in a well-constructed security architecture. These parts have to be managed and monitored, pulling together information, business needs, risks, threats, vulnerabilities, and users into a dynamic solution that can be volatile. The security architect needs to provide solutions to the business that will safeguard data as well as ensure that it remains available for use by authorized users on demand. The cryptography domain illustrates for the security architect the many options that are available for creating systems that provide for data confidentiality, integrity, and availability. The challenge for the security architect is finding the right balance in their designs, allowing users to create, store, consume, and manage data over time, while also ensuring that the needs of the business are met and risks are mitigated.

Images   References

SHA-3 NIST. Announcing Request for Candidate Algorithm Nominations for a New Cryptographic Hash Algorithm (SHA–3) Family, Office of the Federal Register. National Archives and Records Administration. Available at http://csrc.nist.gov/groups/ST/hash/federal_register.html.

S/MIME IETF S/MIME Working Group. Internet Draft. Secure/Multipurpose Internet Mail Extensions (S/MIME) Version 3.2 Message Specification. Expires March 18, 2009.

PEM IETF Privacy-Enhanced Electronic Mail Working Group. RFC 1421, RFC1422, RFC 1423, RFC 1424. Proposed Standards. February 1993.

PGP Copyright © 1990-2001 Network Associates, Inc. and its Affiliated Companies. All Rights Reserved. PGP Freeware for Windows 95, Windows 98, Windows NT, Windows 2000 & Windows Millennium User’s Guide Version 7.0. January 2001.

802.11 IEEE Std 802.11-2007 (Revision of IEEE Std 802.11-1999). IEEE Computer Society. June 12, 2007.

WEP ibid.

802.11i IEEE Std 802.11i™-2004 (Amendment to IEEE Std 802.11™, 1999). IEEE Computer Society. July 23, 2004.

WPA B. Bing (Ed.). Understanding and achieving next-generation wireless security. Emerging Technologies in Wireless LANs: Theory, Design, and Deployment. Cambridge University Press, New York, 2008.

WPA2 ibid.

SP800-121 NIST Special Publication 800-121. Guide to Bluetooth Security. September 2008.

FC-SP H. Dwivedi. SANs: Fibre Channel Security. Securing Storage: A Practical Guide to SAN and NAS Security. 2005.

CIK Department of Defense Security Institute. STU-III Handbook for Industry. February 1997.

P1619 https://siswg.net/.

EDI NIST. Federal Information Processing Standards Publication 161-2. April 29, 1996.

CMS IETF Network Working Group. Cryptographic Message Syntax (CMS). RFC 3852. Proposed Standard. July 2004.

WS-Security M. O’Neill et al. Introduction to WS-Security. Web Services Security. 2003.

NSA Suite B Cryptography http://www.nsa.gov/ia/industry/crypto_suite_b.cfm.

Modes A. Menezes, P. van Oorschot, and S. Vanstone. Block Ciphers. Handbook of Applied Cryptography. 1996.

RFC2144 C. Adams, Entrust Technologies. The CAST-128 Encryption Algorithm. RFC2144. Informational Memo. May 1997.

CMEA D. Wagner, B. Schneier, J. Kelsey. 17th Annual International Cryptology Conference. Cryptanalysis of the Cellular Message Encryption Algorithm. August 1997.

GOST State Standards Committee of the USSR. Cryptographic Protection for Data Processing Systems, Cryptographic Transformation Algorithm, GOST 28147-89. Government Standard of the U.S.S.R. July 1, 1990.

RC2 D. Wagner, B. Schneier, J. Kelsey. ICICS ’97. Related-Key Cryptanalysis of 3-WAY, Biham-DES, CAST, DES-X, NewDES, RC2, and TEA. November 1997.

RC5 A. Biryukov, E. Kushilevitz. Advances in Cryptology—EUROCRYPT ’98. Improved Cryptanalysis of RC5. June 1998.

RC6 R. Rivest, M.J.B. Robshaw, R. Sidney, Y.L. Yin. RSA Laboratories. The RC6 Block Cipher. August 20, 1998.

TEA http://143.53.36.235:8080/tea.htm.

Twofish http://www.schneier.com/twofish.html.

DH W. Diffie, M. Hellman. New Directions in Cryptography. IEEE Trans. Information Theory. Vol. 22, No. 6, pp. 644–654, November 1976.

RSA R.L. Rivest, A. Shamir, and L. Adleman. A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM. Vol. 21, Issue 2, pp. 120–126, February 1978.

PKCS #1 B. Kaliski, J. Staddon, RSA Laboratories. PKCS #1: RSA Cryptography Specifications Version 2.0. RFC 2437. Informational Memo. October 1998.

Merkle-Hellman Knapsack A. Menezes, P. van Oorschot, and S. Vanstone. Public-key encryption. Handbook of Applied Cryptography. CRC Press, Boca Raton, FL. 1996.

Tunnels V. Klima. IACR ePrint archive Report 2006/105. Tunnels in Hash Functions: MD5 Collisions within a Minute. Version 2. April 2006.

SHA-1 Collisions X. Wang, Y.L. Yin, and H. Yu. Finding Collisions in the Full SHA-1. Advances in Cryptology, Crypto’05. 2005.

Collisions X. Wang, D. Feng, X. Lai, and H. Yu. IACR ePrint archive Report 2004/199. Collisions for Hash Functions MD4, MD5, HAVAL-128 and RIPEMD. August 17, 2004.

Dedicated Hash ISO/IEC 10118-3:2004. Information technology—Security techniques—Hash-functions—Part 3: Dedicated hash-functions. March 1, 2004.

SCHNEIER Bruce Schneier. Applied Cryptography. Second edition, John Wiley & Sons, New York, 1996.

NISTSP800-57-1 NIST Special Publication 800-57. Recommendation for Key Management—Part 1: General (Revision 3). NIST July 2012.

SP800-67 NIST Special Publication 800-67. Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher. Revised May 19, 2008.

FIPS197 Federal Information Processing Standards Publication 197. Announcing the ADVANCED ENCRYPTION STANDARD (AES). November 26, 2001.

SP800-56 NIST Special Publication 800-56A. Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography (Revised). March 2007.

RSA PKCS#1 PKCS #1 v2.1: RSA Cryptography Standard. RSA Laboratories. June 14, 2002.

FIPS 140-2 Federal Information Processing Standards Publication 140-2. Security Requirements for Cryptographic Modules. May 25, 2001.

TECHREV-OPENSSL Technology Review. Alarming Open-Source Security http://www.technologyreview.com/news/410159/alarming-open-source-security-holes/.

FIPS 140-3 Federal Information Processing Standards Publication 140-3 (DRAFT). (Will Supersede FIPS PUB 140-2, May 25, 2001).

ANNEX C-FIPS 140-2 Annex C Approved Random Number Generators For FIPS PUB 140-2. Draft. October 2007.

FIPS 186-3 Federal Information Processing Standards Publication. FIPS 186-3. Digital Signature Standard (DSS). June 2009.

SP800-90 NIST Special Publication 800-90. Recommendation for Random Number Generation Using Deterministic Random Bit Generators (Revised). March 2007.

NIST SP800-21 NIST Special Publication 800-21. Guideline for Implementing Cryptography in the Federal Government. November 1999.

FIPS 185 Federal Information Processing Standards Publication 185. Escrowed Encryption Standard. November 1994.

NIST SP800-56, 56A, 57 part 1 and 57 part 2.

FIPS186-3 FEDERAL INFORMATION PROCESSING STANDARDS PUBLICATION. Digital Signature Standard (DSS). March 2006.

ANS X9.42-2003 (Public Key Cryptography for the Financial Services Industry: Agreement of Symmetric Keys Using Discrete Logarithm Cryptography).

ANS X9.63-2001 (Public Key Cryptography for the Financial Services Industry: Key Agreement and Key Management Using Elliptic Curve Cryptography).

RFC 3647 IETF Network Working Group. RFC 3647. Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework. November 2003.

PKIREGAG A. Golod, PKI registration. Information Security Management Handbook. 4th Ed., Auerbach Publications, Boca Raton, FL, 2003.

VeriSignCPS Verisign Certificate Practice Statement, version 2.0. 2001.

CRMF IETF Network Working Group. RFC 2511. Certificate Request Message Format. March 1999.

CRMOD Cooper, D. A Model of Certificate Revocation. Proceedings of the Fifteenth Annual Computer Security Applications Conference. December 1999.

DCRL Cooper, D. A More Efficient Use of Delta-CRL. Proceedings of the 2000 Symposium on security and privacy.

RFC2459 Housley, R., W. Ford, W. Polk, D. Solo, Internet X.509 Public Key Infrastructure Certificate and CRL Profile. RFC2459. January 1999.

OCSP M. Myers, R. Ankney, A. Malpani, S. Galperin, C. Adams, X.509 Internet Public Key Infrastructure. Online Certificate Status Protocol—OCSP. RFC2560. June 1999.

Coppersmith D. Coppersmith. The Data Encryption Standard (DES) and its strength against attacks. IBM J. Res. Develop., Vol. 38 No. 3. May 1994.

Matsui M. Matsui, Advances in Cryptology—CRYPTO ’94. The First Experimental Cryptanalysis of the Data Encryption Standard. 1994.

Kocher P. Kocher. Timing Attacks on Implementations of Diffie–Hellman, RSA, DSS, and Other Systems. Proceedings of the 16th Annual International Cryptology Conference on Advances in Cryptology, LNCS, Vol. 1109, pp 104–113, August 1996.

Samyde et al D. Samyde, S. Skorobogatov, R. Anderson, and J. Quisquater. On a New Way to Read Data from Memory. Proceedings of the First International IEEE Security in Storage Workshop, pp 65-69, December 2002.

Kocher, et al P. Kocher, J. Jaffe, and B. Jun. Differential Power Analysis. Proceedings of the 19th Annual International Cryptology Conference on Advances in Cryptology, LNCS, Vol. 1666, pp 388-397, August 1999.

FIPS 180-4 Federal Information Processing Standards Publication. FIPS 180-4. Secure Hash Standard (SHS). March, 2012.

[Davis] D. Davis. Compliance Defects in Public-Key Cryptography. Proceedings of the 6th USENIX Security Symposium. July 1996.

[Final Rule] Department of Health and Human Services HIPAA Security Rule, Office of the Federal Register, National Archives and Records Administration, available at http://www.cms.hhs.gov/SecurityStandard/Downloads/securityfinalrule.pdf.

[EU Data Protection] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 available at http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:en:HTML

[RIPEMD-160] http://homes.esat.kuleuven.be/~bosselae/ripemd160.html.

[RFC4301] S. Kent, K. Seo, BBN Technologies. Network Working Group. Proposed Standard. Security Architecture for the Internet Protocol. RFC 4301. December 2005.

[RFC2404] C. Madson, Cisco Systems Inc., R. Glenn, NIST. Network Working Group. The Use of HMAC-SHA-1-96 within ESP and AH. RFC2404. November 1998.

[RFC 2406] S. Kent, BBN Corp, R. Atkinson, @Home Network. Network Working Group. IP Encapsulating Security Payload (ESP). RFC2406. November 1998.

RFC4650 M. Euchner. Network Working Group. HMAC-Authenticated Diffie–Hellman for Multimedia Internet KEYing (MIKEY). RFC4650. September 2006.

FBCACP Certificate Policy for Federal Bridge Certification Authority Version 2.25, December 9, 2011.

http://www.idmanagement.gov/fpkipa/documents/fbca_cp_rfc3647.pdf

FPKIATO Federal Public Key Infrastructure (FPKI) Architecture. Technical Overview. 2005.

http://www.idmanagement.gov/fpkima/documents/FPKIAtechnicalOverview.pdf

Images   Review Questions

1. What cryptographic hash function would be the acceptable replacement for MD4?

  1. MD5

  2. RIPEMD

  3. RIPEMD-160

  4. SHA-1

2. An IPSec Security Association (SA) is a relationship between two or more entities that describes how they will use security services to communicate. Which values can be used in an SA to provide greater security through confidentiality protection of the data payload?

  1. Use of AES within AH

  2. SHA-1 combined with HMAC

  3. Using ESP

  4. AH and ESP together

3. Suppose a secure extranet connection is required to allow an application in an external trusted entity’s network to securely access server resources in a corporate DMZ. Assuming IPSec is being configured to use ESP in tunnel mode, which of the following is the most accurate?

  1. Encryption of data packets and data origin authentication for the packets sent over the tunnel can both be provided.

  2. ESP must be used in transport mode in order to encrypt both the packets sent as well as encrypt source and destination IP Addresses of the external entity’s network and of the corporate DMZ network.

  3. Use of AH is necessary in order to provide data origin authentication for the packets sent over the tunnel.

  4. Source and destination IP Addresses of the external entity’s network and of the corporate DMZ network are not encrypted.

4. What is the BEST reason a network device manufacturer might include the RC4 encryption algorithm within an IEEE 802.11 wireless component?

  1. They would like to use AES, but they require compatibility with IEEE 802.11i.

  2. Their product must support the encryption algorithm WPA2 uses.

  3. RC4 is a stream cipher with an improved key-scheduling algorithm that provides stronger protection than other ciphers.

  4. Their release strategy planning includes maintaining some degree of backward compatibility with earlier protocols.

5. What is true about the Diffie–Hellman (DH) key agreement protocol?

  1. The protocol requires initial exchange of a shared secret.

  2. The protocol depends on a secure communication channel for key exchange.

  3. The protocol needs other mechanisms such as digital signatures to provide authentication of the communicating parties.

  4. The protocol is based on a symmetric cryptosystem.

6. What is the main security service a cryptographic hash function provides, and what is the main security property a cryptographic hash function must exhibit?

  1. Integrity and ease of computation

  2. Integrity and collision resistance

  3. Message authenticity and collision resistance

  4. Integrity and computational infeasibility

7. What is necessary on the receiving side in order to verify a digital signature?

  1. The message, message digest, and the sender’s private key

  2. The message, message digest, and the sender’s public key

  3. The message, the MAC, and the sender’s public key

  4. The message, the MAC, and the sender’s private key

8. What is a known plaintext attack used against DES to show that encrypting plaintext with one DES key followed by encrypting it with a second DES key is no more secure than using a single DES key?

  1. Meet-in-the-middle attack

  2. Man-in-the-middle attack

  3. Replay attack

  4. Related-key attack

9. What is among the most important factors in validating the cryptographic key design in a public key cryptosystem?

  1. Ability of a random number generator to introduce entropy during key generation

  2. Preimage resistance

  3. Confidentiality of key exchange protocol

  4. Crypto period

10. What factor would be most important in the design of a solution that is required to provide at-rest encryption in order to protect financial data in a restricted-access file sharing server?

  1. Encryption algorithm used

  2. Cryptographic key length

  3. Ability to encrypt the entire storage array or file system versus ability to encrypt individual files

  4. Individual user access and file-level authorization controls

11. A large bank with a customer base of more than one million implements PKI to support authentication and encryption for online Internet transactions. What is the best method to validate certificates in a timely manner?

  1. CRL over LDAP

  2. CRLDP over LDAP

  3. OCSP over HTTP

  4. CRLDP over ODBC

12. A car rental company is planning to implement wireless communication between the cars and rental support centers. Customers will be able to use these centers as concierge services, and rental centers will be able to check the car’s status if necessary. PKI certificates will be used to support authentication, non-repudiation, and confidentiality of transactions. Which asymmetric cryptography is a better fit?

  1. RSA 1024

  2. AES 256

  3. RSA 4096

  4. ECC 160

13. A key management system of a government agency’s PKI includes a backup and recovery (BR) module. PKI issues and manages separate certificates for encryption and verification. What is the right BR strategy?

  1. Back up all certificates and private keys

  2. Back up all private keys and verification certificates

  3. Back up decryption keys and all certificates

  4. Back up signing keys and all certificates

14. A company needs to comply with FIPS 140-2 level 3, and decided to use split knowledge for managing storage encryption keys. What is the right method for storing and using the key?

  1. Store the key components on the encrypted media.

  2. Create a master key and store it on external media owned by the first security officer.

  3. Store key components on separate external media owned by a different security officer.

  4. Publish key components on an LDAP server and protect them by officers’ asymmetric keys encryption.

15. An agency is using symmetric AES 128 cryptography for distributing confidential data. Because of its growth and key distribution problems, the agency decided to move to asymmetric cryptography and X.509 certificates. Which of the following is the BEST strength asymmetric cryptography to match the strength of the current symmetric cryptography?

  1. RSA 2048

  2. ECC 160

  3. ECC 256

  4. RSA 7680

16. One very large company created a business partnership with another, much smaller company. Both companies have their own PKI in-house. Employees need to use secure messaging and secure file transfer for their business transactions. What is the BEST strategy to implement this?

  1. The larger company creates a PKI hierarchical branch for the smaller company, so all parties have a common root of trust.

  2. The larger company enrolls all employees of the smaller company and issues their certificates, so all parties have a common root of trust.

  3. Companies should review each other’s CP and CPS, cross-certify each other, and let each other access each other’s search database.

  4. Employ an external third-party CA and have both company’s employees register and use their new certificates for secure transactions.

17. When applications of cross-certified PKI subscribers validate each other’s digitally signed messages, they have to perform the following steps:

  1. The signature is cryptographically correct, and sender’s validation certificate and sender’s CA cross-certificate are valid.

  2. Validate CRL and ARL.

  3. Validate sender’s encryption certificate, ARL, and CRL.

  4. The signature is cryptographically correct, and sender’s CA certificate is valid.

18. A company implements three-tier PKI, which will include a root CA, several sub-CAs, and a number of regional issuing CAs under each sub-CA. How should the life span of the CA’s certificates be related?

  1. Root CA = 10 years; sub-CA = 5 years; issuing CA = 1 year

  2. Root CA = sub-CA = issuing CAs = 5 years

  3. Root CA = 1 year; sub-CA = 5 years; issuing CA = 10 years

  4. Root CA = 5 years; sub-CA = 10 years; issuing CA = 1 year

19. Management and storage of symmetric data encryption keys most importantly must provide

  1. Integrity, confidentiality, and archiving for the time period from key generation through the life span of the data they protect or the duration of the crypto period, whichever is longer.

  2. Confidentiality for the time period from key generation through the life span of the data they protect or duration of crypto period, whichever is longer.

  3. Integrity, confidentiality, and archiving for the duration of the key’s crypto period.

  4. Integrity, confidentiality, non-repudiation and archiving for the time period from key generation through the life span of the data they protect or duration of crypto period, whichever is longer.

20. Management and storage of public signature verification keys most importantly must provide

  1. Integrity, confidentiality, and archiving for the time period from key generation until no protected data needs to be verified.

  2. Integrity and archiving for the time period from key generation until no protected data needs to be verified.

  3. Integrity, confidentiality and archiving for the time period from key generation through the life span of the data they protect or the duration of crypto period, whichever is longer.

  4. Integrity and confidentiality for the time period from key generation until no protected data needs to be verified.

 

1   NIST announced a public competition in a Federal Register Notice on November 2, 2007 to develop a new cryptographic hash algorithm called SHA-3. The competition was NIST’s response to advances made in the cryptanalysis of hash algorithms.

NIST received sixty-four entries from cryptographers around the world by October 31, 2008. It selected fifty-one first-round candidates in December 2008 and fourteen second-round candidates in July 2009. On December 9, 2010, NIST announced five third-round candidates – BLAKE, Grøstl, JH, Keccak, and Skein – to enter the final round of the competition.

The winning algorithm, Keccak (pronounced “catch-ack”), was created by Guido Bertoni, Joan Daemen and Gilles Van Assche of STMicroelectronics and Michaël Peeters of NXP Semiconductors. Keccak will now become NIST’s SHA-3 hash algorithm.

See the following for full information on the entire five-year process to pick the new SHA-3 algorithm: http://csrc.nist.gov/groups/ST/hash/sha-3/index.html

See the following for the original Federal Register Notice, November 2, 2007, announcing the NIST competition to develop a new cryptographic hash algorithm: http://csrc.nist.gov/groups/ST/hash/documents/FR_Notice_Nov07.pdf

2   The ISO/IEC 13888-1, -2 and -3 standards provide for a series of non-repudiation services as follows:

  • Non-repudiation of Origin: This service will verify a signed message’s originator and content through a data validity check.

  • Non-repudiation of Delivery: This service will digitally sign an X.400 proof of delivery message.

  • Non-repudiation of Submission: This service will digitally sign an X.400 proof of submission message.

  • Non-repudiation of Transport: This service will provide proof that a delivery authority has delivered the message to the intended recipient.

3   See the following for the historical documents that establish the United States Government’s guidance on Data at Rest encryption:

  1. Office of Management and Budget – Memo M-06-16 – “Protection of Sensitive Agency Information” http://www.whitehouse.gov/sites/default/files/omb/memoranda/fy2006/m06-16.pdf

  2. DoD Policy Memo, July 03, 2007 Encryption of Sensitive Unclassified Data at rest on Mobile Computing Devices and Removable Storage Media: http://www.dod.gov/pubs/foi/privacy/docs/dod_dar_tpm_decree07_03_071.pdf

  3. DON CIO – Message DTG 091256Z OCT 07 - “DON Encryption of Unclassified Data at Rest Guidance”

See the following for the NIST definitions of different data states:

  1. The first citation comes from the Federal Register / Vol. 74, No. 79 / Monday, April 27, 2009 / Rules and Regulations, page 19008

DEPARTMENT OF HEALTH AND HUMAN SERVICES
45 CFR Parts 160 and 164
Guidance Specifying the Technologies and Methodologies That Render Protected Health Information Unusable, Unreadable, or Indecipherable to Unauthorized Individuals for Purposes of the Breach Notification Requirements Under Section 13402 of Title XIII (Health Information Technology for Economic and Clinical Health Act) of the American Recovery and Reinvestment Act of 2009; Request for Information Supplementary Information: II. Guidance Specifying the Technologies and Methodologies That Render Protected Health Information Unusable, Unreadable, or Indecipherable to Unauthorized Individuals

http://www.hhs.gov/ocr/privacy/hipaa/understanding/coveredentities/federalregisterbreachrfi.pdf

  2. The second citation comes from the Federal Register / Vol. 74, No. 162 / Monday, August 24, 2009 / Rules and Regulations, page 42742

DEPARTMENT OF HEALTH AND HUMAN SERVICES
45 CFR Parts 160 and 164
RIN 0991–AB56
Breach Notification for Unsecured Protected Health Information Supplementary Information: II. Guidance Specifying the Technologies and Methodologies That Render Protected Health Information Unusable, Unreadable, or Indecipherable to Unauthorized Individuals

http://www.gpo.gov/fdsys/pkg/FR-2009-08-24/pdf/E9-20169.pdf

4   See the following for FIPS PUB 140-2 Security Requirements for Cryptographic Modules: http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf

5   See the following for ISO/IEC 18033-2:2006: http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=37971

6   See the following for ISO/IEC 11770-1:2010: http://www.iso.org/iso/home/store/catalogue_ics/catalogue_detail_ics.htm?ics1=35&ics2=040&ics3=&csnumber=53456

7   There are several other ISO/IEC 11770 sub standards that the security architect will want to become familiar with in this context:

ISO/IEC 11770-2:2008 Security Techniques – Key Management – Part 2: Mechanisms using symmetric techniques

ISO/IEC 11770-3:2008 Security Techniques – Key Management – Part 3: Mechanisms using asymmetric techniques

ISO/IEC 11770-5:2011 Security Techniques – Key Management – Part 5: Group Key Management

8   See the following for ISO/IEC 11568-1:2005 Banking – Key Management (retail) – Part 1: Principles: http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=34937

9   See the following for ISO/IEC 13888-1:2009 Security Techniques – Non-repudiation – Part 1: General: http://www.iso.org/iso/home/store/catalogue_ics/catalogue_detail_ics.htm?ics1=35&ics2=040&ics3=&csnumber=50432

There are two other ISO/IEC 13888 sub standards that the security architect will want to become familiar with in this context:

  1. ISO/IEC 13888-2:2010 Security Techniques – Non-repudiation – Part 2: Mechanisms using symmetric techniques

  2. ISO/IEC 13888-3:2009 Security Techniques – Non-repudiation – Part 3: Mechanisms using asymmetric techniques

10   See the following for ISO/IEC 7498-2:1989 Information processing systems – Open Systems Interconnection – Basic Reference Model – Part 2: Security Architecture: http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=14256

11   See the following for information on S/MIME and the current state of the S/MIME working group: http://datatracker.ietf.org/wg/smime/charter/

12   See the following for historical information on PEM: http://www.csee.umbc.edu/~woodcock/cmsc482/proj1/pem.html

13   See the following for an overview of PGP and how it works: http://www.pgpi.org/doc/pgpintro/

14   See the following for information on IPSEC: http://datatracker.ietf.org/wg/ipsec/charter/

See the following for a good overview and detailed descriptions of how IPSEC works and all of the parts that make up IPSEC: http://www.unixwiz.net/techtips/iguide-ipsec.html

15   See the following for the TLS v1.2 RFC: http://tools.ietf.org/html/rfc5246

16   See the following for the PPP Encryption Control Protocol RFC 1968: http://tools.ietf.org/html/rfc1968

17   See the following to download the IEEE 802.11-2012 copy of the standards: http://standards.ieee.org/getieee802/download/802.11-2012.pdf

18   See the following for NIST Special Publication 800-97 Establishing Wireless Robust Security Networks: A Guide to IEEE 802.11i: http://csrc.nist.gov/publications/nistpubs/800-97/SP800-97.pdf

19   See the following for the Bluetooth Special Interest Group’s web site: https://www.bluetooth.org/About/bluetooth_sig.htm

20   See the following for an overview of security weaknesses with Bluetooth: http://www.yuuhaw.com/bluesec.pdf

See the following for a detailed explanation of the Correlation Attack on Bluetooth Keystream Generator E0: http://lasecwww.epfl.ch/pub/lasec/doc/YV04a.pdf

21   See the following for the NIST Special Publication 800-121 Revision 1: Guide to Bluetooth Security: http://csrc.nist.gov/publications/nistpubs/800-121-rev1/sp800-121_rev1.pdf

22   See the following for the INCITS xxx-200x T11/Project 1570-D/Rev 1.74 Fibre Channel Security-Protocols (FC-SP) draft: http://www.t10.org/ftp/t11/document.06/06-157v0.pdf

See the following for the T11 FC-SP-2 overview for the Fibre Channel Security Protocols v2 project:

http://www.t11.org/t11/stat.nsf/0704e303a54f2e42852566cf007ac45d/00720b2204288f8e8525713700668728?OpenDocument

23   See the following for information on Kerberos:

  1. History and background: http://web.mit.edu/kerberos/

  2. Current activities and information on cross platform development: http://www.kerberos.org/

24   Extensible Authentication Protocol, or EAP, is an authentication framework frequently used in wireless networks and Point-to-Point connections. It is defined in RFC 3748, which made RFC 2284 obsolete, and was updated by RFC 5247.

See the following for RFC 5247: http://tools.ietf.org/html/rfc5247

25   See the following for the OASIS KMIP Technical Committee charter: https://www.oasis-open.org/committees/kmip/charter.php

See the following for the main home page for the KMIP TC: https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=kmip

26   See the following for the IEEE 1619 SISWG P1619.2 Wide-Block Encryption working group home page: http://siswg.net/index.php?option=com_content&task=view&id=36&Itemid=75

27   See the following for a good general review of EDI information and solutions: http://www.edibasics.co.uk/

28   See the following for RFC 4130: http://tools.ietf.org/html/rfc4130

29   See the following for the NSA Suite B Cryptography home page: http://www.nsa.gov/ia/programs/suiteb_cryptography/

Suite B Cryptography is formalized in CNSSP-15, National Information Assurance Policy on the Use of Public Standards for the Secure Sharing of Information Among National Security Systems, dated March 2010.

See the following for a fact sheet on NSA Suite B: http://www.cas.mcmaster.ca/~soltys/math5440-w08/NSA_Suite_B.pdf

30   See the following for FIPS-197: http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf

31   See the following for FIPS-186-3: http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf

32   See the following for NIST SP 800-56A: http://csrc.nist.gov/groups/ST/toolkit/documents/SP800-56Arev1_3-8-07.pdf

33   See the following for FIPS-180-4: http://csrc.nist.gov/publications/fips/fips180-4/fips-180-4.pdf

34   The following documents provide guidance for using Suite B cryptography with internet protocols:

-   IPsec using the Internet Key Exchange (IKE) or IKEv2: “Suite B Cryptographic Suites for IPsec,” RFC 6379 http://tools.ietf.org/html/rfc6379

-   “Suite B Profile for Internet Protocol Security (IPsec),” RFC 6380 http://tools.ietf.org/html/rfc6380

-   TLS: “Suite B Profile for TLS,” RFC 6460 http://tools.ietf.org/html/rfc6460

-   “TLS Elliptic Curve Cipher Suites with SHA-256/384 and AES Galois Counter Mode (GCM)” RFC 5289 http://tools.ietf.org/html/rfc5289

-   S/MIME: “Suite B in Secure/Multipurpose Internet Mail Extensions (S/MIME),” RFC 6318 http://tools.ietf.org/html/rfc6318

-   SSH: “AES Galois Counter Mode for the Secure Shell Transport Layer Protocol,” RFC 5647 http://tools.ietf.org/html/rfc5647

-   “Suite B Cryptographic Suites for SSH,” RFC 6239 http://tools.ietf.org/html/rfc6239

35   See the following for FIPS 197: http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf

36   See the following for RFC2144: http://www.ietf.org/rfc/rfc2144.txt

37   See the following for CMEA: http://www.schneier.com/paper-cmea.pdf

38   DES was approved as a federal standard in November 1976, and published on 15 January 1977 as FIPS PUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in 1999 (FIPS-46-3), the latter prescribing Triple DES. On 26 May 2002, DES was finally superseded by the Advanced Encryption Standard (AES), following a public competition. On 19 May 2005, FIPS 46-3 was officially withdrawn, but NIST has approved Triple DES (3DES) through the year 2030 for sensitive government information. The algorithm is also specified in ANSI X3.92, NIST SP 800-67 and ISO/IEC 18033-3 as a component of TDEA.

39   See the following for GOST: http://tools.ietf.org/html/rfc5830

40   See the following for an overview of IDEA: http://www.rsa.com/rsalabs/node.asp?id=2254

41   See the following for an overview of LOKI 97: http://www.unsw.adfa.edu.au/~lpb/research/loki97/

See the following for the design details of LOKI 97: http://www.unsw.adfa.edu.au/~lpb/papers/ssp97/loki97b.html

42   See the following citation for the original article by A. Sorkin on Lucifer: A. Sorkin, “LUCIFER: A Cryptographic Algorithm,” Cryptologia, 8(1), 22–35, 1984.

43   See the following for the NIST SKIPJACK and KEA Algorithm Specifications Version 2.0 document: http://csrc.nist.gov/groups/ST/toolkit/documents/skipjack/skipjack.pdf

44   See the following for the original research paper that defined TEA and its implementation: http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-355.pdf

45   See the following for the paper Twofish: A 128-Bit Block Cipher: http://www.schneier.com/paper-twofish-paper.pdf

46   See the following for RFC 3447 : http://tools.ietf.org/html/rfc3447

47   See the following for overviews of the APT phenomenon and its impact:

  1. http://static.usenix.org/event/lisa09/tech/slides/daly.pdf

  2. http://go.secureworks.com/advancedthreats

  3. http://www.mcafee.com/us/resources/white-papers/wp-operation-shady-rat.pdf

  4. http://www.reuters.com/article/2011/08/03/us-cyberattacks-idUSTRE7720HU20110803

  5. http://googleblog.blogspot.com/2010/01/new-approach-to-china.html#!/2010/01/new-approach-to-china.html

48   See the following for some examples of research in this area:

  1. http://eprint.iacr.org/2010/106.pdf

  2. http://hal.archives-ouvertes.fr/docs/00/74/79/47/PDF/81.pdf

  3. http://hal-ujm.ccsd.cnrs.fr/docs/00/69/96/14/PDF/2012_Cosade_Fischer.pdf

  4. http://jestec.taylors.edu.my/Vol%206%20Issue%204%20August%2011/Vol_6_4_411_428_AL%20JAMMAS.pdf

  5. http://www.cryptography.com/public/pdf/leakage-resistant-encryption-and-decryption.pdf

49   See the following for the full paper: http://www.cse.msstate.edu/~ramkumar/ipsec_overheads.pdf

50   See the following for NIST SP 800-57 Recommendation for Key Management – Part 1: General (Revision 3), July 2012: http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf

51   Moore’s Law specifically stated that the number of transistors on an affordable CPU would double approximately every two years. In 2000, a typical CPU contained 37.5 million transistors; by 2009, that figure had grown to 904 million.

See the following for information on Moore’s Law: http://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html
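As a rough sanity check on the figures above, the doubling rule can be applied directly. The sketch below is illustrative only; the function name and the two-year doubling period parameter are assumptions, not part of any cited source:

```python
def projected_transistors(base_count: float, years: float,
                          doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming it doubles
    every `doubling_period` years (Moore's Law)."""
    return base_count * 2 ** (years / doubling_period)

# 37.5 million transistors in 2000, projected forward 9 years to 2009:
projected = projected_transistors(37.5e6, 2009 - 2000)
# Roughly 849 million - the same order of magnitude as the
# 904 million transistors actually reported for 2009.
```

The projection slightly undershoots the reported 2009 figure, which is consistent with the observed doubling period having been marginally shorter than two years over that span.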

52   See the following for information on the current status of the efforts around the FIPS 140-3 drafting process: http://csrc.nist.gov/groups/ST/FIPS140_3/

53   See the following for FIPS 186-3: http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf

54   See the following for information on the RSA PKCS #11 Cryptographic Token Interface Standard: http://www.rsa.com/rsalabs/node.asp?id=2133

See the following for the PKCS #11 Base Functionality v2.30: Cryptoki – Draft 4: ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-30/pkcs-11v2-30b-d6.pdf

55   See the following for NIST SP800-57 Part 2 Recommendation for Key Management—Part 2: Best Practices for Key Management Organization: http://csrc.nist.gov/publications/nistpubs/800-57/SP800-57-Part2.pdf

56   See the following for NIST SP 800-21[second edition], Guideline for Implementing Cryptography in the Federal Government: http://csrc.nist.gov/publications/nistpubs/800-21-1/sp800-21-1_Dec2005.pdf

57   Another problem is that many applications do not check CRLs by default (e.g., most Web browsers).

58   See the following for FIPS-185: http://www.itl.nist.gov/fipspubs/fip185.htm

59   See the following for RFC 4949, Internet Security Glossary, Version 2, [page 122], which is the basis of the definition of Law Enforcement Access Field (LEAF): http://tools.ietf.org/html/rfc4949

See the following for a discussion of what LEAF is in relation to the Clipper Chip and its application: http://www.rsa.com/rsalabs/node.asp?id=2349

See the following for a discussion of what the Clipper Chip is: http://www.rsa.com/rsalabs/node.asp?id=2318

See the following for a discussion of Project Capstone and the SKIPJACK algorithm (commonly referred to as CLIPPER): http://www.rsa.com/rsalabs/node.asp?id=2317

60   See the following for NIST SP800-57-2: http://csrc.nist.gov/publications/nistpubs/800-57/SP800-57-Part2.pdf

61   See the following for RFC 3647: http://www.ietf.org/rfc/rfc3647.txt

62   See the following for RFC 2511 Internet X.509 Certificate Request Message Format: http://www.ietf.org/rfc/rfc2511.txt

63   See the following for the current version of the PKCS#10 Standard: http://www.rsa.com/rsalabs/node.asp?id=2132

See the following for RFC 2986 PKCS #10: Certification Request Syntax Specification Version 1.7: http://tools.ietf.org/html/rfc2986

64   See the following for an overview of the IDManagement.GOV web site, specifically, the Federal PKI Management Authority documents section: http://www.idmanagement.gov/pages.cfm/page/Federal-PKI-Management-Authority-documents

The FBCACP and the FPKIATO documents can be found here:

FBCACP Certificate Policy for Federal Bridge Certification Authority Version 2.25, December 9, 2011.

http://www.idmanagement.gov/fpkipa/documents/fbca_cp_rfc3647.pdf

FPKIATO Federal Public Key Infrastructure (FPKI) Architecture Technical Overview 2005.

http://www.idmanagement.gov/fpkima/documents/FPKIAtechnicalOverview.pdf

65   See the following for discussions of CRMOD and DCRL:

CRMOD -- Cooper, D. A Model of Certificate Revocation. Proceedings of the Fifteenth Annual Computer Security Applications Conference. December 1999.

Can be downloaded here: http://csrc.nist.gov/groups/ST/crypto_apps_infra/documents/acsac99.pdf

DCRL -- Cooper, D. A More Efficient Use of Delta-CRLs. Proceedings of the 2000 Symposium on Security and Privacy.

Can be downloaded here: http://csrc.nist.gov/groups/ST/crypto_apps_infra/documents/sliding_window.pdf

A good overview of PKI and research such as the papers cited above through NIST can be found here: http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkiresearch.html

66   See the following for RFC 2459: http://www.ietf.org/rfc/rfc2459.txt

67   See the following for RFC 2560: http://www.ietf.org/rfc/rfc2560.txt

68   See the following for a discussion of the Birthday Attack problem [page 66]: http://www.rsa.com/rsalabs/faq/files/rsalabs_faq41.pdf
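The birthday problem discussed in the cited FAQ can be made concrete: for an n-bit hash, a collision becomes likely after only about 2^(n/2) random samples, so an n-bit hash offers roughly n/2 bits of collision resistance. A minimal sketch of the standard birthday-bound approximation, assuming uniformly distributed hash outputs:

```python
import math

def collision_probability(samples: int, bits: int) -> float:
    """Approximate probability that `samples` uniformly random
    `bits`-bit hash values contain at least one collision
    (the standard birthday-bound approximation)."""
    space = 2 ** bits
    return 1.0 - math.exp(-samples * (samples - 1) / (2.0 * space))

# For a 64-bit hash, 2**32 samples already give roughly 39% odds
# of a collision - far fewer than the 2**64 a brute-force
# preimage search would need.
p = collision_probability(2 ** 32, 64)
```

This is why collision-resistance requirements, not preimage resistance, typically drive the choice of hash output length.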

69   See the following for FIPS PUB 180-4 Secure Hash Standard (SHS), March 2012: http://csrc.nist.gov/publications/fips/fips180-4/fips-180-4.pdf

70   See the following for an overview of the NIST CAVP: http://csrc.nist.gov/groups/STM/cavp/index.html

71   See the following for the CSEC web site: http://www.cse-cst.gc.ca/index-eng.html

72   See the following for an overview of the NIST CMVP: http://csrc.nist.gov/groups/STM/cmvp/index.html

73   See the following for the full text of SB1386: http://leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html

74   On January 2, 2013, the United States Department of Health & Human Services announced that the loss of a single laptop containing sensitive personal information about 441 patients would cost a non-profit Idaho hospice center $50,000, marking the first such penalty involving fewer than 500 data-breach victims. The patient data was unencrypted at the time of the loss.

See the following for the official HHS News Release announcing the settlement: http://www.hhs.gov/news/press/2013pres/01/20130102a.html

75   See the following for the full text of the Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:en:HTML
