Cryptography for network security
In this chapter, we provide an overview of security concepts and architecture for network cryptography on IBM z Systems. Then, we describe guiding principles for configuring network cryptography.
This chapter includes the following sections:
2.1 Security concepts and architecture for network cryptography on System z
This section describes security concepts and architecture for network cryptography on System z for the following topics:
2.1.1 Basics of cryptography for network security
Cryptography is a widely used and often referenced term that applies to many areas of information technology. For example, cryptography is the only manageable way to protect valuable data on modern disk storage systems from unauthorized use. Likewise, the personal information stored on credit cards is protected with cryptography.
The same principles apply to network security. You must use systems that employ cryptographic algorithms to properly protect data that is transmitted over the network. Review the four basic principles of cryptography with a focus on network security:
Confidentiality
Confidentiality is the most prominent idea associated with cryptography; that is, data is scrambled using a known algorithm and secret keys such that the intended party can descramble the data but an attacker cannot.
Authentication
Authentication is the process of proving your identity to your communication partner.
Integrity
Integrity checking ensures that what is received is exactly what was sent, and vice versa (for example, that no one has altered a transmission, and that the decimal point in a transferred number is exactly where it is supposed to be).
Non-repudiation
Non-repudiation ensures that I know you agreed to what was exchanged, and not someone masquerading as you. Non-repudiation implies a legal liability. I know you and only you agreed to the matter at hand and, therefore, you are legally and contractually obligated. This is the same as a signature on a contract.
Therefore, you could say that network security using appropriate cryptography can ensure the following principles of communication in a network. If two or more partners are communicating over the network they can be certain of the identity of the other parties. The participants in the conversation can be sure that data sent over the network is not forged or modified in any way. They can also be sure that no one else but their partners are able to read what has been sent. And finally, no participant can deny having sent something that has been received by the other participants.
Of course, no cryptosystem is unbreakable. No cryptosystem offers 100% safety against deliberate attacks from third parties. The more effort you put into implementing cryptography the right way, the safer you are, but no amount of effort guarantees complete safety.
2.1.2 Definition of a secure communication model for networks
All secure communications in a network follow a set of common concepts, regardless of which cryptosystem is used to establish a secure communication. Imagine two partners that want to start a secure communication over a generally insecure network. Figure 2-1 illustrates the components and concepts that are involved in such a communication.
Figure 2-1 A secure communication model for networks
When two participants want to communicate securely over the network by using cryptographic protection, a trusted third party is often used to establish trust between the participants and to assist in key distribution. This trusted third party is a necessary element of secure communication for receiving and exchanging encryption keys and for authentication purposes. Consider, for example, the concept of Public Key Infrastructure (PKI). The certificate authority (CA) is the globally trusted third party within a PKI, and authentication works only on the basis of the trustworthiness of the CA. For a detailed description of the concept of PKI, see “Public Key Infrastructure” on page 41. When both parties trust the third party, they can also trust each other, because the third party offers services to confirm the identity of the respective partners.
To cryptographically protect a message as it traverses a network, the cleartext message has to be encrypted into ciphertext before it is sent over the network. This is accomplished using a secure session based on well-known cryptographic algorithms that are negotiated between the communication partners.
These algorithms assure the authenticity, integrity, and privacy of the message. The partners also have to agree on the key values to use. This process is called key exchange and usually involves help from the trusted third party.
In 2.2.2, “Defining a cryptography strategy within your organization” on page 73 we describe in more detail where security endpoints are placed within an enterprise network and how to connect to external networks.
Another component of this communication model is the attacker (Eve). In a world with no attackers trying to gather important information and misuse it, no cryptography or network security would be needed. But that is not the world we live in. The imaginary enemy with unlimited time, skills, and resources must be considered every time communication security is evaluated.
2.1.3 Applications of cryptosystems for network security
The secure transmission of data over the network demands cryptographic systems. We now provide a summary of common cryptographic systems that achieve secure communication according to the four basic principles of security.
Key management
An important basis for the proper operation of a cryptosystem is key management. Secret keys must never be compromised or made available to anyone outside the cryptosystem. This is true for all keys used in cryptosystems making use of symmetric algorithms. For cryptosystems that make use of asymmetric algorithms, no one but the owner of a private key is ever allowed to possess it. If these conditions fail, the cryptosystem is compromised and is no longer capable of assuring data protection. The protection of keys is vital to the security of a cryptosystem.
Key management in a closed environment
In high-security environments like the System z mainframe, key management can be implemented using cryptographic hardware that is installed and managed by a centralized security facility. Master keys and key-exchange keys can be installed centrally, and the hardware facility can be delivered to the users with the necessary keys installed. In z/OS, these tasks are accomplished by the Integrated Cryptographic Service Facility (ICSF) for software key management, Crypto Express cards for key storage, and the Trusted Key Entry (TKE) workstation for secure key entry and retrieval. For a more detailed introduction to these key management capabilities of z/OS, see section “9.12 Integrated Cryptographic Service Facility” in the IBM Redbooks publication titled Security on the IBM Mainframe: Volume 1 A Holistic Approach to Reduce Risk and Improve Security, SG24-7803.
Cryptosystems for data privacy and authentication
Encrypting and decrypting large amounts of data with cryptosystems that use asymmetric algorithms is expensive in terms of time and resources. Therefore, symmetric algorithms, such as AES and Triple DES, are used for bulk data encryption. The disadvantage of symmetric algorithms, however, is that both partners (the party that encrypts the data and the party that decrypts the data) must be in possession of the same key. Key management, or the safe distribution of keys in insecure networks, is a problem with symmetric cryptosystems, even more so because data encryption keys need to be changed frequently to make an adversary's task more difficult and to limit the potential damage if a key is compromised. Therefore, today mostly hybrid cryptosystems are in place to overcome the shortcomings of both approaches. One example of such a hybrid system is the public key infrastructure, described in the following section.
Public Key Infrastructure
Public key cryptosystems can be used to transmit keys used to encrypt data to a recipient over the network. Data that has been encrypted with the public key of the recipient can be decrypted only by using the recipient's private key. If the recipient makes the public key publicly known, any sender can encrypt a message that only the recipient can decrypt, for example:
1. A sender can generate some random secret from which a symmetric key can be derived.
2. The sender encrypts that secret using the recipient’s public key.
3. The sender sends the encrypted secret to the recipient.
4. The recipient decrypts the secret using the private key.
5. The recipient derives the symmetric key from the secret using an agreed-to method.
6. After both sides have the symmetric key, they can use it to encrypt or decrypt larger messages.
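The six steps above can be sketched as a toy RSA key transport in Python. The key values here are textbook-small numbers chosen purely for illustration; a real system would use 2048-bit or larger keys, a random secret, and a standardized padding scheme such as OAEP.

```python
import hashlib

# Toy RSA key pair for the recipient (textbook numbers, far too small for real use)
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent
d = 2753             # private exponent: inverse of e modulo lcm(p-1, q-1)

# Step 1: the sender generates a random secret (a fixed small integer here
# so the toy modulus can hold it; in practice, use os.urandom)
secret = 42

# Steps 2-3: the sender encrypts the secret with the recipient's public key
ciphertext = pow(secret, e, n)

# Step 4: the recipient decrypts the secret with the private key
recovered = pow(ciphertext, d, n)

# Step 5: both sides derive the same symmetric key from the shared secret
key = hashlib.sha256(recovered.to_bytes(4, "big")).digest()

# Step 6: the resulting 256-bit key can now drive a symmetric cipher such as AES
print(recovered == secret, len(key) * 8)   # True 256
```

The expensive asymmetric operation is applied only to the small secret; everything larger is handled by the symmetric key derived in step 5, which is exactly the hybrid approach described above.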
This method works well and has reasonable performance because asymmetric algorithms are used to encrypt and decrypt only small amounts of data.
The problem with this method arises from the question: How can someone publish a public key in a secure manner? If a malicious user sends a public key to Bob claiming it is Alice's key, how can Bob be sure that the key really belongs to Alice? In this situation, digital certificates and a Public Key Infrastructure (PKI) can help. Recall the secure communication model described in 2.1.2, “Definition of a secure communication model for networks” on page 39. There, we stated that secure communication over the network usually requires a trusted third party to confirm the identities of the participants in the conversation. A PKI does exactly this job in a secure manner. A PKI is a hierarchy of authorities that issue certificates and attest to their authenticity.
Public key infrastructures provide a way to ensure the authenticity of the identity that possesses a particular private key. Authentication is thus the second cryptographic principle that PKIs can be used for.
To achieve authentication, PKI introduces the concept of a trust chain anchored in a trusted authority. These trusted parties are called certificate authorities (CAs). A CA is a trusted third party within the PKI, and certificates signed by the CA's private key are generally trusted within the PKI. A CA uses its private key to sign certificates issued to entities (users, servers, and so on); the entity's certified public key is then published within the PKI. Because the CA is trusted, and because everybody can verify a message signed with a private key by using the corresponding public key, anybody who trusts the CA can be sure that the private key that signed the message belongs to the entity whose certificate was issued by the CA. The CA itself makes sure, by all available means, that the receiver of a certificate is not forging its identity. The overall concept of this method is trust. If the CA is compromised, then the PKI that uses this CA is also compromised: the trust chain is broken. Therefore, CAs have to be safeguarded at all times. There have been cases where globally trusted CAs were compromised, which had an enormous impact on secure communication over the internet.
With PKI Services, z/OS offers a powerful PKI infrastructure. PKI Services is part of the Security Server for z/OS and shipped as a base component. It uses z/OS concepts for availability and outstanding cryptographic abilities. We describe PKI Services further later in this chapter.
Kerberos
Kerberos is a network authentication protocol that was developed in the 1980s by the Massachusetts Institute of Technology (MIT). Kerberos can use a variety of symmetric encryption algorithms, including AES and Triple DES, to provide data privacy, especially for sensitive data such as the passwords used to log in to a server. Kerberos Version 5 is the latest release. Kerberos is implemented as a component of the Network Authentication Service in z/OS. It was chosen by Microsoft Corporation as the preferred authentication technology in Windows 2000 and later, and by Sun Microsystems for Solaris 8. Kerberos is an encryption-based security system that provides mutual authentication between the users and the servers in a network environment.
Kerberos authentication is heavily based on shared secrets, which are passwords stored on the Kerberos server. Those passwords are encrypted with a symmetric cryptographic algorithm (traditionally DES) and decrypted when needed. This implies that the Kerberos server has access to decrypted passwords, which is not usually required in an authentication system that uses public key cryptography. Therefore, the servers must be placed in locked rooms with physical security to prevent an attacker from stealing a password.
For a complete description about the Kerberos Version 5 protocol, see RFC 1510 - The Kerberos Network Authentication Service (V5).
Though it is a formally accepted secure protocol, its support in software is limited in contrast to other protocols. We do not cover Kerberos in further detail in this book.
Cryptosystems for data integrity
Data integrity checking is the ability to assert that the data that is received over a communication link is identical to the data that is sent. Data can be compromised not only by an attacker, but it can also be damaged by transmission errors (although these are normally handled by the transmission protocols). Although ensuring data integrity over a network requires the use of cryptographic algorithms, these algorithms do not prevent others from viewing the data as it traverses the network as is the case with data privacy.
Message authentication codes
Data integrity in transmission of a message can be ensured using message authentication codes (MAC). A MAC can either be based on a symmetric algorithm or on a cryptographic hash function.
For symmetric algorithms, a MAC is created from the data by using a block cipher (such as DES or AES) and a secret key. The MAC is sent with the message. The receiver performs the same operation using the same key and compares the resulting MAC with the MAC that was sent with the data. If both match, the integrity of the data is assured. MACs rely on the same secret key being used by both the sender (to create the MAC) and the receiver (to verify the MAC). Because the MAC is derived from a secret key known only to the sender and receiver, the MAC can be sent in the clear. An adversary sitting between the sender and the receiver (a so-called “man-in-the-middle” attack) can alter the message but cannot forge the MAC, because the key used to create the MAC is unknown to the adversary. The mathematical principle behind the MAC is that finding a message that fits a certain MAC is as difficult as breaking the underlying cipher.
A disadvantage to this method is that, as in symmetric cryptosystems, secret keys must be shared by sender and receiver. Furthermore, because the receiver has the key that is used in MAC creation, it is difficult to make it impossible for the receiver to forge a message and claim it was sent by the sender.
Constructing MACs from hash algorithms is a different approach to data integrity. Hash algorithms digest (condense) a block of data into a shorter string (usually 128 or 160 bits). A MAC constructed from a hash algorithm is called a hash-based MAC (HMAC).
The principles behind hash algorithms are that the message cannot be recovered from the message digest and that it is hard to construct a block of data that has the same message digest as another given block.
The following list includes some of the common message-digesting algorithms:
SHA-2 SHA-2 is an improved algorithm and generates a 256-, 384-, or 512-bit hash value. SHA-2 is considered to generate message digest values that are less likely to yield collisions.
MDC-4 The MDC-4 algorithm calculation is a one-way cryptographic function that is used to compute the hash pattern of a key part. MDC uses encryption only, and the default key is 5252 5252 5252 5252 2525 2525 2525 2525. It is used by the TKE.
MD2 This algorithm was developed by Ron Rivest of RSA Data Security, Inc.®. The algorithm is used mostly for PEM certificates. MD2 is fully described in RFC 1319. Because weaknesses have been discovered in MD2, its use is discouraged.
MD5 This algorithm was developed in 1991 by Ron Rivest. The algorithm takes a message of arbitrary length as input and produces as output a 128-bit message digest of the input. The MD5 message digest algorithm is specified by RFC 1321, The MD5 Message-Digest Algorithm. Because weaknesses have been discovered in MD5, its use is discouraged.
SHA-1 This algorithm was developed by the US Government. The algorithm takes a message of arbitrary length as input and produces as output a 160-bit hash of the input. SHA-1 is fully described in standard FIPS 180-1.
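The digest lengths in the list above can be checked directly with Python's standard hashlib module. This is purely an illustration of the algorithms' fixed output sizes; nothing here is specific to z/OS, and the sample message is arbitrary.

```python
import hashlib

msg = b"The quick brown fox jumps over the lazy dog"

# Each algorithm condenses the arbitrary-length input to a fixed-size digest
print(len(hashlib.md5(msg).digest()) * 8)      # 128-bit digest
print(len(hashlib.sha1(msg).digest()) * 8)     # 160-bit digest
print(len(hashlib.sha256(msg).digest()) * 8)   # 256-bit digest (SHA-2 family)

# Changing even a single character yields a completely different digest
print(hashlib.sha256(msg).digest() == hashlib.sha256(msg + b"!").digest())  # False
```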
To construct an HMAC, the sender of a message (block of data) appends a secret key, to which sender and receiver must agree before the HMAC is created, to the data, and uses a hash algorithm (for example, SHA-1) to create a message digest from the combination. The message digest is sent together with the message. The receiver runs the same algorithm over the message using the secret key and compares the resulting message digest to the one sent with the message. If both match, the message is unchanged. With regard to attack schemes and the problems of key management, the same principles hold for HMACs as for MACs.
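The HMAC exchange just described can be sketched with Python's standard hmac module; the key and message values below are arbitrary examples.

```python
import hmac
import hashlib

key = b"shared-secret-agreed-in-advance"     # known only to sender and receiver
message = b"transfer 100.00 to account 4711"

# Sender: compute the HMAC over the message and send both in the clear
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver: recompute the HMAC over the received message and compare
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))    # True: message unchanged

# A man-in-the-middle can alter the message but cannot forge a matching HMAC
tampered = b"transfer 900.00 to account 4711"
print(hmac.new(key, tampered, hashlib.sha256).hexdigest() == tag)  # False
```

Note the use of a constant-time comparison (compare_digest) on the receiving side, a standard precaution against timing attacks when verifying MACs.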
Cryptosystems for data non-repudiation
Data non-repudiation is crucial to modern ways of doing business over networks, including the signing and verifying of contracts closed electronically. Any piece of data sent over a network that is sensitive to contractual or legal issues, or simply regarded as important by you or your customers, needs a system that prevents the denial of authorship of that data. Such systems protect the sender, who can prove authorship, and they protect the receiver, who can rely on the non-deniability of the sender's authorship.
Digital Signatures
Digital signatures are an extension of data integrity. Although data integrity only ensures that the data received is identical to the data sent, digital signatures also provide data non-repudiation. For the purpose of authentication in secure communications, digital signatures provide identification of the communicating parties.
The creator of a message or electronic document that is to be signed uses a hashing algorithm, such as SHA-1 or SHA-2, to create a message digest from the data. The message digest and some information that identifies the sender are then encrypted with the sender's private key. This encrypted information is sent together with the data. The receiver uses the sender’s public key to decrypt the message digest and sender’s identification. The receiver then uses the message digesting algorithm to compute the message digest from the data. If this message digest is identical to the one recovered after decrypting the digital signature, the message is authentic, and the signature is recognized as valid.
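The sign-and-verify flow just described can be illustrated with a toy RSA signature in Python. The key values are textbook-small numbers, and the digest is reduced modulo the toy modulus, which no real system would do; a production signature would use a 2048-bit or larger key and a standardized padding scheme.

```python
import hashlib

# Toy RSA key pair for the signer (textbook numbers, far too small for real use)
n, e, d = 3233, 17, 2753

message = b"I agree to the terms of the contract."

# Signer: hash the message, then encrypt the digest with the PRIVATE key
digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n  # toy reduction
signature = pow(digest, d, n)

# Verifier: decrypt the signature with the signer's PUBLIC key and
# independently recompute the digest from the received message
recovered = pow(signature, e, n)
expected = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
print(recovered == expected)   # True: signature valid, message unaltered
```

The asymmetry is the essential point: only the holder of the private key d can produce the signature, but anyone with the public key (n, e) can verify it.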
With digital signatures, only public-key encryption can be used. If symmetric cryptosystems are used to encrypt the signature, it is difficult to make sure that the receiver (having the key to decrypt the signature) could not misuse this key to forge a signature of the sender. If the private key of the sender is well-protected (kept secret), no one else can forge the sender's signature.
There are significant differences between encryption using public key cryptosystems and digital signatures:
With encryption, the sender uses the receiver’s public key to encrypt the data, and the receiver decrypts the data with a private key. Therefore, everybody can send encrypted data to the receiver that only the receiver can decrypt.
With digital signatures, the sender uses the private key to encrypt the signature, and the receiver decrypts the signature with the sender’s public key. Therefore, only the sender can encrypt the signature, but anyone who receives the signature can decrypt and verify it. The tricky thing with digital signatures is the trustworthy distribution of public keys.
Public Key Infrastructure can provide functions for the proper distribution and management of keys used for digital signatures in an enterprise. See “Public Key Infrastructure” on page 41 for more information about PKI.
2.1.4 Overview of the z/OS TCP/IP cryptographic infrastructure
Cryptography is, as stated before, a widely used concept throughout information technology. Let us take a closer look at the overall architecture for cryptography for network security on the mainframe. z/OS offers a wide range of products and protocols to secure the communication within a network. This architecture is shown in Figure 2-2 on page 45.
Figure 2-2 z/OS TCP/IP cryptographic architecture for applications
In the next sections of this chapter, we describe the following components of z/OS Communications Server, which enable an organization to secure network communication using cryptography:
Transport Layer Security (TLS)
Securing TCP data sent over the network, providing data privacy, data integrity and authentication (to some extent), explained in 2.1.5, “Transport Layer Security on z/OS” on page 46.
Application Transparent TLS (AT-TLS)
A z/OS Communications Server feature that protects application traffic using System SSL based on policy rather than requiring source code changes in the application. AT-TLS policy is provided through the Policy Agent as explained in 2.1.6, “AT-TLS” on page 51.
Virtual private networks using IP Security (IPSec) and Internet Key Exchange (IKE)
Securing any kind of IP traffic (TCP, UDP, ICMP, and so on) using virtual private networks as explained in 2.1.7, “IPSec” on page 54.
OpenSSH for z/OS
Shipped as a part of IBM Ported Tools for z/OS, OpenSSH provides secure access to UNIX System Services with data privacy and integrity, as explained in 2.1.8, “OpenSSH on z/OS” on page 60.
PKI Services z/OS
The z/OS implementation of a Public Key Infrastructure. This component provides a reliable, scalable and secure PKI, including a full-function certificate authority, leveraging the System z cryptographic infrastructure, explained in 2.1.9, “PKI services” on page 65.
2.1.5 Transport Layer Security on z/OS
z/OS uses its own implementation of the Transport Layer Security and Secure Sockets Layer family of protocols. Let us provide a short introduction to TLS and the features that z/OS System SSL offers.
TLS operation in general
Transport Layer Security (TLS) is an IETF-standardized client/server communication protocol based on the Secure Sockets Layer protocol which was developed by Netscape for securing communications that use TCP/IP sockets. It uses asymmetric and symmetric key cryptography for these purposes:
For server authentication using digital certificates
To provide data privacy and integrity using symmetric encryption and hash-based MACs (mostly SHA-1 or MD5 today)
For optional client authentication using digital certificates
The TLS protocol is based on Netscape's SSL version 3 protocol. Given the success and popularity of SSL for securing HTTP traffic, the Internet Engineering Task Force (IETF) adopted SSLv3 with some minor modifications and standardized it in RFC 2246 as TLS version 1.0. As such, SSLv3 was the last version of SSL proper; all new development of the protocol family is done under the TLS protocol (the latest version as of this writing is TLSv1.2). When a client and server negotiate a TLS/SSL session, one of the things they must agree on is the protocol version that they will use, which might be anywhere from SSLv2 (not recommended) up through TLSv1.2. For our purposes, we use the terms TLS and SSL interchangeably, but we prefer the term TLS to refer to the general protocol family.
TLS services are typically invoked by the application program, so an application has to be designed to support TLS to benefit from TLS protection. It is also worth noting that, although TLS is well known as a way to secure HTTP communications, any kind of TCP socket communication is a candidate for TLS protection, with proper code support. For instance, the z/OS TN3270 server and the FTP and LDAP servers and clients are TLS-enabled. This increases the TLS protocol's popularity.
There are two main peer authentication options under TLS. With server authentication, the server proves its identity to the client by passing its X.509 certificate to the client during the TLS handshake. With client authentication or mutual authentication, the TLS client also proves its identity by passing its certificate to the server after it has authenticated the server's identity.
Note that the TLS client and server must have the appropriate certificate authority certificates on their key rings to verify the peer certificate. In addition to verifying the validity of a certificate, TLS can also ensure that the peer certificate has not been revoked by checking a Certificate Revocation List (CRL) retrieved from an LDAP directory entry managed by the peer's certificate authority.
TLS consists of two major phases. The first phase is known as the handshake phase, under which the two communication partners establish a secure session with each other, and the second one is known as the data transmission phase, under which application traffic is cryptographically protected using the secure session.
The following list gives an overview of the steps necessary to perform a TLS handshake:
1. A TCP connection is established between the client and the server.
2. The client indicates that it wants to perform TLS operation with the server by sending a client hello message to the server. This message also advertises the protocol and ciphers that the client is willing to use.
3. The server agrees to the TLS conversation and sends back an acknowledgment to the client called a server hello message. The server hello message indicates which protocol and cipher suite the server has chosen, and it also contains the server's certificate (and, optionally, a request for the client's certificate).
4. The client verifies the digital certificate sent by the server and optionally sends its own certificate to the server for client authentication.
5. The client and server then exchange messages to establish a set of secret keys that will be used to protect the application data according to the agreed-upon cipher suite. These messages are encrypted, at first with the server's public key, and later with the agreed-upon session keys. After this exchange is complete, the secure session is ready to protect application data.
6. The data transmission phase begins. In this phase, the application data is packaged within TLS records, which are cryptographically protected using the negotiated ciphers and keys.
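The protocol and cipher negotiation in steps 2 and 3 is visible from the client side in any TLS implementation. As an illustration in Python's standard ssl module (rather than System SSL), this sketch shows how a client constrains the protocol versions and certificate checking it will accept before the handshake runs:

```python
import ssl

# Client-side context: refuses anything older than TLS 1.2 and requires
# the server certificate to chain to a trusted CA (steps 3 and 4 above)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_default_certs()                 # trusted CA certificates for step 4

print(ctx.check_hostname)                        # True: verify server identity
print(ctx.verify_mode == ssl.CERT_REQUIRED)      # True: peer certificate required

# ctx.wrap_socket(sock, server_hostname=...) would then run the handshake
# over an established TCP connection (step 1) and negotiate the session keys
```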
TLS is known to be a demanding protocol in terms of computing resources in the handshake phase. These resources are needed to support the heavy calculation required by the asymmetric cryptography that the handshake uses during this phase. The data transmission phase, on the other hand, does not use asymmetric cryptography. Instead, it uses more efficient symmetric algorithms to protect application data (which can be quite large in some cases). Even though these algorithms are less expensive than those used during the handshake phase, they are still computationally intensive. For this reason, the data transmission phase makes heavy use of the System z CP Assist for Cryptographic Function (CPACF) hardware facility, which implements most of the symmetric algorithms within the System z instruction set.
TLS implementation on z/OS
Two implementations of the SSL/TLS family of protocols are available on z/OS: System SSL, which provides SSL/TLS functions as part of z/OS Cryptographic Services for C/C++ applications, and the Java Secure Sockets Extension (JSSE), a pure Java implementation of SSL/TLS.
System SSL
System SSL provides a complete TLS/SSL implementation for C/C++ applications on z/OS. System SSL provides a library with a full set of APIs that TCP sockets applications can use to protect their network traffic. Hence, these applications only have to properly call the System SSL API, as opposed to having been designed with complete SSL support embedded. These applications can act either as an SSL client or an SSL server, and should call the API functions accordingly. System SSL supports the SSL V2.0, SSL V3.0, TLS V1.0, TLS V1.1, and TLS V1.2 protocols. TLS V1.2 is the latest version of the protocol family and is supported only with z/OS V2.1.
To negotiate TLS sessions, the TLS implementation (System SSL or JSSE) must have access to the necessary certificates and private keys. System SSL supports three different key and certificate stores:
RACF key ring
ICSF PKCS#11 token
An HFS file built and managed by the gskkyman utility
Figure 2-3 is a schematic view of the z/OS System SSL implementation. The SSL-enabled server, on the right side, invokes the System SSL API, shown in the center of the picture, after it has established the TCP connection with the client; that is, after the TCP/IP accept socket function is unblocked by the client's response.
Note that SSL-enabled applications usually have a set of directives related to the SSL environment, which are set by the user and are transferred to System SSL via the API. For instance, when setting up the z/OS HTTP server to use SSL, a directive in the server configuration file specifies the location of the keys and certificates to be used by the server:
KeyFile racf_keyring_name SAF
When protecting communication with TLS, the z/OS HTTP server invokes a variety of System SSL APIs. One of those APIs is the gsk_environment_open() function, which takes the key ring value as an input parameter.
Upon invocation via the API, the System SSL code encapsulates the application data in cryptographically protected TLS records, which are then sent by the application via regular TCP/IP sockets functions. Inbound messages are handled in the same way: System SSL reads TLS records built by the client from the socket, decapsulates them (including decryption), and then passes the cleartext data to the application.
The diagram in Figure 2-3 does not show the sessionID cache that is maintained in the application address space (unless sysplex sessionID caching is in use, as explained later).
Figure 2-3 System SSL implementation
System SSL includes software implementations of commonly used cryptographic algorithms. As mentioned previously, some TLS operations can be fairly computing intensive, and therefore System SSL makes use of the cryptographic hardware in System z whenever possible. If hardware cryptography is not enabled, or if the selected algorithms are not supported by the hardware, System SSL performs the cryptographic algorithms with its own software encryption routines.
Java Secure Socket Extension
The Java Secure Socket Extension (JSSE) enables secure Internet communications using SSL/TLS for Java applications. It provides a framework and an implementation for a Java version of the SSL and TLS protocols and includes functions for data encryption, server authentication, message integrity, and optional client authentication. Using JSSE, developers can provide for the secure passage of data between a client and a server running any application protocol, such as Hypertext Transfer Protocol (HTTP), Telnet, or FTP, over TCP/IP. JSSE was previously an optional package (standard extension) to the Java 2 SDK, Standard Edition (SDK) versions 1.2 and 1.3. JSSE was integrated into the SDK, v 1.4.
The JSSE API is capable of supporting SSL versions 2.0 and 3.0, Transport Layer Security (TLS) 1.0, and from service refresh 10, TLS 1.1 and 1.2. The IBMJSSE2 implementation in the SDK implements SSL 3.0, TLS 1.0, and from service refresh 10, TLS 1.1 and 1.2. It does not implement SSL 2.0.
JSSE includes the following important features:
Included as a standard component of JRE 1.4 and later
Extensible, provider-based architecture
Implemented in 100% Pure Java, so workload can be offloaded to zAAP coprocessors.
Provides API support for SSL versions 2.0 and 3.0 and an implementation of SSL version 3.0
Provides API support and an implementation for TLS versions 1.0, 1.1 and 1.2
Includes classes that can be instantiated to create secure channels (SSLSocket, SSLServerSocket, and SSLEngine)
Provides support for several cryptographic algorithms commonly used in cipher suites
The TLS/SSL information in this book focuses on System SSL implementation of SSL/TLS, rather than JSSE.
FIPS 140-2 considerations
The National Institute of Standards and Technology (NIST) is the US federal technology agency that works with industry to develop and apply technology, measurements, and standards. One of the standards published by NIST is the Federal Information Processing Standard Security Requirements for Cryptographic Modules, referred to as FIPS 140-2. FIPS 140-2 specifies a set of requirements that a cryptographic module must meet to be considered secure and reliable in the way it preserves the integrity of its cryptographic algorithms and the way it protects the cryptographic keys it uses.
System SSL provides an execution mode that is designed to meet the NIST FIPS 140-2 Level 1 criteria. To this end, System SSL can run in either FIPS mode or non-FIPS mode; by default, it runs in non-FIPS mode. To meet the FIPS 140-2 Level 1 criteria, System SSL, when executing in FIPS mode, is more restrictive in the cryptographic algorithms, protocols, and key sizes that it supports.
Table 2-1 shows the cryptographic algorithms available with System SSL when executing in FIPS and non-FIPS modes.
Table 2-1 System SSL FIPS and non-FIPS mode key lengths in bits

Algorithm        non-FIPS size            Hardware    FIPS size                Hardware
RC2              40 and 128               -           -                        -
RC4              40 and 128               -           -                        -
DES              56                       X           -                        -
TDES             168                      X           168                      X
AES              128 and 256              X           128 and 256              X
MD5              128                      -           -                        -
SHA-1            160                      X           160                      X
SHA-2            224, 256, 384 and 512    X           224, 256, 384 and 512    X
RSA              512-4096                 X           1024-4096                X
DSA              512-1024                 -           1024                     -
Diffie-Hellman   512-2048                 -           2048                     -
Use of cryptographic hardware with System SSL
System SSL uses the cryptographic infrastructure in System z through the z/OS ICSF component. ICSF provides a complete set of cryptographic primitives and access to the System z cryptographic hardware. If ICSF is available, System SSL calls it to perform many of its cryptographic operations, many of which directly use available hardware features. If the relevant hardware features are not available, ICSF performs the functions in software. There is no need for additional manual configuration; this is an automated process performed by System SSL itself.
Figure 2-4 on page 51 illustrates when and how cryptographic coprocessors on System z are used for System SSL processing. This diagram also applies to AT-TLS, which is discussed in 2.1.6, “AT-TLS” on page 51.
Figure 2-4 Exploitation of the cryptographic coprocessor in System z in System SSL
For more information about appropriate cryptographic ciphers and what to keep in mind when choosing a cipher, see 2.2.1, “Choosing appropriate cryptographic algorithms for network security” on page 69.
 
Recommended reading: For more information about System SSL, see z/OS Cryptographic Services System SSL Programming, SC14-7495-00.
2.1.6 AT-TLS
As we discussed earlier, TLS-enablement of an application program typically requires some fairly extensive programming changes. To make TLS-enablement less costly and more available to TCP applications, z/OS offers a unique feature called Application Transparent Transport Layer Security (AT-TLS). With AT-TLS you can now deploy TLS encryption without the time and expense of re-coding your applications.
AT-TLS invokes System SSL on behalf of existing clear-text socket applications without requiring any application changes; hence the term application transparent.
AT-TLS uses policy-based networking in z/OS Communications Server through the Policy Agent. Principles of policy-based networking in z/OS are discussed in Chapter 3, “TCP/IP security” on page 89.
Socket applications continue to send and receive clear text over the socket, but data sent over the network is protected by System SSL. Support is provided for applications that require awareness of AT-TLS for status or to control the negotiation of security.
AT-TLS provides application-to-application security using policies. The policies are defined and loaded into the stack by Policy Agent. When AT-TLS is enabled and a newly established connection is first used, the TCP layer of the stack searches for a matching AT-TLS policy. If no policy is found, the connection is made without AT-TLS involvement.
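As an illustration of such a policy, the following sketch shows the general shape of an AT-TLS rule and its associated actions as defined for the Policy Agent. The rule name, port number, and key ring are hypothetical examples; see the z/OS Communications Server IP Configuration Reference for the complete statement syntax:

```
TTLSRule SecureServerRule               # hypothetical rule name
{
  LocalPortRange            4000        # hypothetical application port
  Direction                 Inbound
  TTLSGroupActionRef        GroupAct1
  TTLSEnvironmentActionRef  EnvAct1
}
TTLSGroupAction GroupAct1
{
  TTLSEnabled               On          # corresponds to "Enabled On"
}
TTLSEnvironmentAction EnvAct1
{
  HandshakeRole             Server
  TTLSKeyringParms
  {
    Keyring                 SERVER/SSLRing   # hypothetical SAF key ring
  }
}
```

Inbound connections matching the rule are then protected by System SSL using the certificate on the named key ring, with no change to the application.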
Figure 2-5 on page 52 illustrates the flow of AT-TLS operation with z/OS acting as the server of a TCP socket application and a remote client initiating the connection. If a matching AT-TLS rule is found, then a TLS or SSL session is set up to protect the connection as specified by the action associated with that rule.
 
Be aware: AT-TLS only supports TCP-based applications. It cannot be used to provide security for applications based on other transport layer protocols (such as UDP). To provide security independent of the transport layer protocol, consider taking advantage of IP Security (IPSec) support.
Figure 2-5 AT-TLS operation with z/OS as a server
This diagram illustrates the following flow:
1. The client connection to the server is established in the clear (no security, TCP handshake only).
2. After accepting the new connection, the server issues a read request on the socket. The TCP layer checks the AT-TLS policy and sees that AT-TLS protection is configured for this connection, so it prepares for the client-initiated SSL handshake.
3. The client initiates the SSL handshake, and the TCP layer invokes System SSL to perform the handshake under the authority of the server.
4. The client sends data traffic under protection of the new SSL session.
5. The TCP layer invokes System SSL to decrypt the data and then delivers the cleartext inbound data to the server.
All of this is transparent to the remote application, because it has no way of determining that the handshake and encryption are being done through AT-TLS rather than through direct System SSL calls. Because AT-TLS simply uses System SSL, almost all of the System SSL features and capabilities are available, including the latest TLS versions and cipher suites.
When AT-TLS is enabled, statements in the Policy Agent define the security attributes for connections that match AT-TLS rules. This policy-driven support can be deployed transparently underneath many existing sockets, leaving the application unaware of the encryption and decryption being done on its behalf. Support is also provided for applications that need more control over the negotiation of TLS or need to participate in client authentication. However, these applications must be aware of AT-TLS.
Though we have been focusing on the transparent nature of AT-TLS so far, there are some applications that want to be aware of the TLS/SSL sessions being used to protect their network traffic and others that actually want to actively participate in the decision to use TLS/SSL. As such, IOCTL support is provided for applications that need to be aware of AT-TLS for status or to control the negotiation of security.
AT-TLS application types
Applications have different requirements concerning security. Certain applications need to be aware of when a secure connection is being used. Others might need to assume control if and when a TLS handshake occurs. For this reason, there are different application types supported by AT-TLS. These include the following application types:
Not-enabled applications:
 – Pascal API and web Fast Response Cache Accelerator (FRCA) applications are not supported at all by AT-TLS.
 – By default, when there are no AT-TLS policies in place, an application is considered to be not AT-TLS enabled. This also applies to applications that start during the InitStack window and to connections whose policy explicitly says Enabled Off.
 – A third category applies to FTP and Telnet. They can be not-enabled for AT-TLS in the Policy Agent, but function with their native TLS/SSL capabilities independent of AT-TLS.
 
Basic applications:
 – The AT-TLS policy says Enabled On.
 – The application is unchanged and unaware of AT-TLS (no AT-TLS IOCTL calls).
Aware applications:
 – The AT-TLS policy says Enabled On.
 – The application is changed to use the SIOCTTLSCTL IOCTL to extract TLS session information, such as the peer's X.509 certificate or the cipher suite being used. These applications typically access this information for additional authentication or reporting purposes.
Controlling applications:
 – The AT-TLS policy says Enabled On and ApplicationControlled On.
 – The application logic decides when to initiate a secure session.
 – Based on that logic, the application uses the SIOCTTLSCTL IOCTL to tell AT-TLS when to start the TLS/SSL handshake and when to terminate it.
z/OS Communications Server keeps AT-TLS current with z/OS System SSL, including TLS 1.2. Support is added for the following functions:
 – Renegotiation (RFC 5746) in z/OS 1.12
 – Elliptic Curve Cryptography (RFC 4492 and RFC 5480) in z/OS V1.13
 – TLSv1.2 (RFC 5246) in z/OS 2.1
 – AES GCM cipher suites (RFC 5288) in z/OS 2.1
 – Suite B Profile (RFC 5430) in z/OS 2.1
 – ECC and AES GCM with SHA-256/384 (RFC 5289) in z/OS 2.1
 – Twenty-one new cipher suites
 • 11 new HMAC-SHA256 cipher suites
 • 10 new AES-GCM cipher suites
 – Requires new System SSL support
 – Support for Suite B cipher suites
Implementing TLS protocols directly into applications (without using AT-TLS) requires modification to incorporate a TLS capable API toolkit. Only the z/OS System SSL toolkit supports RACF key rings and the associated advantages (user ID mapping, SITE certificates, and more).
 
 
Recommended reading: For more information about AT-TLS, see IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 4 Security and Policy-Based Networking, SG24-7999.
2.1.7 IPSec
The Internet Protocol Security (IPSec) family of protocols enables secure communication over insecure networks. Much like TLS, it encrypts the traffic and provides data integrity and data origin authentication. In contrast to TLS, IPSec operates at the IP layer, not an upper protocol layer such as TCP. As such, it can be used to secure all traffic at the IP layer, regardless of upper layer protocols. IPSec connections are commonly referred to as virtual private networks (VPNs).
IPSec defines a unidirectional connection between two endpoints. These connections are commonly referred to as tunnels. There are two types of IPSec tunnels:
Manual IPSec tunnels
The security parameters and encryption keys are configured statically and are managed by a security administrator manually. Manual tunnels are not commonly implemented.
Dynamic IPSec tunnels
The security parameters are negotiated, and the encryption keys are generated dynamically using Internet Key Exchange (IKE).
The characteristics of security in an IPSec communication are defined by Security Associations (SAs). The concept of the SA is crucial to IPSec. An SA defines how a particular type of unidirectional traffic is protected between two endpoints. Because IPSec SAs are unidirectional, they are typically established in pairs: one for inbound traffic and one for outbound traffic. SAs may be defined in varying widths. For example, you may define an SA to protect all traffic between two different networks (a wide SA), only the traffic between two specific IP addresses and ports (a narrow SA), or somewhere in between. Let us now take a look at the basic concepts of IPSec.
IPSec uses the z/OS Communication Server policy-based networking infrastructure. IPSecurity policies are defined using the Configuration Assistant for both IP filtering and IPSec protection. The IPSecurity policy is then read by the Policy Agent, which installs the IP filtering and IPSec rules into the TCP/IP stack and the Internet Key Exchange (IKE) daemon on z/OS.
IPSec is defined and maintained by the Internet Engineering Task Force (IETF) through a variety of RFCs, including these primary ones:
RFC 4301: Security Architecture for the Internet Protocol
This RFC and its associated RFCs define the means of transporting data securely over an IP network.
RFC 2409: The Internet Key Exchange (IKE)
This RFC and its associated RFCs define the initial version of IKE, now called IKEv1.
RFC 5996: Internet Key Exchange (IKEv2) Protocol
This RFC and its associated RFCs define version 2 of IKE.
 
Modes of operation
There are two different modes of encapsulating IP packets under IPSec:
Transport mode
Protection of the IP payload information (including the transport header, if present), but not the IP header. The IP header remains as-is and is used as usual when the packet travels through the network.
Tunnel Mode
Protection of the complete IP packet and inclusion of that packet into an outer IP packet. This concept is called encapsulation. The encapsulating IP packet is provided with a new IP address of the IPSec security endpoint, which does not necessarily have to be the data endpoint. This mode of operation is a true VPN.
Transport mode is only used when the security endpoints of the communication are also the data endpoints. Therefore, transport mode may be used only for host-to-host communication. With tunnel mode, the security endpoint could be different from the data endpoint. For example, the security endpoint is a network device like a router or a firewall, but the data endpoint is a host behind the router. Tunnel mode may be used to secure communication between two networks, a host and a network or two hosts. Figure 2-6 on page 56 illustrates the modes of operation.
Figure 2-6 IPSec modes of operation
Authentication Header protocol
As the name suggests, IPSec Authentication Header (AH) authenticates IP packets, ensuring that they came from a legitimate origin host and that they have not been changed. IPSec AH provides the following features:
Data integrity by authenticating the entire IP packet using a message digest that is generated by algorithms such as HMAC-MD5 or HMAC-SHA
Data origin authentication by using a shared secret key to create the message digest
Replay protection by using a sequence number field within the AH header.
Authentication Header does not provide encryption of the data sent over the network. If you need encryption, you have to use Encapsulating Security Payload.
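The keyed message digest mechanism that AH relies on can be illustrated with the standard Java cryptography API. This is a sketch of the HMAC construction only, not the actual AH header layout; the key and packet bytes are invented for the example:

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class AhDigest {
    // Compute an HMAC-SHA1 digest over packet bytes with a shared secret key,
    // the same construction AH uses for integrity and origin authentication.
    static byte[] hmacSha1(byte[] key, byte[] data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        return mac.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] key    = "shared-secret".getBytes(StandardCharsets.US_ASCII);    // illustrative key
        byte[] packet = "example IP packet bytes".getBytes(StandardCharsets.US_ASCII);

        byte[] digest = hmacSha1(key, packet);
        System.out.println("HMAC-SHA1 length: " + digest.length + " bytes");    // 20 bytes

        // A receiver holding the same key recomputes and compares the digest;
        // any change to the packet produces a different value.
        byte[] tampered = hmacSha1(key,
            "Example IP packet bytes".getBytes(StandardCharsets.US_ASCII));
        System.out.println("Matches after tampering: "
            + java.util.Arrays.equals(digest, tampered));                        // false
    }
}
```

Without the shared key, an attacker cannot produce a matching digest for an altered packet, which is what provides data origin authentication.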
Encapsulating Security Payload protocol
Encapsulating Security Payload (ESP) provides additional protection beyond AH:
It provides privacy protection by encrypting the IP packet.
In tunnel mode, it applies authentication, integrity, and privacy protection to the entire original IP packet, including its IP header. This is possible because ESP completely encapsulates the original IP packet within an outer IP packet. For most users, the authentication protection provided by ESP should be sufficient, and AH should not be necessary if ESP is already being used for encryption.
For these reasons, it is best to use ESP rather than AH.
Internet Key Exchange protocol
Recall what we said earlier in this section about manual and dynamic IPSec tunnels. Using a manual IPSec tunnel with pre-shared keys managed by a security administrator is not a scalable solution and can weaken the security of the symmetric keys, because these manually configured keys can be compromised easily and cannot be changed during an IPSec session. To refresh the symmetric keys for a given SA, the session has to be closed; then the key can be changed and a new session can be established. In most production environments, this is not an acceptable solution.
The IKE protocol, which is implemented in z/OS Communications Server by the IKE daemon, manages the transfer and periodic changing of security keys between senders and receivers and is required when implementing IPSec dynamic tunnels. Key exchange, defined in IKE, is normally a multistep process:
1. First, the partners authenticate each other and negotiate a secure logical connection, over which IPSec (AH and ESP) SAs can be negotiated between the two partners. This connection is called an IKE Security Association (IKE SA) and is sometimes referred to as the phase 1 tunnel or IKE tunnel.
2. After the logical connection is in place, the partners negotiate AH and ESP SAs to be used to protect specific types of data traffic (IPSec or child SA). These SAs are sometimes referred to as the phase 2 tunnel or IPSec tunnel. The IKE messages exchanged to negotiate the phase 2 tunnels are protected by the IKE SA.
3. Thereafter, all of these SAs and their associated session keys are renegotiated periodically. The IKE daemon uses the IP security policies that you define in the Policy Agent to manage the keys dynamically.
The IPSec command can display, activate, refresh, and deactivate both IKEv1 and IKEv2 tunnels. Figure 2-7 illustrates the concepts of IKE for IPSec processing.
Figure 2-7 IKE flow for IPSec processing
There are two methods of authenticating the IPSec peers when using dynamic tunnels:
Authenticate the IKE partner with a pre-shared key
Authenticate the partner using digital signature mode
A pre-shared key defines secret values that are shared between the IKE peers to authenticate the partner during the Phase 1 IKE exchange and to provide a value to the Diffie-Hellman exchange that produces a cryptographic key to protect and authenticate the Phase 1 IKE negotiations.
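The Diffie-Hellman exchange at the heart of the phase 1 negotiation can be sketched with the standard Java cryptography API. This illustrates only the key agreement itself, not the IKE message formats; in a real negotiation the group is agreed between the peers, which the sketch models by having the responder reuse the initiator's group parameters:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;
import javax.crypto.spec.DHParameterSpec;

public class DhExchange {
    // Each peer combines its own private key with the partner's public key;
    // both arrive at the same shared secret without ever transmitting it.
    static byte[] agree(KeyPair mine, DHPublicKey theirs) throws Exception {
        KeyAgreement ka = KeyAgreement.getInstance("DH");
        ka.init(mine.getPrivate());
        ka.doPhase(theirs, true);
        return ka.generateSecret();
    }

    public static void main(String[] args) throws Exception {
        // Initiator generates a 2048-bit Diffie-Hellman key pair
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DH");
        kpg.initialize(2048);
        KeyPair initiator = kpg.generateKeyPair();

        // Responder generates its pair in the same group
        DHParameterSpec group = ((DHPublicKey) initiator.getPublic()).getParams();
        KeyPairGenerator kpg2 = KeyPairGenerator.getInstance("DH");
        kpg2.initialize(group);
        KeyPair responder = kpg2.generateKeyPair();

        byte[] a = agree(initiator, (DHPublicKey) responder.getPublic());
        byte[] b = agree(responder, (DHPublicKey) initiator.getPublic());
        System.out.println("Shared secrets match: " + Arrays.equals(a, b)); // true
    }
}
```

In IKE, the resulting shared secret is combined with the pre-shared key (or signature-based authentication) to derive the keys that protect the phase 1 negotiation.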
Digital signature mode authentication relies on X.509 certificate exchanges to provide verification of the trusted partner. The z/OS IKE daemon (IKED) supports two digital signature algorithms: RSA and ECDSA. The certificates must be stored in an SAF key ring accessible to the IKE daemon. The local identity of an IKE peer must be configured in the IPSec policy, and it must represent an identity established in an X.509 certificate on the local IKE key ring. The remote identity of an IKE peer must also be configured in the IPSec policy, and it must represent an identity established in the X.509 certificate presented by the remote IKE peer during Phase 1 negotiations.
When implementing digital signature mode on z/OS, certificate management operations can be performed by the local z/OS IKE daemon (IKED) or by a Network Security Services daemon (NSSD) running locally or in another, more secure network zone on z/OS. IKED supports local digital signature mode for IKE version 1 (IKEv1) negotiations only, but NSSD supports both IKEv1 and IKE version 2 (IKEv2).
When IKE is used, SAs for the secure communication are derived using exchange protocols defined in the relevant IKE RFCs. The mode of operation to perform the exchange might vary based on definitions in the IPSec policy. The following list provides an overview of the negotiation modes:
IKEv1 Main Mode
This mode is used to establish a phase 1 tunnel and is also referenced as Identity Protection Mode. When using Main Mode, the identities of the communicating parties will be secured using a previously negotiated secret. Authentication, key exchange and SA information will be transported over the network separately. It takes longer, but it provides more complete protection than aggressive mode.
IKEv1 Aggressive Mode
This mode is an alternative to Main Mode, also used to establish a phase 1 tunnel. It allows for the authentication, key-exchange and SA-related information to be sent together over the network. This is a faster approach than Main Mode using fewer messages, but identity information is transported in clear over the network.
IKEv1 Quick Mode
This mode is used to negotiate phase 2 tunnels after a phase 1 tunnel is in place.
IKEv2
IKEv2 only offers a single exchange mode that is more efficient than those offered by IKEv1. It combines the identity protection of IKEv1 Main Mode with a reduced number of messages (four) similar to IKEv1 Aggressive Mode. In addition, one complete IKEv2 initial exchange negotiates a phase 1 and a phase 2 tunnel. If additional phase 2 tunnels are required, then they can be negotiated using subsequent two-message exchanges (one less than IKEv1 Quick Mode).
FIPS 140-2 mode considerations
As we have described in “FIPS 140-2 considerations” on page 49, System SSL can be configured to operate in FIPS 140 mode. The z/OS IPSec implementation also provides a complete FIPS 140 mode of operation. These restrictions also apply to IPSec operation when FIPS mode is enabled. The IKE daemon, the Network Security Services daemon, and the TCP/IP stack’s IPSECURITY support can be configured for FIPS mode. As with System SSL, the IPSec FIPS 140 mode restricts the set of cryptographic algorithms and key lengths to ensure sufficient strength of the cryptographic operations.
Network Security Services daemon
The Network Security Services daemon (NSS or NSSD) provides several security services to different types of clients. For IPSec environments, NSS offers digital certificate and remote management services to z/OS IKE daemons. For appliances like IBM DataPower SOA appliances, NSS offers digital certificate and SAF-based authentication and authorization services. Access to all of these services is controlled through a thorough set of SAF resources and communications between the NSS server and its clients is protected using TLS/SSL.
IPSec certificate service
You may configure the z/OS IKE daemon to use an NSS server to perform digital signature and certificate-related operations for one or more TCP/IP stacks. NSS is required for IKEv2 digital signature authentication and is optional for IKEv1. NSS IPSec Certificate Services perform all digital signature and certificate-related services required by IKE, including certificate hierarchy validation, certificate revocation checking through CRLs, and HTTP retrieval of Certificate Revocation Lists, certificates and certificate bundles.
IPSec remote management service
The NSS IPSec remote management service allows an NSS server to act as a single point of z/OS IPSec management and control. Under direction of the IPSec command or a management program using the NSS Network Management API, the NSS server can request IPSec monitoring data from its NSS IPSec clients (z/OS IKE daemons) and make IPSec control requests, such as activating, deactivating or refreshing security associations.
XML Appliance certificate service
For appliances like IBM DataPower SOA appliances, this NSS service allows the appliance to generate or verify digital signatures based on X.509 certificates. This service is used for digital certificates associated a secure private key (one that is protected by the master key of a Crypto Express coprocessor).
XML Appliance private key service
For appliances like IBM DataPower SOA appliances, this NSS service allows the appliance to retrieve a private key from the NSS SAF key ring. This service is used for clear private keys (those not protected by a Crypto Express coprocessor).
XML Appliance SAF access service
For appliances like IBM DataPower SOA appliances, this NSS service allows the appliance to perform SAF authentication and authorization checks using RACF or another SAF-compliant external security manager.
 
Recommended reading: For more information about IPSec, see IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 4 Security and Policy-Based Networking, SG24-7999.
2.1.8 OpenSSH on z/OS
OpenSSH is primarily developed by the OpenBSD Project, and its first inclusion into an operating system was in OpenBSD 2.6. The software is developed outside the US, using code from roughly 10 countries.
On z/OS, OpenSSH is shipped as part of the Ported Tools for z/OS product, among other well-known UNIX utilities such as bzip2, perl, and PHP. For information about current versions, see IBM Ported Tools for z/OS:
OpenSSH is an implementation of the classic UNIX Secure Shell (SSH) family of protocols. SSH is intended to be a secure version of the traditional BSD UNIX command line utilities: rlogin (remote login), rsh (remote shell), and rcp (remote copy).
rlogin starts a terminal session from the local shell on the remote host specified as host. The remote host must be running an rlogind service for rlogin to connect to.
rsh executes a command, or provides remote shell access, on the specified remote host, which must be running the rshd service.
rcp copies files between machines. Each file or directory argument is either a remote file name or a local file name.
SSH is actually a client/server protocol and a suite of connectivity tools. The SSH client program is a secure remote login program—that is, a secure alternative to rlogin or rsh. The SSH suite also provides sftp, which is a secure version of FTP, and rcp. The client program requests a connection to the SSHD secure remote login daemon running in the target host. The SSHD daemon handles key exchange, encryption, authentication of both server and user, command execution, and data exchange. By default, SSHD listens to port 22 for incoming connection requests.
OpenSSH, as does SSH, runs on top of the TCP/IP stack, provides security at the application layer, and can be thought of as a protocol with three layered components.
The transport layer, which provides algorithm negotiation and key exchange. The key exchange includes server authentication and results in a cryptographically secured connection. It provides integrity, confidentiality, and optional compression of data.
The user authentication layer uses the established connection and relies on the services provided by the transport layer. It provides several mechanisms for user authentication. These include traditional password authentication and public-key or host-based authentication mechanisms.
The connection layer multiplexes many different concurrent channels over the authenticated connection and allows tunneling of login sessions and TCP-forwarding. It provides a flow control service for these channels. Additionally, various channel-specific options can be negotiated.
Figure 2-8 on page 62 summarizes the interactions between an OpenSSH client and daemon. Notice that a secure channel is established after authentication of the server (the host), and then the communication proceeds with client-to-server authentication:
Each host has a host-specific key (RSA or DSA) used to identify the host. Whenever a client connects, the server responds with its public host key. The client compares the RSA host key against its own database to verify if this key value is recorded as owned by this host. Packets sent by the host will be signed using the corresponding private key.
Then, a Diffie-Hellman key agreement exchange occurs that results in establishing a shared session key and a session number.
The rest of the session is encrypted using a symmetric cipher. The ciphers currently supported are 128-bit AES, Blowfish, 3DES, CAST128, Arcfour, 192-bit AES, or 256-bit AES. The client selects the encryption algorithm to use from those offered by the server. Additionally, session integrity is provided through a cryptographic message authentication code (hmac-sha1 or hmac-md5).
The client then authenticates using one of these negotiated authentication methods:
 – Public Key authentication (RSA or DSA)
 – Hosts file at the server
This method is based on the traditional UNIX .rhosts and .shosts files. It is an inherently insecure authentication mechanism.
 – Password
 – Challenge-response (not supported on z/OS)
Then the commands are executed and the data are exchanged.
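On the server side, the negotiable protocol version, ciphers, and MACs described above map to keywords in the daemon's sshd_config file. The following fragment is an illustrative sketch, not a recommended configuration; the keywords are standard OpenSSH, the values are examples:

```
# sshd_config (fragment) -- illustrative values only
Port 22                   # default port on which sshd listens
Protocol 2                # run SSH protocol version 2 only (the default)
Ciphers aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc,blowfish-cbc
MACs hmac-sha1,hmac-md5   # session integrity algorithms offered to clients
```

The client selects its algorithms from what the server offers, so narrowing these lists is one way an administrator constrains the ciphers negotiated in step three of the flow above.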
The following types of keys are therefore involved in authentication and protection of the secure channel:
Host key An asymmetric key used by the server to provide the server identity
Session key A dynamically generated symmetric key for encrypting the communication
User key An asymmetric key used by the client to prove a user identity
Figure 2-8 OpenSSH processing example
Positioning OpenSSH versus TLS
Making OpenSSH available on z/OS answers the many requests received from UNIX System Services users to have an SSH-like facility.
As shown, SSH was intended to provide a secure alternative to telnet and FTP by setting up an encrypted tunnel between hosts. TLS, by contrast, does not provide any user service by itself; rather, it is a way for programmers creating any kind of network application that uses TCP sockets to build strong authentication, data confidentiality, and integrity for data exchange into the application.
The two protocols overlap in their capabilities, because SSH can also provide a secure tunnel to other applications through its TCP forwarding capability, potentially extending the protection to a range of TCP-based protocols.
It must also be noted that TLS uses public keys packaged in X.509 certificates, and therefore requires a PKI, whereas OpenSSH accommodates raw key values that are not embedded and certified in a digital certificate (although, strictly speaking, the IETF specifications open SSH to the use of digital certificates). In that respect, OpenSSH requires a lighter infrastructure, because key certification is under the responsibility of a local administrator. However, this approach to administering keys might become difficult on a large scale.
In terms of the type and strength of the encryption algorithms, both SSL/TLS and OpenSSH are similar in that they both use common algorithms and key lengths. However, OpenSSH uses OpenSSL (packaged within OpenSSH) for its encryption services.
The z/OS UNIX Telnet server (otelnetd) does not provide SSL/TLS protection and might not be protected using AT-TLS, because otelnetd is implemented using socket interface that is not supported by AT-TLS (to be clear, however, TN3270 can be protected by AT-TLS). OpenSSH with its remote login function is an excellent, secure alternative to otelnetd. In addition, OpenSSH with SFTP is a popular alternative to using TLS-protected FTP, but today z/OS SFTP implementation does not provide access to MVS data sets on its own. However, third-party products are available to bridge that gap.
z/OS OpenSSH implementation
In this section we describe the contents of the z/OS OpenSSH component.
OpenSSH provides z/OS UNIX System Services users with the capability of using an OpenSSH client (ssh) and server (sshd) running on z/OS. It provides the following suite of connectivity tools:
Secure remote login (ssh), an alternative to rlogin and rsh.
Secure file transfer (sftp and scp), an alternative to FTP and rcp.
 
Attention: The z/OS OpenSSH client cannot be invoked from the z/OS UNIX shell. This is working as designed, as there is no way to hide passwords entered in the shell.
Other basic utilities, such as ssh-add, ssh-agent, ssh-keysign, ssh-keyscan, ssh-keygen, and sftp-server are also included.
The IBM Ported Tools for z/OS implementation of SSHD supports both SSH protocol versions 1 and 2 simultaneously. The default SSHD configuration runs only protocol version 2.
OpenSSH client and server
Take a moment to review the OpenSSH client and server:
ssh This is the OpenSSH client that establishes the secure channel with the OpenSSH server and performs the secure remote login program.
sshd Secure remote login daemon, a daemon that listens for connections from ssh clients and handles key exchange, encryption, authentication, command execution, and data exchange.
Together, SSH and SSHD provide secure encrypted communications between two hosts over a TCP/IP network. After an SSH session is established, other connections can be forwarded over this secure channel, such as X11, and other TCP-based protocols.
Secure file transfer
Let us take a closer look at secure file transfer using SFTP and scp. Be aware that there are many acronyms used to denote different implementations of FTP (secure or insecure). The term “sftp,” for example, might also stand for “simple file transfer protocol” rather than “secure file transfer program,” as we use it here:
sftp (secure file transfer program)
An interactive file transfer program, similar to the FTP user interface. It performs all operations over an encrypted SSH transport and can also use many features of SSH. SFTP requests flow through the OpenSSH client and require that the sftp-server (SFTP server subsystem) be running on the server side of the SFTP protocol. The SFTP server is invoked automatically by SSHD.
scp (remote secure file copy program)
This program also uses the OpenSSH client and server for data transfer. The command syntax is similar to rcp. However, unlike rcp, SCP asks for passwords or pass phrases, if necessary.
Key management
The code involved is specific to z/OS OpenSSH. There is no reuse of other key management code provided by IBM, such as GSKKYMAN, IKEYMAN, and so on. The following utilities are provided for key generation and management:
ssh-keygen Creates public/private key pairs.
ssh-agent Holds private keys in memory, saving you from retyping your passphrase repeatedly.
ssh-add Loads private keys into the agent memory.
ssh-keyscan Gathers SSH public host keys from remote servers. This is used by the client in preparation for server authentication.
RACDCERT Must be used to create a public/private key pair for z/OS OpenSSH when keys are stored in SAF key rings.
Helper applications
These are some of the helper applications:
ssh-rand-helper Entropy gathering code for random number generation.
ssh-askpass X11 GUI for passphrase entry, called by ssh-add.
ssh-keysign Helper program that performs the digital signatures required for host-based authentication.
OpenSSH support specific to z/OS
OpenSSH for z/OS provides some features that are specific to z/OS and make use of z/OS's own security infrastructure. These features enable you to strengthen the security of UNIX System Services access to your systems if you are using OpenSSH. Most of these features were introduced with version 1, release 2 (1.2) of z/OS OpenSSH:
System Authorization Facility (SAF). z/OS OpenSSH can be configured to retrieve keys from a SAF key ring, rather than from a UNIX file. Using a SAF key ring in combination with RACF removes the need to store sensitive keys in the home directories of each user and centralizes them within the RACF database.
When used with SAF key rings, z/OS OpenSSH also makes use of the random number generation facility of the CPACF cryptographic hardware.
Multilevel Security. z/OS OpenSSH supports the concept of multilevel security which allows the classification of data based on a system of hierarchical security levels.
Systems Management Facility (SMF). z/OS OpenSSH can be configured to collect SMF type 119 records for both the client and the server.
Access to MVS data sets in OpenSSH for z/OS
OpenSSH for z/OS does not provide access to MVS data sets. You cannot open an sftp session to your UNIX System Services environment and access MVS data sets for transfer within sftp. This support is limited to the original z/OS FTP and the z/OS shell environment. OpenSSH for z/OS, which is part of the Ported Tools package, does not have that capability.
However, there are third-party vendor products that might offer you support for MVS data set access from within an OpenSSH sftp session. One of those products is Dovetailed Technologies Co:Z SFTP. Co:Z SFTP is a port of the OpenSSH for z/OS sftp implementation with MVS data set access included. For more information, see the “Co:Z SFTP” page on the Dovetailed Technologies website:
 
 
Recommended reading: For more information about OpenSSH for z/OS, see z/OS IBM Ported Tools for z/OS User’s Guide, SA23-2246-00.
2.1.9 PKI services
z/OS PKI Services deploys a full certificate authority solution running on z/OS. It was introduced as a base element in z/OS V1.3. The product has evolved since then, with several new functions incorporated in each release. Customers have taken advantage of this solution to issue and manage thousands of digital certificates in-house. z/OS PKI Services is a competitive solution that uses the robustness, availability, scalability, and security provided by System z. For a general introduction to PKI, see “Public Key Infrastructure” on page 41.
Although PKI Services is not a component of the TCP/IP stack or z/OS Communication Server, let us take a closer look at the product to see how it can enhance your certificate management capabilities for network security.
z/OS PKI Services is a complete solution for managing the digital certificate lifecycle running under z/OS V1.3 and later.
Certificate generation and administrative functions are driven through customizable web pages. z/OS PKI Services supports browser and server digital certificates. It provides an automatic or administrator approval process and an end-user or administrator revocation process. It deploys email notification for requesters and administrators, informing them about certificate request completion, certificate rejections, expiration warnings, and pending requests.
z/OS PKI Services is closely tied to RACF or any equivalent product that supports R_PKIServ callable services. It supports more functions than the RACDCERT command.
z/OS PKI Services supports multiple revocation-checking mechanisms. It deploys CRLs (Certificate Revocation Lists) and OCSP (Online Certificate Status Protocol). These functions are not supported through RACDCERT commands. z/OS PKI Services also provides the Trust Policy plug-in, an application interface for certificate validation checking.
z/OS PKI Services is compliant with the PKIX architectural model, which was built by the IETF (Internet Engineering Task Force) PKIX working group. The PKIX working group was established in 1995 with the intent of developing the Internet standards needed to support an X.509-based PKI.
According to this open standard, the digital certificate request has to be in PKCS#10 format, and the Certificate Revocation List (CRL) has to be in X.509 V2 format, with an LDAP directory as the repository. The digital certificate format has to be X.509 V3. For more information about the PKIX standard and components, see the IETF documentation.
PKI Services structure
Before describing the z/OS PKI Services structure, let us list the z/OS PKI Services solution requirements, and then we fit these elements inside the structure:
IBM HTTP Server for z/OS
RACF or any functional equivalent supporting R_PKIServ callable services
PKI Services daemon
OCSF: Open Cryptographic Services Facility
OCEP: Open Cryptographic Enhanced plug-in (optional)
sendmail (optional)
LDAP directory
ICSF (optional)
Figure 2-9 illustrates the structure of the PKI Services in z/OS.
Figure 2-9 The structure of PKI Services within z/OS
IBM HTTP Server for z/OS
User certificate generation and administrative functions are performed through customizable web pages. IBM HTTP Server for z/OS is the web page interface for z/OS PKI Services. It is a z/OS base element.
The web page logic and contents are defined in a certificate template file. This file contains a mixture of true HTML and HTML-like tags. The Common Gateway Interface (CGI) script tasks read the template file to form the web pages and to control the flow. It also reads all of the input values from the web page and constant values from the template file to build the parameter list to call R_PKIServ callable services.
The CGI task also provides a hook to call PKI Exit, an installation-provided exit routine that is available for the following tasks:
Providing additional authorization checking
Validating and changing parameters
Capturing certificates for further processing
SCEP (Simple Certificate Enrollment Protocol) and OCSP (Online Certificate Status Protocol) client interfaces run on CGIs under IBM HTTP Server, but no web pages are associated with these services.
The IBM HTTP Server for z/OS is not involved in generating key pairs for users or for server certificates. This function is driven by the browser Cryptographic Service Provider (CSP) to the device selected by the user (e.g., smart card, token, or the browser software itself). Key pairs for server certificates are generated by the software in the server that is going to use the certificate.
RACF
RACF provides support for the R_PKIServ callable service, which is the core component of the z/OS PKI Services solution.
R_PKIServ callable services is the interface between the CGIs and the PKI Services daemon. It performs authorization checking and parameter validation.
User functions, such as requesting, exporting, verifying, revoking, renewing, and suspending certificates and responding to certificate requests, are performed by RACF through R_PKIServ callable services. Administrator functions, such as querying, approving, modifying, rejecting, suspending, resuming, and revoking, are also performed by R_PKIServ callable services. RACF is the only z/OS PKI Services component that is licensed.
RACF can also be used as a certificate store, in addition to VSAM or DB2. When certificates are stored in RACF, they can be easily connected to a user or server. This enhances the possibilities of using certificates created and managed by PKI Services with network security components. For example, certificates for TN3270 users who are supposed to perform client authentication can be created and managed using PKI Services, and a copy of the certificates can also be stored in RACF. Within RACF, a certificate can be connected to a user ID and thus be used for verifying the connection of the user to the certificate.
PKI Services daemon
The PKI Services daemon is provided by the z/OS PKI Services code. It is responsible for managing the services threads for incoming requests. It also controls the background threads for certificate approval and for issuing the Certificate Revocation List (CRL). The PKI Services daemon manages the VSAM databases for certificate requests (Object Store) and Issued Certificate List (ICL).
IBM DB2 for z/OS (optional)
By default, PKI Services uses VSAM data sets to store issued certificates and certificate requests. These data sets are perfectly fine if you want to use PKI Services in a monoplex environment, but they become fairly complex to manage in a sysplex environment, not only in terms of multiple-system access but also in terms of performance. Since z/OS V1.13, IBM has offered a second way of storing this data: DB2 for z/OS. With DB2, the data is automatically covered by DB2's backup and recovery management, which might also have a positive effect on performance in your environment.
OCSF, Open Cryptographic Services Facility
PKI Services requires OCSF to be installed and configured so that the user ID under which the PKI Services daemon runs can use required services.
OCEP, Open Cryptographic Enhanced plug-in (optional)
You need to install and configure OCEP if your installation plans to write an application to implement the use of PKI Trust Policy (PKITP).
sendmail (optional)
You need to configure sendmail if your installation plans to send email notifications to users for certificate-related events, such as certificate expiration.
LDAP directory
Use of an LDAP server is required to maintain information about PKI Services certificates in a centralized location. The z/OS LDAP server is preferred, but you can use a non-z/OS LDAP server if it can support the objectClass and attributes that PKI Services uses. Typical PKI Services use requires an LDAP directory server that supports the LDAP (Version 2) protocol (and the PKIX schema), such as the z/OS LDAP server.
ICSF (optional)
ICSF is preferred but not required. You can begin using PKI Services without installing ICSF and install it later without reinstalling PKI Services. We strongly suggest you use ICSF to store and protect your certificate authority’s private key.
 
Recommended reading: For more information about PKI Services for z/OS, see z/OS V1R11.0 Cryptographic Services PKI Services Guide and Reference, SA22-7693-11.
2.2 Guiding principles for cryptography for network security
In this section, we cover some guiding principles in the area of cryptography for network security on z/OS. These principles should be regarded as a collection of preferred practices that the authors of this book have developed over time in many customer situations. We cover the following topics:
2.2.1 Choosing appropriate cryptographic algorithms for network security
Cryptography has been used in network security for quite a long time now. Information technology has evolved over this time, and the computing power available to individuals and organizations has increased dramatically and continues to do so. We now take a look at how this development affects network security when cryptography is used. Cryptographic algorithms are not secure by default. Several algorithms and variations are still widely used but are seen as unsuitable for protecting critical data, not only by subject matter experts but also by government regulation bodies.
The security of cryptographic algorithms is assessed continuously by the industry and by computer scientists. Cryptanalysis describes this approach, in which cryptosystems and algorithms are examined to prove their security strength and find weaknesses. In this section, we give a short overview of the current state of the art for common cryptosystems and algorithms, and we provide notes on how to find relevant information regarding these topics. We focus on the cryptosystems used in z/OS network security components. Later in this chapter, we take a closer look at crypto-configuration options for the z/OS network security components described in 2.1.4, “Overview of the z/OS TCP/IP cryptographic infrastructure” on page 44.
Symmetric encryption algorithms
The strength of a symmetric encryption algorithm is based on the length of the key, assuming that there are no known practical cryptanalytic attacks against the algorithm. This measure of key length is also known as the security strength of the algorithm. The security strength of a symmetric algorithm should be at least 128 bits today. Known attacks against an algorithm can reduce its real security strength below its key length. For example, an algorithm with a key length of 128 bits might provide a security strength of only 80 bits because of known attacks that weaken it; if so, the algorithm should not be used. Ideally, the only way to break the encryption and recover the key is the brute-force method: an attacker receives a ciphertext of an unknown clear text and guesses key values for as long as it takes to find the correct key and get useful information out of the ciphertext. A recommended symmetric algorithm is known today to have no attack other than brute force.
The longer the key, the more operations it takes to guess the key from a received ciphertext. For example, take a random ciphertext that has been encoded using a 32-bit symmetric algorithm. Example 2-1 shows how much time it would take statistically to compute the key out of the ciphertext.
Example 2-1 Duration of a brute force attack against symmetric key algorithms
Key size in bits: 32
Number of possible keys: 232 = 4.3 * 109
Time it takes to guess the key at 1 decryption per microsecond:
232 * 1 microsecond = 36 minutes
We can see now that using a 32-bit symmetric key is really not a good idea, because even at just one decryption operation per microsecond, it would take on average only 36 minutes to guess the key. Starting with 128 bits, these figures become more comfortable, as it would then take 5.4 * 10^24 years to guess the key at one decryption per microsecond. Even if today's computing power were able to reduce the time by a factor of 100 or more, the time to recover one single key by brute force would still be measured in many thousands of years.
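The arithmetic behind Example 2-1 can be reproduced for other key sizes. The following Python sketch is illustrative only; the function name is ours, and the rate of one trial decryption per microsecond is the same assumption used in the example above:

```python
# Average time to brute-force a symmetric key, assuming one
# trial decryption per microsecond (the rate used in Example 2-1).
# On average, half of the key space must be searched.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def brute_force_years(key_bits, decrypts_per_second=1_000_000):
    trials = 2 ** (key_bits - 1)          # average case: half the key space
    return trials / decrypts_per_second / SECONDS_PER_YEAR

for bits in (32, 56, 128, 256):
    years = brute_force_years(bits)
    print(f"{bits:3d}-bit key: about {years:.1e} years on average")
```

For 32 bits the result is a fraction of a year (the 36 minutes of Example 2-1); for 128 bits it is on the order of 10^24 years, matching the figure quoted above.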
The following list provides a basic overview of widely accepted and used symmetric ciphers that offer a sufficient amount of encryption strength and are not known to have weaknesses other than brute-force attacks. All of these standards also make use of the cryptographic hardware in System z.
Advanced Encryption Standard (AES) with key lengths of 128, 192, and 256 bits
Data Encryption Standard (DES) with three-key triple encryption known as Triple DES (TDES or 3DES)
The following symmetric ciphers used with z/OS network security components have to be considered carefully nowadays because there are known attacks against them or the key length is not sufficient anymore:
DES with a key length of 56 bit
DES with a 56-bit key is not sufficient for encryption because of its small key length. As shown before, given a sufficient amount of computing power, the keys used can easily be calculated using brute force.
Rivest Cipher (RC) 2 and RC4
RC2 and RC4 are proprietary and were never officially disclosed to the public. By using an algorithm that is not published, you weaken your security. In addition, known attacks sufficiently reduce the computing power needed to derive a key from a captured ciphertext.
Two-key triple DES
This algorithm uses three keys for encryption, each 56 bits long, but two of those keys are equal to each other. Therefore, the security strength is not 168 bits, as you might imagine, but really only 112 bits. Also, there is a known attack that reduces the overall security strength to 80 bits if the attacker is in possession of 2^40 plaintext/ciphertext pairs of 64 bits (one block of data used for encryption).
Changing keys frequently during an encrypted data session using symmetric algorithms enhances security. If the keys are changed regularly and there is no way to derive a key from its predecessors, an attacker has practically no chance to discover the keys.
A symmetric cipher is always only as secure as the key exchange algorithm used and the quality of the key. Many protocols use Diffie-Hellman key exchange, where the underlying mathematical problem used to generate the key is believed to be practically unsolvable. This is a method of asymmetric cryptography and therefore requires a public key to be sent between the two parties. This public key exchange requires proper authentication and signing of the key sent, because an attacker might perform a man-in-the-middle attack and establish separate communication sessions with both parties while forging their identities. PKIs, including trusted third parties, can provide proper authentication and identification of the key-holding parties and ensure the integrity of the key.
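As a minimal sketch of the Diffie-Hellman idea, the following Python example uses a toy 32-bit prime (far too small for real use; practical deployments use groups of 2048 bits or more). Both parties arrive at the same shared secret without ever transmitting it. Note that, exactly as described above, nothing in this bare exchange authenticates the parties, which is why a PKI or another authentication mechanism is needed on top of it:

```python
# Toy Diffie-Hellman key agreement. The 2048-bit-plus groups used in
# practice are replaced here by a small prime so the numbers stay readable.
import secrets

p = 4294967291          # toy public prime modulus (NOT secure at this size)
g = 5                   # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private value
b = secrets.randbelow(p - 2) + 1   # Bob's private value

A = pow(g, a, p)        # Alice sends A = g^a mod p
B = pow(g, b, p)        # Bob sends   B = g^b mod p

# Each side combines its own private value with the other's public value.
shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob   = pow(A, b, p)        # (g^a)^b mod p
assert shared_alice == shared_bob  # both derived the same secret
```

Only A and B cross the network; an eavesdropper who sees them cannot feasibly recover a or b when the group is large enough, but an active man in the middle could substitute its own values unless the exchanged keys are authenticated.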
Because of the importance of key authentication using PKIs for symmetrical cryptosystems, special care has to be taken with PKIs and their underlying asymmetric cryptosystems, which we describe in the next section.
Asymmetric algorithms
Asymmetric cryptosystems use an approach where there is a public key and a private key that is kept secret by its owner. In this chapter, we take a closer look only at the Rivest-Shamir-Adleman (RSA) cryptographic algorithm. This algorithm is essential to modern public key infrastructures using certificates.
The security strength of RSA is based on the problem of factoring large numbers. A part of the public and private key, call it N, is the product of two large prime numbers p and q. N is the modulus of the RSA operation and is known to everybody. The second parts of the public and the private key, e and d, are derived from p and q. Let (e,N) be the public and (d,N) be the private key pair. It is not possible to derive d knowing N and e, or vice versa. This is because the product of two large primes cannot be factored efficiently. If that were possible, N could be factored into p and q, and d could simply be computed again from e and (p,q). The size of N is commonly known as the key length of RSA.
The key length of RSA is directly connected to the size of the underlying prime numbers, simply because N is the product of both. Therefore, the key length must be sufficiently large to make the underlying prime numbers as large as possible. The larger these numbers are, the harder it becomes to factor N using the appropriate algorithms. A key length of 1024 bits (about 309 decimal digits) is not sufficient anymore and is commonly rated as providing a security strength of only 80 bits. Current industry recommendations are to use RSA keys of at least 2048 bits.
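The relationship among p, q, N, e, and d can be illustrated with deliberately tiny primes (a classic textbook example; real keys use primes of 1024 bits or more, which makes factoring N infeasible):

```python
# Toy RSA with tiny primes to illustrate the key relationship.
# Real RSA uses primes of 1024+ bits; these values are trivially factorable.
from math import gcd

p, q = 61, 53                 # two (tiny) primes, normally kept secret
N = p * q                     # public modulus, N = 3233
phi = (p - 1) * (q - 1)       # derivable only if you know p and q

e = 17                        # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: e * d = 1 (mod phi)

message = 65
ciphertext = pow(message, e, N)       # encrypt with the public key (e, N)
recovered  = pow(ciphertext, d, N)    # decrypt with the private key (d, N)
assert recovered == message
```

The point of the sketch is that computing d requires phi, and phi requires the factors p and q. With a 2048-bit N, recovering those factors is exactly the problem that is believed to be intractable.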
Elliptic curve algorithms
Public-key cryptosystems based on elliptic curves use a variation of the mathematical problem of finding discrete logarithms. It has been stated that an elliptic curve cryptosystem implemented over a 160-bit field has roughly the same resistance to attack as RSA with a 1024-bit key length. Properly chosen elliptic curve cryptosystems have an exponential work factor (which explains why the key length is so much smaller).
The security strength of an elliptic curve algorithm with a key length of 160 bits is, just as with 1024-bit RSA, only 80 bits. To protect data secured by elliptic curves efficiently, a key length of at least 224 bits should be used.
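The equivalences quoted in this and the preceding sections follow the comparable-strength table published in NIST SP 800-57. A small Python mapping (values taken from that recommendation) makes the comparison across algorithm families explicit:

```python
# Comparable security strengths (in bits) across algorithm families,
# as tabulated in NIST SP 800-57 Part 1.
# symmetric strength -> (example symmetric cipher, RSA modulus bits, ECC key bits)
COMPARABLE_STRENGTH = {
    80:  ("2-key 3DES",  1024,  160),
    112: ("3-key 3DES",  2048,  224),
    128: ("AES-128",     3072,  256),
    192: ("AES-192",     7680,  384),
    256: ("AES-256",    15360,  512),
}

for strength, (cipher, rsa_bits, ecc_bits) in COMPARABLE_STRENGTH.items():
    print(f"{strength:3d}-bit strength: {cipher:10s} ~ RSA-{rsa_bits} ~ ECC-{ecc_bits}")
```

Reading the table, the 80-bit row shows why both 1024-bit RSA and 160-bit elliptic curves are rated as no longer sufficient, while 2048-bit RSA and 224-bit curves reach the 112-bit floor.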
Secure hash functions
For a brief description of hashing algorithms, see 2.1.3, “Applications of cryptosystems for network security” on page 40.
A hashing algorithm takes an arbitrary-length message as input and produces a small, fixed-length digest (usually 128 bits or more). This hash can be thought of as a summary of the message. There are two important things to remember about a message digest algorithm:
The algorithm is a one-way function. This means that there is absolutely no way you can recover a message, given the hash of that message.
It should be computationally infeasible to produce another message that produces the same message digest as a given message.
The following are the common message digest algorithms:
MD2
Developed by Ron Rivest of RSA Data Security, Inc., this algorithm is mostly used for Privacy Enhanced Mail (PEM) certificates. MD2 is fully described in RFC 1319. Because weaknesses have been discovered in MD2, its use is discouraged.
MD5
Developed in 1991 by Ron Rivest, the MD5 algorithm takes as input a message of arbitrary length and produces as output a 128-bit message digest of the input. The MD5 message digest algorithm is specified in RFC 1321, The MD5 Message-Digest Algorithm. The use of MD5 is no longer recommended, because a variety of weaknesses, including collisions, have been found in it.
SHA-1
Developed by the National Security Agency (NSA) of the US Government, this algorithm takes as input a message of arbitrary length and produces as output a 160-bit hash of the input. SHA-1 is fully described in standard FIPS PUB 180-1, also called the Secure Hash Standard (SHS). Although the integrity of the SHA-1 algorithm remains intact, the security of a hash algorithm against collision attacks is one-half the hash size. This value should correspond with the key size of encryption algorithms used in applications together with the message digest. Because SHA-1 provides only 80 bits of security against collision attacks, it is deemed inappropriate for applications such as digital signatures and HMACs. Because of this, in recent years, the industry has discouraged the use of SHA-1 and has begun to recommend SHA-2, with its longer digest lengths, for these purposes.
SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512)
Developed by the NSA of the US Government, SHA-2 addresses the weakness of the SHA-1 digest length described previously. Extensions to the SHS have been developed to generate hashes of 224, 256, 384, and 512 bits.
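The fixed digest lengths and the one-way "summary" behavior described above can be observed directly with Python's standard hashlib module. Note how a one-character change in the input produces a completely different digest:

```python
# Digest lengths and input sensitivity for SHA-1 and SHA-2 family hashes.
import hashlib

msg1 = b"transfer 1000.00"
msg2 = b"transfer 1000.01"   # a single character differs

for name in ("sha1", "sha256", "sha512"):
    h1 = hashlib.new(name, msg1).hexdigest()
    h2 = hashlib.new(name, msg2).hexdigest()
    print(f"{name}: {len(h1) * 4}-bit digest")
    print(f"  {h1}")
    print(f"  {h2}")   # completely different despite the tiny input change
```

This sensitivity is what makes a digest useful for integrity checking: moving a decimal point in a transferred number, as in the earlier example, changes the hash entirely.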
Examining your existing network cryptography-related configurations
Within your network cryptography infrastructure, you might have several product configuration points where parameters and definitions for network cryptography are stored. The authors of this book recommend that you revisit your existing cryptography configurations and examine the cipher suites that are used. Often, product parameterization is performed once, when the product is installed, or only rarely, when new functions are implemented. The problem is that cryptography-related configurations are not always audited, except when there is a problem. If no attention is paid to these configurations, old and possibly security-weakening values might be used for a long period of time.
As you have seen before, there are tremendous differences in security strength within the range of cryptographic algorithms that are supported on z/OS. For network cryptography, the following component configurations should be examined:
TN3270
FTP
AT-TLS
IPSec
The cryptography-related configuration is either stored in the product configuration files and profiles, or within the policy-based networking infrastructure of z/OS Communication Server.
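As an illustration of where such cipher definitions live, the following schematic AT-TLS policy fragment shows the kind of statements to look for during such a review. The rule, action, and key ring names here are invented for this example, and the port and cipher list are illustrative assumptions, not recommendations; consult the IP Configuration Reference for the complete Policy Agent syntax:

```
TTLSRule                      SecureFTPRule
{
  LocalPortRange              21
  Direction                   Inbound
  TTLSGroupActionRef          FTPGroupAction
  TTLSEnvironmentActionRef    FTPEnvAction
}
TTLSGroupAction               FTPGroupAction
{
  TTLSEnabled                 On
}
TTLSEnvironmentAction         FTPEnvAction
{
  HandshakeRole               Server
  TTLSKeyringParmsRef         FTPKeyring
  TTLSCipherParmsRef          StrongCiphers
}
TTLSCipherParms               StrongCiphers
{
  V3CipherSuites              TLS_RSA_WITH_AES_128_CBC_SHA
  V3CipherSuites              TLS_RSA_WITH_AES_256_CBC_SHA
}
```

When auditing, the TTLSCipherParms blocks (and their FTP, TN3270, and IPSec equivalents) are where outdated algorithms such as single DES or RC4 tend to linger.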
 
Recommended reading: For more information about configuring network cryptography parameters, see z/OS Communications Server: IP Configuration Reference, SC27-3651-00.
Where to find more information
Government regulation bodies and research facilities issue recommendations about the latest developments and offer best practices and guidelines on which cryptographic algorithms and which security strengths to use. A security-aware enterprise should consult these sources regularly to keep the encryption in the network up to date. If you are under the regulation of some kind of government agency, you might be obligated to adhere to one of these standards.
The following list provides some sources for further information:
NIST publications, including Transitions: Recommendation for Transitioning the Use of Cryptographic Algorithms and Key Lengths, SP 800-131 A and Recommendation for Key Management, SP 800-57
Recommendations for cryptography issued by the German Bundesamt für Sicherheit in der Informationstechnik (Federal Office for Information Security)
A guide on how to determine key lengths for public key cryptosystems used to transfer symmetric keys.
A collection of cryptography recommendations issued by the National Security Agency (NSA) of the US.
2.2.2 Defining a cryptography strategy within your organization
The following is not a technical guiding principle, but is more related to organizational issues that we want to make you aware of. Recall 2.1.2, “Definition of a secure communication model for networks” on page 39, where we defined a communication model for secure networks, introducing the communicating parties and the potential attacker. This communication model can easily be expanded to represent an enterprise network communication model. The communicating parties might be users, servers, network segments, or even enterprises. Between the communicating parties may be network infrastructure, such as routers, firewalls, and gateways.
The communication model also describes endpoints for security, meaning that information transported over the network is somehow transformed by cryptography. From this point forward, no clear text is sent over the network until the data arrives at the opposite security endpoint. This is not necessarily the designated receiver; it might also be a network device on the way.
For example, if a branch employee wants to communicate with a server in the internal network of the company, all data leaving the employee's computer can be secured by using cryptography. When this data enters the internal network of the company, the gateway can be the security endpoint. Incoming data can be decrypted by the gateway and then sent over the internal network in clear text.
When talking about end-to-end security, people might think about security from sender to the actual receiver, but that might not be the case in reality. Figure 2-10 on page 74 illustrates several scenarios for security endpoints within a network infrastructure.
Figure 2-10 End-to-end enterprise network security
Take, for example, scenario B. Between the gateway from the branch office to the internet and the gateway to the internal company network, a VPN is to be implemented. This VPN secures data transferred over the internet, but the data is secured neither within the branch office nor within the internal network of the company. This type of approach should be used only in situations where the integrity and security of the private networks (the one within the enterprise and the one within the branch office) are assured. Without such assurances, an attacker sitting behind one of the gateways could easily read and manipulate the data. There is also no authentication of the entities communicating with each other: the user and the server do not securely confirm their identities, and thus the communication cannot be seen as fully trusted.
A complete end-to-end security is shown in scenario F. From the workstation of the user until the arrival at the server, all communication is secured. Complete end-to-end security is often required based on the sensitivity of the data that flows over the network or due to regulatory compliance. But to accomplish this state, several things have to be taken care of. The following list provides guidance when you get into discussions with network specialists and architects within your enterprise:
How do your company’s firewall, deep packet inspection, and network security policies fit in with the options?
The capabilities of stateful firewalls, intrusion detection systems, network analyzers, and content-based routers might be affected when end-to-end encryption is used.
Does corporate security policy dictate a specific technology or requirement?
We have seen customer situations where an enterprise policy enforcing the use of deep packet inspection would not allow an end-to-end encryption to be implemented, because the policy is violated when the packet-inspecting device is unable to inspect encrypted traffic. It might also be that your security policy demands TLS for certain traffic and IPSec for other traffic. Consider whether these requirements really satisfy your needs and whether your applications are capable of providing these technologies.
What are the capabilities of the hosts and network equipment? Both endpoints of a secure connection must support the same cryptographic algorithms.
As we have seen before, several cryptographic algorithms have shown weaknesses against attacks or are generally seen as unsafe. Some applications in your enterprise might rely on those. Do take this into account when planning for enterprise-wide encryption. Carefully assess all participating applications and infrastructure to ensure that the desired cryptographic standards are supported.
What are your communication partners willing and able to use?
Applications have different needs for certain technologies. For example, some applications might offer no security at all and therefore need to be secured using IPSec or a similar technology. As another example, many UNIX or Linux shops permit only the use of SSH's sftp protocol to protect file transfers.
Are relative security infrastructures already in place? For example:
 – Is there already a Public Key Infrastructure (PKI) in place?
 – Is TLS or IPSec already deployed anywhere in the network?
 – What method will you use to distribute public keys for SSH?
Do the security protocols support the transport protocols?
TLS works great for TCP, but nothing else, whereas IPSec protects any IP traffic, regardless of transport protocol.
Is the application already enabled for network security?
TLS-enabled applications might offer features based on the TLS integration and if not, consider application-transparent technologies.
What do you want to authenticate?
Application or user identity, or IP node identity? (TLS authenticates the identity of the application or user and is visible to the application, whereas IPSec authenticates at the host IP node level.)
How are the different technologies implemented on the platforms involved? Possibilities:
 – Performance optimization, such as hardware crypto and other acceleration technologies
 – Exploitation of other platform-specific features (secure key, SAF, and so on)
What are the responsibilities for different security areas within your organization?
Just because RACF has been the traditional seat of security policies in the organization does not mean that the security administrator should implement networking security.
There might be many more things to consider within your enterprise. The guiding principle here should just be a reminder that even though you might have cryptography in place, you might need to define a crypto strategy end-to-end for your enterprise. Doing this, and including a detailed assessment of the current structure and implementation, will help you strengthen your network security and take control over this crucial part of IT within the company.
2.2.3 Choosing Transport Layer Security implementations
As we discussed in 2.1.5, “Transport Layer Security on z/OS” on page 46, z/OS offers three different approaches to TLS/SSL protection of TCP traffic: System SSL, JSSE, and AT-TLS. Let's examine some of the factors you might consider when choosing between these three approaches.
System SSL
Applications written in C or C++ can call the functions provided by System SSL. System SSL is a complete TLS/SSL implementation that is part of the z/OS Cryptographic Services component and ships with the base operating system.
JSSE
As we discussed in “TLS implementation on z/OS” on page 47, the Java Secure Socket Extension (JSSE) enables secure Internet communications using TLS/SSL for Java applications. It is a complete TLS/SSL implementation written 100% in Java (a completely separate implementation from System SSL). The main benefit of using JSSE for Java applications is that, because it is 100% Java, TLS/SSL operations are eligible to run on zAAP special-purpose processors rather than on a central processor (CP). This is a cost-effective way of enabling your Java applications to use TLS/SSL.
AT-TLS
AT-TLS is part of the Communications Server policy-based networking infrastructure.
AT-TLS offers several advantages over direct calls to System SSL. First of all, you do not have to deal with the TLS implementation at the source code level. You are not obligated to invoke any API calls in your application. You also do not have to take care of the correct configuration of TLS within your application. This is done transparently for all TLS-enabled applications using the Policy Agent. The configuration of AT-TLS is therefore outside the scope of application development and in scope for network management, where it belongs. This leads to a consistent TLS configuration across your application infrastructure. It reduces the risk of applications using weak encryption techniques by enabling the network security administrator to ensure consistency across AT-TLS policies for different applications.
AT-TLS makes the vast majority of System SSL features available to applications and as new features are added to System SSL, AT-TLS will usually be enhanced to expose those new features. Because of this, your applications can take immediate advantage of such enhancements in System SSL, usually with no development effort. All you need to do is update your AT-TLS policy to use new features.
Let’s review the advantages of AT-TLS over basic System SSL use:
Reduce cost
Using AT-TLS eliminates or greatly reduces the amount of code you need to write in your application programs to achieve TLS protection. Only applications that require some awareness or control of the TLS sessions need to be modified. This saves development cost not only for the initial TLS enablement, but also over time as System SSL features become available. Also, AT-TLS policy can reduce administrative costs associated with TLS protection because it presents a consistent and centralized administrative model across applications. Because of this, a single person skilled in configuring AT-TLS for one application can easily do it for other applications.
Up-to-date exploitation of System SSL features
AT-TLS is regularly enhanced to expose new System SSL features. Because of this, applications protected by AT-TLS can take advantage of new System SSL features simply by updating the relevant AT-TLS policies. Source code changes are rarely required.
Performance benefits
The interactions between AT-TLS and System SSL have been optimized and will often perform better than an application's direct use of System SSL.
More control over the TLS implementations in your system
There is no need for a unique TLS/SSL implementation in any of your applications, but the TLS/SSL configuration is administered centrally for all applications in the system.
Because of these benefits, we recommend that you use AT-TLS to provide TLS protection to TCP applications that have not already been modified to use System SSL. And even for your applications that do use System SSL directly, we suggest carefully considering conversion to AT-TLS the next time you have to make source code changes to use a new System SSL feature. You might find that the cost of converting to AT-TLS is justified by the longer-term benefits described.
Many IBM-supplied applications and products, including these, use AT-TLS:
z/OS Communications Server applications and features:
TN3270
FTP (client and server)
NSS daemon
IKE daemon (when acting as an NSS client)
CSSMTP server
Load Balancing Advisor
Centralized Policy Server
CICS sockets
Other IBM products:
IBM DB2
IBM IMS Connect
IBM JES Network Job Entry
IBM RACF Remote Sharing Facility
IBM Tivoli MultiSystem Manager
IBM Tivoli NetView Management Console
IBM Debug Tool for z/OS
Case study: Converting a System SSL application to use AT-TLS
Let us now consider a real-world example of converting a System SSL application to use AT-TLS.
The z/OS Communications Server TN3270 server, FTP server, and FTP client used System SSL for TLS/SSL protection before AT-TLS existed. When AT-TLS was developed, these applications were modified to use AT-TLS as the preferred TLS protection mechanism. Let's look at TN3270 to see how this was done.
 
Note: To maintain compatibility for customers who were already using the System SSL support, AT-TLS enablement was added as an option rather than completely replacing the direct System SSL integration. Whether you need to maintain such backward compatibility for your own application depends on your individual application requirements.
With respect to TN3270, AT-TLS offers several advantages over System SSL:
Support for the latest TLS versions and cipher suites offered by System SSL
Dynamically refresh a key ring
Support new or multiple key rings
Specify the label of the certificate to be used for authentication rather than using the default certificate
Support SSL session key refresh
Support SSL session reuse
Support ID caching for SSL sessions in a sysplex
Trace decrypted SSL data for Telnet in a data trace
Receive more granular error messages in syslog for easier debugging
To enable AT-TLS support, you edit the TN3270 profile within the z/OS Communications Server configuration to instruct the server to rely on the AT-TLS policy rather than on TLS parameters in the Telnet profile. This is done by specifying the TTLSPORT statement rather than the SECUREPORT statement within the TN3270 configuration. Example 2-2 shows a sample AT-TLS enabled Telnet 3270 configuration.
Example 2-2 TELNETPARMS definition for AT-TLS port in TN3270 configuration file
TELNETPARMS
TTLSPORT 4992 1 ; Port 4992 supports AT-TLS
CONNTYPE SECURE
; KEYRING SAF TCPIP/SharedRing1 2 ; omit - defined in Policy Agent
; CLIENTAUTH NONE 2 ; omit - defined in Policy Agent
; ENCRYPT SSL_DES_SHA 2 ; omit - defined in Policy Agent
; SSL_3DES_SHA
; ENDENCRYPT
INACTIVE 0
TIMEMARK 600
SCANINTERVAL 120
FULLDATATRACE
SMFINIT 0 SMFINIT NOTYPE119
SMFTERM 0 SMFTERM TYPE119
SNAEXT
MSG07
ENDTELNETPARMS
;
BEGINVTAM
PORT 4992
DEFAULTLUS
SC33DT01..SC33DT99
ENDDEFAULTLUS
USSTCP USSTEST1 ; Use USSTABLE USSTEST1
ALLOWAPPL SC3* ; Netview and TSO
ALLOWAPPL NVAS* QSESSION ; session mngr queues back upon CLSDST
ALLOWAPPL TSO* DISCONNECTABLE ; Allow all users access to TSO
ALLOWAPPL * ; Allow all applications that have not been
; previously specified to be accessed.
ENDVTAM
As you can see in the example, when AT-TLS is chosen as the TLS protection mechanism for TN3270, several of the configuration parameters that serve as input to the direct System SSL integration are no longer used. AT-TLS obtains these values from the relevant AT-TLS policies.
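To show where those values now live, the following is a minimal sketch of a corresponding AT-TLS policy fragment for the Policy Agent. The rule and action names, the key ring, and the attribute selection are illustrative assumptions only; the exact statements are documented in the z/OS Communications Server IP Configuration Reference.

```
TTLSRule                     TN3270_Server
{
  LocalPortRange             4992          # Matches the TTLSPORT in the TN3270 profile
  Direction                  Inbound
  TTLSGroupActionRef         grpEnabled
  TTLSEnvironmentActionRef   envTN3270
}
TTLSGroupAction              grpEnabled
{
  TTLSEnabled                On
}
TTLSEnvironmentAction        envTN3270
{
  HandshakeRole              Server        # Takes the place of CLIENTAUTH in the profile
  TTLSKeyringParms
  {
    Keyring                  TCPIP/SharedRing1   # Takes the place of the KEYRING statement
  }
}
```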
 
Recommended reading: For more information about configuring AT-TLS for Telnet 3270, see IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 1 Base Functions, Connectivity, and Routing, SG24-7996.
2.2.4 Things to keep in mind when defining certificates
Digital certificates play an important role in network security. Certificates are used by the TLS/SSL and IKE protocols to authenticate security endpoints, such as servers and clients (TLS/SSL) and IP nodes (IKE). We briefly describe what a certificate is and then lay out important considerations when certificates are defined in your environment.
A certificate is a digital document that proves the identity of the certificate owner and binds that identity to a specific public/private key pair. The commonly used type of certificate is X.509, standardized by the IETF (see RFC 2459). Certificates make use of public-key cryptography. A public key is contained in every certificate. The corresponding private key resides in the key database of the owner of the certificate.
The owner of a certificate is uniquely defined using an X.500 distinguished name (DN), a hierarchical representation of the owner (typically within the enterprise hierarchy represented in LDAP). The DN defining the owner of a certificate is called the Subject DN. The DN defining the issuer of the certificate (that is, the trusted third party) is called the Issuer DN. The issuer signs the certificate using its private key. The public key of the issuer is contained in its own certificate, which must be available to all communicating partners. Anybody can then verify the validity of the certificate by using the issuer's public key.
Choosing encryption algorithms when defining certificates
Be aware that a common mistake made when working with certificates is to choose signature algorithms or key lengths that might not be supported by all participating products. These two parameters must be supported by every product involved in secure communication that uses certificates. If a peer does not support the signature algorithm or key length in your certificate, you will be unable to establish a connection with that peer. To prepare in advance for situations like these, make your mandatory algorithms a policy to which all application owners and your administration agree. See 2.2.2, “Defining a cryptography strategy within your organization” on page 73 for more information.
Default key length to be used when creating certificates
Be aware that the default key length offered by z/OS certificate management functions for public/private key pairs might not be sufficient for your individual security needs. For the RACDCERT command in RACF, the default is 1024 bits for a private RSA key. This is not sufficient where strong security is required, as we explained in 2.2.1, “Choosing appropriate cryptographic algorithms for network security” on page 69. Make sure that your certificates use a key length of at least 2048 bits when using RSA (which is most common). The documentation provides guidance about how to relate RSA key lengths to those of other algorithms.
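If you create certificates with the RACDCERT command rather than through the panels, you can make the stronger key length explicit. The following sketch assumes a hypothetical user ID, distinguished name, labels, and signing CA; the SIZE(2048) operand overrides the 1024-bit default:

```
RACDCERT ID(TN3270) GENCERT                           +
         SUBJECTSDN(CN('tn3270.example.com')          +
                    O('Example Corp')                 +
                    C('US'))                          +
         WITHLABEL('TN3270 Server Cert')              +
         SIZE(2048)                                   +
         SIGNWITH(CERTAUTH LABEL('Internal CA'))
```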
Figure 2-11 shows a screen capture of the RACF panels, option 7.1, where a new certificate is created and the default key length of 1024 is presented for non-ECC keys. It is common for users to skip these fields and accept the default value.
Figure 2-11 RACF certificate creation panels, default key length
PKI Services for z/OS also uses a default key length of 1024 bits for non-ECC keys. If you are using PKI Services in your z/OS environment, check your templates for the default key lengths.
If you are using the gskkyman utility in your installation, it presents the user with a selection of different certificate types, including several key lengths, so silently accepting a default is not possible with this utility. Figure 2-12 shows the certificate type selection panel for gskkyman.
Figure 2-12 gskkyman certificate type selection panel
Renewing keys
When a certificate approaches its expiration date, you can renew the certificate and continue using it. You can choose to renew the certificate using the same private key, thereby extending the life of the private key. Or you can retire the private key and replace it with a new private key (also called certificate rekeying or key rollover).
A certificate renewal must be performed before the certificate expires; after it has expired, renewal is not possible. Also, the certificate must not be revoked, because a revoked certificate is unusable.
When you renew a certificate using a new private key, you retire the private key and replace it with a new one. This process is commonly called certificate rekeying or key rollover. You choose this option to prevent a private key from being overused. (The more a key is used, the more susceptible it is to being broken and recovered by an unintended party.)
All information in the renewed certificate is updated to reflect the renewal, including the key ring connection information. After you retire and replace the old certificate, you can now begin to use the new certificate and its private key. You can continue to use the old, retired certificate until it expires to verify previously generated signatures. However, you cannot use the retired certificate to create new signatures. Additionally, do not connect the retired certificate to any key rings as the default certificate.
In conclusion, the better approach is to always rekey when renewing a certificate, resulting in a new private/public key pair. Reusing the same private key every time a certificate is renewed is comparable to never changing a password. Reusing a password over and over, which is also against password policy in most organizations, weakens security; the same holds for private keys in certificates. It provides an attack vector, because an intruder has more time to recover the private key and misuse it.
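In RACF, this rekeying pattern is supported by the RACDCERT REKEY and ROLLOVER operands. The following is a simplified sketch with hypothetical labels: REKEY generates a new certificate with a new key pair based on the old one, and ROLLOVER retires the old private key and moves the key ring connections to the new certificate. (A CA-signed certificate additionally requires a certificate request and signing step between the two commands.)

```
RACDCERT ID(TN3270) REKEY(LABEL('TN3270 Server Cert'))    +
         WITHLABEL('TN3270 Server Cert 2024')
RACDCERT ID(TN3270) ROLLOVER(LABEL('TN3270 Server Cert')) +
         NEWLABEL('TN3270 Server Cert 2024')
SETROPTS RACLIST(DIGTCERT) REFRESH
```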
Taking care of expiration dates and certificate validity
If you are not using PKI Services on z/OS or a similar solution to manage certificates, you are running the risk of missing certificate expiration dates in your installation.
When a certificate expires without someone taking proper preparatory steps, it can cause significant disruption to your system or application availability and can have a large impact on the users of those systems or applications. We have seen customer situations where the security administrator was called in an emergency because, for example, a server certificate in RACF had expired in the middle of the night or over a weekend. You want to avoid situations like these, because the resulting application downtime can do serious damage to your business.
Although RACF itself does not provide a mechanism to verify certificate expiration dates and warn a security administrator about approaching expirations, a health check was added in z/OS 2.1 to help in this area. IBM Health Checker for z/OS provides a check to make sure that certificate expiration dates are not missed. If you have implemented Health Checker for z/OS, make sure this check is performed regularly and that its results are examined. IBM Health Checker for z/OS V1R12 User's Guide, SC23-6843-00, provides guidance for implementing the product and the relevant checks. The certificate expiration check offers the following function:
RACF_CERTIFICATE_EXPIRATION identifies all certificates that have expired, identifies all certificates that are going to expire within the next few days (the number of days can be set as a parameter to the check), and ensures that the user has defined a proper baseline set of protections within the z/OS environment.
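Assuming the Health Checker started task uses the common name HZSPROC (an assumption; your installation may use a different name), the check can be run on demand and its status displayed with MODIFY commands such as the following:

```
F HZSPROC,RUN,CHECK=(IBMRACF,RACF_CERTIFICATE_EXPIRATION)
F HZSPROC,DISPLAY,CHECKS,CHECK=(IBMRACF,RACF_CERTIFICATE_EXPIRATION)
```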
It is also useful to carefully review all the current expiration dates in RACF. PKI Services for z/OS and other certificate management solutions might provide fixed templates for certificates. This means that, for example, a standard user certificate for TN3270E might be issued for one year only; it is then not possible to issue such a certificate for a longer period. IBM Security zSecure also provides templates for the creation of certificates since version 2.1. Using certificate template mechanisms enhances security, because it becomes easier to adhere to policies for certificate creation. As described in “Renewing keys” on page 81, private keys are in some ways analogous to passwords. Therefore, providing users or servers with certificates that are valid for a long time is comparable to providing them with passwords that never expire.
If you act as your own CA within your organization or plan to, you must also pay close attention to the expiration dates of your CA certificates. Keep in mind that no certificate can have a validity longer than the signing certificate. If your CA certificate is valid for five years, you will not be able to sign certificates that are valid for more than five years. To avoid putting your certificate infrastructure at risk and to make sure that you can always sign certificates using your specified templates, make sure that the CA certificate is valid for at least double the validity period of your user and server certificates. You must find a balance between operational requirements (certificates need to be created) and security requirements (you do not want keys that are active forever).
RACF notes on digital certificates creation, change, and activation
Activate and RACLIST the DIGTCERT class if you use digital certificates with applications that require high performance, such as applications that access IBM WebSphere Application Server. If the DIGTCERT class is not RACLISTed, digital certificates can still be used but performance might be impacted when applications that retrieve certificates from RACF must wait while RACF retrieves them from the RACF database rather than from virtual storage.
After creating a new digital certificate, refresh the DIGTCERT class by issuing the SETROPTS RACLIST(DIGTCERT) REFRESH command. If you do not refresh the RACLISTed DIGTCERT profiles, RACF will still use the new digital certificate. However, performance might be impacted because applications that retrieve certificates from RACF will wait while RACF retrieves the new certificate from the RACF database.
 
Restriction: Any RACLISTed digital certificates that you alter, re-add or delete will not reflect your changes until you refresh the DIGTCERT class. This is because RACF uses RACLISTed profiles before profiles in the RACF database. Therefore, to make your changes effective, refresh the DIGTCERT class.
In general, after running any of the RACDCERT commands that update certificates or key rings, if the DIGTCERT and DIGTRING classes are RACLISTed, you must issue the following command:
SETROPTS RACLIST(DIGTCERT DIGTRING) REFRESH
PKI Services on z/OS versus RACDCERT for certificate management
We have described the PKI Services for z/OS component in detail in 2.1.9, “PKI services” on page 65. PKI Services z/OS is one possible way of managing certificates on z/OS. Nevertheless, the most common approach to certificate management on z/OS is the use of the RACF RACDCERT command. We provide a short guideline on how to decide which approach to managing certificates is the right one for you.
PKI Services is the choice to make when it comes to managing vast numbers of certificates on the mainframe. It provides all of the functions required of a full certificate authority and has several capabilities and management functions that create an advantage over using RACDCERT in these situations. Also, PKI Services is compliant with the PKIX standard for PKIs; if you are subject to government or similar regulation, that compliance might be required anyway.
If you have already deployed a PKI in your organization where users and administrators can create and retrieve certificates themselves, consider integrating PKI Services into your existing infrastructure for host-related certificates as a sub-CA. This way, you can provide the PKI users with functions similar to those they are used to. PKI Services offers a highly customizable, web-based user interface.
We have also seen customers who were struggling to scale up their existing PKI solutions and were not able to handle fast-growing numbers of certificates with appropriate performance. In these situations, it makes great sense to evaluate a change in the infrastructure to PKI Services. This is especially the case when existing PKI solutions were developed long ago, when performance and scalability for huge numbers of certificates (in some industries, this can mean millions of certificates) were not taken into account. These solutions might not scale to the level you need. In modern network security environments, most communication parties use public key cryptography, for example: network devices, routers, firewalls, security gateways, users, smart cards, PIN-code readers, credit card readers, servers, applications, and so on. The number of certificates in these environments can grow very fast.
Several customer situations around the world have proven PKI Services to deliver the necessary performance in demanding environments.
For a usage comparison between PKI Services z/OS and RACDCERT certificate management, we provide an overview in Table 2-2.
Table 2-2 PKI Services versus RACDCERT for certificate management
Use RACDCERT when:
 – Smaller numbers of certificates are to be generated
 – You can keep track of expiration dates yourself1
 – Certificates are centrally generated and distributed from z/OS
 – You do not need a certificate revocation checking mechanism
 – You only need basic extensions in your certificates
Use PKI Services z/OS when:
 – You need to generate a large number of certificates
 – Notification of certificate expiration should be sent automatically
 – You want to provide self-service solutions to users
 – Revoked certificates should be posted to a Certificate Revocation List (CRL)
 – You need extended support for extensions

1 Note: If you are using the z/OS Health Checker in z/OS 2.1, there is a check available that fulfils this requirement.
2.2.5 Guiding principles for IPSec
In 2.1.7, “IPSec” on page 54, we introduced various protocols that are collectively known as IPSec, including AH, ESP, and IKE. Next, we lay out guiding principles for the implementation of IPSec.
Comparison of dynamic and static tunnels for IPSec
We now describe the differences between dynamic mode tunneling and static tunneling with IPSec.
As you can see from Table 2-3, when static tunnels are used, secret keys must be defined locally at both endpoints of the communication. They are hardcoded and cannot be changed dynamically during an IPSec session. On the other hand, dynamic tunnels make use of the IKED to agree on shared secret keys between endpoints and to exchange keys dynamically during a communication session. This adds to the security of your IPSec implementation.
Table 2-3 Comparison of dynamic and static tunnels for IPSec
Dynamic tunnel mode:
 – SA attributes are agreed to through IKE negotiation
 – Cryptographic keys are established through IKE negotiation
 – Provides authentication through pre-shared keys or digital signatures
 – Cryptographic keys are automatically refreshed in a nondisruptive manner
Static tunnel mode:
 – SA attributes are agreed through out-of-band communication and must be predefined locally
 – Cryptographic keys are established through out-of-band communication and must be predefined locally
 – Provides authentication through shared secret keys
 – Cryptographic keys must be refreshed out-of-band, and the refresh requires deactivation of the VPN to take effect
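In the z/OS Policy Agent IPSec configuration, this difference shows up in which statements you code. As an illustrative sketch (the rule and action names are hypothetical, and the referenced endpoint and offer definitions are omitted), a dynamic tunnel is driven by a KeyExchangeRule negotiated by the IKE daemon, whereas a static tunnel would instead reference a manual VPN action with locally coded keys:

```
KeyExchangeRule              zOS_to_Gateway
{
  LocalSecurityEndpointRef   Local_zOS
  RemoteSecurityEndpointRef  Remote_GW
  KeyExchangeActionRef       IKE_DigSig
}
KeyExchangeAction            IKE_DigSig
{
  HowToAuthMe                RsaSignature   # digital signature mode
  KeyExchangeOfferRef        SHA2_AES_Offer
}
```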
Comparison of protocol versions
The z/OS Communications Server supports IKEv2 in addition to IKEv1. IKEv2 has better performance and operational characteristics than IKEv1. Many government agencies expect the vendors who do business with them to use IKEv2 to establish secure communications with them. Consider the following differences:
IKEv2 does not interoperate with IKEv1.
IKEv1 supports AH in combination with ESP, but IKEv2 does not.
The z/OS IKE daemon can support both IKEv1 and IKEv2 simultaneously.
The z/OS IPSec policy file can contain both IKEv1 and IKEv2 policies.
Each TCP/IP stack can support tunnels activated by IKEv1 and IKEv2.
Security Association (SA) terminology differences:
 – IKEv1
 • Phase 1 SA (ISAKMP SA)
 • Phase 2 SA (IPSec SA)
 – IKEv2
 • Phase 1 SA (IKE SA)
 • Phase 2 SA (Child SA)
The negotiation modes used by IKEv1 and IKEv2:
 – IKEv1
 • Main
 • Aggressive
 • Quick
 – IKEv2
 • Initial exchanges
 • Subsequent exchanges
The encapsulation mode (tunnel or transport)
 – IKEv1: A negotiated attribute of the SA; the local value must match the peer's value
 – IKEv2: Negotiated based on topology and user preference; local and peer can have different values
Advantages of IKEv2 over IKEv1
IKEv2 is a more current implementation of the IKE concepts. The protocol standard has several advantages over IKEv1:
IKEv2 requires fewer network flows in most cases compared to IKEv1.
IKEv2 can do rekeying without reauthentication.
IKEv2 has built-in dead-peer detection.
IKEv2 supports avoidance of duplicate or orphaned tunnels.
IKEv2 supports ECDSA signature mode.
All IKEv2 messages have an associated response, unlike IKEv1, which uses several notification messages that do not have an associated response.
IKEv2 has solved several other problems in IKEv1.
Comparison of dynamic tunnel authentication methods
As we described in “Internet Key Exchange protocol” on page 57, when using dynamic tunnels for IPSec, you can either use a pre-shared secret to authenticate the peers of a communication, or you can use digital signature mode, which makes use of X.509 certificates and RSA or ECDSA. Table 2-4 compares the two modes of authentication and lists advantages and disadvantages.
Table 2-4 Comparison of dynamic tunnel authentication methods
Pre-shared keys
 – How authentication is performed: by creating hashes over exchanged information
 – Advantages: simple
 – Disadvantages: the shared secret must be distributed before IKE negotiations; key refresh must be done manually; identity depends on what is acceptable to the peer platform (some platforms permit IP address only)
Digital signature (RSA or ECDSA)
 – How authentication is performed: by signing hashes over exchanged information
 – Advantages: can use IDs other than IP address; partner certificates do not need to be available before IKE negotiations; better security; key refreshes are a natural part of an X.509 certificate infrastructure
 – Disadvantages: requires certificate operations; requires an X.509 certificate infrastructure (PKI) and its management; more complex configuration
2.2.6 OpenSSH on z/OS UNIX, z/OS-dependent features implementation
As we have seen in 2.1.8, “OpenSSH on z/OS” on page 60, OpenSSH on z/OS provides some features that are specific to z/OS and make use of z/OS's own security infrastructure. Let us now take a closer look at those features and describe their implementation and benefits in more detail. We have seen several customers who use OpenSSH on z/OS but were unaware that they can use some z/OS-specific security features with OpenSSH. Our guiding principle here is to draw attention to those features.
 
Additional material: For more information about setting up OpenSSH z/OS specific features for z/OS, see IBM Ported Tools for z/OS: OpenSSH User's Guide, SA23-2246-00
z/OS specific configuration files for OpenSSH
z/OS obtains z/OS-specific system-wide OpenSSH client configuration data only from the /etc/ssh/zos_ssh_config configuration file. It contains sections separated by Host specifications; each section is applied only to hosts that match one of the patterns given in the specification. The matched host name is the one given on the command line.
z/OS obtains z/OS-specific per-user client configuration data in the following order:
1. User-specific client options from either of these options:
 – The command-line specification using the -o option of the scp, sftp, or ssh command.
 – The file specified with the _ZOS_USER_SSH_CONFIG variable. The default is ~/.ssh/zos_user_ssh_config.
2. System-wide client options from the /etc/ssh/zos_ssh_config file.
z/OS obtains z/OS-specific daemon configuration data in the following order:
1. Command-line specification using the sshd -o option.
2. A configuration file specified with the _ZOS_SSHD_CONFIG environment variable. The default is /etc/ssh/zos_sshd_config. For each keyword, the first obtained value is used.
 
Keywords: z/OS-specific keywords cannot be specified in the sshd_config and ssh_config configuration files, such as the system-wide configuration file or the user-defined configuration file that is specified with the sshd -f option.
RACF key ring support for OpenSSH on z/OS
Using UNIX files to store the keys is the common method supported on all OpenSSH implementations. Consider which other OpenSSH hosts you will be communicating with; that is, are they z/OS or non-z/OS? Also consider whether the z/OS systems are using key rings. Nevertheless, intermixing z/OS key rings and UNIX files is possible.
Key rings provide commonality with other z/OS products that store keys in the security product. They can be real or virtual key rings. To use SAF key rings, you must have RACF or an alternative SAF-compliant security product. Appropriate authority must also be given to user IDs to manage the key rings.
Key rings might provide additional security in your environment. If you do not have a sophisticated UNIX security environment in your installation, authentication keys for OpenSSH might be in danger of exposure. There is no strong mechanism to control whether OpenSSH administrators use key-based authentication rather than password authentication. Storing OpenSSH authentication keys in RACF key rings, rather than in UNIX files, might give your administration team more control over the use of keys.
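As an illustration of what this looks like, the z/OS-specific configuration files accept key ring keywords in place of UNIX key files. The user IDs, ring names, and labels below are examples only; see IBM Ported Tools for z/OS: OpenSSH User's Guide for the exact syntax:

```
# In the z/OS-specific daemon configuration (zos_sshd_config):
# read the SSH host key from a SAF key ring instead of a UNIX file
HostKeyRingLabel      "SSHDAEM/SSHDring host-key-rsa"

# In the z/OS-specific per-user client configuration (zos_user_ssh_config):
# read the user's identity key from a SAF key ring
IdentityKeyRingLabel  "USER1/SSHring user1-key-rsa"
```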
SMF records for OpenSSH on z/OS
Providing an audit trail for system logon and for data transfer to and from the system is dictated by common regulatory standards across all industries. Because OpenSSH on z/OS is a common way of accessing the system and transferring files to and from it, especially for users in UNIX System Services, the audit trail must be available here as well. SMF is the standard way on z/OS systems to provide audit information. Therefore, the recommendation is to use the SMF records provided by OpenSSH on z/OS to gather the necessary information.
OpenSSH collects SMF Type 119 records for file transfer activity and login failure information. You can control the collection of these records by using the configuration keywords ClientSMF and ServerSMF in z/OS-specific client and daemon configuration files, respectively. These keywords also indicate whether system-wide SMF record exit IEFU83 or IEFU84 receives control. The following list provides an overview of the records created:
SMF server transfer completion record (Type 119 - Subtype 96)
The server transfer completion records are collected when the sftp-server (regular or internal-sftp) or the server side of scp completes processing of one of the following file transfer subcommands:
 – Creating, uploading, downloading, renaming, or removing files
 – Creating and removing directories
 – Changing the file permissions, UIDs, or GIDs
 – Creating symbolic links
SMF client transfer completion record (Type 119 - Subtype 97)
The client transfer completion records are collected when the client side of sftp or scp completes processing of one of the following file transfer operations:
 – Uploading files
 – Downloading files
SMF login failure record (Type 119 - Subtype 98)
Login failure records are collected after each unsuccessful attempt to log in to the sshd daemon. A login failure record is collected for each authentication method and attempt that fails. A login failure reason code within the SMF record provides information about the cause of the login failure. Only failures during user authentication are collected with the following exception: Records are not collected for a none authentication failure if it is the first authentication method attempted.
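As a sketch of enabling this collection with the ClientSMF and ServerSMF keywords, the z/OS-specific configuration files might contain the following entries. The TYPE119_U83 value shown is an assumption about the setting name; verify it against your z/OS OpenSSH level:

```
# /etc/ssh/zos_ssh_config (client side): collect client transfer
# completion records (subtype 97)
ClientSMF TYPE119_U83

# /etc/ssh/zos_sshd_config (daemon side): collect server transfer
# completion (subtype 96) and login failure (subtype 98) records
ServerSMF TYPE119_U83
```

The alternative value TYPE119_U84 routes the records through the IEFU84 exit instead of IEFU83. Type 119 records must also be allowed in the active SMFPRMxx member, for example with SYS(TYPE(119)).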
Multilevel Security in OpenSSH for z/OS
Some installations might be obligated to implement multilevel security within z/OS RACF. This is mostly required for high-security environments that are operated under a specific security standard, such as the Common Criteria Operating System Protection Profile (OSPP).
The OpenSSH on z/OS daemon (sshd) can be used on a multilevel-secure system to control a user's security label at login. Review z/OS Planning for Multilevel Security and the Common Criteria before using the daemon on a multilevel-secure system.
The OpenSSH daemon will attempt to derive a security label from the user's port of entry, as defined in a SERVAUTH NETACCESS profile. To successfully log in to a multilevel-secure system, the login user ID must be permitted to the security label defined in the NETACCESS profile for the client IP address. These checks are performed for any user invoking ssh, scp, or sftp to perform remote operations on the multilevel-secure system. For more information about NETACCESS profiles and running daemons in a multilevel-secure environment, see z/OS Communications Server: IP Configuration Guide, SC27-3650-00.
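A minimal sketch of this port-of-entry mapping follows. The zone name (ZONEA), system and stack names (SYS1, TCPIP), subnet, user ID, and security label (SECLAB1) are all hypothetical examples:

```
NETACCESS INBOUND                          ; TCP/IP profile statement
  192.168.1.0/24  ZONEA                    ; map client subnet to zone ZONEA
ENDNETACCESS

RDEFINE SERVAUTH EZB.NETACCESS.SYS1.TCPIP.ZONEA SECLABEL(SECLAB1)
PERMIT EZB.NETACCESS.SYS1.TCPIP.ZONEA CLASS(SERVAUTH) ID(USER1) ACCESS(READ)
SETROPTS RACLIST(SERVAUTH) REFRESH
```

With these definitions, a login from the 192.168.1.0/24 subnet is assigned security label SECLAB1, and the login user ID must be permitted both to the NETACCESS profile and to the security label itself.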

4 More information about the OpenBSD Project is available at http://www.openssh.com/.
5 Information is available at this website: http://datatracker.ietf.org/wg/pkix/documents/