Synergy with System z
As with all previous versions, DB2 11 for z/OS takes advantage of the latest improvements in the platform. DB2 11 increases the synergy with System z hardware and software to provide better performance, more resilience, and better function for an overall improved value. In addition, DB2 11 benefits from advances in large real memory support, faster processors, and better hardware compression.
DB2 for z/OS is designed to take advantage of the System z platform to provide capabilities that are unmatched in other database software products. The DB2 development team works closely with the System z hardware and software teams to take advantage of existing System z enhancements and to drive many of the enhancements that are available on the System z platform.
This chapter describes the synergy between DB2 11 and the System z hardware and software that removes constraints for growth, improves reliability and availability, and continues to improve total cost of ownership and performance. It also outlines features and functions of the IBM zEnterprise platform and z/OS V2R1 that are expected to benefit DB2 for z/OS.
2.1 Synergy with IBM zEnterprise System
The IBM zEnterprise System delivers unique value and industry-leading capabilities that allow you to maximize the business value of your information. The IBM zEnterprise EC12 (zEC12) is the cornerstone of the latest zEnterprise System and the flagship of the IBM Systems portfolio. Its superscalar design allows the zEC12 to deliver record levels of capacity. It is powered by 120 of the world’s most powerful microprocessors, running at 5.5 GHz, and is capable of executing more than 78,000 million instructions per second (MIPS).
DB2 for z/OS takes advantage of the following features available with the zEC12:
2.1.1 Faster CPU speed
The CPU speed of the zEC12 has been measured at 1.25 times the speed of the z196. The improved CPU speed of the zEC12 provides the following performance improvements over the z196 for DB2:
20-28% CPU reduction for OLTP workloads
25% CPU reduction for Query and Utility workloads
1-15% less compression overhead with DB2 data
2.1.2 More system capacity
The zEC12 provides up to 50% more total capacity than the z196. This increased capacity makes the zEC12 an excellent choice for growing either horizontally or vertically within one server. The zEC12 is also a good choice if you are planning a large-scale consolidation because of its ability to provide secure data serving and to support mission-critical transaction processing. DB2 11 provides scalability features, as described in Chapter 3, “Scalability” on page 23. The zEC12 provides the synergy to take advantage of these scalability enhancements in DB2 11.
2.1.3 zEC12 hardware features
DB2 11 takes advantage of the following hardware features of the zEC12.
Large frame area (LFAREA)
The large frame area is used for the fixed 1 MB large page frames and fixed 2 GB large page frames. Using large page frames can improve performance for some applications by reducing the overhead of dynamic address translation. This improvement is achieved by each large frame requiring only one entry in the translation lookaside buffer (TLB), as compared to the larger number of entries that are required for an equivalent number of 4 KB page frames. A single TLB entry improves TLB coverage for users of large page frames by increasing the hit rate and decreasing the number of TLB misses that an application incurs.
 
TLB: Memory addresses that a process references are virtual addresses and must be translated to physical addresses. The TLB is a relatively small cache area that is used to perform this address translation.
Large pages are a performance improvement feature for some cases, but switching to large pages is not recommended for all workloads. Large pages provide performance value to a select set of applications that can generally be characterized as memory access-intensive and long-running. These applications meet the following criteria:
They must reference large ranges of memory.
They tend to exhaust the private storage areas that are available within the 2 GB address space (such as IBM WebSphere®), or they use private storage that is above the 2 GB address space (such as IBM DB2).
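To put the TLB coverage benefit in perspective, consider mapping a 2 GB range of storage, such as a long-term page-fixed buffer pool of that size. The following figures are simple arithmetic, not measurements:
2 GB mapped with 4 KB frames     = 524,288 translation entries
2 GB mapped with 1 MB frames     =   2,048 translation entries
2 GB mapped with one 2 GB frame  =       1 translation entry
Because the TLB holds only a limited number of entries, the fewer entries that are needed to map a given range of storage, the more likely it is that a required translation is already in the TLB.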
Flash memory and pageable 1 MB page frames
The zEC12 supports an optional hardware feature called Flash Express memory cards. These memory cards are supported in an I/O drawer with other I/O cards. The cards come in pairs for improved availability, and no HCD/IOCP definition is required. Flash memory is assigned to partitions the same way that main memory is assigned, and each partition’s flash memory is isolated, similar to main memory. You can dynamically increase the maximum amount of flash memory on a partition, and you can dynamically configure flash memory into and out of the partition.
You can use flash memory to solve many different problems. Flash memory is much faster than spinning disk but much slower than main memory, and it consumes less power than either.
With the combination of Flash Express installed on a zEC12 and the pageable 1 MB large page frame support in z/OS V1R13, DB2 takes advantage of the large page frame support by allocating internal control blocks (PMBs) using 1 MB pageable storage. These large page frames can be paged to and from Flash Express, and performance might be improved due to a reduction in TLB misses and an increase in the TLB hit rate.
Flash memory can also be used to improve SVC dump data capture time. It removes the requirement for pageable link pack area (PLPA) and common page data sets when used for cold start IPLs.
This feature requires zEC12 (2827) hardware with Flash Express installed and z/OS V1R13 and above with requisite PTFs (FMID JBB778H). APARs PM85944 and PM90486 retrofit this feature to DB2 10 for z/OS.
2 GB large page frames
A 2 GB page frame is a memory page that is 2048 times larger than a 1 MB page and 524,288 times larger than the ordinary 4 KB base page. 2 GB large page frames allow for a single TLB entry to fulfill many more address translations than either a large page or an ordinary base page. 2 GB large page frames provide exploiters with much better TLB coverage and, therefore, potentially allow the following benefits:
Better performance by decreasing the number of TLB misses that an application incurs
Less time spent converting virtual addresses into physical addresses
Less real storage used to maintain DAT structures
Note that 2 GB large pages require z/OS V2R1 and the hardware features of the zEC12.
The Buffer Manager component of DB2 uses a 2 GB frame size only when there are at least 2 GB of buffer storage to allocate and when the buffer pool is defined as long-term page fixed. For example, if you specify a small buffer pool size, such as VPSIZE=20000, a 2 GB frame is not used. If you specify VPSIZE=524288 for a 4 KB buffer pool, you are requesting a buffer pool that can contain 524,288 pages that are 4 KB in size, for a total of 2,147,483,648 bytes, which is exactly 2 GB. In this case, you get exactly one 2 GB frame allocated. If you specify VPSIZE=600000, you get one 2 GB frame, with the remainder of the buffer pool allocated in 1 MB frames up to the specified size.
For DB2 to take advantage of 2 GB large pages, the ALTER BUFFERPOOL command now includes the FRAMESIZE attribute. The valid values are 4K, 1M, and 2G. Example 2-1 runs the ALTER BUFFERPOOL command to establish a page-fixed buffer pool with 2 GB pages.
Example 2-1 ALTER BUFFERPOOL command to use 2 GB frame size
DB2 COMMANDS SSID: DB1D
===>
Position cursor on the command line you want to execute and press ENTER
Cmd 1 ===> -ALTER BUFFERPOOL(BP4) VPSIZE(600000) FRAMESIZE(2G) PGFIX(YES)
Cmd 2 ===>
Cmd 3 ===>
Example 2-2 shows the results of the ALTER command. The DSNB543I message shows that the PGFIX attribute is set to YES. The DSNB522I messages show that the VPSIZE and FRAMESIZE attributes have been set.
Example 2-2 Results of ALTER BUFFERPOOL command to change FRAMESIZE
DSNB522I -DB1D VPSIZE FOR BP4 HAS BEEN SET
DSNB543I -DB1D THE PGFIX ATTRIBUTE IS ALTERED FOR
BUFFER POOL BP4
CURRENT ATTRIBUTE = YES
NEW ATTRIBUTE = YES
THE NEW ATTRIBUTE IS IN PENDING STATE.
DSNB522I -DB1D FRAME FOR BP4 HAS BEEN SET
DSN9022I -DB1D DSNB1CMD '-ALTER BUFFERPOOL' NORMAL COMPLETION
***
To validate that the frame size was set properly and that DB2 uses a 2 GB frame, Example 2-3 issues the DISPLAY BUFFERPOOL command.
Example 2-3 DISPLAY BUFFERPOOL command to show 2 GB frame size
DB2 COMMANDS SSID: DB1D
===>
Position cursor on the command line you want to execute and press ENTER
Cmd 1 ===> -DISPLAY BUFFERPOOL(BP4) DETAIL
Example 2-4 shows the results of the DISPLAY command. Note that the preferred frame size is 2 GB. However, no buffers have yet been allocated to a 2 GB frame because no DB2 workload that uses this buffer pool has been run since the buffer pool was altered.
Example 2-4 Results of DISPLAY BUFFERPOOL command showing 2 GB frame defined
DSNB401I -DB1D BUFFERPOOL NAME BP4, BUFFERPOOL ID 4, USE COUNT 0
DSNB402I -DB1D BUFFER POOL SIZE = 600000 BUFFERS AUTOSIZE = NO
VPSIZE MINIMUM = 0 VPSIZE MAXIMUM = 0
ALLOCATED = 0 TO BE DELETED = 0
IN-USE/UPDATED = 0
DSNB406I -DB1D PGFIX ATTRIBUTE -
CURRENT = YES
PENDING = YES
PAGE STEALING METHOD = LRU
DSNB404I -DB1D THRESHOLDS -
VP SEQUENTIAL = 80
DEFERRED WRITE = 30 VERTICAL DEFERRED WRT = 5, 0
PARALLEL SEQUENTIAL =50 ASSISTING PARALLEL SEQT= 0
DSNB546I -DB1D PREFERRED FRAME SIZE 2G
0 BUFFERS USING 2G FRAME SIZE ALLOCATED
Next, a table is created with a long row (2021 bytes in this case) that is not compressed. For this example, 3.3 million rows are inserted into the table, and then a SELECT * with no WHERE clause is issued against the table to ensure that all the rows, and all the columns of each row, are read.
After running the SQL statements, a DISPLAY BUFFERPOOL command is issued again. Example 2-5 shows the results.
Example 2-5 Results of DISPLAY BUFFERPOOL command showing 2 GB and 1 MB frame allocation
DSNB401I -DB1D BUFFERPOOL NAME BP4, BUFFERPOOL ID 4, USE COUNT 1
DSNB402I -DB1D BUFFER POOL SIZE = 600000 BUFFERS AUTOSIZE = NO
VPSIZE MINIMUM = 0 VPSIZE MAXIMUM = 0
ALLOCATED = 600000 TO BE DELETED = 0
IN-USE/UPDATED = 0
DSNB406I -DB1D PGFIX ATTRIBUTE -
CURRENT = YES
PENDING = YES
PAGE STEALING METHOD = LRU
DSNB404I -DB1D THRESHOLDS -
VP SEQUENTIAL = 80
DEFERRED WRITE = 30 VERTICAL DEFERRED WRT = 5, 0
PARALLEL SEQUENTIAL =50 ASSISTING PARALLEL SEQT= 0
DSNB546I -DB1D PREFERRED FRAME SIZE 2G
524288 BUFFERS USING 2G FRAME SIZE ALLOCATED
DSNB546I -DB1D PREFERRED FRAME SIZE 2G
19200 BUFFERS USING 1M FRAME SIZE ALLOCATED
DSNB546I -DB1D PREFERRED FRAME SIZE 2G
56512 BUFFERS USING 4K FRAME SIZE ALLOCATED
Note that the buffer pool is defined as 600,000 buffers, which is a little more than 2 GB. Because a large enough workload was run to use more than 2 GB of buffer pool storage, DB2 allocated 524,288 pages to a 2 GB frame, which amounts to exactly 2 GB of storage. DB2 then allocated 19,200 pages to 1 MB frames, which amounts to 75 1 MB frames. The remaining 56,512 pages were allocated to 4 KB frames.
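The allocation can be reconciled with simple arithmetic:
  600,000 buffers requested (600,000 x 4 KB, approximately 2.29 GB)
- 524,288 buffers backed by one 2 GB frame (exactly 2 GB)
=  75,712 buffers remaining, of which
   19,200 buffers are backed by 75 frames of 1 MB (19,200 x 4 KB = 75 MB)
   56,512 buffers are backed by 4 KB frames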
You might have expected all the storage above 2 GB to be allocated to 1 MB frames. However, DB2 uses internal calculations to allocate what is left over after the 2 GB allocation, and the result does not always match exactly what is defined to the system. For this example, the DISPLAY VIRTSTOR,LFAREA command was run to see the maximum possible allocation for each frame size.
Example 2-6 shows the results of the DISPLAY command. In this test case, a maximum of 100 page frames can be used for a frame size of 1 MB. Based on the internal calculation, DB2 allocated 75 page frames of 1 MB each (19200 * 4096 / 1024 / 1024), with the remaining 56,512 pages allocated using a 4 KB frame size.
Example 2-6 D VIRTSTOR command to show the maximum allocation of 2 GB and 1 MB frames
D VIRTSTOR,LFAREA
IAR019I 18.51.21 DISPLAY VIRTSTOR 200
SOURCE = 00
TOTAL LFAREA = 100M , 2G
LFAREA AVAILABLE = 20M , 0G
LFAREA ALLOCATED (1M) = 80M
LFAREA ALLOCATED (4K) = 0M
MAX LFAREA ALLOCATED (1M) = 80M
MAX LFAREA ALLOCATED (4K) = 0M
LFAREA ALLOCATED (PAGEABLE1M) = 0M
MAX LFAREA ALLOCATED (PAGEABLE1M) = 0M
LFAREA ALLOCATED NUMBER OF 2G PAGES = 1
MAX LFAREA ALLOCATED NUMBER OF 2G PAGES = 1
2.2 Synergy with IBM System z and z/OS
This section discusses interfaces that DB2 11 uses to take advantage of the synergy potential between the System z hardware and the z/OS operating system software. A number of DB2 11 capabilities rely on functions that are provided in specific versions of z/OS. DB2 11 takes advantage of the following features available in z/OS:
2.2.1 AUTOSIZE options VPSIZEMIN and VPSIZEMAX
The AUTOSIZE attribute of the ALTER BUFFERPOOL command specifies whether DB2 uses Workload Manager (WLM) services, if available, to increase the buffer pool size automatically as appropriate.
The VPSIZEMIN and VPSIZEMAX attributes have been added to the ALTER BUFFERPOOL command to allow more control. These attributes specify the minimum and maximum sizes for a buffer pool when AUTOSIZE(YES) is in effect, allowing DB2 to grow or shrink the buffer pool beyond the previous limit of +/- 25% of its size. These attributes require z/OS V2R1.
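As an illustration only (the buffer pool name and sizes shown here are made up for this example, not taken from a measured system), a command similar to the following enables automatic sizing for a buffer pool while bounding how far DB2 can adjust it:
-ALTER BUFFERPOOL(BP8) AUTOSIZE(YES) VPSIZEMIN(100000) VPSIZEMAX(400000)
With these settings, WLM-driven adjustments can move the buffer pool size anywhere between 100,000 and 400,000 buffers.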
2.2.2 1 MB page frames for DB2 execution code
In z/OS V2R1, the execution code for DB2 itself can be backed by pageable 1 MB page frames. This support is available only when Flash Express is configured, and it can reduce the CPU cost associated with loading the code.
2.2.3 Improved performance of batch updates in data sharing
z/OS V2.1 with IBM DB2 11 for z/OS, running on zEC12 or zBC12 or later systems with CFLEVEL 18, is planned to take advantage of a function that allows batched updates to be written directly to disk without being cached in the coupling facility in an IBM Parallel Sysplex®. This function can help avoid application stalls that might sometimes occur during large concurrent batch updates.
When a page set is GBP-dependent and GBPCACHE CHANGED is used, both commits and deferred writes need to write their pages to the group buffer pool (GBP). If the I/O subsystem casts out pages from the GBP to DASD (or to a remote site) more slowly than deferred writes are filling up the GBP, commits are suspended until castout operations can free space in the GBP. Typically, the deferred writes done on behalf of batch updates are the culprit. In effect, DB2 is thrashing the coupling facility, because there is no value in writing these deferred-write pages to the GBP. DB2 11 solves this situation with the support of changes to z/OS.
The z/OS support for this function is also available on IBM zEnterprise 196 (z196) and zEnterprise 114 (z114) servers with CFLEVEL 17 and an MCL, and on z/OS V1.12 and z/OS V1.13 with the PTF for APAR OA40966.
This feature is described in more detail in 5.1, “Group buffer pool write-around protocol” on page 86.
2.2.4 Improved usability and consistency for security administration
DB2 11 for z/OS is designed to improve usability and consistency for security administration. z/OS V2.1 RACF, when used with DB2 11, is designed to provide consistency between DB2 and RACF access controls for bind and rebind under an owner’s authorization identifier, RACF security exit support for declared global temporary tables (DGTT), and support for automatic authorization statement cache refreshes when RACF profiles are changed. This is intended to make DB2 security administration easier.
Details on security enhancements can be found in Chapter 10, “Security” on page 239.
2.2.5 Log writing
As a performance improvement in DB2 11, log records are written without the need to first space switch to the xxxxMSTR address space. To support this change, log buffers must be moved from their current location in xxxxMSTR 31-bit private to common storage. Because the log buffers can be large, up to 400 MB, it is not practical to move the log buffers to ECSA because most systems would not have enough ECSA available for a single request of this size. The log buffers are moved to 64-bit common (HCSA).
The amount of HCSA used is roughly the size of the log buffers specified by the OUTBUFF parameter plus 15%. The SYS1.PARMLIB setting for HVCOMMON must be large enough to accommodate this size for each DB2 11 subsystem active on an LPAR. In addition, the buffers can reside in 1 MB page frames, if available. You might want to increase the SYS1.PARMLIB setting for LFAREA to allow for this allocation.
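As a rough sizing example (the numbers are illustrative, not a recommendation), if OUTBUFF is set to its maximum of 400,000 KB (approximately 400 MB), plan for about 400 MB x 1.15 = 460 MB of 64-bit common storage for the log buffers of that subsystem. With two such DB2 11 subsystems on the same LPAR, HVCOMMON must be able to accommodate roughly 920 MB for log buffers alone, in addition to any other users of 64-bit common storage.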
IFCID 225 includes statistics for the common storage used by log manager buffers and control structures.
This function is enabled in conversion mode (CM).
2.3 Using zIIP speciality processors
DB2 for z/OS began using zIIP specialty processors in V8 and continued to improve total cost of ownership (TCO) by further using zIIP engines in DB2 9 and DB2 10. DB2 11 continues this trend by providing additional zIIP workload eligibility, as described in this section.
zIIP is designed to help free general computing capacity and lower software costs for select DB2 workloads. The initial DB2 implementation of zIIP was targeted towards reducing the software costs for business intelligence (BI), enterprise resource planning (ERP), and customer relationship management (CRM) workloads on the mainframe. However, non-DB2 workloads can take advantage of zIIP as well.
The amount of redirect in each case varies based on workload characteristics.
The following DB2 11 for z/OS processing is authorized to execute on zIIP:1
Asynchronous processing that is executed under enclave SRBs and that will be “charged” for CPU consumption purposes to a DB2 address space (rather than to user applications), with the exception of P-lock negotiation processing
Such zIIP eligible processing includes:
 – Cleanup of pseudo deleted index entries as part of DB2 system task cleanup
 – Cleanup of XML multi-version documents (available in DB2 10 for z/OS through APAR PM72526)
 – Log write and log read
The DB2 base LOAD, REORG, and REBUILD INDEX utility processing of inline statistics collection that DB2 directs to be executed under enclave Service Request Blocks (SRBs)2
The DB2 base processing of the RUNSTATS utility Column Group Distribution statistics collection that DB2 directs to be executed under enclave SRBs2
The DB2 base LOAD utility index management processing when running LOAD REPLACE that DB2 directs to be executed under enclave SRBs2
From the DB2 address space point of view:
DBM1 address space
 – System task performing clean up of pseudo-deleted index entries
 – Portions of XML multi version documents cleanup processing (also available in DB2 10 through APAR PM72526)
 – System-related asynchronous SRB processing with the exception of P-lock negotiation processing
MSTR address space
System related asynchronous SRB processing, such as log write or log read
Utilities
 – Portions of inline statistics gathering processing during LOAD, REORG, and REBUILD index processing
 – Portions of RUNSTATS column group distribution statistics processing
 – The work to eliminate keys from NPSIs during LOAD REPLACE PART with dummy input
Refer to the IBM documentation for the software and hardware requisites for zIIP.
zAAP on zIIP
IBM continues to support running IBM System z Application Assist Processor (zAAP) workloads on IBM System z Integrated Information Processor (zIIP) processors (zAAP on zIIP). z/OS V2.1 is designed to remove the restriction that prevents zAAP-eligible workloads from running on zIIP processors when a zAAP is installed on the server. This support is intended to help facilitate migration and testing of zAAP workloads on zIIP processors. This support is also available with the PTF for APAR OA38829 for z/OS V1.12 and z/OS V1.13.
IBM zEnterprise EC12 is planned to be the last high-end System z server to offer support for zAAP specialty engine processors. IBM intends to continue support for running zAAP workloads on zIIP processors (zAAP on zIIP).
2.4 Reduced need for REORG
Starting in 2009, several product enhancements emerged that improved performance for disorganized indexes and data. These enhancements reduced the need to run expensive DB2 REORGs. DB2 11 continues this progress toward reducing the need for REORGs. This section reviews what has happened since 2009 and explains features in DB2 11 that help to further reduce the need for REORG.
In 2009 IBM and other vendors began to offer solid-state disks (SSD) for enterprise storage. SSD has no mechanical seeks or rotational delays that are associated with disorganized data, enabling the device to efficiently stream the data no matter how the data is organized. Random pages are still not streamed as fast as sequential pages, but the performance gap between random and sequential data is significantly reduced.
In 2011 IBM delivered High Performance FICON® (zHPF) support for DB2 list prefetch with its IBM System Storage® DS8000® Licensed Machine Code (LMC) level R6.2. This support also requires a z196 or zEC12 processor. In addition, IBM delivered FICON Express 8S channels for these two processors. FICON Express 8S is optimized for zHPF. Using zHPF, FICON Express 8S can read discontiguous pages faster.
R6.2 also introduced List Prefetch Optimizer to optimize the fetching of data from disk when DB2 is using list prefetch. List Prefetch Optimizer requires zHPF. List Prefetch Optimizer works well for both random pages (as is the case with a disorganized index scan) and skip-sequential scans (as is the case with a sorted RID list), and it is especially effective in conjunction with solid-state disks. For details, see DB2 for z/OS and List Prefetch Optimizer, REDP-4862.
Furthermore, R7.2 recently delivered the Flash Optimized Offering for the DS8870. Some disorganized index scans might benefit from this support when using SSD. See the IBM announcement for details.
The price of SSD is coming down rapidly, and SSD is gaining market share. As this cost reduction continues, DB2 10 and DB2 11 are well positioned to take advantage of this hardware to further reduce the need for REORGs.
2.5 DFSMS storage tiers
The classic z/OS DFSMS storage hierarchy has three levels. Data that is regularly accessed is maintained on fast drives at the Primary Level (Level 0), which is managed by DFSMShsm. When the data is no longer accessed regularly, it is migrated to Migration Level 1 (ML1), which is typically on less expensive disk drives. When the data has not been accessed for a longer period of time, it is migrated to Migration Level 2 (ML2), which is typically on tape.
Figure 2-1 shows an example of the classic three-level storage hierarchy.
Figure 2-1 The classic DFSMS storage hierarchy
Over the years, typical configurations have changed to leave data on Level 0 longer and then migrate directly to ML2, bypassing ML1. When ML2 is a Virtual Tape Server (VTS), then VTS disk cache replaces the ML1 tier. The VTS disk cache implementation provides the following savings:
Eliminates MIPS required for software compression to ML1
Eliminates DFSMShsm ML1 to ML2 processing
Although the classic DFSMS storage hierarchy provides these benefits, this storage management solution also has the following shortcomings:
There is no policy-based automation for moving data within the Primary Storage Hierarchy (Level 0)
There is no policy-based management of Active (open) data
z/OS V2R1 DFSMS introduces a storage tiers solution. This solution provides automated, policy-based space management that moves data from tier to tier within the Primary (Level 0) Hierarchy. The storage tiers solution provides the following benefits:
It better aligns storage costs with changing business value.
It minimizes the TCO for System z data by actively managing data on the lowest cost storage that meets the business needs of the data.
Within the storage tiers solution, automated movement of data is provided through the existing DFSMShsm Space Management function. Movement is referred to as a class transition. The data that is moved remains in its original format and can be accessed immediately after the movement is complete.
The storage tiers solution replaces ML1 with a Nearline level, which represents data that is not at the Enterprise (Level 0) level but still needs to be accessed relatively quickly. That is, it is not “hot” data, but it is not “frigid” data either. The data is either cool or cold, which means it is not accessed that frequently. The storage tiers solution allows this “cool” data to be transitioned from the Enterprise level (Level 0) to the Nearline level (Level 1) after some specified period of time. The data that is stored in the Nearline level is still stored on DASD and is still immediately accessible. The data is just transitioned to a different class of storage.
Figure 2-2 shows an overview of the storage tiers solution.
Figure 2-2 Storage tiers overview
The critical enterprise data, or “hot” data as it is sometimes called, is stored on Enterprise Level storage, or tier 0. The less critical, but still regularly accessed, data is stored on Nearline Level storage, or tier 1. Data on the Nearline level that is not accessed in 32 days is migrated to ML2 storage.
2.5.1 Use cases for storage tiers
The following data set examples benefit from the storage tiers solution.
One case that benefits is where the data sets are not currently eligible for migration because they always need to be immediately accessible. In this situation, a delay while waiting for the data set to be recalled is unacceptable. These data sets can be allocated on a particular class of storage and then later transitioned to a less expensive class of storage for permanent retention.
A second case that might benefit is when there are data sets that are eligible for migration today, but there is a benefit to keeping them online for a longer period of time. In this case, it makes sense to convert the migration of data sets to transition to a lower cost storage and then to increase the number of days that the data sets must be unreferenced before migrating directly to ML2.
Note that there is a difference between the HSM migrate/recall functions and class transitions. When a data set is recalled, it is returned to the class of storage as directed by the automatic class selection (ACS) routines, which typically is higher than where a data set resides after a transition. When a data set transitions to a lower class of storage, it remains there until it is transitioned again or until it migrates.
2.5.2 Setup and invocation of storage tiers
DFSMShsm Space Management processing uses policy-based automation to ensure that volumes within the Primary Storage Hierarchy have enough free space for new data and to ensure that data is stored at the lowest acceptable tier in the Storage Hierarchy. This function is accomplished through the following processes:
Data set expiration
Migration of unreferenced data to the Migration Hierarchy
“Class Transitions” within the Primary Hierarchy
Class Transition processing is new and is used by the storage tiers solution. This processing is integrated into the following existing DFSMShsm Space Management functions:
Primary Space Management
On-Demand Migration, which is a new function introduced in V1R13
This function performs space management on a volume as soon as it goes over its high threshold. It is a replacement for on-the-hour Interval Migration processing.
Interval Migration
When a volume is selected for space management processing because it is over a threshold, in addition to the existing expiration and migration checking, the space management function determines whether a data set is eligible to be transitioned, based on management class criteria.
SMS Management Class
The SMS Management Class provides the Class Transition policies, which include the following components:
Class Transition Criteria
Serialization Error Exit
Transition Copy Technique
Each of these policies is discussed in more detail in the sections that follow.
Class Transition Criteria
These criteria determine whether and when a data set should be transitioned. They include information about how long ago the data set was created and how long it has been since the data set was last used. In addition, there is a periodic setting that specifies that a data set should be transitioned monthly, quarterly, or annually, regardless of how the data set is used.
Serialization Error Exit
This exit indicates what type of special processing occurs if the data set cannot be serialized, meaning that the data set is open and it cannot be moved. The following serialization error exit setting options are available:
NONE
DB2
CICS
zFS
EXIT
If the setting is DB2, the exit invokes DB2 to close and unallocate the object. If this operation is successful, the object is serialized and moved, and DB2 is invoked to reopen the object. DB2 data sets can be open at any time, so special processing might be needed to transition the data. Because it is expected that data sets can be open, the default is to not issue an error message if a data set cannot be exclusively serialized; the data set is simply skipped, which is similar to migration processing.
Transition Copy Technique
This technique setting indicates which copy technique is used to move the data set. The following techniques are available:
Standard uses standard I/O, which is the default.
Fast Replication Preferred prefers Fast Replication. If it cannot be used, standard I/O is used.
Fast Replication Required requires Fast Replication. If it cannot be used, the data movement fails. This technique requires the target volume to be in the same storage controller.
Preserve Mirror Preferred prefers to use Preserve Mirror. This technique indicates that a Metro Mirror primary volume is allowed to become an IBM FlashCopy® target volume. If Preserve Mirror cannot be used, FlashCopy or standard I/O can be used.
Preserve Mirror Required requires Preserve Mirror. The transition is performed only if the Metro Mirror primary target volume does not go duplex pending. This parameter has no effect if the target volume is not a Metro Mirror primary volume.
Storage Group Processing Priority
In addition to the class transition policies, the new Storage Group Processing Priority specifies the relative order in which storage groups are processed during Primary Space Management. To help ensure that the “receiving” storage groups have enough space for the data sets that are moved to them, those storage groups can be assigned a higher priority. Storage groups are processed in the order of their priority, where a higher value means a higher priority. The valid values are 1 to 100, with a default of 50.
After DFSMShsm determines that a data set has met the Class Transition criteria specified by the Management Class, it invokes the ACS routines to determine what the transition should be. The ACS routines are invoked with the new ACS environment (the &ACSENVIR variable) value of SPMGCLTR, for “space management class transition.” The following ACS routines are invoked in the order shown:
Storage Class
Management Class
Storage Group
Any or all of these constructs can change as part of the transition.
The Storage Class indicates the “preferred” class of storage to which the data set is allocated. If the storage class changes but the storage group remains the same and if a device matching the new storage class attributes cannot be selected, the data set is not moved.
When a new management class is assigned, DFSMShsm begins using the newly assigned policies to manage the data set. If only the management class changes, the data set is altered to assign it to the new management class, and no data movement is performed.
During processing of the Storage Group routine, from 1 to 15 storage groups can be returned. The storage administrator must ensure that returning a different storage group name results in a meaningful transition.
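As an illustration, a fragment of a storage group ACS routine might test the new environment variable to route class transitions to a Nearline storage group. The storage group names are hypothetical, and the fragment is only a sketch of the idea, not a complete or validated routine:
  SELECT
    WHEN (&ACSENVIR = 'SPMGCLTR')     /* Invoked for a class transition */
      SET &STORGRP = 'SGNEAR'         /* Nearline (tier 1) storage group */
    OTHERWISE
      SET &STORGRP = 'SGPRIME'        /* Enterprise (tier 0) storage group */
  END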
When DFSMShsm determines that a data set should be moved for a Class Transition, DFSMSdss is invoked to perform a logical COPY with the DELETE keyword. In this case, DFSMSdss is the full data mover, unlike migrate/recall and backup/recover processing, where DFSMSdss acts as the data mover for only half of the operation. DFSMSdss handles the Copy Technique and Exit processing. After the movement, the data set retains all existing attributes and can be accessed immediately.
The ICF catalog is updated as a part of the movement. No new DFSMShsm control data set records are created for transitions; however, a new functional statistics record (FSR) type 24 is created for reporting purposes.
2.5.3 Use cases for DB2
Possible use cases of storage tiers for DB2 data are those where the data is partitioned, the data is date or time dependent, and the latest data is always added at the end. For example, if a table is defined as partitioned by range (PBR) and the partitioning key is a date or time stamp, you can design it so that each partition holds one month’s worth of data. If the most frequently accessed data is the data from the last 60 days, you can set up storage tiers so that the partition that contains the current month’s data is on Primary storage and the two partitions that contain the prior two months’ data are on Nearline storage. All data prior to those months is on Migration Level 2 storage.
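A table definition for this kind of design might look similar to the following sketch. The table name, column names, and date ranges are hypothetical and illustrate only the partitioning scheme; the placement of each partition on a given tier is controlled by the SMS policies described earlier, not by the DDL:
CREATE TABLE SALES_HIST                          -- hypothetical history table
      (ORDER_NO    INTEGER        NOT NULL,
       ORDER_DATE  DATE           NOT NULL,      -- partitioning key
       AMOUNT      DECIMAL(11,2)  NOT NULL)
  PARTITION BY RANGE (ORDER_DATE)
      (PARTITION 1 ENDING AT ('2013-10-31'),     -- two months ago
       PARTITION 2 ENDING AT ('2013-11-30'),     -- prior month
       PARTITION 3 ENDING AT ('2013-12-31'));    -- current month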
A similar scenario can be made for partition by growth (PBG) table spaces, with the assumption that newly added partitions contain the data for which there is the most interest.
2.6 Additional System z enhancements
The following additional enhancements to the System z hardware and software platform also provide benefits to DB2 for z/OS.
2.6.1 Enhancing DB2 BACKUP SYSTEM solution
DB2 11 enables recovery of a single page set from a DB2 system-level backup even if the original volume does not have sufficient space, and it enables exploitation of FlashCopy consistency groups for DB2 BACKUP SYSTEM. It also enables restoration of a page set to a different name.
FRBACKUP COPYPOOL with consistency allows you to create a backup of the log copy pool with consistency. Prior to DB2 11, you needed a conditional restart of DB2 with a log truncation point that corresponds to the data complete LRSN of the system-level backup. The conditional restart was needed to compensate for the fuzziness of the backup of the log copy pool. If the backup of the log copy pool is taken with consistency, you no longer need to do a conditional restart of DB2.
You can use the FlashCopy Consistency Group function to minimize application impact when making consistent copies of data that spans multiple volumes. The procedure consists of freezing the source volume during each volume copy operation and then thawing the frozen volumes with the CGCREATED command after a FlashCopy Consistency Group is formed. During the period between the freezing of the first volume and the freezing of the last volume, no dependent write updates can occur, which allows a consistent copy of logically related data that spans multiple volumes.
2.6.2 z/OS DFSMS VSAM RLS for z/OS catalog support
In a Parallel Sysplex environment, z/OS V2.1 extends support for the VSAM record-level sharing (RLS) environment to catalogs to allow improvements to both single-system and shared catalog performance.
DB2 9 and later can see improved DB2 data set open and close performance.
2.6.3 DDF Synchronous Receive support
DB2 10 uses Asynchronous Receive, which requires extra SRB dispatching. DB2 11 uses the z/OS 1.13 Communications Server services for synchronous receive. The benefit is reduced CPU consumption in the DIST address space, especially for high performance DBATs or long-running transactions.
No application changes or binds are required.
2.6.4 zEnterprise Data Compression
zEnterprise Data Compression (zEDC) for z/OS V2.1, a priced optional feature of z/OS that runs on zEC12 and zBC12 systems with the zEDC Express adapter, is designed to support a new data compression function. This facility is designed to provide high-performance, low-latency compression without significant CPU overhead. Initially, z/OS allows you to specify that SMF data written to log streams be compressed, which is expected to reduce disk storage requirements for SMF data and reduce SMF and System Logger CPU consumption for writing SMF data. For more information about this function, see Subsystem and Transaction Monitoring and Tuning with DB2 11 for z/OS, SG24-8182.
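A hypothetical SMFPRMxx fragment that requests compression for SMF log stream recording might look like the following sketch. The log stream names and record types are illustrative only, and the exact keywords should be verified against the z/OS MVS Initialization and Tuning Reference for your release:
RECORDING(LOGSTREAM)
DEFAULTLSNAME(IFASMF.DEFAULT,COMPRESS)
LSNAME(IFASMF.DB2,TYPE(100:102),COMPRESS)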
Further support for zEDC is also planned. Corresponding support in the SMF dump program IFASMFDL is designed to support both hardware-based and software-based decompression, and software-based decompression support is available on z/OS V1.12 and z/OS V1.13 with the PTF for APAR OA41156. This function allows higher write rates for SMF data when hardware compression is enabled. IBM RMF™ support for hardware compression includes SMF Type 74 subtype 9 records and a Monitor I PCIE Activity report that provides information about compression activity on the system.
In addition, zEDC support for the BSAM and QSAM access methods is planned to be available by the end of the first quarter of 2014. These functions can help you save disk space, improve effective channel and network bandwidth without incurring significant CPU overhead, and improve the efficiency of cross-platform data exchange.
Plans are also to provide support for DFSMSdss to take advantage of zEDC by the end of the third quarter 2014. This function is designed to be available for dumping and restoring data and also when DFSMShsm uses DFSMSdss to move data. This function can provide efficient compression with lower CPU overheads than the processor- and software-based compression methods currently available.

1 This information provides only general descriptions of the types and portions of workloads that are eligible for execution on IBM Specialty Engines (for example, zIIPs, zAAPs, and IFLs). IBM authorizes customers to use IBM Specialty Engines only to execute the processing of eligible workloads of specific programs expressly authorized by IBM, as specified in the “Authorized Use Table for IBM Machines.”
No other workload processing is authorized for execution on a Specialty Engine. IBM offers Specialty Engine at a lower price than General Processors/Central Processors because customers are authorized to use Specialty Engines only to process certain types or amounts of workloads as specified by IBM in the Authorized Use Table.
2 DB2 does not direct all such base utility processing to be executed under enclave SRBs.