Data sharing
DB2 data sharing can provide the following advantages over an architecture of separate, independent DB2 systems:
Improved DB2 availability during both planned and unplanned outages
Increased scalability because you are not bound by the limits of a single DB2 system
Greater flexibility when configuring systems
These advantages and an overview of the operational aspects of data sharing are described in detail in DB2 11 for z/OS Data Sharing: Planning and Administration, SC19-4055.
DB2 11 for z/OS provides a number of enhancements to data sharing. These enhancements provide improved availability, scalability, and performance. All of these enhancements are available in DB2 11 conversion mode (CM) unless otherwise noted.
This chapter describes the following enhancements to data sharing:
5.1 Group buffer pool write-around protocol
DB2 environments that have data sharing enabled can have multiple applications concurrently accessing data from any member of a data sharing group, with many members potentially reading and writing the same data. When multiple members of a data sharing group open the same table space, index space, or partition, and at least one of them opens it for writing, the data is said to be of inter-DB2 read/write interest to the members.
To control access to data that is of inter-DB2 read/write interest, DB2 caches the data, whenever it is changed, in a storage area in the coupling facility that is called a group buffer pool (GBP). When there is inter-DB2 read/write interest in a particular table space, index, or partition, that object is said to be dependent on the group buffer pool, or GBP-dependent.
In DB2 data sharing, batch jobs or utilities that run against GBP-dependent objects can generate heavy, sustained GBP page write activity. When this happens, the GBP can begin to fill up with changed pages, which can result in application slowdowns or, in severe cases, pages being added to the logical page list (LPL), which causes DB2 data outages.
Over the years, this has consistently been one of the top data sharing customer complaints, and many customers, after suffering through these slowdowns or outages, have over-allocated the GBP structures, which introduces its own set of problems. An over-allocated GBP drives up cost and can cause performance or scalability concerns because of the overly large structure size. As a large GBP becomes polluted with many changed pages, the resulting flood of write and castout-related CF commands can make the links, and in some cases the CF itself, unresponsive.
DB2 11 addresses these issues by providing a capability to bypass writing pages to the GBP in certain situations and write the pages directly to DASD instead, while using the GBP to send buffer invalidate signals. This feature is referred to as GBP write-around.
Two thresholds determine whether GBP write-around is invoked for all objects in a GBP or for a single page set or partition: the GBP castout threshold and the class castout threshold. When the GBP castout threshold reaches 50%, meaning that 50% of the GBP is occupied by changed pages, write-around is invoked for writing pages for all objects. When the class castout threshold reaches 20%, meaning that 20% of a class castout queue is occupied by changed pages, DB2 employs the write-around protocol for that page set or partition. Write-around at the GBP level continues until the GBP castout threshold drops to 40%; write-around at the page set or partition level continues until the class castout threshold drops to 10%. These threshold values are fixed; you cannot change them.
If either threshold is reached, the write-around protocol at the appropriate level is invoked. The processing occurs as follows:
Changed pages are conditionally written to the GBP. A conditional write means that if a page is already cached in the GBP, the write is allowed to proceed; if the page is not already cached, the write fails. A DISPLAY GROUPBUFFERPOOL command with the MDETAIL option shows the number of pages written through write-around processing in the DSNB777I informational message that is displayed as part of the output.
If the conditional write fails, the page is written to DASD instead, and the GBP is used to send buffer invalidate signals to the other members.
Example 5-1 shows a DISPLAY GROUPBUFFERPOOL command for a buffer pool that includes an object that was updated from one member and selected from another member, which forces the object to become GBP-dependent. Following the command is partial output from the command, including the DSNB777I message.
Example 5-1 DISPLAY GROUPBUFFERPOOL output with write-around statistics
DB2 COMMANDS SSID: D1B1
===>
Position cursor on the command line you want to execute and press ENTER
Cmd 1 ===> -DISPLAY GROUPBUFFERPOOL(GBP0) MDETAIL
 
 
DSNB750I -D1B1 DISPLAY FOR GROUP BUFFER POOL GBP0 FOLLOWS
DSNB755I -D1B1 DB2 GROUP BUFFER POOL STATUS
CONNECTED = YES
CURRENT DIRECTORY TO DATA RATIO = 5
PENDING DIRECTORY TO DATA RATIO = 5
CURRENT GBPCACHE ATTRIBUTE = YES
PENDING GBPCACHE ATTRIBUTE = YES
 
DSNB777I -D1B1 ASYNCHRONOUS WRITES
CHANGED PAGES = 0
CLEAN PAGES = 0
FAILED DUE TO LACK OF STORAGE = 0
WRITE-AROUND PAGES = 0
The benefit of GBP write-around is that DB2 automatically detects the flooding of writes to the GBP and responds by dynamically switching to the GBP write-around protocol for those objects that are causing the heaviest write activity. Only the deferred writes are affected; commit processing continues to write GBP-dependent page sets to the GBP. After the GBP storage shortage is relieved, DB2 reverts to normal GBP write activity for all GBP-dependent objects.
If a page is already in the GBP, the deferred writes will update the page in the GBP rather than writing the page to DASD (conditional write).
GBP write-around does not solve the underlying I/O subsystem issues that contributed to the GBP being flooded. The I/O subsystem problem remains; however, a modest slowdown of the batch updates matters little as long as the COMMITs perform better. Be aware that if the batch updates continue to flood the I/O subsystem, the COMMITs themselves might eventually flood the GBP.
This support is provided in z/OS V1R12 and later, with Coupling Facility Control Code (CFCC) Level 17 or 18 on z196 and later hardware.
5.2 Improved castout processing
In prior versions of DB2, data sharing environments with heavy write activity could cause pages to be written to the group buffer pools faster than the castout engines could process them. As a result, the group buffer pools became congested with changed pages and, in extreme cases, group buffer pool full conditions could occur. This inefficient castout processing often results in application response time issues. DB2 11 provides the following enhancements to make castout processing more efficient.
5.2.1 Reduced wait time for I/O completion
The read of the GBP for castout processing now overlaps the write I/O operation to DASD. In prior versions, DB2 waited until a page read from the GBP was written to DASD before another page was read from the GBP. This wait time is reduced by overlapping the read for castout with the write to DASD.
5.2.2 Reduced notify message size sent to castout owners
The size of the message indicating the status of the castout processing is reduced. Previously, the notification message sent to castout owners was a list of pages, which can be large if many pages are cast out. Now the message includes a list of page sets or partitions, instead of a list of pages, which considerably reduces the size of the message.
5.2.3 More granular class castout threshold
Castout processing now provides more granularity for the class castout threshold. Previously, the class castout threshold was specified as a percentage of the number of data entries, and the smallest allowable value was 1%. For a very large GBP, a value of 1% still results in thousands of pages being cast out at a time, which can stress the castout engines and the coupling facility. A capability to allow a value smaller than 1% was needed.
The syntax for the ALTER GROUPBUFFERPOOL command is changed to allow the class castout threshold to be specified as either a percentage of the number of data entries or an absolute number of pages. The new syntax for the CLASST option is as follows:
CLASST(class-threshold1,class-threshold2)
You can use the class-threshold1 variable to represent the class castout threshold in terms of a percentage of data entries. It can be specified as an integer between 0 and 90, representing 0% to 90%. The default value is 5%.
You can use the class-threshold2 variable to represent the class castout threshold in terms of an absolute number of pages. It can be specified as an integer between 0 and 32767. The default value is 0.
Do not specify non-zero values for both variables; if you do, the value of class-threshold2 is ignored. If you want to specify the threshold as a percentage of data entries, which was the only behavior prior to DB2 11, specify a non-zero value for the first variable and zero for the second variable. If you need a threshold smaller than 1% for a large GBP, specify zero for the first variable and, for the second variable, the number of changed pages at which castout should occur.
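For example, to specify the class castout threshold for GBP0 as 10% of data entries (an arbitrary value chosen only for illustration), you might issue a command similar to the following, using the CLASST syntax shown above:

-ALTER GROUPBUFFERPOOL(GBP0) CLASST(10,0)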
In the data sharing environment for this example, the GBP0 group buffer pool is defined with CLASST values of 5 and 0. Example 5-2 shows the output of a DISPLAY GBPOOL command, which is an abbreviation of the DISPLAY GROUPBUFFERPOOL command, showing the class castout threshold (CLASST) values of 5 and 0.
Example 5-2 DISPLAY GBPOOL command output for percentage based CLASST threshold
DSNB750I -D1B1 DISPLAY FOR GROUP BUFFER POOL GBP0 FOLLOWS
DSNB755I -D1B1 DB2 GROUP BUFFER POOL STATUS
CONNECTED = YES
CURRENT DIRECTORY TO DATA RATIO = 5
PENDING DIRECTORY TO DATA RATIO = 5
CURRENT GBPCACHE ATTRIBUTE = YES
PENDING GBPCACHE ATTRIBUTE = YES
DSNB756I -D1B1 CLASS CASTOUT THRESHOLD = 5, 0
GROUP BUFFER POOL CASTOUT THRESHOLD = 30%
GROUP BUFFER POOL CHECKPOINT INTERVAL = 4 MINUTES
RECOVERY STATUS = NORMAL
Example 5-3 shows an example of an ALTER GROUPBUFFERPOOL command to specify a class castout threshold such that pages are cast out when 500 changed pages are in the GBP.
Example 5-3 ALTER GBPOOL command to express CLASST in number of pages
DB2 COMMANDS SSID: D1B1
===>
DSNE294I SYSTEM RETCODE=000 USER OR DSN RETCODE=0
Position cursor on the command line you want to execute and press ENTER
Cmd 3 ===> -ALTER GBPOOL(GBP0) CLASST(0,500)
Example 5-4 shows the output of the ALTER GBPOOL command.
Example 5-4 ALTER GBPOOL command output showing CLASST in number of pages
DSNB804I -D1B1 CLASS CASTOUT THRESHOLD SET TO 0,500 FOR GBP0
DSN9022I -D1B1 DSNB1CMD '-ALTER GBPOOL' NORMAL COMPLETION
***
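If you display the group buffer pool again after this ALTER command, the DSNB756I line of the output should reflect the new values, appearing similar to the following line (a sketch that follows the format shown in Example 5-2):

DSNB756I -D1B1 CLASS CASTOUT THRESHOLD = 0, 500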
You can expect a benefit from using the second threshold value only for a GBP that is large enough that the percentage-based threshold causes delays at castout time.
5.3 Improved DELETE_NAME performance
After all pages for a page set are cast out, the page set becomes non-GBP-dependent. DB2 then uses a cache DELETE_NAME request to delete both data and directory entries from the GBP. At the same time, cross-invalidation signals are sent to each member that has interest in the page set to indicate that there is no longer GBP-dependency for that page set.
Normally this cross-invalidation process completes without issue. In rare cases, a high number of DELETE_NAME requests might time out because of the cross-invalidation signals that are sent to data sharing members, particularly when there is a long distance between the members and the coupling facility.
In prior versions, DB2 delivered maintenance that deleted only the data entries to avoid the time-outs caused by cross-invalidation. However, those changes did not provide the desired effect.
DB2 11 resolves this issue by suppressing the cross-invalidation signals during the processing of DELETE_NAME requests.
All the necessary cross-invalidation signals have already been sent when the pages were previously written to the GBP. The cross-invalidations for DELETE_NAME are not needed after casting out the pages; therefore, the cross-invalidation signals can be suppressed.
New features in cross-system extended services for z/OS (XES) and CFCC are required to use the DELETE_NAME performance enhancement. The specific requirements are as follows:
The GBP must be allocated in a coupling facility of CFLEVEL=17 or higher. The “suppress cross-invalidation” functionality is supported on the following combinations of hardware and CFCC levels:
 – z114 (2818) DR93G CFCC EC N48162 CFCC Release 17 at the requisite microcode load (MCL) level
 – z196 (2817) DR93G CFCC EC N48162 CFCC Release 17 at the requisite microcode load (MCL) level
 – zEC12 (2827) CFCC Release 18
The DB2 member that performs castout must be running on a z114 or z196 that supports the “suppress cross-invalidation” functionality, or on a zEC12. The following z/OS releases support the “suppress cross-invalidation” functionality:
 – z/OS V1R12 and later with APAR OA38419 installed
This feature has also been retrofitted to DB2 9 and DB2 10 through APAR PM67544.
5.4 Restart light with CASTOUT option
If a failed DB2 subsystem was running on a z/OS image that is no longer available, that subsystem can hold locks on behalf of transactions that were executing at the time of the failure. If those locks are global locks, which are locks on resources that are actively shared, the update-type locks in the lock structure are changed to retained locks. In this scenario, it is critical to restart the failed DB2 on another z/OS image in the same Parallel Sysplex (where another member might be active) to release the retained locks. Otherwise, no other access is allowed to the resources protected by those locks until the underlying changes are either committed or backed out.
When restarting a failed DB2 subsystem on another z/OS image, the other z/OS image might not have the resources to handle the workload of an additional DB2 subsystem. The option to restart DB2 in LIGHT mode enables DB2 to restart with a minimal storage footprint to quickly release retained locks and then terminate normally.
Prior to DB2 11, restart light released most, but not all, retained locks. Restart light was designed to restart quickly with a minimal storage footprint; therefore, it did not go through castout processing and, as a result, retained page set P-locks in IX or SIX mode were not released. These retained page set P-locks can block utilities from running, which impacts overall DB2 availability.
DB2 11 enhances the restart light process to also include castout processing. The syntax of the START command is enhanced to include the LIGHT(CASTOUT) option. When this new option is used, transaction retained locks are released as usual, but the restart also initiates castout processing. After castout is completed, the page sets become non-GBP-dependent, and the page set P-locks in IX or SIX mode are released.
The default for the LIGHT restart option is NO, which means that DB2 performs a complete restart rather than a light restart. Of the remaining three LIGHT values, CASTOUT, YES, and NOINDOUBTS, the CASTOUT value involves the most steps because it also includes castout processing; the YES and NOINDOUBTS options do not provide castout processing. The values for the LIGHT option are mutually exclusive.
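As an illustration, a restart light with castout processing can be initiated with a START DB2 command similar to the following; the -D1B1 command prefix is taken from the other examples in this chapter and represents the command prefix of the member that is being restarted:

-D1B1 START DB2 LIGHT(CASTOUT)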
Restart might take longer to run with LIGHT(CASTOUT) than with LIGHT(YES), but the benefit is that utilities can now be run without disruption, therefore increasing availability. If you are dependent upon running utilities as soon as possible after the restart, then you might want to investigate the LIGHT(CASTOUT) option. If you are more focused on getting the DB2 subsystem up as quickly as possible and will deal with retained page set P-locks on your own, you might want to use LIGHT(YES) or LIGHT(NOINDOUBTS) instead.
5.5 Locking enhancements
DB2 11 includes a number of locking enhancements that provide improved reliability, availability, and scalability. Many of these locking enhancements also provide benefits in a data sharing environment. This section describes the following locking enhancements:
5.5.1 Conditional propagation of child Update locks to the coupling facility
Prior to internal resource lock manager (IRLM) 2.3, IRLM propagated U state child locks for S state parent page set P-locks in all cases. Propagation of U state child locks is not necessary when only a single member is updating a table in a data sharing environment. Propagating a large number of U state child locks to the CF incurs unnecessary overhead and should be avoided until it becomes necessary.
In DB2 11, IRLM 2.3 propagates shared S state parent page set P-locks to the CF as XES exclusive requests and suppresses any update U state child lock propagations until there is global contention on the parent page set P-lock.
This enhancement improves the performance of SELECT FOR UPDATE statements in data sharing environments.
5.5.2 Improved performance in handling lock waiters
In prior versions of DB2, in a large data sharing group with a large number of processes waiting on locks, there can be a performance cost for managing the lock waiters. DB2 11, with IRLM 2.3, introduces an improved deadlock and contention algorithm. This improved algorithm results in reduced CPU time for processes with many lock waiters and also reduces the number of lock suspensions.
5.5.3 Increase in maximum number of CF lock table entries
IRLM previously limited the number of CF lock structure table entries (LTEs) to a maximum of 1 GB. Because the maximum number of LTEs supported by XES is 2 GB, there is no reason for IRLM to impose its own limit on the LTEs. Therefore, in DB2 11 with IRLM 2.3, you can specify a lock table size as large as the 2 GB supported by XES.
This enhancement reduces contention when accessing the lock structure in the CF. It also reduces the possibility of false contention, which can occur when the number of LTEs is too small compared to the number of different resource names that can acquire locks.
You can use the MODIFY irlmproc,SET z/OS command to change the number of lock table entries. Example 5-5 shows the syntax for this command.
Example 5-5 Syntax of MODIFY irlmproc,SET command
>>-MODIFY--irlmproc,SET-+-,DEADLOCK=nnnn---------------+-------><
                        +-,LTE=nnnn--------------------+
                        +-,MLT=nnnnnU------------------+
                        +-,PVT=nnnn--------------------+
                        +-,TIMEOUT=nnnn,subsystem-name-+
                        |         .-10--.              |
                        '-,TRACE=-+-nnn-+--------------'
You can increase the number of lock table entries by changing the value of the LTE option. You must set it to an exact power of 2, and each increment in value represents 1,048,576 lock table entries. To set the maximum number of LTEs, 2 GB, specify a value of 2048.
Because the command is a z/OS IRLM command, it can be issued only from a z/OS console. You also need to rebuild the CF lock structure to enable the new LTE size.
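For illustration, the following sequence sketches the change; the IRLM procedure name D1B1IRLM and the lock structure name DSNDB1B_LOCK1 are hypothetical and must be replaced with the names that are used in your installation:

MODIFY D1B1IRLM,SET,LTE=2048
SETXCF START,REBUILD,STRNAME=DSNDB1B_LOCK1

The MODIFY command requests 2048 x 1,048,576 (approximately 2 G) lock table entries, and the SETXCF command rebuilds the lock structure so that the new LTE size takes effect.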
5.5.4 Throttle batched unlock requests
If a thread in a data sharing environment holds millions of locks and goes through deallocation, IRLM sends unlock requests for all of the locks in a batch unlock request to XES. If this process is done under non-preemptible SRB mode in a uniprocessor environment, the unlock request processing occupies the processor until all of the unlock requests are processed. As a result, this flood of unlock requests can cause XES to receive an abend178 because the real storage manager (RSM) cannot find an available real storage frame.
IRLM 2.2 and 2.3 are enhanced through APAR PM60449 to change the way that IRLM handles batch unlock processing when a thread is holding more than 128,000 locks. The unlock request runs as a queued request under an IRLM SRB and does a status STOP SRB after running for some time. The batch unlock processing continues when this SRB is resumed again.
This change reduces storage constraints on XES and also reduces secondary latch contention in IRLM. This change allows other higher priority work from RSM and XES to run, which can improve conditions that result in a hang.
5.5.5 Improved IRLM resource hash table algorithm
IRLM 2.2 introduced, through APAR PK50095, the capability to have an expanded resource hash table to handle anticipated higher locking volumes. In DB2 10, IRLM provided support for a 64 KB hash table.
IRLM internally goes through the resource hash table serially on every deadlock cycle while holding the main latch, which prevents any other work from processing in IRLM. With higher volumes of threads and more locks being held, this hash table processing drives up the CPU time of every deadlock cycle.
In DB2 11, with IRLM 2.3, the resource hash table algorithm is improved to perform the deadlock cycle processing more efficiently. This improvement results in less CPU time associated with this process and reduced contention on lock structure access.
5.6 Index availability and performance
DB2 11 includes a number of index enhancements that provide improved availability and performance in data sharing environments. This section describes the following index enhancements:
5.6.1 Avoid placing indexes in RBDP state during group restart
There are certain recovery scenarios where objects are placed in the logical page list (LPL) or in group buffer pool recovery pending (GRECP) state. A small timing window exists where, if an index tree structural modification (index SMOD) is in progress when the index is put into LPL or GRECP state, the index manager can write logical compensation log records (LCLRs) before writing the physical NOT APPLY undo log records for the unfinished index SMODs. Later, when the LCLRs are processed before the physical undo log records, the LPL or GRECP recovery fails and the index is left in rebuild pending (RBDP) state. After the LPL or GRECP recovery completes in this scenario, you still need to spend time rebuilding the index, which can take hours if the affected index is large. Ideally, you want to be able to run LPL and GRECP recovery and recover the indexes at the same time.
In DB2 11, if an index tree structural modification is in progress when the index is put into LPL or GRECP state, DB2 goes through a two-pass LPL or GRECP log apply process to recover the index. The second pass makes the index available after the LPL or GRECP recovery process is completed. A DSNI051I message is issued at the start of the second pass. The LPL or GRECP recovery might take longer to finish when the second pass is needed, but you avoid the much longer time that would be needed to rebuild the index. This enhancement reduces DB2 outage time and increases index availability.
This enhancement is available in DB2 11 New Function Mode (NFM) only.
5.6.2 Reduce synchronous log writes during index structure modifications
As rows are inserted in data sharing environments, index pages often need to be split as new keys fill the existing pages. The index split logic for GBP-dependent indexes causes two synchronous log writes, which can have a significant impact on transaction or batch performance. A similar situation exists for deletes where empty index pages get pruned from the index tree. There are five synchronous log writes in the delete case.
A related issue occurs when there are massive deletes from an index while the index is not GBP-dependent, and then the index becomes GBP-dependent as the backout starts. This situation has caused some backouts to take 10 to 20 times longer than the unit of work, because of the need to force several log write I/Os when adding deleted pages back into the index tree as part of the undo of the deletes.
DB2 11 reduces the synchronous log writes for index split and index page delete operations. Rather than do the log force write I/Os after processing begins and then again at the end of the process, the log force write I/Os occur only once at the end of the process. This enhancement improves performance for index splits and for pseudo deletes.
DB2 11 also reduces backout time by reducing the number of log force write I/Os during a rollback of deleted pages. DB2 11 can tell whether an index split operation completed successfully and will not roll back a successfully completed index split operation.
5.7 Group buffer pool write performance
GBP batch write processing in DB2 11 is enhanced to avoid page-fix operations by allocating fixed storage for GBP batch writes. This enhancement reduces the path length of COMMIT processing.
5.8 Automatic LPL recovery at end of restart
Occasionally, there are not enough pages available in the GBP to accept a write from the local buffer pool. The write is attempted a few times while DB2 goes through the castout process in an attempt to free more pages. However, after a few unsuccessful attempts, DB2 gives up and inserts an entry for the page in the LPL, where the page is unavailable for any access until the LPL condition is removed. The LPL exception condition is set if pages cannot be read from or written to the GBP.
Prior to DB2 11, in a data sharing environment, when pages were added to the LPL by an active member while one of the members was down and holding retained locks, no automatic LPL recovery was performed when the failed member restarted. This situation resulted in an extended outage for application programs, which impacted overall system availability.
To recover these LPL pages, you had to manually resolve the LPL objects by issuing a -START DB(xx) SPACE(yy) command for every object with an LPL exception condition. This manual process can be time consuming if there are many objects with pages in the LPL, therefore extending the time that some applications are unavailable. This manual process can also be error prone, because some of the objects can be missed when issuing the -START commands, especially when there is a long list of objects to be recovered.
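For example, recovering a single object manually might require a command similar to the following, where the database name PAYROLL and the space name EMPTS01 are hypothetical:

-START DB(PAYROLL) SPACE(EMPTS01)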
DB2 11 improves upon the LPL recovery process by initiating automatic LPL recovery of objects at the end of both normal (non-PIT recovery) restart and restart light. If the restart involves any indoubt or postponed abort (PA) units of recovery (URs) then the LPL recovery associated with those URs is not automatically triggered at the end of restart. These objects cannot be automatically recovered because DB2 does not know the entire LPL log range for indoubt and PA URs until they are resolved.
The auto-LPL recovery process is not triggered for any of the following circumstances:
If the DB2 member is started in access maintenance mode
If the DB2 member is started in point-in-time (PIT) Recovery mode
If the DB2 member is started at a tracker site
If the DB2 member is involved in any type of conditional restart
If DEFER ALL was specified on installation panel DSNTIPS
For objects explicitly listed in a DEFER object list on installation panel DSNTIPS
For table spaces defined as NOT LOGGED
The processing for automatic LPL recovery is similar to the auto-GRECP recovery processing that is done at the end of restart in that the auto-LPL recovery uses the existing messages to report LPL recovery progress, errors and successful completion. It makes only one attempt to automatically recover LPL objects. If the auto-LPL recovery fails at restart time, then either the DBA can manually recover the LPL objects by issuing -START DB commands or mainline auto-LPL recovery can perform the recovery at the appropriate time.
In the following conditions, an LPL object can remain in LPL after the end of restart auto-LPL recovery is completed:
If any error is encountered during log apply
If an active member continues to add new pages to the LPL or extends the LPL range while the restarting member is performing the auto-LPL recovery
The mainline auto-LPL recovery processing has always had retry logic to drive the LPL recovery one more time if the LPL page range or log range is extended. Auto-LPL continues to honor the same retry logic except during restart light. Because auto-LPL recovery is initiated during restart light, the auto-LPL recovery task serializes with DB2 shutdown; the end of restart auto-LPL recovery will complete before DB2 is terminated at the end of restart light.
At the end of auto-LPL recovery, each member issues a DSNI049I message on the console to indicate that LPL recovery of all objects is completed. This is the same message that prior versions of DB2 issue when a -START DB command completes, and it is still issued when you must run the command yourself because auto-LPL recovery cannot complete for one of the reasons listed above. Because the message is always issued, it is easier for you to automate the recovery process by using whatever tool or manual procedure you have implemented.
5.9 Log record sequence number spin avoidance
Enhancements were made in both DB2 9 and DB2 10 in the area of log record sequence number (LRSN) spin avoidance. DB2 9 allowed for duplicate LRSN values for consecutive log records on a given member. DB2 10 further extended LRSN spin avoidance by allowing for duplicate LRSN values for consecutive log records for inserts to the same data page.
Each of these enhancements meant that a DB2 member did not need to “spin” consuming CPU resources under the log latch to wait for the next LRSN increment. This function can avoid significant CPU overhead and log latch contention (LC19) in data sharing environments with heavy logging.
The DB2 9 and DB2 10 enhancements avoided the need to “spin” in the Log Manager to avoid duplicate LRSNs for most cases. However, some cases still exist where CPU spinning is necessary, which adds overhead. For example, consecutive DELETE or UPDATE operations to the same page require LRSN spin.
DB2 11 extends the LRSN to use more of the TOD clock precision. RBA and LRSN values are expanded from 6 bytes to 10 bytes so that it would take hundreds of years to exhaust a DB2 subsystem’s or data sharing group’s logging capacity, based on current and projected logging rates. Details about the expanded RBA and LRSN values are provided in 3.1, “Extended RBA and LRSN” on page 24.
Data sharing environments can take advantage of the larger LRSN values and avoid LRSN spin altogether.