DB2 infrastructure setup
In this chapter we discuss the tasks you need to perform to provide a DB2 data sharing infrastructure that supports continuous availability in a WebSphere Application Server environment. With reference to continuous availability, we first discuss the setup tasks to be performed regardless of the JDBC driver type being used, and then highlight the tasks that are specific to each JDBC driver type used by WebSphere Application Server for connecting to DB2. For information about the configuration of the WebSphere Application Server environment, refer to Chapter 5, “WebSphere Application Server infrastructure setup” on page 207.
4.1 z/OS related setup
Creating a DB2 data sharing group requires an infrastructure to be provided across all z/OS systems and subsystems in a Parallel Sysplex. Providing this infrastructure involves configuration tasks within the following System z components:
4.1.1 Parallel Sysplex
At its core, DB2 data sharing provides high availability, scalability, and optimum performance by exploiting Parallel Sysplex functions and resources. To provide data sharing, DB2 exploits coupling facility resources to allow multiple DB2 members running on any z/OS system in a sysplex to share the same data. When you use DB2 data sharing, DB2 data remains available as long as at least one member of the data sharing group is available. In case of a DB2 failure, Parallel Sysplex functions are used to bring the failing DB2 member back online as quickly as possible.
In this section we outline the Parallel Sysplex resources that are required for DB2 data sharing, illustrate the Parallel Sysplex configuration used in our environment, and discuss important aspects for you to consider.
For more information and suggested practices on how to set up and tune DB2 data sharing, refer to the following resources:
Part 4, “DB2 sysplex best practices”, of System z Parallel Sysplex Best Practices, SG24-7817
DB2 10 for z/OS Data Sharing: Planning and Administration, SC19-2973
IBM developerWorks® DB2 for z/OS best practices presentations, available at http://www.ibm.com/developerworks/data/bestpractices/db2zos/
Coupling Facility resources
For our DB2 data sharing group we created the coupling facility (CF) structures shown in the IBM RMF™ Monitor III Coupling Facility Activity report in Figure 4-1.
RMF V1R13 CF Activity - SANDBOX Line 1 of 28
Command ===> Scroll ===> CSR
Samples: 120 Systems: 4 Date: 08/08/12 Time: 20.35.00 Range: 120 Sec
CF: CF1 Type ST System CF --- Sync --- --------- Async --------
Util Rate Avg Rate Avg Chng Del
Structure Name % Serv Serv % %
DB0ZG_GBP0 CACHE AS *ALL 0.1 0.0 0 0.1 1031 0.0 0.0
DB0ZG_GBP2 CACHE AS *ALL 1.9 0.0 0 29.7 109 0.0 0.0
DB0ZG_GBP32K CACHE AS *ALL 0.0 0.0 0 0.0 0 0.0 0.0
DB0ZG_GBP8K0 CACHE AS *ALL 0.0 0.0 0 0.1 544 0.0 0.0
DB0ZG_LOCK1 LOCK A *ALL 8.1 207.6 55 68.6 113 0.0 0.0
DB0ZG_SCA LIST A *ALL 0.4 5.0 39 0.0 0 0.0 0.0
Figure 4-1 RMF Monitor III CFACT display
Our data sharing environment was configured for function testing rather than for performance and scalability testing, so we accepted a minimum configuration in terms of CF structure sizes. In addition, we implemented common best practice recommendations, such as structure duplexing and structure failure isolation, to support high availability.
Under normal circumstances you need to plan your CF structure implementation to make sure the structures are appropriately sized and implemented to support your availability requirements.
Coupling Facility sizing
When you enable DB2 data sharing you can use the IBM CFSizer application to size the DB2 CF structures. CFSizer estimates the size of your DB2 CF structures based upon environmental parameters such as locking rate, number of systems, databases, table spaces, tables per database, and local buffer pool sizes. The DB2 CFSizer application is available at http://www.ibm.com/systems/support/z/cfsizer/db2.
If for some reason you are not able to provide the input parameters required by the DB2 CFSizer tool, you can use the DB2 minimum structure sizing recommendations given in Chapter 8, “Best practices”, in DB2 for z/OS: Data Sharing in a Nutshell, SG24-7322.
Coupling Facility related recommendations
After the DB2 structures are in use you need to perform regular monitoring and tuning to ensure that the DB2 CF structures are configured to support your workload and availability requirements. You can use the recommendations provided in Part 4, “DB2 sysplex best practices”, of System z Parallel Sysplex Best Practices, SG24-7817, to set up monitoring and tuning accordingly. This book discusses topics such as:
Avoiding single points of failure
Failure-isolating the DB2 lock and SCA structures
Why XCF auto alter is useful for your DB2 structures
The importance of having your DB2 SCA structure generously sized
Why to avoid GBP cross-invalidations
Why the GBP checkpoint frequency might affect speed of recovery
Why duplexing the DB2 group buffer pools (GBP) is useful
GBPCACHE recommendations
Table space to buffer pool design considerations
4.1.2 Automatic Restart Manager policy
If one of the members of the data sharing group fails, it is almost certain that the failing DB2 is holding locks on DB2 objects at the time of failure. Until that DB2 is restarted and releases those locks, the related objects are not available to the other members of the data sharing group.
There are two scenarios relating to a DB2 failure:
DB2 failed
The system that DB2 was running on failed
In either case, the most important thing is to get the failed DB2 back up and running as quickly as possible. The best way to achieve this is to use the IBM MVS™ Automatic Restart Manager (ARM). Many automation products provide support for ARM. This means that they manage DB2 for normal startup, shutdown, monitoring, and so on. However, if DB2 fails, they understand that they must allow ARM to take responsibility for restarting DB2.
If the failure was just in DB2, and the system it was running on is still available, restart DB2 in the same LPAR, with a normal start. DB2 automatically releases any retained locks as part of the restart.
If the system DB2 was running on is unavailable, start DB2 on another system in the sysplex as quickly as possible. The reason for this is that it results in DB2 coming up and cleaning up its locks far faster than if you had to wait for z/OS to be IPLed and brought back up.
Furthermore, if DB2 is started on another system in the Sysplex, you really only want it to release any locks that it was holding. More than likely, there is another member of the data-sharing group already running on that system. If you specify the LIGHT(YES) option on the START DB2 command, DB2 starts with the sole purpose of cleaning up its locks. In this mode, it only communicates with address spaces that it was connected to before the failure, and that have indoubt units of work outstanding. As soon as DB2 completes its cleanup, the address space automatically shuts itself down. Hopefully, the failed system is on its way back up at this time, and the DB2 can be brought up with a normal start in its normal location.
In addition to restarting DB2 using ARM and Restart Light, also define a restart group to ARM so that it also restarts any subsystems that were connected to DB2 prior to the failure. By restarting all the connected subsystems, any indoubt units of recovery can be cleaned up.
Note that when the Restart Light capability was introduced by DB2 V7, it did not handle cleanup for any INDOUBT units of work. However, in DB2 V8 the Restart Light capability was enhanced so that it cleans up any INDOUBT units of work, assuming that the associated address space is also restarted on the same system. If you do not want to have DB2 resolve the INDOUBT units of work, or if you do not plan to restart the connected address spaces on the other system, start DB2 with the NOINDOUBT option.
 
Suggestion: Use ARM to restart DB2 following a DB2 failure.
If only DB2 failed, ARM should do a normal restart of DB2 on the same z/OS system.
If the system failed, ARM should do a restart light of DB2 on another system in the sysplex. Also, define a restart group so that ARM can restart any related subsystems together with the restarted DB2.
WebSphere Application Server considerations
WebSphere Application Server for z/OS uses z/OS automatic restart management (ARM) to recover application servers. Each application server running on a z/OS system is automatically registered with ARM. The registration uses a special element type called SYSCB, which ARM treats as restart level 3, assuring that RRS and DB2 restart before any application server, because RRS and DB2 are treated by ARM as restart level 2. The automatic ARM registration registers all WebSphere Application Server instances in a default group to provide automatic restart in case of subsystem or system failure.
Deviating from the default ARM registration, we recommend implementing the following ARM policy changes to be in line with best practice recommendations:
Set up your location service daemons for restart in place. If the location service daemon attempts to restart on an alternate system, it will fail.
Set up your node agents for restart in place. If the node agent restarts on an alternate system, it will have no recovery work to do.
For more information about how to configure ARM with WebSphere Application Server for z/OS, refer to the IBM WebSphere Application Server Information Center, Automatic restart management, at http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/index.jsp.
Implementing the DB2 ARM policy
Figure 4-2 illustrates the configuration of our WebSphere Application Server and DB2 infrastructure. The illustration shows the following infrastructure components:
z/OS systems SC63 and SC64
Each z/OS system hosts one DB2 data sharing member and its related application server instance. In ARM these components are also referred to as restart elements. The DB2 members and application server instances are:
 – z/OS system SC63: DB2 member D0Z1, application server instance MZSR013
 – z/OS system SC64: DB2 member D0Z2, application server instance MZSR014
In case of element failure (also referred to as subsystem failure) the failing element is to be restarted in the same system (restart in place).
In case of system failure (z/OS failure) the DB2 member and its related application server are to be restarted in the surviving z/OS system, which can be system SC63 or SC64.
Figure 4-2 WebSphere Application Server for z/OS and DB2 for z/OS infrastructure
To provide the recommended level of availability for our application server environment we configured ARM to use the restart policy shown in Example 4-1. Through this policy the DB2 member and its related application server instance will be restarted in the same LPAR in case a subsystem failure occurs. In case of a system failure ARM restarts the DB2 member and its related application server instance either in system SC63 or SC64 depending on system availability.
Example 4-1 ARM policy DB2 WebSphere Application Server
RESTART_GROUP(DB2WAS)
TARGET_SYSTEM(SC63,SC64)
ELEMENT(DB0ZGD0Z1)
RESTART_ATTEMPTS(3,)
RESTART_METHOD(ELEMTERM,PERSIST)
RESTART_METHOD(SYSTERM,STC,
'-D0Z1 STA DB2,LIGHT(YES)')
ELEMENT(DB0ZGD0Z2)
RESTART_ATTEMPTS(3,)
RESTART_METHOD(ELEMTERM,PERSIST)
RESTART_METHOD(SYSTERM,STC,
'-D0Z2 STA DB2,LIGHT(YES)')
ELEMENT(MZCELLMZSR013)
RESTART_ATTEMPTS(3,)
RESTART_METHOD(ELEMTERM,PERSIST)
RESTART_METHOD(SYSTERM,STC,
'S MZACR3,'
'JOBNAME=MZSR013,ENV=MZCELL.MZNODE3.MZSR013')
ELEMENT(MZCELLMZSR014)
RESTART_ATTEMPTS(3,)
RESTART_METHOD(ELEMTERM,PERSIST)
RESTART_METHOD(SYSTERM,STC,
'S MZACR4,'
'JOBNAME=MZSR014,ENV=MZCELL.MZNODE4.MZSR014')
After the policy shown in Example 4-1 on page 104 was activated we used the operating system command interface to confirm that our ARM policy was used for DB2 and its related application servers. The operating system command output is provided in Figure 4-3.
========= verify ARMSTATUS of application server MZSR014 ============
D XCF,ARMSTATUS,JOBNAME=MZSR014,DETAIL
IXC392I 00.10.06 DISPLAY XCF 129
ARM RESTARTS ARE ENABLED
-------------- ELEMENT STATE SUMMARY -------------- -TOTAL- -MAX-
STARTING AVAILABLE FAILED RESTARTING RECOVERING
0 1 0 0 0 1 200
RESTART GROUP:DB2WAS PACING : 0 FREECSA: 0 0
ELEMENT NAME :MZCELLMZSR014 JOBNAME :MZSR014 STATE :AVAILABLE
CURR SYS :SC64 JOBTYPE :STC ASID :012A
INIT SYS :SC64 JESGROUP:XCFJES2A TERMTYPE:ALLTERM
EVENTEXIT:*NONE* ELEMTYPE:SYSCB LEVEL : 3
TOTAL RESTARTS : 0 INITIAL START:08/31/2012 23:45:33
RESTART THRESH : 0 OF 3 FIRST RESTART:*NONE*
RESTART TIMEOUT: 300 LAST RESTART:*NONE*
======== verify ARMSTATUS of DB2 member D0Z2 ====================
D XCF,ARMS,ELEMENT=DB0ZGD0Z2,DETAIL
IXC392I 00.15.31 DISPLAY XCF 133
ARM RESTARTS ARE ENABLED
-------------- ELEMENT STATE SUMMARY -------------- -TOTAL- -MAX-
STARTING AVAILABLE FAILED RESTARTING RECOVERING
0 1 0 0 0 1 200
RESTART GROUP:DB2WAS PACING : 0 FREECSA: 0 0
ELEMENT NAME :DB0ZGD0Z2 JOBNAME :D0Z2MSTR STATE :AVAILABLE
CURR SYS :SC64 JOBTYPE :STC ASID :00CE
INIT SYS :SC64 JESGROUP:XCFJES2A TERMTYPE:ALLTERM
EVENTEXIT:DSN7GARM ELEMTYPE:SYSDB2 LEVEL : 1
TOTAL RESTARTS : 0 INITIAL START:08/16/2012 12:38:01
RESTART THRESH : 0 OF 3 FIRST RESTART:*NONE*
RESTART TIMEOUT: 21600 LAST RESTART:*NONE*
Figure 4-3 DISPLAY XCF ARM command output
4.1.3 WLM configuration
You need to set up WLM to provide WLM application environments for DB2 external stored procedure processing and to provide service classification for the DB2 system address spaces and for JDBC type 4 workloads.
DB2 stored procedure WLM application environments
The JDBC driver uses DB2-provided external stored procedures for metadata retrieval. These external stored procedures require WLM application environments (WLM APPLENV) to be defined and activated. DB2 installation job DSNTIJRT or DSNTIJMS performs this configuration task during DB2 installation or migration. The WLM application environments used in our DB2 environment are illustrated in Figure 4-4.
Application-Environment Notes Options Help
-----------------------------------------------------------------------
Application Environment Selection List Row 164 to 178
Command ===> __________________________________________________________
Action Codes: 1=Create, 2=Copy, 3=Modify, 4=Browse, 5=Print, 6=Delete,
/=Menu Bar
Action Application Environment Name Description
__ DSNWLMDB0Z_DSNACICS DB2-SUPPLIED WLM ENVIRONMENT
__ DSNWLMDB0Z_GENERAL DB2-SUPPLIED WLM ENVIRONMENT
__ DSNWLMDB0Z_JAVA DB2-SUPPLIED WLM ENVIRONMENT
__ DSNWLMDB0Z_MQSERIES DB2-SUPPLIED WLM ENVIRONMENT
__ DSNWLMDB0Z_NUMTCB1 DB2-SUPPLIED WLM ENVIRONMENT
__ DSNWLMDB0Z_PGM_CONTROL DB2-SUPPLIED WLM ENVIRONMENT
__ DSNWLMDB0Z_REXX DB2-SUPPLIED WLM ENVIRONMENT
__ DSNWLMDB0Z_UTILS DB2-SUPPLIED WLM ENVIRONMENT
__ DSNWLMDB0Z_WEBSERVICES DB2-SUPPLIED WLM ENVIRONMENT
__ DSNWLMDB0Z_XML DB2-SUPPLIED WLM ENVIRONMENT
Figure 4-4 DB2 SPAS WLM application environments
After we had run one of our Java EE sample applications, we verified the status of the procedures used by the JDBC driver for metadata retrieval. To perform this verification we executed the DISPLAY PROCEDURE command shown in Figure 4-5. For each procedure the command output confirms the procedure status and the WLM application environment name.
-dis proc
DSNX940I -D0Z1 DSNX9DIS DISPLAY PROCEDURE REPORT FOLLOWS -
------- SCHEMA=SYSIBM
PROCEDURE STATUS ACTIVE QUED MAXQ TIMEOUT FAIL WLM_ENV
SQLCOLUMNS
STARTED 0 0 1 0 0 DSNWLMDB0Z_GENERAL
SQLCAMESSAGE
STARTED 0 0 1 0 0 DSNWLMDB0Z_GENERAL
Figure 4-5 DISPLAY PROCEDURE output
Next we executed the z/OS command shown in Example 4-2 to verify the status and the JCL procedure name of the DSNWLMDB0Z_GENERAL WLM application environment. The command output confirms the application environment availability and the name of the JCL procedure used by WLM to start the WLM stored procedure address space (WLM SPAS).
Example 4-2 DISPLAY WLM APPLENV output
D WLM,APPLENV=DSNWLMDB0Z_GENERAL
IWM029I 04.02.44 WLM DISPLAY 256
APPLICATION ENVIRONMENT NAME STATE STATE DATA
DSNWLMDB0Z_GENERAL AVAILABLE
ATTRIBUTES: PROC=D0ZGWLMG SUBSYSTEM TYPE: DB2
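You can drive these stored procedures from any JDBC application by issuing a metadata request. The following minimal sketch (the connection URL, user ID, and password are placeholders we made up for illustration) causes the driver to call SYSIBM.SQLCOLUMNS, which runs in the DSNWLMDB0Z_GENERAL application environment shown above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class MetadataProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder type 4 connection; adjust host, port, and location name
        Connection con = DriverManager.getConnection(
                "jdbc:db2://host.example.com:38320/DB0Z", "dbuser", "dbpwd");
        // getColumns() is implemented by the IBM Data Server Driver for JDBC
        // as a call to the external stored procedure SYSIBM.SQLCOLUMNS
        ResultSet rs = con.getMetaData().getColumns(null, "SYSIBM", "SYSTABLES", "%");
        while (rs.next()) {
            System.out.println(rs.getString("COLUMN_NAME"));
        }
        rs.close();
        con.close();
    }
}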
WLM service classification for DB2
You use z/OS Workload Manager (WLM) to assign performance goals and business importance to your batch, OLTP, and Java EE workloads. Performance goals and business importance in that sense control how many resources, such as CPU and storage, are given to the work to meet its goal. WLM controls the dispatching priority based on the goals you supply, raising or lowering the priority as needed to meet the specified goal.
The three kinds of goals that you can use are:
Response time
Controls how quickly you want the work to be processed.
Execution velocity
Controls how fast the work should be run when ready, without being delayed for processor, storage, I/O access, and queue delay.
Discretionary
Defines a category for low priority work for which you define no performance goals.
Response time goals are appropriate for user applications. User applications in the context of this book are WebSphere Application Server for z/OS Java applications connecting to DB2 for z/OS using JDBC type 2 and type 4 connections.
For the DB2 system address spaces, velocity goals are more appropriate. Only a small amount of the work done in DB2 is counted toward this velocity goal. Most of the work done in DB2 counts towards the user goal.
Your performance goals are implemented through WLM service classes. You create your WLM service classes using the attributes that are required to meet your service level agreement objectives. WLM service classes are categorized by subsystem types. WLM uses the subsystem type specific classification rules to assign service classes to incoming workloads. For simplicity we use the term service classification to refer to the process of service class assignment by WLM.
WLM subsystem types
Figure 4-6 provides an overview of the subsystem types available in WLM.
Figure 4-6 WLM classification subsystem types
The WLM subsystem types relevant to our WebSphere Application Server environment are:
STC (started task control)
Subsystem type for the service classification of DB2 and WebSphere Application Server system address spaces. In this part of the book we only discuss the classification of the DB2 system address spaces.
DDF
Subsystem type for the service classification of transaction type enclave workloads that arrive in DB2 through the DB2 DIST address space via JDBC type 4 connections.
CB
Subsystem type for service classification of transaction type enclave Java workloads that run in WebSphere Application Server for z/OS regardless of the JDBC driver type being used.
Service classification for subsystem type DB2 is only relevant for workloads related to DB2 Parallel Sysplex query parallelism, which is not used in our scenario.
DB2 system service classification
We used the WLM service classification illustrated in Figure 4-7 on page 109 to assign velocity goals to our DB2 system address spaces. The service classes we chose are in line with the recommendations given in DB2 10 for z/OS Managing Performance, SC19-2978. We additionally assigned unique report classes to each DB2 address space, which is useful for monitoring and problem determination purposes.
Subsystem-Type Xref Notes Options Help
--------------------------------------------------------------------------
Modify Rules for the Subsystem Type Row 8 to 34 of 35
Command ===> ___________________________________________ Scroll ===> CSR
Subsystem Type . : STC Fold qualifier names? N (Y or N)
Description . . . Started Tasks
Action codes: A=After C=Copy M=Move I=Insert rule
B=Before D=Delete row R=Repeat IS=Insert Sub-rule
More ===>
--------Qualifier-------- -------Class--------
Action Type Name Start Service Report
DEFAULTS: STC RSYSDFLT
____ 1 TN D0Z1ADMT ___ STC D0Z1ADMT
____ 1 TN D0Z1DBM1 ___ STCHI D0Z1DBM1
____ 1 TN D0Z1MSTR ___ STCHI D0Z1MSTR
____ 1 TN D0Z1DIST ___ STCHI D0Z1DIST
____ 1 TN D0Z1IRLM ___ SYSSTC D0Z1IRLM
____ 1 TN D0Z2ADMT ___ STC D0Z2ADMT
____ 1 TN D0Z2DBM1 ___ STCHI D0Z2DBM1
____ 1 TN D0Z2MSTR ___ STCHI D0Z2MSTR
____ 1 TN D0ZGWL* ___ STCHI D0ZGWLM
____ 1 TN D0Z2DIST ___ STCHI D0Z2DIST
____ 1 TN D0Z2IRLM ___ SYSSTC D0Z2IRLM
Figure 4-7 WLM DB2 system service classification
After we had started DB2 we used the ISPF SDSF application to verify that z/OS used our WLM configuration for the DB2 started tasks. The SDSF display active output that we obtained for verification is shown in Figure 4-8.
Display Filter View Print Options Search Help
------------------------------------------------------
SDSF DA SC64 SC64 PAG 0 CPU/L/Z 2/ 3/ 1
COMMAND INPUT ===>
NP JOBNAME U% Workload SrvClass RptClass
D0Z2MSTR 2 STCTASKS STCHI D0Z2MSTR
D0Z2IRLM 2 SYSTEM SYSSTC D0Z2IRLM
D0Z2DBM1 2 STCTASKS STCHI D0Z2DBM1
D0Z2DIST 2 STCTASKS STCHI D0Z2DIST
D0Z2ADMT 2 STCTASKS STC D0Z2ADMT
D0ZGWLMG 2 STCTASKS STCHI D0ZGWLM
D0ZGWLM1 2 STCTASKS STCHI D0ZGWLM
Figure 4-8 SDSF WLM DB2 system service classification
In our scenario the D0Z1 and D0Z2 DIST address spaces run in service class STCHI, which represents a performance goal that is as high as the goal for the DB2 database services address spaces. Classifying the DIST address spaces appropriately is important because the service class determines how quickly the DIST address space is able to perform operations associated with managing the distributed DB2 workload. Operations in that sense include adding new users or removing users that have terminated their JDBC type 4 connections.
JDBC type 4 service classification
The WLM classification of the DDF address space described in “DB2 system service classification” on page 108 does not govern the performance objective of database access threads (DBATs) connecting to DB2 through WebSphere Application Server JDBC type 4 connections. As illustrated in Figure 4-9, the SQL workload submitted by DBATs, also referred to as DDF server threads, is scheduled by DDF to run in DB2 as enclave service request blocks (enclave SRBs). The enclave SRB obtains its performance goal from the WLM service classification rule that is defined in the active WLM policy.
Figure 4-9 WebSphere Application Server JDBC type 4
The main characteristics of a DDF enclave are:
An enclave is a single transaction, which starts when the enclave is created and ends when the enclave is deleted.
DDF creates an enclave for an incoming request when it detects the first SQL statement and usually deletes the enclave at SQL COMMIT, thus a DDF enclave transaction consists of a single SQL COMMIT scope.
You can use WLM to let individual DDF server threads (DDF enclaves) have their own z/OS performance objectives. For instance, you can assign a WLM service class with a short response time to service your business critical online DDF server threads.
A DDF enclave SRB is scheduled to run in the target DB2 address space but executes work on behalf of the DDF enclave.
Dispatching controls for enclave SRB processing are derived from the DDF enclave. As for our Java EE sample applications we use DB2 client information to control service classification in WLM.
CPU time consumed by each SRB is accumulated back to the DDF enclave and is reported as enclave-related CPU service in the SMF type 30 records of the enclave-owning DDF address space.
RMF creates separate SMF type 72 records for independent enclaves.
In our environment we use two WebSphere Application Server applications to illustrate WLM service classification. For WLM classification each application provides the DB2 clientApplicationInformation data source custom property shown in Table 4-1 on page 111 when connecting to DB2.
Table 4-1 clientApplicationInformation
Application              Context root            clientApplicationInformation
DayTrader-EE6            /daytrader              TraderClientApplication
D0ZG_WASTestClientInfo   /wastestClientInfo      dwsClientinformationDS
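We supplied clientApplicationInformation through the data source custom property in WebSphere Application Server. For completeness, a hedged sketch of achieving the same effect programmatically; it assumes an existing DataSource and relies on the standard JDBC client information property ApplicationName, which, to our understanding, the IBM Data Server Driver for JDBC maps to the DB2 client application information used for classification:

import java.sql.Connection;
import javax.sql.DataSource;

public class ClientInfoTagging {
    // Returns a connection tagged so that WLM can match the Trade* PC rule
    static Connection getTaggedConnection(DataSource ds) throws Exception {
        Connection con = ds.getConnection();
        // ApplicationName maps to the DB2 client application information
        con.setClientInfo("ApplicationName", "TraderClientApplication");
        return con;
    }
}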
We used the DB2 clientApplicationInformation listed in Table 4-1 to define the service classification rules that are shown in Figure 4-10.
Subsystem-Type Xref Notes Options Help
-----------------------------------------------------------------------
Modify Rules for the Subsystem Type Row 19 to 41 of
Command ===> ___________________________________________ Scroll ===> CS
Subsystem Type . : DDF Fold qualifier names? N (Y or N)
Description . . . DDF Work Requests
Action codes: A=After C=Copy M=Move I=Insert rule
B=Before D=Delete row R=Repeat IS=Insert Sub-rule
More ===>
--------Qualifier-------- -------Class--------
Action Type Name Start Service Report 3
DEFAULTS: DDFBAT ________
____ 1 SSC 1 DB0ZG ___ DDFDEF RD0ZGDEF
____ 2 PC 2 Trade* ___ DDFONL RTRADE0Z
____ 2 PC dwsClie* ___ DDFONL RDWS0Z
Figure 4-10 WLM DDF classification
1. Type SSC (subsystem collection) contains the data sharing group name, which is not to be confused with the group attach name. You can determine the group name by running the command shown in Figure 4-11.
 
-dis group
DSN7100I -D0Z2 DSN7GCMD
*** BEGIN DISPLAY OF GROUP(DB0ZG ) CATALOG LEVEL(101) MODE(NFM )
Figure 4-11 DB2 Display group output
2. Under data sharing group DB0ZG we use rule type PC (process name) to assign WLM service and report classes based upon the DB2 client application information that is used by our Java applications. The DB2 client information provided by our Java applications is:
 – clientApplicationInformation:TraderClientApplication
 • matches WLM process name Trade*
 • assigns WLM service class DDFONL
 • assigns WLM report class RTRADE0Z
 – clientApplicationInformation: dwsClientinformationDS
 • matches WLM process name dwsClie*
 • assigns WLM service class DDFONL
 • assigns WLM report class RDWS0Z
The WLM classification of our DB2 DBAT workload using the WLM configuration shown in Figure 4-10 on page 111 is illustrated in Figure 4-12.
Figure 4-12 WLM DDF classification overview
1. When our WebSphere Application Server application connects to DB2 it provides its client application information referred to in Figure 4-10 on page 111. The DB2 distributed address space creates an enclave and schedules an enclave SRB. The enclave SRB uses a program call instruction to trigger processing in the DB2 database manager address space.
2. WLM considers the client application information to assign the performance goal referred to in service class DDFONL. WLM furthermore assigns the report class defined in the WLM policy shown in Figure 4-10 on page 111 which is useful when it comes to creating RMF workload activity reports.
3. All other DB2 DBATs will be classified using the data sharing group default service class and report class configuration referred to in the WLM policy shown in Figure 4-10 on page 111 under rule type SSC (subsystem collection).
4. Requests falling in rule type SSC will be classified using service class DDFDEF and report class RD0ZGDEF.
5. The DB2 system address spaces are classified using classification rules defined in subsystem type STC. For our environment these classification rules are discussed in “DB2 system service classification” on page 108.
When we tested the D0ZG_WASTestClientInfo data web service application we captured the SDSF enclave display output shown in Figure 4-13 to confirm that the WLM classification rule shown in Figure 4-10 on page 111 was correctly used by our runtime environment.
Display Filter View Print Options Search Help
--------------------------------------------------------------------------
SDSF ENCLAVE DISPLAY SC63 ALL LINE 1-13 (13)
COMMAND INPUT ===> SCROLL ===>
NP NAME SSType Status SrvClass Per PGN RptClass ResGroup
i 600000051A DDF ACTIVE DDFONL 1 RDWS0Z
Figure 4-13 SDSF enclave display
From the SDSF panel provided in Figure 4-13 we issued the SDSF action character shown to obtain additional information about the enclave. This took us to the panel shown in Figure 4-14.
Enclave 600000051A on System SC63
Subsystem type DDF Plan name DISTSERV
Subsystem name D0Z1 Package name SYSLH300
Priority Connection type SERVER
Userid WASCTX1 Collection name JDBCAUTHTEST
Transaction name Correlation db2jcc_appli
Transaction class Procedure name
Netid Function name DB2_DRDA
Logical unit name Performance group
Subsys collection DB0ZG Scheduling env
Process name dwsClientinformationDS
Reset NO
Figure 4-14 SDSF display enclave information
The information that we obtained through Figure 4-14 confirmed that the following runtime attributes were used because of our WLM service classification:
Subsystem type DDF and subsystem name D0Z1 - the enclave was managed by the D0Z1DIST address space
Subsystem collection DB0ZG - data sharing group name
Process name dwsClientinformationDS - derived from DB2 clientApplicationInformation data source custom property setting.
 
Important: Always provide a classification rule in WLM. If you do not classify your DDF workload, your DDF transactions run unclassified in service class SYSOTHER, which has the lowest execution priority in your system. As a consequence, the transaction throughput of those applications suffers.
JDBC type 2 connections
WebSphere Application Server applications connecting to DB2 using JDBC type 2 connections do not require WLM subsystem type DDF service classification rules because they communicate with DB2 through local DB2 RRSAF connections without going through the DDF address space. The SQL submitted over JDBC type 2 connections runs under the service class of the invoking Java application, for which classification rules are provided in WLM subsystem type CB. The process of WLM classification of local JDBC type 2 connections is illustrated in Figure 4-15.
Figure 4-15 WebSphere Application Server for z/OS JDBC type 2
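To make the distinction concrete, the following sketch contrasts the two connection styles (host, port, location, and attach names are placeholders): the type 4 URL drives DRDA work through the DIST address space, classified under subsystem type DDF, while the type 2 URL attaches locally through RRSAF so that SQL inherits the invoking application's CB service class.

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectionStyles {
    public static void main(String[] args) throws Exception {
        // JDBC type 4: DRDA over TCP/IP through the DB2 DIST address space;
        // the SQL runs in a DDF enclave (WLM subsystem type DDF)
        Connection type4 = DriverManager.getConnection(
                "jdbc:db2://host.example.com:38320/DB0Z", "dbuser", "dbpwd");

        // JDBC type 2: local attach through RRSAF using a location or
        // group attach name (placeholder); the SQL runs under the
        // invoking application's CB service class
        Connection type2 = DriverManager.getConnection("jdbc:db2:D0ZG");

        type4.close();
        type2.close();
    }
}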
DB2 WLM additional information
For more information about setting z/OS performance options for DB2 using z/OS Workload Manager, refer to Chapter 4, “z/OS performance options for DB2”, in DB2 10 for z/OS Managing Performance, SC19-2978.
4.1.4 Resource Recovery Services
Resource Recovery Services (RRS) is a z/OS subsystem running in its own address space. RRS is a critical resource that must be available to resource managers such as WebSphere MQ, CICS, IMS, WebSphere Application Server, DB2 and transactional VSAM to guarantee data integrity in a z/OS two-phase commit environment.
RRS is a prerequisite for DB2 for z/OS availability in a WebSphere Application Server environment. You must not shut down RRS while DB2 and WebSphere Application Server are running: if you do, WebSphere Application Server terminates and cannot be restarted until RRS has been restarted. As a result, uncommitted units of recovery (UR) cannot be resolved in DB2 for as long as WebSphere Application Server is down. For this reason you should perform RRS shutdown only after resource managers such as DB2 and WebSphere Application Server have been quiesced. In case of an RRS subsystem failure, RRS must be restarted as quickly as possible, which is usually assured through z/OS Automatic Restart Manager (ARM) or by other means of system automation.
For information about using and implementing RRS for high availability, see z/OS MVS Programming: Resource Recovery, SA22-7616-11, available at http://www.ibm.com/systems/z/os/zos/bkserv/r13pdf.
 
Important: DB2 external stored procedures access DB2 through the RRSAF attachment interface which requires RRS to be available.
The DB2 JDBC driver (regardless of whether you use JDBC type 2 or type 4 connections) transparently calls DB2-provided external stored procedures for metadata retrieval. Therefore, accessing DB2 for z/OS through JDBC already requires the RRS subsystem to be available, regardless of the JDBC connection type being used and regardless of the runtime environment your Java application executes in.
DB2 RRS resource managers
DB2 for z/OS registers and deregisters its two resource managers with RRS during DB2 start and DB2 shut down. The DB2 resource managers our DB2 environment registers and deregisters with RRS are:
RRS Attachment Facility (for instance, DSN.RRSATF.IBM.D0Z1 for DB2 member D0Z1)
WLM controlled stored procedure address spaces (for instance, DSN.RRSPAS.IBM.D0Z1 for DB2 member D0Z1)
DB2 startup
External stored procedures and JDBC type 2 based applications communicate with DB2 through the RRSAF attachment interface, which is always used for JDBC type 2 connections and which ensures data integrity when resource changes in other z/OS resource managers, such as IMS, CICS, and WebSphere MQ, are performed within the same unit of recovery. To cater for this RRS requirement, DB2 verifies the availability of RRS during startup and issues the messages shown in Figure 4-16.
 
DSN3029I -D0Z1 DSN3RRRS RRS ATTACH PROCESSING IS AVAILABLE
DSNL512I -D0Z1 DSNLILNR TCP/IP
Figure 4-16 DB2 RRS attach ok
If you are not sure whether the DB2 resource managers (RMs) are successfully registered with RRS you can verify their RRS state by running the z/OS command shown in Figure 4-17. To verify the RM state of all your DB2 members you need to issue the command once for each z/OS LPAR your DB2 members are running in.
D RRS,RM,SUM
ATR602I 18.01.34 RRS RM SUMMARY 351
RM NAME STATE SYSTEM GNAME
DSN.RRSATF.IBM.D0Z1 Run SC63 SANDBOX
DSN.RRSPAS.IBM.D0Z1 Run SC63 SANDBOX
Figure 4-17 DB2 start RRS RM state
DB2 shut down
During shut down DB2 deregisters its resource managers from RRS. When we shut down DB2 member D0Z1 we noticed the RRS deregistration messages shown in Figure 4-18.
-D0Z1 STOP DB2
DSNY002I -D0Z1 SUBSYSTEM STOPPING
ATR169I RRS HAS UNSET EXITS FOR RESOURCE MANAGER
DSN.RRSATF.IBM.D0Z1 REASON: UNREGISTERED
ATR169I RRS HAS UNSET EXITS FOR RESOURCE MANAGER
DSN.RRSPAS.IBM.D0Z1 REASON: UNREGISTERED
Figure 4-18 DB2 RRS deregistration
We then verified the RRS state of the DB2 resource managers shown in Figure 4-18. As you can see in Figure 4-19, stopping DB2 member D0Z1 set the RRS resource manager state to Reset.
D RRS,RM,SUM
ATR602I 18.29.44 RRS RM SUMMARY
RM NAME STATE SYSTEM GNAME
DSN.RRSATF.IBM.D0Z1 Reset SC63 SANDBOX
DSN.RRSPAS.IBM.D0Z1 Reset SC63 SANDBOX
Figure 4-19 RRS RM state upon DB2 shut down
Stopping RRS
If you stop RRS, it deregisters from ARM and issues the system messages shown in Figure 4-20.
ATR143I RRS HAS BEEN DEREGISTERED FROM ARM.
ASA2960I RRS SUBSYSTEM FUNCTIONS DISABLED. COMPONENT ID=SCRRS
Figure 4-20 Stop RRS
When a DB2 RRSAF application accesses DB2 while RRS is unavailable, the DB2 Resource Recovery Services attachment facility (RRSAF) returns error code and reason code information that allows the application to cater for that situation. The reason codes an application needs to handle in case of RRS unavailability are shown in Table 4-2 on page 117.
Table 4-2 RRSAF reason codes
RRSAF reason code 00C12219
The application program issued an SQL or IFI function request without completing CREATE THREAD processing. SQL or IFI requests cannot be issued until CREATE THREAD processing is complete. We experienced this reason code when using an implicit RRSAF connection to DB2. It indicates that the RRSAF CREATE THREAD processing failed due to RRS unavailability, which subsequently caused a failure during SQL processing.
RRSAF reason code 00F30091
The application program issued an RRSAF IDENTIFY function request, but RRS/MVS was not available.
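From a Java application these RRSAF failures typically surface as an SQLException on the connection or SQL request. A minimal defensive sketch (the data source and message handling are illustrative only):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RrsAwareConnect {
    // Attempt a connection and report diagnostics, for example when
    // RRS is down and RRSAF IDENTIFY fails with reason code 00F30091
    static Connection connect(DataSource ds) throws SQLException {
        try {
            return ds.getConnection();
        } catch (SQLException e) {
            System.err.printf("Connect failed: SQLCODE=%d SQLSTATE=%s %s%n",
                    e.getErrorCode(), e.getSQLState(), e.getMessage());
            throw e;
        }
    }
}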
WebSphere Application Server for z/OS considerations
As shown in Figure 4-21, an uncontrolled RRS shutdown triggers termination of WebSphere Application Server for z/OS, which causes additional complexity in terms of restart and recovery. To prevent such undesired situations, we strongly recommend restricting RRS shutdown to processes and procedures that enforce system management standards and data consistency. Such processes do not allow RRS to be stopped while a WebSphere Application Server for z/OS instance or DB2 is running.
BBOT0024A RRS HAS BECOME UNAVAILABLE. WEBSPHERE MUST BE RESTARTED.
BBOO0035W TERMINATING THE CURRENT PROCESS, REASON=C9C218F7.
BBOO0009E WEBSPHERE FOR Z/OS DAEMON SC63 ENDED ABNORMALLY, 310
REASON=C9C212C4.
BBOO0056E CONTEXT SERVICE 'CTXBEGC' FAILED WITH RETURN CODE=301.
Figure 4-21 WebSphere Application Server for z/OS and RRS termination
4.1.5 z/OS resource planning
When planning your DB2 environment, you need to have an idea of how much memory, processor capacity or disk storage your application is going to consume. To estimate this resource requirement you need to know the DB2 objects that are going to be created, how much data is going to be stored in DB2 tables and indexes, the SQL workload profile of your application, the amount of processor time an application invocation is going to consume, and the throughput your application is going to cause.
For Java applications or DB2 DRDA workloads you might want to consider adding zIIP or zAAP processor capacity to satisfy the additional processor requirement and to benefit financially from using these specialty engines. During pre-production stress testing you carefully monitor and tune your application, aiming at production-like application throughput rates. As a result of pre-production stress testing and tuning, you know which additional memory, processor, and disk resources are required to run your application in your production environment. After these additional resources have been allocated to your production environment, you are ready to promote your application to production.
We created our WebSphere Application Server for z/OS and DB2 for z/OS runtime environment with system function testing in mind. The infrastructure we were provided with had the following resource allocations:
IBM z196 zEnterprise® 2817 Model 716
2 Parallel Sysplex z/OS 1.13 members, each with 8 GB real memory, 2 central processors (CPs), 2 System z Application Assist Processors (zAAPs), and 2 System z Integrated Information Processors (zIIPs)
2 Coupling Facilities, each with 1607 MB internal storage and two shared processors
2-way DB2 10 for z/OS data sharing with one member running in each LPAR, configured for distributed workload balancing and for fault tolerance
WebSphere Application Server for z/OS cluster with two application server instances running, one in each LPAR
DFSMS storage group set up with four 3390 model 9 volumes for DB2 user data and indexes
In the environment of our scenario we implemented the DB2 and application objects of the WebSphere DayTrader application, which we downloaded from http://www14.software.ibm.com/webapp/download/preconfig.jsp?id=2011-06-08+10%3A34%3A22.216702R&cat=webservers&fam=&s=&S_TACT=&S_CMP=&st=52&sp=20
The DayTrader application consists of a WebSphere Application Server application and a database model of 6 DB2 tables and 11 indexes. The scenario that we use causes just a few thousand table rows to be inserted per application execution. The tables are reset prior to application execution. Given these workload characteristics, there was no need to perform more detailed capacity planning, as our Parallel Sysplex infrastructure provided plenty of resources to efficiently deal with the anticipated data volume and workload throughput.
4.1.6 External storage configuration
DB2 user data requires DASD storage to be provided for DB2 table space and index space VSAM linear data set (LDS) allocation. Whenever you create a DB2 table space or index, you want to be sure the underlying VSAM LDS data set is allocated in the volume pool that has been provided for your data. To ensure that your user data volume pool is sufficiently sized and correctly used for data set allocations, we recommend providing the following resources and configurations:
An estimate of the space requirement for user table and index spaces
DASD disk space and its DFSMS storage group setup
A data set high level qualifier (HLQ) for the VSAM LDS data sets created for DB2 table spaces and index spaces
A DB2 storage group referencing the HLQ for your DB2 table spaces and index spaces
DFSMS automatic class selection (ACS) routines to assign the desired data class, management class, and storage class attributes. The storage group ACS routine makes sure your DB2 table space or index space data set is created in the volume pool that is dedicated to your DB2 data.
In the following sections we outline the major steps required for external storage configuration. For details, see DB2 9 for z/OS and Storage Management, SG24-7823.
Estimating space requirement
Before you create new tables and indexes in DB2 you should have an idea of the amount of disk space these new objects are going to use. For capacity planning purposes these space requirements should be discussed with your storage administrator so that your volume pool configuration can be changed to provide the additional disk space.
After the objects have been created and are operational you can use the tooling provided by DB2 to monitor space growth. The tools provided by DB2 supporting you in performing these tasks are:
External stored procedure SYSPROC.ADMIN_DS_LIST
DB2 real time statistics (RTS)
External stored procedure SYSPROC.DSNACCOX
For a discussion of these tools refer to 4.3, “DB2 for z/OS configuration” on page 137.
DFSMS storage group
The ISPF interactive storage management facility (ISMF) output shown in Figure 4-22 provides a list of system managed storage (SMS) storage groups defined for our DB0ZG data sharing group.
Panel List Utilities Scroll Help
------------------------------------------------------------------------------
STORAGE GROUP LIST
Command ===> Scroll ===> HALF
Entries 1-8 of 8
Data Columns 3-6 of 48
CDS Name : ACTIVE
Enter Line Operators below:
LINE STORGRP SG VIO VIO AUTO
OPERATOR NAME TYPE MAXSIZE UNIT MIGRATE
---(1)---- --(2)--- -------(3)------ --(4)-- (5)- --(6)---
DB0ZARCH POOL ------- ---- NO
DB0ZCPB COPY POOL BACKUP ------- ---- --------
LISTV DB0ZDATA POOL ------- ---- NO
DB0ZIMAG POOL ------- ---- NO
DB0ZLOG1 POOL ------- ---- NO
DB0ZLOG2 POOL ------- ---- NO
DB0ZMISC POOL ------- ---- NO
DB0ZTARG POOL ------- ---- NO
Figure 4-22 ISMF list storage group
We created SMS storage group DB0ZDATA to provide a volume pool for the disk space required to store the Daytrader DB2 tables and indexes. The other storage groups shown are for DB2 archive log data sets, image copy data sets, active log data sets as well as for other runtime data sets. We defined the DB0ZCPB COPY POOL BACKUP pool to provide the infrastructure for DB2 system backup and restore. We then used the ISMF LISTVOL command to obtain a list of volumes available in storage group DB0ZDATA.
The volume list is shown in Figure 4-23.
Panel List Utilities Scroll Help
------------------------------------------------------------------------------
VOLUME LIST
Command ===> Scroll ===> HALF
Entries 1-4 of 4
Enter Line Operators below: Data Columns 3-8 of 43
LINE VOLUME FREE % ALLOC FRAG LARGEST FREE
OPERATOR SERIAL SPACE FREE SPACE INDEX EXTENT EXTENTS
---(1)---- -(2)-- ---(3)--- (4)- ---(5)--- -(6)- ---(7)--- --(8)--
0Z9B07 7913094K 95 401407K 0 7911102K 13
lds 0Z9B86 8021885K 96 292616K 1 8009877K 14
0Z9287 8080430K 97 234071K 0 8079600K 7
0Z9309 7796114K 94 518387K 0 7788976K 14
Figure 4-23 ISMF list volumes
From the volume list shown in Figure 4-23 we then issued the user command lds (list data set) to display the data sets stored on volume 0Z9B86. lds is a user-provided REXX program that uses ISPF services to display a volume-related data set list, which is useful if you want to check whether the data set placement on the volume works as planned. The REXX source is shown in Example 4-3.
Example 4-3 REXX program lds
/* REXX - list the data sets on the volume selected in ISMF */
TRACE OFF
ADDRESS ISPEXEC
"VGET (COBJ ) ASIS"                                  /* volume serial supplied by ISMF */
"LMDINIT LISTID(LISTEN) VOLUME("COBJ")"              /* build the data set list        */
"LMDDISP LISTID("LISTEN") VIEW(VOLUME) CONFIRM(YES)" /* display the data set list      */
"LMDFREE LISTID("LISTEN") "                          /* release the list resources     */
RETURN
As a result we obtained the data set list shown in Figure 4-24.
Menu Options View Utilities Compilers Help
------------------------------------------------------------------------------
DSLIST - Data Sets on volume 0Z9B86 Row 1 of 131
Command ===> Scroll ===> PAGE
Command - Enter "/" to select action Message Volume
------------------------------------------------------------------------------
DB0ZD.DSNDBD.DBTR8074.ORDERRAC.I0001.A001 0Z9B86
Figure 4-24 List of data sets by volume
DB2 storage group and data set HLQ usage
Table space and index space creation triggers VSAM LDS data set creation in DB2. For data set creation DB2 obtains the data set HLQ from the DB2 storage group referenced in the CREATE TABLESPACE and CREATE INDEX DDL statements. In our environment we use DB2 storage group GR248074, through which DB2 uses a data set HLQ of DB0ZD for VSAM LDS data set creation. The DDL that we used to create our DB2 storage group is shown in Example 4-4.
Example 4-4 Create storage group and table space DDL
CREATE STOGROUP GR248074
VOLUMES("*")
VCAT DB0ZD ;
 
CREATE TABLESPACE TSACCEJB
IN DBTR8074
USING STOGROUP GR248074 ;
Our DFSMS configuration places data sets with an HLQ of DB0ZD on one of the volumes available in DFSMS storage group DB0ZDATA. If the volume chosen becomes full, DB2 automatically adds a volume to the VSAM LDS data set definition, allowing the data set to extend to the additional volume. If all volumes available in DFSMS storage group DB0ZDATA become full, DFSMS configuration options can be used to overflow to another volume pool or to perform an online volume pool change to supply additional disk space and support high availability.
If you want to read more about DB2 and DFSMS storage, refer to DB2 9 for z/OS and Storage Management, SG24-7823, available at http://www.redbooks.ibm.com/abstracts/sg247823.html?Open.
Storage group ACS routine and data set placement
When you create a table space or index space, DB2 defines a VSAM LDS data set using the HLQ provided by the DB2 storage group that you use in your CREATE TABLESPACE or CREATE INDEX DDL statement. Data set creation triggers DFSMS automatic class selection (ACS) routine processing. In case of the CREATE TABLESPACE DDL shown in Example 4-4, the storage group ACS routine assigns storage group DB0ZDATA to be used for VSAM LDS data set allocation. During data set allocation DFSMS places the data set on one of the volumes shown in the volume list in Figure 4-23 on page 120.
The ACS routine processing flow for DB2 objects that we created in our environment is illustrated in Figure 4-25.
Figure 4-25 DFSMS ACS routine processing
1. The CREATE TABLESPACE DDL triggers the creation of a VSAM LDS cluster. The cluster name uses the data set HLQ provided by DB2 storage group GR248074 which causes the data class ACS routine to assign the DB0Z data class. The DB0Z data class provides data set attributes that are required to support VSAM extended format and extended addressability. These attributes are recommended to provide high availability and to support new features available with modern disk technology.
2. Next the storage class ACS routine receives control and assigns storage class DB0ZDATA. A DFSMS storage class controls data set level usage of storage performance attributes provided by DFSMS. For instance, in our environment the DB0ZDATA storage class assures the use of parallel access volumes (PAV), which is highly recommended to alleviate I/O queuing in case of high I/O concurrency on the same physical volume. After a non-null storage class has been assigned, the data set to be created is DFSMS managed.
3. Next the management class ACS routine receives control. The management class controls the actions that are taken by the DFHSM space management cycle. The management class used for our table and index spaces ensures that no DFHSM space management activity is performed that can have a negative impact on data availability and data integrity. For instance, the management class used ensures that our table space and index space related VSAM LDS data sets are not migrated, deleted, or backed up by DFHSM during the space management cycle.
4. Finally the storage group ACS routine receives control. As mentioned before the DFSMS storage group provides a group of volumes the data set creation process can transparently choose from.
4.1.7 UNIX System Services file system configuration
If your production environment depends on the IBM Data Server Driver for JDBC and SQLJ being available on z/OS, you need to design your infrastructure to provide high availability for this infrastructure component.
SMP/E installs the DB2 command line processor (CLP) and the IBM Data Server Driver for JDBC and SQLJ related UNIX System Services files into zSeries File System (zFS) data sets. Copies of these SMP/E controlled zFS data sets are rolled out into target runtime environments to provide software upgrades or to participate in rolling maintenance processes.
Because you should not replace the IBM Data Server Driver for JDBC installation while it is being used by applications, you need to design your UNIX System Services related DB2 software update strategy to support seamless installation and backout of JDBC driver changes. To address this problem we carried out the following activities:
Provide one file system directory structure for each JDBC driver level we want to support
Use UNIX System Services symbolic links to connect the appropriate JDBC driver level with a logical path name our application uses to load the JDBC driver
JDBC driver level related file system directories
Our runtime environment uses the UNIX System Services directories shown in Figure 4-26 to provide the different JDBC driver levels our application might have to use.
/pp/db2v10/
+---d110809/ <---- zFS file: OMVS.DSNA10.BASE.D110809
+---d120320/ <---- zFS file: OMVS.DSNA10.BASE.D120320
+---d120719/ <---- zFS file: OMVS.DSNA10.BASE.D120719
 
Figure 4-26 UNIX System Services directories for JDBC driver level related rollout
Under directory /pp/db2v10 we created directories d110809, d120320, d120719 each of them representing a different software maintenance level. We then used these directories as mount point directories for mounting the corresponding zFS file data sets. The zFS files shown in Figure 4-26 are data sets that we previously copied from our SMP/E environment. In our runtime environment each of these mount point directories contains the directory structure shown in Figure 4-27.
/pp/db2v10/
+---d120719/
+-----base <---- DB2 command line processor
+-----jdbc <---- IBM Data Server Driver for JDBC
+-----mql <---- MQ listener
Figure 4-27 DB2 product related directories
Our z/OS administrator configured UNIX System Services to automatically mount the zFS files at IPL time. To verify the mount status of our DB2 related zFS files we ran the z/OS system command shown in Example 4-5.
Example 4-5 Display OMVS DB2 file systems
D OMVS,F,NAME=OMVS.DSNA10*
BPXO045I 05.36.29 DISPLAY OMVS 059
OMVS 0010 ACTIVE OMVS=(3A)
TYPENAME DEVICE ----------STATUS----------- MODE MOUNTED LATCHES
ZFS 162 ACTIVE READ 10/23/2012 L=146
NAME=OMVS.DSNA10.BASE.D120719 21.26.08 Q=146
PATH=/pp/db2v10/d120719
OWNER=SC63 AUTOMOVE=Y CLIENT=N
ZFS 161 ACTIVE READ 10/23/2012 L=145
NAME=OMVS.DSNA10.BASE.D120320 21.26.08 Q=0
PATH=/pp/db2v10/d120320
OWNER=SC63 AUTOMOVE=Y CLIENT=N
ZFS 160 ACTIVE READ 10/23/2012 L=144
NAME=OMVS.DSNA10.BASE.D110809 21.26.08 Q=144
PATH=/pp/db2v10/d110809
OWNER=SC63 AUTOMOVE=Y CLIENT=N
For each mounted zFS file the command output shown in Example 4-5 confirms the mount status, the zFS data set name, the mount point, and the mount mode. In our environment we mounted each of the DB2 zFS files read-only, which is recommended for performance reasons when no write access is required.
Use of symbolic links for loading the JDBC driver
JDBC applications should not be configured to load the JDBC driver directly from the installation directory, because if the installation directory changes, your application's JDBC driver configuration needs to be changed as well.
To address this problem, our applications use a data sharing group related path name to load the JDBC driver. We ran the command shown in Example 4-6 to create a UNIX System Services symbolic link that connects the current JDBC driver installation directory with the data sharing group related logical path name.
Example 4-6 JDBC create UNIX System Services symbolic link
ln -s /pp/db2v10/d120719 /usr/lpp/db2/d0zg
We then ran the command shown in Figure 4-28 to verify which installation directory our data sharing group logical path name is connected with.
ls -l /usr/lpp/db2/d0zg
lrwxrwxrwx /usr/lpp/db2/d0zg -> /pp/db2v10/d120719
Figure 4-28 Verify JDBC symbolic link
In case we need to fall back to the previous JDBC driver level we simply swap the symbolic link as shown in Example 4-7 on page 125. z/OS JDBC applications do not need to change their JDBC configuration because they use the data sharing group related path name for loading the JDBC driver.
Example 4-7 JDBC swap symbolic link
rm /usr/lpp/db2/d0zg
ln -s /pp/db2v10/d120320 /usr/lpp/db2/d0zg
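Because applications reference only the logical path name, a driver level swap requires no application change; a launch configuration would, for example, keep /usr/lpp/db2/d0zg/jdbc/classes/db2jcc4.jar on the class path. The following sketch (the jar location is assumed as just described) prints which driver level was actually loaded at run time:

import java.sql.Driver;

public class DriverLevelCheck {
    public static void main(String[] args) throws Exception {
        // db2jcc4.jar is expected on the class path via the symbolic link
        // /usr/lpp/db2/d0zg/jdbc/classes
        Driver d = (Driver) Class.forName("com.ibm.db2.jcc.DB2Driver")
                .getDeclaredConstructor().newInstance();
        System.out.printf("Loaded JCC driver version %d.%d%n",
                d.getMajorVersion(), d.getMinorVersion());
    }
}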
WebSphere Application Server considerations
We used the symbolic link shown in Example 4-6 on page 124 in the WebSphere Application Server JDBC driver path and JDBC driver native path configuration.
If you run multiple instances of WebSphere Application Server for z/OS, you might want to consider using application server specific symbolic links for loading the JDBC driver. This provides the flexibility you might need in case an application server instance is bound to a specific JDBC driver level due to failures that were introduced by a new JDBC driver installation.
4.1.8 Monitoring infrastructure
When you operate DB2 for z/OS you want to know whether application and system resource usage is in line with the service level agreements (SLAs) and within the estimated resource capacity. You furthermore want to be able to detect and analyze system and application growth to plan additional resource capacity. In case of performance problems you want to be able to track down their root cause so that you can decide on the performance tuning that needs to be performed. Being able to perform these tasks requires a monitoring infrastructure to be in place.
In 8.1, “Performance monitoring” on page 362, we provide a more general overview of performance monitoring.
In Appendix A, “DB2 administrative task scheduler” on page 483, we describe the administrative task scheduler (ADMT) setup to trigger batch jobs and DB2 commands, and to provide autonomic statistics monitoring.
To support DB2 system and application monitoring we implemented the following monitoring infrastructure:
Software stack
For reporting and analysis we installed the following software:
IBM OMEGAMON XE for DB2 PE on z/OS (OMPE). We describe this topic in 4.4, “Tivoli OMEGAMON XE for DB2 Performance Expert for z/OS” on page 201.
IBM Resource Measurement Facility™ (RMF) on z/OS
SMF Browser for WebSphere Application Server for z/OS to report on SMF type 120 records, which you can download from https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=zosos390.
SMF records to be collected
We configured our environment to collect the following SMF records:
RMF, DB2, and WebSphere Application Server information. We automatically archive the SMF data whenever an SMF data set switch occurs.
DB2 statistics and RMF collection intervals set to one minute to be able to line up the RMF data with the DB2 statistics data. Since DB2 10 for z/OS, the statistics traces are collected at one-minute intervals.
DB2 to send the following statistics and accounting traces to the SMF trace destination:
 – Statistics trace classes 1, 3, 4, 5, 6, and 7
 – Accounting trace classes 1, 2, 3, 7, and 8
DB2 audit authorization failure policy to be activated during DB2 startup. Collecting this information enables you to quickly identify the root cause of authorization failures. We ran the SQL statement shown in Example 4-8 to activate the policy.
Example 4-8 Define audit policy for authorization failures
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, CHECKING, DB2START)
VALUES('AUTHFAIL','A','Y');
DB2 trace IFCID 318 to be activated during DB2 member startup to enable the collection of global statement cache statistics. We configured the administrative scheduler to issue the DB2 command shown in Example 4-9 within DB2 start processing.
Example 4-9 Start IFCID 318
-START TRACE (P) CLASS(30) DEST(SMF) IFCID(318)
4.1.9 WebSphere Application Server and DB2 security
Our WebSphere Application Server environment accesses DB2 through JDBC type 2 and JDBC type 4 connections and uses dynamic SQL to perform database changes. WebSphere Application Server authenticates to DB2 using a user ID and password that are provided in the data source definition. In the following discussion the application server user ID and password are also referred to as the middle tier’s user ID and password.
We examine:
Authentication in a three-tier architecture
WebSphere Application Server operates in a three-tier application model in which it represents the middleware layer. In that model, the application server authenticates users running client applications, authenticates to DB2 using its middle tier user ID and password, and manages interactions with the database server. The privileges of the middle tier user ID are checked when accessing the database, including access that is performed on behalf of end users.
Figure 4-29 on page 127 visualizes the process of client authentication in a three-tier WebSphere Application Server environment.
Figure 4-29 WebSphere Application Server three-tier environment
In the scenario illustrated in Figure 4-29, end-user WASUSER has been authenticated by the application server and is connected to DB2. The DB2 connection uses the application server’s credentials (user ID WASSRV, provided in the data source authentication related properties) for DB2 authentication and authorization checking. Therefore, SQL requests submitted by WASUSER are executed in DB2 using the application server’s credentials (user ID WASSRV).
Because all SQL access is performed under the middle tier’s user ID, the three-tier application model causes the following issues:
Loss of end-user identity in DB2.
Loss of control over end-user access of the database.
Diminished DB2 accountability.
The middleware server’s authorization ID (AUTHID WASSRV) needs the privileges to perform all requests from all end-users.
If the middleware server’s security is compromised, so is that of the database server.
Re-establishing a new connection every time the user ID changes is not a feasible solution because of the high performance overhead that this would cause.
Let us review the illustration provided in Figure 4-30 to discuss the three-tier authentication scenario shown in Figure 4-29 on page 127. WebSphere Application Server handles the connection to DB2 using the application server’s user credentials (user ID WASSRV). Because no trusted context is used, WASSRV’s database privileges are checked for SQL access on behalf of WASUSER.
Figure 4-30 Three-tier authentication process
1. End-user logs on with user ID WASUSER and password
2. End-user WASUSER is authenticated by WebSphere Application Server
3. The application server requests a DB2 connection using user ID WASSRV and password
4. DB2 calls RACF to authenticate WASSRV
5. RACF verifies whether WASSRV is authorized to access DB2
6. Connection exit routine assigns WASSRV as the primary authorization ID and as the CURRENT SQLID. Secondary authorization IDs may also be assigned. The connection has been established.
7. WASSRV’s database privileges are checked for SQL access on behalf of WASUSER.
Authentication in a three-tier architecture using DB2 trusted context
To explain the benefits of using DB2 trusted contexts in a WebSphere Application Server environment, we extend the scenario illustrated in Figure 4-30 to use the data source provided user ID WASSRV for creating the DB2 connection and the application server authenticated user WASUSER for SQL authorization. In this scenario we configured the application server data source to support trusted connections (see 5.7, “Configuring the J2C authentication alias” on page 270).
We then created the DB2 trusted context by running the SQL DDL shown in Figure 4-31 on page 129.
CREATE ROLE WASSRVROLE; 1
CREATE ROLE WASUSERROLE;
GRANT EXECUTE ON FUNCTION DB2R3.GRACFGRP TO ROLE WASUSERROLE; 2
CREATE TRUSTED CONTEXT CTXWASSRV
BASED UPON CONNECTION USING SYSTEM AUTHID WASSRV 3
DEFAULT ROLE WASSRVROLE
WITHOUT ROLE AS OBJECT OWNER
ENABLE
NO DEFAULT SECURITY LABEL
ATTRIBUTES (
ENCRYPTION 'NONE',
ADDRESS 'wtsc64.itso.ibm.com', 3
ADDRESS 'd0z1.itso.ibm.com',
ADDRESS 'wtsc63.itso.ibm.com',
ADDRESS 'd0z2.itso.ibm.com'
)
WITH USE FOR WASUSER ROLE WASUSERROLE WITHOUT AUTHENTICATION; 4
Figure 4-31 Create trusted context
1. The roles used in the trusted context must exist prior to trusted context creation.
2. Role WASUSERROLE is granted the EXECUTE privilege on function DB2R3.GRACFGRP because the UDF is invoked in our application scenario. User ID WASSRV does not require the EXECUTE or any other DB2 object privilege, because we use WASSRV only for DB2 connection creation. Any DB2 object privilege required within the trusted context needs to be granted to role WASUSERROLE. Role WASSRVROLE is not supposed to access any DB2 object and therefore holds no privileges in DB2.
3. The trusted context shown in Figure 4-31 is for WebSphere Application Server JDBC type 4 connections where the connection user ID (provided in the data source authentication property) matches the SYSTEM AUTHID of WASSRV and where the external entity runs on one of the IP hosts referred to by the domain names provided in the trusted context ADDRESS attributes.
4. Because the authenticated user matches authorization ID WASUSER, DB2 assigns role WASUSERROLE. If the application server asks DB2 to perform an authorization ID switch for a user that does not match one of the user IDs specified in the trusted context WITH USE FOR clause, the request fails with DB2 SQLCODE -20361. In that situation we observed the WebSphere Application Server error message shown in Figure 4-32.
+J2CA0056I: The Connection Manager received a fatal connection error from the Resource Adapter for resource jdbc/Josef. The exception is:
com.ibm.db2.jcc.am.DisconnectRecoverableException:
[jcc][t4][2040][11215][3.64.82] An error occurred during a deferred
connect reset and the connection has been terminated. See chained
exceptions for details. ERRORCODE=-4499,
SQLSTATE=null:com.ibm.db2.jcc.am.SqlException:
[jcc][t4][20130][12466][3.64.82] Trusted user switch failed.
ERRORCODE=-4214,
SQLSTATE=null:com.ibm.db2.jcc.am.SqlSyntaxErrorException:
Figure 4-32 Application server error message trusted user switch failed
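If a further application server authenticated user must be allowed to use the trusted connection, you can extend the existing trusted context rather than re-create it. The following is a sketch; WASUSER2 is a hypothetical user ID:

ALTER TRUSTED CONTEXT CTXWASSRV
  ADD USE FOR WASUSER2 ROLE WASUSERROLE WITHOUT AUTHENTICATION;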
Using the trusted context that we define in Figure 4-31 on page 129 our WebSphere Application Server scenario performs the following processing steps:
1. User logs on with user ID WASUSER and password (Figure 4-33).
Figure 4-33 Step 1: Trusted context three tier authentication
2. User WASUSER is authenticated by WebSphere Application Server (Figure 4-34).
Figure 4-34 Step 2: Trusted context three tier authentication
3. The application server requests a DB2 connection using user ID WASSRV and its password (Figure 4-35 on page 131).
Figure 4-35 Step 3: Trusted context three tier authentication
4. DB2 calls RACF to authenticate WASSRV (Figure 4-36).
Figure 4-36 Step 4: Trusted context three tier authentication
5. RACF verifies whether WASSRV is authorized to access DB2 (Figure 4-37).
Figure 4-37 Step 5: Trusted context three tier authentication
6. Connection exit routine assigns WASSRV as the primary authorization ID and as the CURRENT SQLID. Secondary authorization IDs may also be assigned (Figure 4-38).
Figure 4-38 Step 6: Trusted context three tier authentication
7. DB2 looks for a trusted context with system authorization id WASSRV and validates the attributes of the context (for instance, SERVAUTH, ADDRESS, ENCRYPTION) (Figure 4-39). Depending on the trusted context DEFAULT ROLE attribute a role may also be assigned.
Figure 4-39 Step 7: Trusted context three tier authentication
8. The connection with user WASSRV as connection owner has been established (Figure 4-40 on page 133).
Figure 4-40 Step 8: Trusted context three tier authentication
9. WebSphere Application Server issues a switch user request using WASUSER (Figure 4-41). This requires no application code change; it is all implemented through the application server configuration.
Figure 4-41 Step 9: Trusted context three tier authentication
10. DB2 determines whether WASUSER is allowed to switch to its authorization ID (Figure 4-42). In our scenario WASUSER is allowed to switch because the trusted context CTXWASSRV contains user ID WASUSER in its WITH USE FOR clause. If the authenticated user is not allowed to switch to its authorization ID, DB2 aborts the request and returns SQLCODE -20361 to the application.
Figure 4-42 Step 10: Trusted context three tier authentication
11. Depending on the trusted context AUTHENTICATION attribute, DB2 calls RACF to authenticate WASUSER (Figure 4-43). Because the trusted context that we define in Figure 4-31 on page 129 specifies WITHOUT AUTHENTICATION, no additional RACF authentication is required for authorization ID WASUSER.
Figure 4-43 Step 11: Trusted context three tier authentication
12. The connection exit routine assigns WASUSER as the primary authorization ID and as the CURRENT SQLID (Figure 4-44). Secondary authorization IDs may also be assigned. DB2 assigns role WASUSERROLE, which will be used for checking DB2 object authorization.
Figure 4-44 Step 12: Trusted context three tier authentication
13. The connection has been initialized using WASUSER as primary authorization ID and with role WASUSERROLE assigned (Figure 4-45 on page 135). From now on DB2 uses role WASUSERROLE for checking SQL access authorization.
Figure 4-45 Step 13: Trusted context three tier authentication
14. While the application was running we collected the DB2 command output shown in Figure 4-46 to confirm that DB2 used the CTXWASSRV trusted context to establish a trusted connection and that DB2 performed an authorization ID switch for user ID WASUSER, which resulted in the assignment of DB2 role WASUSERROLE. The command output shown in Figure 4-46 confirms the following trusted context related facts:
a. The application server successfully establishes a trusted connection using the DB2 trusted context that we created in Figure 4-31 on page 129.
b. The trusted context system AUTHID matches the application server JAAS provided user name WASSRV.
c. Because WASUSER is identical to the user that has been authenticated by the application server, DB2 performs an authorization ID switch to WASUSER and assigns DB2 role WASUSERROLE.
-DIS THREAD(SERVER) SCOPE(GROUP)
DSNV473I -D0Z2 ACTIVE THREADS FOUND FOR MEMBER: D0Z1
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
SERVER SW * 7 db2jcc_appli WASUSER DISTSERV 0084 427
V485-TRUSTED CONTEXT=CTXWASSRV, a
SYSTEM AUTHID=WASSRV,b
ROLE=WASUSERROLE c
V437-WORKSTATION=WTSC64, USERID=WASSRV, b
APPLICATION NAME=dwsClientinformationDS
V441-ACCOUNTING=JCC03640WTSC64 dwsClientinformation
V429 CALLING FUNCTION=DB2R3.GRACFGRP,
PROC= , ASID=0000, WLM_ENV=DSNWLMDB0Z_GENERAL
V482-WLM-INFO=DDFONL:1:2:550
V445-G90C0609.M72E.CA71F39F97F6=427 ACCESSING DATA FOR
( 1)::9.12.6.9
V447--INDEX SESSID A ST TIME
V448--( 1) 39000:26414 W S2 1231407175013
Figure 4-46 Step 14: Trusted context three tier authentication
To allow the output in Figure 4-46 on page 135 to be collected while the application was running, we issued the DB2 command shown in Example 4-10 to temporarily stop UDF DB2R3.GRACFGRP prior to application execution. While the STOP FUNCTION command is in effect, attempts to execute the UDF are queued, giving us sufficient time to issue the DISPLAY THREAD command to collect the information. Details of the UDF are provided in Appendix G, “External user-defined functions” on page 563.
Example 4-10 Step 14: Temporarily stop UDF DB2R3.GRACFGRP
-sto FUNCTION SPEC(DB2R3.GRACFGRP) scope(group) action(queue)
After we had collected the information we issued the command shown in Example 4-11 to start the UDF, which allowed the application to complete successfully.
Example 4-11 Step 14: Start UDF DB2R3.GRACFGRP
-sta FUNCTION SPEC(DB2R3.GRACFGRP) scope(group)
For more information about DB2 trusted contexts and the configuration we performed for running the DayTrader-EE6 workload refer to 4.3.13, “Trusted context” on page 173.
4.2 Monitoring strategy
We configured our monitoring infrastructure to support monitoring at the DB2 system and application level. To enable these monitoring categories we performed the following infrastructure setup:
Capture DB2 statistics traces for DB2 system monitoring. DB2 statistics traces are written to SMF and provide information about resource usage caused by the DB2 system. For the DSNZPARM settings we used to activate statistics traces at DB2 startup time, refer to “DB2 statistics and accounting traces” on page 151.
Capture DB2 accounting traces and z/OS Resource Measurement Facility (RMF) information for DB2 application monitoring. DB2 accounting traces provide information about application level resource usage in DB2. For the DSNZPARM settings we used to activate accounting traces at DB2 startup time, refer to “DB2 statistics and accounting traces” on page 151.
Capture DB2 audit trace class 1 to be able to quickly identify authorization failures. We configured DB2 to start the audit policy that we defined in Example 4-8 on page 126 at DB2 startup time.
Capture DB2 audit trace class 10 to collect trusted context information. We capture this information to ensure that the correct trusted context is used by DB2. We use the DB2 administrative task scheduler to start this trace at DB2 startup time, following the same approach as shown in Example 4-9 on page 126.
Use the DB2 administrative task scheduler (ADMT) to automatically activate global dynamic statement cache statistics by starting performance trace class 30, IFCID 318. Refer to Example 4-9 on page 126 for how you might configure DB2 to start IFCID 318 at DB2 startup time, if necessary.
Regularly capture global statement cache information for subsequent analysis by using IBM Optim™ Query Tuner.
Capture DB2 real time statistics (RTS) before and after DayTrader-EE6 workload stress testing. Among other things, the RTS information captured is used to:
 – Learn about the characteristics of the DayTrader-EE6 application
 – Identify insert, update, and delete hot spots
 – Identify redundant indexes
 – Learn about the REORG and RUNSTATS requirements of objects that are accessed by frequent insert, update, and delete DML statements
 – Estimate future data growth of DB2 tables and indexes and identify objects that are candidates for table partitioning
For more information about the implementation and usage examples of the RTS snapshot tables refer to 4.3.24, “DB2 real time statistics” on page 198.
Configure RMF to capture SMF record type 70 to 79. For RMF and SMF monitoring, see Chapter 8, “Monitoring WebSphere Application Server applications” on page 361.
Configure WebSphere Application Server applications to provide unique DB2 client application information, which we use for creating application level DB2 accounting and RMF workload activity reports for our WebSphere Application Server applications. The DB2 client application information used by our sample applications is shown in Table 4-1 on page 111.
Configure DB2 to monitor the maximum number of concurrent active threads used by the DayTrader-EE6 application. The configuration steps that we took to implement DB2 profile monitoring for the DB2 threads used by the DayTrader-EE6 application are explained in 4.3.21, “Using profiles to disable idle thread timeout at application level” on page 194.
Define WLM subsystem type DDF classification rules for DBATs to perform service classification based upon DB2 client application information. For information about how we set up WLM classification rules for DBATs, refer to “JDBC type 4 service classification” on page 110.
Implement OMPE performance database processes to load DB2 statistics and accounting information into DB2 tables. We use these tables to run predefined SQL queries for application profiling and key performance indicator (KPI) monitoring. For information about implementing and using the performance database refer to 4.1.9, “WebSphere Application Server and DB2 security” on page 126.
4.3 DB2 for z/OS configuration
In this section we discuss the DB2 configuration options that we considered to support our DB2 for z/OS related WebSphere Application Server for z/OS application environment. For this we performed configuration tasks in the following areas:
4.3.1 DB2 connectivity installation parameters
In this section we cover some DB2 installation parameters (DSNZPARMs) that affect the Java applications connecting to DB2 in a WebSphere Application Server environment.
Adjusting the setting of DB2 threads and connections
A thread is a DB2 structure that describes a connection made by an application and traces its progress. There are two kinds of threads:
Allied thread: A thread that is connected to DB2 from a local subsystem, such as TSO, batch, IMS, CICS, CAF, or RRSAF. It is always active from allocation to termination. Requests from the JDBC type 2 driver use allied threads.
Database access thread (DBAT): A thread that is connected through a network with another system. Requests through the JDBC type 4 driver use database access threads (DBATs).
Because thread allocation can be a significant part of the cost of a short transaction, you need to set the related parameters carefully according to your machine size, your workload, and other factors.
DDF THREADS field (CMTSTAT)
Database access threads differ from allied threads. They have two modes: ACTIVE MODE and INACTIVE MODE. The modes are controlled by this parameter.
ACTIVE
A database access thread is always active from initial creation to termination. It provides the best performance for the thread but consumes more system resources.
INACTIVE
A database access thread can be active or inactive. When a database access thread is active, it is processing requests from client connections within units of work. When a database access thread is inactive, the connection is disassociated from the thread. The thread is pooled and reused for other connections, new or inactive, to start a new unit of work. Typically, a small number of threads can be used to service a large number of connections.
However, in some cases DB2 cannot pool database access threads. Table 4-3 summarizes whether a thread can be pooled. When the conditions are true, the thread can be pooled when a COMMIT is issued; otherwise, the thread remains active.
Table 4-3 Requirements for pooled threads
If the event is...                                           Thread can be pooled?
A hop to another location                                    Yes
A package bound with RELEASE(COMMIT)                         Yes
A package bound with RELEASE(DEALLOCATE)                     Yes
A declared temporary table that is active                    No
An open and held cursor 1, a held LOB locator, or a
package bound with KEEPDYNAMIC(YES), or
RELEASE(DEALLOCATE) 2                                        No
1 A cursor can be closed with fast implicit close. For more information, see DB2 10 for z/OS Managing Performance, SC19-2978.
2 For more information about RELEASE(DEALLOCATE), see “Configuration for high performance DBATs” on page 160.
Use INACTIVE MODE threads instead of ACTIVE MODE threads whenever possible.
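You can verify which mode is in effect on a member by displaying DDF status. The following is a sketch; the DSNL090I line shows DT=I for INACTIVE mode (DT=A for ACTIVE), and the CONDBAT and MDBAT values shown are site-specific placeholders:

-DIS DDF DETAIL
DSNL090I DT=I CONDBAT= 10000 MDBAT= 200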
IDLE THREAD TIMEOUT field (IDTHTOIN)
This parameter controls the approximate time, in seconds, that an active server thread can remain idle before it is terminated. Threads are checked every two minutes. Specifying 0 disables idle thread time-out processing.
Inactive and indoubt threads are not subject to this time-out parameter. If the CMTSTAT subsystem parameter is set to ACTIVE, your application must start its next unit of work within the specified time-out period; otherwise its thread is terminated.
POOL THREAD TIMEOUT field (POOLINAC)
In INACTIVE MODE, this parameter specifies the approximate time, in seconds, that a database access thread (DBAT) can remain idle in the pool before it is terminated. Threads are checked every three minutes. In addition, a database access thread is terminated after it has processed 200 units of work. The default value is 120. Increasing POOLINAC can potentially reduce the overhead of creating new DBATs, at the cost of the virtual storage used by the pooled DBATs.
Choosing a good number for maximum threads is important to keep applications from queuing and to provide good response time. Fewer threads than needed underutilize the processor and cause queuing for threads. More threads than needed do not improve the response time; they require more real storage for the additional threads and might cause more paging and, hence, performance degradation.
MAX USERS field (CTHREAD)
This parameter controls the maximum number of allied threads that can be allocated concurrently. Requests from the JDBC type 2 driver use allied threads.
MAX REMOTE ACTIVE field (MAXDBAT)
This parameter specifies the maximum number of database access threads (DBATs) that are allowed to be concurrently active. These threads are for connections coming into DB2 through DDF, such as requests through the type 4 JDBC driver.
When a request for a new connection to DB2 is received and MAX REMOTE ACTIVE has been reached, the result depends on the DDF THREADS mode. In ACTIVE mode, the allocation request is allowed, but any further processing for the connection is queued, waiting for an active database access thread to terminate. In INACTIVE mode, the allocation request is allowed and is processed when DB2 can assign a pooled idle database access thread to the connection. A pooled idle thread counts as an active thread against MAX REMOTE ACTIVE.
MAX REMOTE CONNECTED field (CONDBAT)
This value must be greater than or equal to the value of MAX REMOTE ACTIVE. While MAX REMOTE ACTIVE limits the number of concurrently active database access threads, this parameter sets the maximum number of concurrent DDF connections.
If a new connection request to DB2 is received, and MAX REMOTE CONNECTED has been reached or MAX REMOTE CONNECTED is zero, the connection request is rejected.
MAXCONQN in macro DSN6FAC
Specifies the maximum number of inactive or new connections that can be queued waiting for a DBAT to process the request.
OFF means that the depth of the connection queue is limited by the value of the CONDBAT subsystem parameter. ON means that the depth of the connection queue is limited by the value of the MAXDBAT subsystem parameter. A numeric value specifies the maximum number of connections that can be queued waiting for a DBAT to process a request.
When a request is added to the connection request queue and the thresholds specified by the MAXDBAT and MAXCONQN subsystem parameters are both reached (unless MAXCONQN is set to OFF), DDF closes the longest-waiting client connection in the queue. The closed connections give remote clients an opportunity to redirect the work to other members of the group that have more resources to process it. The function is enabled only when the DB2 subsystem is a member of a data sharing group.
The default value is OFF.
MAXCONQW in macro DSN6FAC
Specifies the maximum length of time that a client connection waits for a DBAT to process the next unit-of-work or new connection request.
ON means that connections wait as long as the value specified by the IDTHTOIN subsystem parameter. OFF means that connections wait indefinitely for a DBAT to process requests. A numeric value specifies a time duration in seconds that a connection waits for a DBAT to process the request.
Each queued connection request entry is examined to see if its time waiting in the queue has exceeded the specified value. If the time is exceeded, the client connection is closed. After all entries in the queue have been processed, or after the last entry whose time in the queue exceeded the threshold has been processed, message DSNL049I is issued indicating how many client connections were closed because of the MAXCONQW value. The function is enabled only when the DB2 subsystem is a member of a data sharing group.
The default value is OFF.
4.3.2 Enabling DB2 dynamic statement cache
Dynamic statement caching was introduced with DB2 Version 5. Whenever DB2 prepares an SQL statement, it creates a control structure that is used when the statement is executed. When dynamic statement caching is in effect, DB2 stores the control structure associated with a prepared dynamic SQL statement in a storage pool. If that same statement is executed again, DB2 can reuse the cached control structure, avoiding the expense of re-preparing the statement.
When using statement caching, four different types of prepare operations can take place:
Full prepare
A full prepare occurs when the skeleton copy of the prepared SQL statement does not exist in the global dynamic SQL cache (or the global cache is not active). It can be caused explicitly by a PREPARE or an EXECUTE IMMEDIATE statement, or implicitly by an EXECUTE when using KEEPDYNAMIC(YES).
Short prepare
A short prepare occurs if the skeleton copy of the prepared SQL statement in the global dynamic SQL cache can be copied into the local storage.
Avoided prepare
A prepare can only be avoided when using full caching, because in this case the full prepared statement is kept across commits. Issuing a new EXECUTE statement (without a prepare after a commit) does not need to prepare anything; the full executable statement is still in the thread’s local storage (assuming it was not removed from the local thread storage because MAXKEEPD was exceeded) and can be executed as such.
Implicit prepare
This is the case when an application that uses KEEPDYNAMIC(YES) issues a new EXECUTE after a commit was performed and a prepare cannot be avoided (the previous case). DB2 issues the prepare (implicitly) on behalf of the application. (The application must not explicitly code the prepare after a commit in this case.)
Implicit prepares can result in a full or short prepare:
 – In full caching mode, when a statement has been removed from the local cache because MAXKEEPD was exceeded, but still exists in the global cache, the statement is copied from the global cache. This is a short prepare. (If MAXKEEPD has not been exceeded and the statement is still in the local cache the prepare is avoided.)
 – In full caching mode, if the statement is no longer in the global cache either, a full prepare is done.
 – In local caching only mode, a full prepare has to be done.
Whether a full or short prepare is needed in full caching mode depends on the size of the global cache. The bigger the size, the more likely we can do a short prepare.
Comparing the relative cost of the different types of prepare: if a full prepare costs 100, a short prepare costs 1, and an avoided prepare costs nothing.
When prepared statements are cached in the EDM pool, DB2 does not regenerate the access path if the cached statement can be reused by a subsequent execution, which saves the cost of preparing the SQL statement. The following DB2 system parameters should be reviewed.
CACHE DYNAMIC SQL field (CACHEDYN)
The DB2 global dynamic statement cache is enabled if you specify YES. You must also specify YES for the USE PROTECTION field on panel DSNTIPP. This cache pool is shared by different threads, plans, and packages.
EDM STATEMENT CACHE field (EDMSTMTC)
This parameter determines the size (in KB) of the statement cache that is to be used by the EDM. It can be increased and decreased with the SET SYSPARM command, but it cannot be decreased below the value that is specified at DB2 startup. The calculated column of panel DSNTIPC is based on input from previous panels. If you want to set a value in the override column, see DB2 10 for z/OS Installation and Migration Guide, GC19-2974, “Calculating EDM pool size”.
MAX KEPT DYN STMTS field (MAXKEEPD)
The BIND option KEEPDYNAMIC(YES) enables applications to keep prepared dynamic statements past commit points in the local statement cache (thread-based storage).
This parameter specifies the maximum number of prepared statements kept in the local cache, and thus can help limit the amount of storage in the DBM1 address space. If this limit is exceeded, DB2 honors the KEEPDYNAMIC(YES) behavior, but “implicit” prepares might be necessary to rebuild the executable version of SQL statements when they are executed after a commit.
Statements in plans or packages bound with REOPT(VARS) are not cached in the global cache. The bind options REOPT(VARS) and KEEPDYNAMIC(YES) are not compatible.
In a data sharing environment prepared statements cannot be shared among the members because each member has its own EDM pool. A cached statement of one member is not available to an application that runs on another DB2 member.
There are different levels of statement caching, which are explained in the following sections:
No caching
Figure 4-47 on page 143 helps to visualize this behavior.
Program A prepares a dynamic SQL statement S, executes the prepared statement twice, and terminates.
Program B starts after program A has terminated, prepares exactly the same statement S as A did, executes the prepared statement, issues a commit, and tries to execute S again. It receives an error SQLCODE -514 or -518 (SQLSTATE 26501 or 07003), has to prepare the same statement S again, executes the prepared statement, and terminates.
Each time a prepare was executed by programs A and B through the SQL PREPARE statement, DB2 prepared the statement from scratch. After the commit of program B, the prepared statement is invalidated, so program B had to repeat the prepare of statement S.
Figure 4-47 No caching, CACHEDYN = NO and KEEPDYNAMIC = NO
Local dynamic SQL cache only
A local dynamic statement cache is allocated in the storage of each thread in the DBM1 address space. You can control the usage of this cache by using the KEEPDYNAMIC bind option.
When bound with KEEPDYNAMIC(YES), an application can issue a PREPARE for a statement once and omit subsequent PREPAREs for this statement, even after a commit has been issued.
To understand how the KEEPDYNAMIC bind option works, it is important to differentiate between the executable form of a dynamic SQL statement (the prepared statement) and the character string form of the statement (the statement text).
Let us take a look at our two example programs, shown in Figure 4-48 on page 144.
Program A prepares a dynamic SQL statement S, executes the prepared statement twice, and terminates.
Program B starts after program A has terminated, prepares the same statement S as A did, executes the prepared statement, issues a commit, executes S again (causing an internal (implicit) prepare) and terminates.
Each time an SQL PREPARE is issued by the programs (or by DB2 for the implicit prepare), a complete prepare is executed. This process is a full prepare. After the COMMIT of program B, the prepared statement is invalidated (because the cursor was not open and not defined with hold), but the statement text has been preserved in the local statement cache of the thread (because the package was bound with KEEPDYNAMIC(YES)). Therefore program B does not have to repeat the prepare of statement S explicitly; it can immediately issue the EXECUTE again. Under the covers, DB2 executes a complete prepare operation, using the saved statement string. This operation is called an implicit prepare.
Be aware that application program B has to be able to handle the fact that the implicit prepare might fail and an error is returned. Any error that normally occurs at prepare time can now be returned on the OPEN, EXECUTE, or DESCRIBE statement issued by the application.
The prepared statement and the statement text are held in the thread's local storage within the DBM1 address space (outside the EDM pool), but only the statement text is kept across commits when you use local caching only.
Figure 4-48 Local caching, CACHEDYN = NO and KEEPDYNAMIC = YES
The local instance of the prepared SQL statement (the prepared statement) is kept in DBM1 storage until one of the following occurs:
The application process ends.
The application commits and there is no open cursor defined WITH HOLD for the statement. (Because we are using only local caching, just the statement string is kept across commits.)
A rollback operation occurs.
The application issues an explicit PREPARE statement with the same statement name.
If the application issues a PREPARE for the same SQL statement name which is kept in the cache, the kept statement is discarded and DB2 prepares the new statement.
In a distributed environment, if the requester does not issue a PREPARE after a COMMIT, the package at the DB2 for z/OS server must be bound with KEEPDYNAMIC(YES). If both requester and server are DB2 for z/OS subsystems, the DB2 requester assumes that the KEEPDYNAMIC value for the package at the server is the same as the value for the plan at the requester.
The KEEPDYNAMIC option might have performance implications for DRDA clients that specify WITH HOLD on their cursors:
If KEEPDYNAMIC(NO) is specified, a separate network message is required when the DRDA client issues the SQL CLOSE for the cursor.
If KEEPDYNAMIC(YES) is specified, the DB2 for z/OS server automatically closes the cursor when SQLCODE +100 is detected, which means that the client does not have to send a separate message to close the held cursor. This reduces network traffic for DRDA applications that use held cursors. It also reduces the duration of locks that are associated with the held cursor.
When a distributed thread has touched any package which is bound with KEEPDYNAMIC(YES), the thread cannot become inactive.
This level of caching, used without other caching possibilities, is of minor value, because the performance improvement is limited. The only advantage is that you can avoid coding a PREPARE statement after a COMMIT because DB2 keeps the statement string around. This is of course most beneficial in a distributed environment where you can save a trip across the wire this way. On the other hand, by using the DEFER(PREPARE) bind option, you can obtain similar network message savings.
Global dynamic statement cache only
The global dynamic statement cache is normally allocated in the EDM pool within the DBM1 address space. You can activate this cache by setting CACHEDYN=YES in DSNZPARM.
When global dynamic statement caching is active, the skeleton copy of a prepared SQL statement (SKDS) is held in the global dynamic statement cache inside the EDM pool. Only one skeleton copy of the same statement (matching text) is held. The skeleton copy can be used by user threads to create user copies. An LRU algorithm is used for replacement.
If an application issues a PREPARE or an EXECUTE IMMEDIATE (and the statement has not been executed before in the same commit scope), and the skeleton copy of the statement is found in the global statement cache, it can be copied from the global cache into the thread's storage. This is called a short prepare.
 
Note: Without local caching (KEEPDYNAMIC(YES)) active, the application cannot issue EXECUTEs directly after a commit. The statement returns an SQLCODE -514 or -518, SQLSTATE 26501 or 07003.
Let us take a look at our example. The global cache case is shown in Figure 4-49 on page 146.
Program A prepares a dynamic SQL statement S, executes the prepared statement twice, and terminates.
Program B starts after program A has terminated, prepares the same statement S as A did, executes the prepared statement and issues a COMMIT. Then program B tries to execute S again. The program receives an error SQLCODE -514 or -518 (SQLSTATE 26501 or 07003) and has to prepare the same statement S again. Then it executes the prepared statement and terminates.
The first time a prepare for statement S is issued by program A, a complete prepare operation is performed. The SKDS of S is then stored in the global statement cache. When program B executes the prepare of S for the first time, the SKDS is found in the global statement cache and is copied to the local storage of B's thread (short prepare). After the COMMIT of program B, the prepared statement is invalidated in B's local storage, but the SKDS is preserved in the global statement cache in the EDM pool. Because neither the statement string nor the prepared statement is kept after the commit, program B has to repeat the prepare of statement S explicitly. This causes another copy operation of the SKDS from the global cache to the local storage of the thread of application B (short prepare).
Figure 4-49 Global caching, CACHEDYN = YES and KEEPDYNAMIC = NO
This level of statement caching has important performance advantages.
Full caching
Full caching is a combination of local caching (KEEPDYNAMIC(YES)), a MAXKEEPD DSNZPARM value > 0, and global caching (CACHEDYN=YES). It is possible to avoid prepares, because a commit does not invalidate prepared statements in the local cache.
Let us look again at our example when full caching is active, shown in Figure 4-50 on page 147.
Program A prepares a dynamic SQL statement S, executes the prepared statement twice, and terminates.
Program B starts after program A has terminated, prepares the same statement S as A did, executes the prepared statement, issues a commit, executes S again, and terminates.
The first time a prepare for statement S is issued by program A, a complete prepare is done (full prepare). The SKDS of S is stored in the global cache. When program B executes the prepare of S the first time, the SKDS is found in the global statement cache and is copied to the local statement cache of B's thread (short prepare). The COMMIT of program B has no effect on the prepared statement. When full caching is active, both the statement string (which is also kept with local caching only) and the prepared statement are kept in the thread's local storage after a commit. Therefore, program B does not have to repeat the prepare of statement S explicitly, and it is not necessary to prepare the statement implicitly, because the full executable statement is kept in the thread's local storage. This case is called prepare avoidance.
Figure 4-50 Full caching, CACHEDYN = YES, KEEPDYNAMIC = YES and MAXKEEPD > 0
Using full caching, the maximum size of the local cache across all user threads is controlled by the MAXKEEPD DSNZPARM. A FIFO algorithm is used for replacement of statements in the local cache.
CACHEDYN should be turned on for dynamic SQL in WebSphere applications. Because statements in the local dynamic statement cache are kept in thread storage, Sysplex workload balancing is not available if KEEPDYNAMIC is exploited. Use BIND option KEEPDYNAMIC(YES) only for applications with a limited number of SQL statements that are executed frequently.
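If you decide to use KEEPDYNAMIC(YES) for such an application, the option is specified when the package is bound or rebound. The following is a sketch; the collection and package names are illustrative and assume the JDBC packages were bound into the default NULLID collection:

REBIND PACKAGE(NULLID.SYSLH200) KEEPDYNAMIC(YES)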
To achieve a balance between performance and storage usage, you can adjust EDMSTMTC and MAXKEEPD according to the statistics report. Generally the GLOBAL CACHE HIT RATIO should be higher than 90%-95%, and the LOCAL CACHE HIT RATIO should be higher than 70%.
GLOBAL CACHE HIT RATIO = Short Prepares / (Short Prepares + Full Prepares)
LOCAL CACHE HIT RATIO = Prepares Avoided / (Prepares Avoided + Implicit Prepares)
DYNAMIC SQL STMT QUANTITY /SECOND /THREAD /COMMIT
--------------------------- -------- ------- ------- -------
PREPARE REQUESTS 124.5K 5.78 0.75 0.25
FULL PREPARES 17446.00 0.81 0.10 0.04
SHORT PREPARES 108.1K 5.02 0.65 0.22
GLOBAL CACHE HIT RATIO (%) 86.10 N/A N/A N/A
IMPLICIT PREPARES 0.00 0.00 0.00 0.00
PREPARES AVOIDED 5603.00 0.26 0.03 0.01
CACHE LIMIT EXCEEDED 0.00 0.00 0.00 0.00
PREP STMT PURGED 3.00 0.00 0.00 0.00
LOCAL CACHE HIT RATIO (%) 100.00 N/A N/A N/A
Figure 4-51 Dynamic SQL statement information in the statistics report
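Applying these formulas to the report shown in Figure 4-51: the global cache hit ratio is 108.1K / (108.1K + 17446) = 0.861, which matches the reported 86.10%, and the local cache hit ratio is 5603 / (5603 + 0) = 100%, which also matches the report. Because the global cache hit ratio is below the suggested 90%-95% range, increasing EDMSTMTC might be worth considering in this example.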
The WebSphere Application Server prepared statement cache and the DB2 dynamic statement cache are different concepts. For information about how to make these two functions work together, refer to 2.6.6, “WebSphere Application Server prepared statement cache” on page 57.
 
Note: DB2 10 for z/OS has largely reduced latch class 24 (LC24) contention on the EDM pool by removing the areas dedicated to cursor tables and skeleton cursor tables.
4.3.3 Locking and accounting setup
In this section, we describe DB2 system parameters related to locking and lock performance of the DB2 subsystem and to accounting and administration:
Application and system locking
We need to consider the following parameters related to application locking and lock performance:
RESOURCE TIMEOUT field (IRLMRWT)
This parameter specifies the number of seconds IRLM waits before detecting a time-out. IRLM checks for time-outs on each deadlock detection cycle, so the actual wait time between the lock request and IRLM detecting the time-out is:
For non-data sharing:
IRLMRWT <= actual wait time <= IRLMRWT + DEADLOCK TIME
In a data sharing environment, because the deadlock detection process sends inter-system XCF messages, the actual wait time is longer:
IRLMRWT + DEADLOCK TIME <= actual wait time <= IRLMRWT + 4 * DEADLOCK TIME
If you can afford to have a suspended process remain inactive for 60 seconds, use the default. Because time-outs are sometimes caused by a badly behaving application, you can simulate the workload in a test environment and identify it:
1. Start with the default of 60 seconds.
2. Monitor for time-outs.
3. If none occur, reduce the value by a few seconds and cycle back to step 2.
4. If time-outs occur, identify the cause and correct the process if possible. Cycle back to step 2.
You can change the TIMEOUT value using the IRLM modify command.
DEADLOCK TIME field and DEADLOCK CYCLE field
These two fields on panel DSNTIPJ correspond to the IRLM startup procedure DEADLOK parameter. DEADLOCK TIME controls the length of the local deadlock detection cycle. DEADLOCK CYCLE specifies the number of local deadlock cycles that must expire before the IRLM does global deadlock detection, which is used only for DB2 data sharing. You can use the default of (1,1).
LOCKS PER TABLE (SPACE) field (NUMLKTS)
This parameter specifies the default maximum number of page, row, or LOB locks that an application can hold simultaneously in a table or table space. The LOCKMAX clause of the CREATE TABLESPACE and ALTER TABLESPACE statements can override this setting for a specific table space. If a single application exceeds the maximum number of locks in a single table or table space, lock escalation occurs: DB2 obtains a table or table space lock, then releases all of the page or row locks.
This value is workload dependent. A high setting or a value of 0 might result in excessive numbers of locks, which consumes storage and CPU time, whereas a small value can trigger lock escalation frequently, which might lead to lock contention. Lock escalation is an expensive process as well.
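If a particular table space needs a limit other than the subsystem default, you can set LOCKMAX individually with ALTER TABLESPACE. The following is a sketch; the table space name is hypothetical (the database name is the one used for DayTrader-EE6 in Example 4-12):

ALTER TABLESPACE DBTR8074.TSORDER LOCKMAX 10000;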
LOCKS PER USER field (NUMLKUS)
This parameter specifies the maximum number of page, row, or LOB locks that a single application can hold concurrently for all table spaces. After that limit is reached, the program that accumulated the locks terminates with SQLCODE -904. Do not specify 0 or a large value unless it is specifically required to run an application.
U LOCK FOR RR/RS field (RRULOCK)
This parameter specifies whether DB2 is to use U (UPDATE) locks or S (SHARE) locks when the isolation of the program is repeatable read (RR) or read stability (RS). If your programs with RR or RS make frequent updates, specify YES to get greater concurrency. For more information about LOCK mode and isolation level, see 6.8, “Locking” on page 331.
X LOCK FOR SEARCHED U/D field (XLKUPDLT)
This parameter specifies the locking method that is used when performing a searched update or delete. The acceptable values are:
NO (default): DB2 uses an S or U lock when scanning for qualifying rows. Before DB2 updates or deletes qualifying rows or pages, the lock is changed to an X lock.
YES: DB2 uses an X lock on qualifying rows or pages based on stage 1 predicates.
TARGET: A combination of the YES and NO behavior. DB2 uses an X lock on qualifying rows or pages of the table being updated or deleted, while it uses an S or U lock when scanning rows or pages of other tables referenced by the query.
A value of NO provides higher rates of concurrency.
EVALUATE UNCOMMITTED field (EVALUNC)
This parameter specifies whether predicate evaluation can occur on uncommitted data of other application processes. It applies only to stage 1 predicate processing that uses table access (table space scan, index-to-data access, and RID-list processing) for queries with isolation level RS or CS.
The default value is NO. Specify YES to improve concurrency if your applications can tolerate returned data that might falsely exclude data that would be included as the result of undo processing. This parameter does not influence whether uncommitted data is returned to an application, because queries with isolation level RS or CS return only committed data.
You can obtain similar results by using the SQL SKIP LOCKED DATA clause.
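SKIP LOCKED DATA is specified on the individual statement together with an isolation clause. The following is a sketch; the table name is hypothetical:

SELECT * FROM TRADE.ACCOUNTEJB
  WITH CS SKIP LOCKED DATA;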
SKIP UNCOMM INSERTS field (SKIPUNCI)
This parameter specifies whether statements ignore a row that was inserted by another transaction but has not yet been committed or aborted. It applies only to statements with row-level locking and isolation level RS or CS.
The default value is NO. If your applications do not need to wait for the outcome of other transactions' inserts, specify YES to get greater concurrency.
DB2 accounting accumulation setting
For DRDA threads and RRS attach threads, you can reduce the high volume of DB2 accounting records by using the accounting accumulation parameters, which consolidate multiple accounting records into one.
DDF/RRSAF ACCUM field (ACCUMACC)
This parameter controls whether DB2 accumulates accounting data by user for DDF and RRSAF threads.
NO means DB2 writes an accounting record when a DDF thread becomes inactive or when sign-on occurs for an RRSAF thread.
A value n (between 2 and 65535) means DB2 writes an accounting record every n accounting intervals for a given user.
AGGREGATION FIELDS field (ACCUMUID)
This parameter controls the aggregation fields used for DDF and RRSAF accounting rollup. Each value (between 0 and 17) represents a rollup criterion. For more information, see DB2 10 for z/OS Installation and Migration Guide, GC19-2974, “Tracing parameters panel: DSNTIPN”.
 
Note: For JDBC type 2 connections you might want to consider setting the accounting interval data source property (accountingInterval) to have DB2 accounting information written at DB2 commit.
DB2 statistics and accounting traces
We configured our DB2 members to collect DB2 statistics and accounting traces through SMF. For this we performed the following DB2 system parameter (DSNZPARM) settings through which DB2 accounting and statistics traces are started during DB2 member startup:
SMFACCT=(1,2,3,7,8)
The SMFACCT DSNZPARM controls the collection of DB2 accounting traces through SMF. Besides plan level accounting information (classes 1, 2, 3) we also collected package level accounting information (classes 7, 8). For Java workloads you might want to consider not collecting package level information, because you cannot use the JDBC package names to perform application level profiling and reporting.
SMFSTAT=(1,3,4,5,6,7)
The SMFSTAT DSNZPARM controls the collection of DB2 statistics traces through SMF. In our DB2 environment we collect statistics trace classes 1, 3, 4, 5, 6, and 7.
Miscellany
The following are other DB2 installation parameters that you need to note for Java applications running in a WebSphere Application Server environment.
DESCRIBE FOR STATIC field (DESCSTAT)
This parameter controls whether DB2 builds a DESCRIBE SQL descriptor area (SQLDA) when binding static SQL statements. Use the default value YES, especially if you are using SQLJ. Package sizes increase slightly because the DESCRIBE SQLDA is stored with each bound SQL SELECT statement.
ADMTPROC - administrative task scheduler
ADMTPROC identifies the name of the JCL procedure that is used to start the DB2 administrative task scheduler that is associated with the DB2 subsystem. We set this parameter to the following values:
D0Z1ADMT for subsystem D0Z1
D0Z2ADMT for subsystem D0Z2
If you set this parameter to blanks, DB2 does not start the administrative task scheduler.
4.3.4 Buffer pool configuration
Our environment is configured with function testing in mind. For this purpose it is sufficient to provide separate buffer pools for the following object categories:
DB2 catalog and directory
User table spaces
User index spaces
LOB user data
Workfile database
To support separation of the object categories we created the buffer pools shown in Table 4-4 in both data sharing members:
Table 4-4 Buffer pool configuration
Buffer pool ID  Object category                      Size in number of pages
BP0             Catalog and directory                20000
BP8K0           Catalog and directory 8 KB pages     20000
BP16K0          Catalog and directory 16 KB pages    20000
BP32K           Catalog and directory 32 KB pages    20000
BP1             User table spaces                    20000
BP2             User index spaces                    20000
BP3             LOB user data                        20000
BP8K1           User table spaces 8 KB pages         10000
BP16K1          User table spaces 16 KB pages        10000
BP16K3          XML table spaces                     10000
BP32K1          User table spaces 32 KB pages        10000
BP7             Workfile 4 KB pages                  20000
BP8             Workfile index buffer pool           20000
BP32K7          Workfile 32 KB pages                 20000
Buffer pool related DSNZPARM configuration
To support the buffer pool separation shown in Table 4-4 on page 151 at the DB2 subsystem level, we used the following DSNZPARM settings on both data sharing members to keep user table and index spaces away from the buffer pools used by the catalog and directory and the workfile database:
IDXBPOOL=BP2
TBSBPOOL=BP1
TBSBP8K=BP8K1
TBSBP16K=BP16K1
TBSBP32K=BP32K1
TBSBPLOB=BP3
TBSBPXML=BP16K3
Database related buffer pool configuration
To support the buffer pool separation shown in Table 4-4 on page 151 at the database level, we configured the DayTrader-EE6 database to use the default buffer pool settings shown in Example 4-12 for table space and index space creation:
Example 4-12 Create database default buffer pool settings
CREATE DATABASE DBTR8074
BUFFERPOOL BP1
INDEXBP BP2
CCSID EBCDIC
STOGROUP GR248074;
Buffer pool tuning
With the buffer pool configuration shown in Table 4-4 on page 151 we hardly ran into performance problems caused by undersized buffer pools. However, if you need to tune your buffer pools, refer to DB2 9 for z/OS: Buffer Pool Monitoring and Tuning, REDP-4604.
Simulate production-like buffer pool sizes and catalog statistics
Tuning your DB2 applications or queries under production-like conditions in a pre-production environment is an important requirement that enables you to discover problems with applications and SQL queries in time, prior to production deployment. For this it is recommended to have your tables reflect production-like data volumes or, if this is not an option, to configure your DB2 catalog to reflect production-like statistics for the tables against which you are going to run application workloads or perform query tuning.
For more information about cloning catalog statistics, refer to DB2 10 for z/OS Managing Performance, SC19-2978, “Modeling your production system statistics in a test subsystem”.
When preparing an SQL statement, the optimizer takes important hardware configuration factors such as buffer pool sizes, CPU speed, and the number of processors into account to make the most suitable cost-based access path decision.
In cases in which a DB2 test system is constrained on CPU and real storage resources, the optimizer cannot provide you with the access path decision it would have made in a DB2 production environment with more and faster CPUs, more real storage, and bigger buffer pools. To help in such situations you can use DB2 profiles to model your DB2 test environment on the configuration of your production environment. Without having to have the corresponding hardware resources installed and available on your DB2 test system, you can use profiles to provide the following parameters to emulate your production hardware and DB2 pool configuration for DB2 access path selection:
Processor speed
Number of processors
Maximum number of RID blocks
Sort pool size
Buffer pool size
For more information about this topic refer to “Modeling a production environment on a test subsystem”, DB2 10 for z/OS, Managing Performance, SC19-2978.
4.3.5 DB2 for z/OS Distributed Data Facility
Accessing DB2 for z/OS from Java applications requires the JDBC packages to be bound in DB2 for z/OS. The only way to bind the JDBC packages is through a JDBC type 4 connection using the DB2Binder utility or the DB2 command line processor. This is why the DB2 base setup for JDBC includes setting up the Distributed Data Facility (DDF), even if you do not plan to use DDF because you only want to use local JDBC type 2 connections.
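For reference, a DB2Binder invocation against our data sharing group would look similar to the following sketch; the user ID and password are placeholders, and the host, port, and location are the values used in our environment:

java com.ibm.db2.jcc.DB2Binder -url jdbc:db2://wtsc63.itso.ibm.com:39000/DB0Z -user DB2R3 -password xxxxxxxx

Because DB2Binder connects through DDF, it can only be run after DDF has been configured and started.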
DB2 boot strap data set configuration
DDF setup requires you to change the BSDS distributed data facility communication record to provide location name, port numbers, and optionally the IP addresses to be used by the member and the group.
Configuration with IP address in the BSDS
We used the DSNJU003 utility control statements shown in Example 4-13 to set up DDF in DB2 for z/OS once for each DB2 data sharing member involved.
Example 4-13 DSNJU003 DDF configuration with IP address
//BSDSCH3 EXEC PGM=DSNJU003
//STEPLIB DD DISP=SHR,DSN=DB0ZT.SDSNEXIT
// DD DISP=SHR,DSN=DB0ZT.SDSNLOAD
//SYSUT1 DD DISP=OLD,DSN=DB0ZB.D0Z1.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB0ZB.D0Z1.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
* DDF initial record setting
DDF LOCATION=DB0Z,RESPORT=39002,PORT=39000,SECPORT=0
* IPV4=<member IP addr>,GRPIPV4=<group IP addr>
DDF IPV4=9.12.4.138,GRPIPV4=9.12.4.153
* we are not using a VTAM LUNAME
DDF NOLUNAME
* DB2 to initialize the TCP/IP interface only
DDF IPNAME=IPDB0Z
We configured the member and group DVIPA in the BSDS using the IPV4 and GRPIPV4 parameters. With this BSDS setting, the TCP/IP port statements shown in “Port definition without IP address binding” on page 157 must not have any BIND IP address configuration. When DB2 starts, it automatically binds to the IP addresses given in the BSDS. DB2 accepts connections not only on the IP address specified in the BSDS, but on any IP address that is active on the TCP/IP stack. Additionally, connections are accepted on both secure and non-secure SQL ports. In contrast, bind-specific TCP/IP port statements, as discussed in “Port definition with IP address binding” on page 156, do not support secure DB2 SQL ports.
 
Important: With IP addresses in the BSDS, a client can connect to DB2 using IP addresses other than the group or member IP address, provided these are active on the current TCP/IP stack. This can be useful because it provides the flexibility to choose between IP addresses available on the current IP stack. However, DB2 clients connecting to DB2 for z/OS using an IP address other than the DB2 group or member specific IP address might break if a DB2 member is moved to run in a different LPAR.
Configuration without IP address in the BSDS
With IP address bindings defined on the port statements as shown in “Port definition with IP address binding” on page 156, the BSDS distributed data facility communication record must not provide the member and group IP addresses through the IPV4 and GRPIPV4 BSDS parameters. The DSNJU003 utility control statements required to perform that configuration are shown in Example 4-14.
Example 4-14 DSNJU003 DDF configuration without IP address
//BSDSCH3 EXEC PGM=DSNJU003
//STEPLIB DD DISP=SHR,DSN=DB0ZT.SDSNEXIT
// DD DISP=SHR,DSN=DB0ZT.SDSNLOAD
//SYSUT1 DD DISP=OLD,DSN=DB0ZB.D0Z1.BSDS01
//SYSUT2 DD DISP=OLD,DSN=DB0ZB.D0Z1.BSDS02
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
* DDF initial record setting
DDF LOCATION=DB0Z,RESPORT=39002,PORT=39000,SECPORT=0
* we are not using a VTAM LUNAME
DDF NOLUNAME
* DB2 to initialize the TCP/IP interface only
DDF IPNAME=IPDB0Z
When we initially set up our DDF configuration we set the SECPORT to support SSL encryption. During startup DB2 issued the error message shown in Figure 4-52, indicating that TCP/IP IP address bindings on the PORT statement are not supported with DB2 secure port configurations. As a consequence we corrected the TCP/IP port configuration to remove the IP address bindings and defined the IP addresses in the BSDS.
DSNL512I -D0Z1 DSNLILNR TCP/IP BINDSPECIFIC NOT
SUPPORTED WITH SECURE PORT FAILED WITH
RETURN CODE=0 AND REASON CODE=00000000
Figure 4-52 DB2 secure port and BINDSPECIFIC
 
Important: If you use IP address bindings on the TCP/IP port configuration in your DDF configuration, you cannot configure an SSL port in DB2. If you do, DB2 issues error message DSNL512I and the DDF initialization fails.
Member specific location alias
In some situations it is useful to be able to connect to an individual member of a data sharing group by using the DB2 group IP address. To cater for that requirement, we defined one member specific location alias for each DB2 member. We defined those aliases through the DB2 MODIFY DDF ALIAS command interface (see Example 4-15) to take advantage of its online change capabilities. You cannot use the DB2 MODIFY DDF command to change alias names that have been statically defined in the BSDS.
Example 4-15 Dynamically define location alias
-D0Z1 MODIFY DDF ALIAS(D0Z1) ADD
-D0Z2 MODIFY DDF ALIAS(D0Z2) ADD
-D0Z1 MODIFY DDF ALIAS(D0Z1) START
-D0Z2 MODIFY DDF ALIAS(D0Z2) START
Upon successful command completion we displayed the status of DDF on both DB2 members and obtained the command output shown in Figure 4-53.
-D0Z1 DIS DDF
DSNL080I -D0Z1 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL087I ALIAS PORT SECPORT STATUS
DSNL088I D0Z1 0 0 STARTD
-D0Z2 DIS DDF
DSNL080I -D0Z2 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL087I ALIAS PORT SECPORT STATUS
DSNL088I D0Z2 0 0 STARTD
Figure 4-53 Display DDF alias
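For illustration, a remote client can now target an individual member by using the alias as the database (location) name in the connection URL while still connecting through the group domain name. The following minimal Java sketch uses names from our environment and is an illustrative example rather than part of our actual test setup; the password is passed as a program argument:
import java.sql.Connection;
import java.sql.DriverManager;

public class AliasConnect {
    public static void main(String[] args) throws Exception {
        // Connect through the group domain name, but request the
        // member-specific location alias D0Z1 as the target database.
        Connection con = DriverManager.getConnection(
            "jdbc:db2://d0zg.itso.ibm.com:39000/D0Z1", "db2r3", args[0]);
        System.out.println("Connected to member alias D0Z1");
        con.close();
    }
}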
Activate high performance DBAT
After we successfully activated DDF we activated high performance DBATs in both DB2 members by running the DB2 command shown in Example 4-21 on page 161. For a discussion of high performance DBATs refer to “Configuration for high performance DBATs” on page 160.
DB2 related TCP/IP configuration
For the DB2 data sharing group to be able to provide TCP/IP based DB2 connections, ideally based upon DB2 Sysplex workload balancing in conjunction with TCP/IP Sysplex Distributor, you need to cater for the following configuration:
DB2 member specific and group Dynamic Virtual IP address (DVIPA) with automatic VIPA takeover
DB2 member specific resynchronization port
DB2 group well-known SQL port
DB2 member and group DVIPA
We configured TCP/IP to use the IP addresses shown in Example 4-16 to define the DVIPA resources for our data sharing group and its DB2 members.
Example 4-16 TCP/IP DVIPA configuration
VIPADYNAMIC
; BEGIN DB2 D0ZG
VIPARANGE DEFINE 255.255.255.255 9.12.4.138 ; D0Z1
VIPARANGE DEFINE 255.255.255.255 9.12.4.142 ; D0Z2
VIPADEFINE 255.255.255.255 9.12.4.153 ; D0ZG
VIPADISTRIBUTE DEFINE 9.12.4.153 PORT 39000 DESTIP ALL
; END DB2 D0ZG
ENDVIPADYNAMIC
Port definition with IP address binding
We used the port configuration shown in Example 4-17 to define the TCP/IP ports required to support the BSDS configuration shown in “Configuration without IP address in the BSDS” on page 154.
Example 4-17 TCP/IP Port configuration with IP address binding
PORT
39000 TCP D0Z1DIST SHAREPORT BIND 9.12.4.153
39000 TCP D0Z2DIST SHAREPORT BIND 9.12.4.153
39002 TCP D0Z1DIST BIND 9.12.4.138
39003 TCP D0Z2DIST BIND 9.12.4.142
1. By specifying the DDF address space names in the port statements we restrict port usage to the address space given in the port statement. This prevents others from accidentally using this port number.
2. The BIND parameter causes the specified address space to bind to the IP address given in the same port statement.
3. SHAREPORT allows for D0Z1DIST and D0Z2DIST to share port 39000 which represents the well-known SQL port of the data sharing group.
Port definition without IP address binding
We used the port configuration shown in Example 4-18 to define the TCP/IP ports required to support the BSDS configuration shown in “Configuration with IP address in the BSDS” on page 154.
Example 4-18 TCP/IP port configuration without IP address binding
PORT
39000 TCP D0Z1DIST SHAREPORT
39000 TCP D0Z2DIST SHAREPORT
39002 TCP D0Z1DIST
39003 TCP D0Z2DIST
DB2 startup messages
Upon successful DDF configuration we started each data sharing member and verified the DDF configuration by reviewing the DDF start messages shown in Figure 4-54.
DSNL003I -D0Z1 DDF IS STARTING
DSNL523I -D0Z1 DSNLILNR TCP/IP SERVICES AVAILABLE
FOR IP ADDRESS ::9.12.4.138 AND PORT 39000 1
DSNL523I -D0Z1 DSNLILNR TCP/IP SERVICES AVAILABLE
FOR IP ADDRESS ::9.12.4.153 AND PORT 39000 2
DSNL004I -D0Z1 DDF START COMPLETE
LOCATION DB0Z
LU -NONE 3
GENERICLU -NONE
DOMAIN d0zg.itso.ibm.com
TCPPORT 39000
SECPORT 0 4
RESPORT 39002
IPNAME IPDB0Z 5
OPTIONS:
PKGREL = BNDOPT
DSN9022I -D0Z1 DSNYASCP 'STA DB2' NORMAL COMPLETION
DSNL523I -D0Z1 DSNLIRSY TCP/IP SERVICES AVAILABLE
FOR IP ADDRESS ::9.12.4.138 AND PORT 39002 6
Figure 4-54 DB2 DDF startup messages
The DDF part of the D0Z1MSTR startup messages confirmed our customization:
1. The DB2 member is ready to accept connections on SQL port 39000 and the member specific IP address
2. The DB2 member is ready to accept connections on SQL port 39000 and the data sharing group IP address
3. An IBM VTAM® LU name is not required by DRDA workloads. Most DDF connections use TCP/IP. Configuring DB2 without a VTAM LU name saves resources required for initializing and maintaining the DB2 VTAM interface.
4. SECPORT was set to 0 to disable DDF SSL processing. We intentionally used that configuration option as the DB2 DDF address space was placed in a secure network, front ended by WebSphere Application Server. SSL encryption was therefore not required.
5. We set up DDF to use IPNAME to make sure the DB2 VTAM interface is not initialized during DB2 startup.
6. Member D0Z1 is ready to accept requests on its resynchronization port, which is used for resynchronization after communication failures during two-phase commit processing.
As shown in Figure 4-55 on page 159, you can alternatively issue the DISPLAY DDF command to review the DDF configuration of an active DB2 data sharing member. The command output additionally shows the following DB2 system parameter settings and DDF thread management related information that are important for system monitoring and tuning:
DT=I, DSNZPARM CMTSTAT=INACTIVE
CONDBAT=10000, DSNZPARM CONDBAT=10000
MDBAT=200, DSNZPARM MAXDBAT=200
ADBAT=0, current number of database access threads
QUEDBAT=0, cumulative counter that is incremented each time the DSNL090I MDBAT limit has been reached
INADBAT=0, current number of inactive DBATs. This value only applies if the dt value in the DSNL090I message indicates that DDF INACTIVE support is enabled. Any database access threads reflected here can also be observed in the DISPLAY THREAD TYPE(INACTIVE) command report.
CONQUED=0, current number of connection requests that have been queued and are waiting to be serviced. This value only applies if the dt value in the DSNL090I message indicates that DDF INACTIVE support is enabled.
DSCDBAT=0, current number of disconnected database access threads. This value only applies if the dt value in the DSNL090I message indicates that DDF INACTIVE support is enabled.
INACONN=0, current number of inactive connections. This value only applies if the dt value in the DSNL090I message indicates that DDF INACTIVE support is enabled.
-D0Z1 DIS DDF DETAIL
DSNL080I -D0Z1 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB0Z -NONE -NONE
DSNL084I TCPPORT=39000 SECPORT=0 RESPORT=39002 IPNAME=IPDB0Z
DSNL085I IPADDR=::9.12.4.153
DSNL086I SQL DOMAIN=d0zg.itso.ibm.com
DSNL086I RESYNC DOMAIN=d0z1.itso.ibm.com
DSNL089I MEMBER IPADDR=::9.12.4.138
DSNL090I DT=I CONDBAT= 10000 MDBAT= 200
DSNL092I ADBAT= 0 QUEDBAT= 0 INADBAT= 0 CONQUED= 0
DSNL093I DSCDBAT= 0 INACONN= 0
DSNL100I LOCATION SERVER LIST:
DSNL101I WT IPADDR IPADDR
DSNL102I 30 ::9.12.4.142
DSNL102I 12 ::9.12.4.138
DSNL105I CURRENT DDF OPTIONS ARE:
DSNL106I PKGREL = BNDOPT
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
Figure 4-55 DB2 display DDF command output
To enable remote DB2 clients to connect to DB2 using a group domain name we then had the group and member DVIPA addresses registered in our domain name server (DNS). To test the setup, we configured the DB2 for LUW database directory as shown in Example 4-19.
Example 4-19 DB2 for LUW db directory setup
catalog db db0z at node db0z authentication dcs;
catalog tcpip node db0z remote "d0zg.itso.ibm.com" server 39000 ostype mvs;
catalog dcs db db0z as db0z;
As shown in Example 4-20 we were then able to use the DB2 command line processor (CLP) to connect to the database that we cataloged in Example 4-19.
Example 4-20 DB2 DDF verification
db2 => connect to db0z user db2r3
Enter current password for db2r3:
 
Database Connection Information
 
Database server = DB2 z/OS 10.1.5
SQL authorization ID = DB2R3
Local database alias = DB0Z
 
db2 => ping db0z 5
 
Elapsed time: 82264 microseconds
Elapsed time: 81563 microseconds
Elapsed time: 82586 microseconds
Elapsed time: 82217 microseconds
Elapsed time: 81808 microseconds
 
db2 =>
In Example 4-20 on page 159 we issued a DB2 PING command to measure the DB2 for z/OS server turnaround elapsed time. The average network turnaround time shown is higher than 0.08 seconds, indicating high network latency. Depending on your throughput requirements you should expect to see turnaround times well below 0.001 seconds.
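If you want to take a comparable measurement from a Java client, a simple approach is to time a trivial statement against SYSIBM.SYSDUMMY1. This is a rough sketch, not equivalent to the DRDA-level PING command; the URL and credentials are from our environment:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RoundTrip {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
            "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", "db2r3", args[0]);
        PreparedStatement ps = con.prepareStatement(
            "SELECT 1 FROM SYSIBM.SYSDUMMY1");
        for (int i = 0; i < 5; i++) {
            long start = System.nanoTime();
            ResultSet rs = ps.executeQuery();   // one network round trip
            rs.next();
            rs.close();
            System.out.println("Elapsed time: "
                + (System.nanoTime() - start) / 1000 + " microseconds");
        }
        ps.close();
        con.close();
    }
}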
For more information about setting up DB2 for z/OS for a distributed load balancing and fault tolerant configuration refer to 3.3, “High availability configuration options” on page 92 and DB2 9 for z/OS Data Sharing: Distributed Load Balancing and Fault Tolerant Configuration, REDP-4449.
4.3.6 High Performance DBATs
Before DB2 10, all packages that were accessed at the server through DRDA behaved as RELEASE(COMMIT) even if they were bound with RELEASE(DEALLOCATE). DB2 10 for z/OS, if configured for high performance DBATs, honors the RELEASE(DEALLOCATE) package bind parameter for database access threads, which reduces CPU cost for package allocation and deallocation processing. Performance results can vary, and the benefits are more pronounced for short transactions.
Features of High Performance DBATs
If a package that is associated with a distributed application is bound with RELEASE(DEALLOCATE), it stays allocated to the DBAT until the DBAT is terminated. Although CMTSTAT is set to INACTIVE, DDF neither pools the DBAT nor disassociates it from its connection when the unit of work ends. Thus the DBATs hold package allocation locks even while they are not being used for client unit-of-work processing.
The High Performance DBAT is terminated after it has processed 200 units of work (this value is not user changeable). On the next request by the connection to start a unit of work, a new DBAT is created or a pooled DBAT is assigned to process the unit of work. Normal idle thread timeout detection is applied to these DBATs. IDTHTOIN does not apply while the DBAT is waiting for the next client unit of work.
Configuration for high performance DBATs
High performance DBATs are available only under the following conditions:
KEEPDYNAMIC YES is not enabled.
CURSOR WITH HOLD is not enabled.
CMTSTAT is set to INACTIVE.
The following steps apply when working with High Performance DBATs:
1. BIND or REBIND packages with RELEASE(DEALLOCATE)
We recommend binding the JDBC packages that you want to use with High Performance DBATs into their own package collection. For information about the procedure we used to bind the JDBC packages into their own collections refer to 4.3.9, “Bind JDBC packages” on page 165.
2. Use -MODIFY DDF PKGREL(COMMIT)
When you want to increase resource concurrency and the likelihood that your SQL DDL, BIND operations, and utilities execute successfully while the application workload is running, you can deactivate High Performance DBAT by issuing the command -MODIFY DDF PKGREL(COMMIT).
3. Use -MODIFY DDF PKGREL(BNDOPT) command
This command enables the RELEASE bind option (COMMIT or DEALLOCATE) that was specified at bind time to be honored for any package that is used for remote client processing.
Example 4-21 shows the results of the MODIFY DDF PKGREL command.
Example 4-21 MODIFY DDF PKGREL(BNDOPT) output
-D0Z1 MODIFY DDF PKGREL(BNDOPT)
DSNL300I -D0Z1 DSNLTMDF MODIFY DDF REPORT FOLLOWS:
DSNL302I PKGREL IS SET TO BNDOPT
DSNL301I DSNLTMDF MODIFY DDF REPORT COMPLETE
Example 4-22 shows the results of the -DIS DDF command. You can check the PKGREL setting through message DSNL106I.
Example 4-22 -DIS DDF command reporting the PKGREL option
-D0Z1 DIS DDF
DSNL080I -D0Z1 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB0Z -NONE -NONE
DSNL084I TCPPORT=39000 SECPORT=0 RESPORT=39002 IPNAME=IPDB0Z
DSNL085I IPADDR=::9.12.4.153
DSNL086I SQL DOMAIN=d0zg.itso.ibm.com
DSNL086I RESYNC DOMAIN=d0z1.itso.ibm.com
DSNL089I MEMBER IPADDR=::9.12.4.138
DSNL105I CURRENT DDF OPTIONS ARE:
DSNL106I PKGREL = BNDOPT
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
Because activating High Performance DBAT for distributed applications avoids pooling of DBATs, you might have to increase subsystem parameter MAXDBAT to avoid queuing of distributed requests.
BNDOPT is the default value of the MODIFY DDF PKGREL command.
By using these commands, you do not need to perform REBIND to activate or deactivate High Performance DBAT.
JDBC bind recommendation
High Performance DBATs and bind option KEEPDYNAMIC(YES) are mutually exclusive. You need to choose between using High Performance DBAT and bind option KEEPDYNAMIC(YES) depending on the characteristics of your applications.
Bind option KEEPDYNAMIC(YES) is recommended for applications with a limited number of SQL statements that are frequently executed.
High Performance DBATs fit best in an environment with many short transactions.
To allow High Performance DBAT to be chosen at the application level we recommend creating dedicated package collections that you explicitly bind with RELEASE(DEALLOCATE) and that you explicitly choose for your application by setting the setCurrentPackagePath data source custom property to the name of the package collection ID. For details, see 5.11.2, “currentPackagePath” on page 292.
In our environment we created the following JDBC package collections to support this intention:
Collection JDBCHDBAT bound with RELEASE(DEALLOCATE)
Collection JDBCNOHDBAT bound with RELEASE(COMMIT)
For more information about how we created these package collections refer to 4.3.9, “Bind JDBC packages” on page 165.
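Outside of WebSphere Application Server you can set the same property directly on a JCC data source. The following minimal sketch (server, port, and collection names are from our environment) illustrates how an application would opt in to the RELEASE(DEALLOCATE) collection:
import java.sql.Connection;
import com.ibm.db2.jcc.DB2SimpleDataSource;

public class HdbatDataSource {
    public static void main(String[] args) throws Exception {
        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setServerName("d0zg.itso.ibm.com");
        ds.setPortNumber(39000);
        ds.setDatabaseName("DB0Z");
        ds.setDriverType(4);
        // Allocate packages from the RELEASE(DEALLOCATE) collection so
        // that the connection is eligible for High Performance DBATs.
        ds.setCurrentPackagePath("JDBCHDBAT");
        Connection con = ds.getConnection("db2r3", args[0]);
        // ... run the application workload ...
        con.close();
    }
}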
4.3.7 IBM Data Server Driver for JDBC and SQLJ
Our workload scenario accesses DB2 for z/OS using JDBC type 2 and JDBC type 4 connections. You use a JDBC type 2 connection to establish a local connection to a DB2 subsystem or a data sharing member running in the same z/OS system as your JDBC application. You use a JDBC type 4 connection to establish a remote connection to a DB2 subsystem or data sharing group. In the case of JDBC type 4, the connection requires a TCP/IP network to be available between the DB2 client and the DB2 server. An overview of DB2 clients connecting to DB2 for z/OS using JDBC type 2 and type 4 connections is illustrated in Figure 4-56. See 3.2.1, “Connectivity options for IBM Data Server Driver for JDBC and SQLJ” on page 89 for more information.
Figure 4-56 Overview applications using JDBC type 2 and type 4
1. A Java application running on z/OS uses JDBC type 2 to connect to DB2 for z/OS
2. A Java application uses JDBC type 4 to directly or indirectly connect to DB2 for z/OS. The Java application can run on z/OS or non-z/OS platforms.
3. A multiplatform ODBC, .NET or DB2 call level interface (CLI) client directly or indirectly connects to DB2 for z/OS using the IBM Data Server Driver for ODBC and CLI.
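The difference between the two connection types is visible in the JDBC URL. The following minimal sketch (location and domain names are from our environment) shows both forms; the type 2 variant must run on the same z/OS image as DB2 with access to the DB2 libraries (see “Driver configuration” below):
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectionTypes {
    public static void main(String[] args) throws Exception {
        // JDBC type 2: local attachment, no host name or port number.
        Connection t2 = DriverManager.getConnection("jdbc:db2:DB0Z");
        t2.close();

        // JDBC type 4: remote DRDA connection over TCP/IP.
        Connection t4 = DriverManager.getConnection(
            "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", "db2r3", args[0]);
        t4.close();
    }
}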
Driver configuration
As explained in 4.1.7, “UNIX System Services file system configuration” on page 123 the JDBC driver related files have been installed by SMP/E and made available to our runtime environment by using symbolic links that we defined to point to the appropriate zFS file system data sets.
As a prerequisite for binding the JDBC packages using the DB2Binder utility under UNIX System Services we need to complete the UNIX System Services JDBC configuration to support High Performance DBAT and the DB2 command line processor.
Based on the JDBC install base we carry out the following configuration tasks:
Set DB2 subsystem parameter DESCSTAT to YES as already discussed in “DESCRIBE FOR STATIC field (DESCSTAT)” on page 151
STEPLIB libraries
The following load libraries need to be available through STEPLIB data set allocation in case the application uses JDBC type 2 connections to access DB2.
 – DB0ZT.SDSNEXIT
 – DB0ZT.SDSNLOAD
 – DB0ZT.SDSNLOD2
The SDSNLOD2 library contains the JDBC type 2 DLL load modules which are referred to by UNIX System Services through external link definitions (see 4.3.8, “JDBC type 2 DLL and the SDSNLOD2 library” on page 164 for details).
Our WebSphere Application Server environment defines these data sets in application server STEPLIB concatenation to cater for the JDBC type 2 requirement.
Modify the global UNIX System Services profile (/etc/profile) to customize the environment variable settings to reflect the JDBC libraries, paths, and files that the IBM Data Server Driver for JDBC and SQLJ uses. We used the export commands shown in Figure 4-57 to perform these changes.
export PATH=/usr/lpp/db2/d0zg/jdbc/bin:$PATH
export LIBPATH=/usr/lpp/db2/d0zg/jdbc/lib:$LIBPATH
export CLASSPATH=/usr/lpp/db2/d0zg/jdbc/classes/db2jcc.jar:\
/usr/lpp/db2/d0zg/jdbc/classes/db2jcc_javax.jar:\
/usr/lpp/db2/d0zg/jdbc/classes/sqlj.zip:\
/usr/lpp/db2/d0zg/jdbc/classes/db2jcc_license_cisuz.jar:\
$CLASSPATH
Figure 4-57 JDBC-related /etc/profile changes
Enable the DB2-supplied stored procedures
The following stored procedures, which are required by the IBM Data Server Driver for JDBC and SQLJ, were implemented in our environment during DB2 installation.
 – SQLCOLPRIVILEGES
 – SQLCOLUMNS
 – SQLFOREIGNKEYS
 – SQLFUNCTIONCOLS
 – SQLFUNCTIONS
 – SQLGETTYPEINFO
 – SQLPRIMARYKEYS
 – SQLPROCEDURECOLS
 – SQLPROCEDURES
 – SQLPSEUDOCOLUMNS
 – SQLSPECIALCOLUMNS
 – SQLSTATISTICS
 – SQLTABLEPRIVILEGES
 – SQLTABLES
 – SQLUDTS
 – SQLCAMESSAGE
You can run installation job DSNTIJRV to confirm that these procedures have been appropriately implemented.
For more information about installing and setting up the IBM Data Server Driver for JDBC, refer to Chapter 8, “Installing the IBM Data Server Driver for JDBC and SQLJ”, of DB2 10 for z/OS Application Programming Guide and Reference for Java, SC19-2970.
4.3.8 JDBC type 2 DLL and the SDSNLOD2 library
The JDBC type 2 DLLs are loaded through the JDBC DLL directory that we specified in the export LIBPATH command shown in Figure 4-57 on page 163. When we ran the UNIX System Services command shown in Figure 4-58 we noticed that the executables in the JDBC type 2 DLL directory consist of external links that point to DLL load modules that reside outside the UNIX file system in the SDSNLOD2 load library.
ls -l /usr/lpp/db2/d0zg/jdbc/lib
1 2
erwxrwxrwx 8 Nov 16 2010 libdb2jcct2zos.so -> DSNAQJL2
erwxrwxrwx 8 Nov 16 2010 libdb2jcct2zos4.so -> DSNAJ3L2
erwxrwxrwx 8 Nov 16 2010 libdb2jcct2zos4_64.so -> DSNAJ6L2
erwxrwxrwx 8 Nov 16 2010 libdb2jcct2zos_64.so -> DSNAQ6L2
Figure 4-58 JDBC type 2 DLL external links
1. The first character of the command output (the e character) identifies the file as an external link.
2. Following the right arrow the output shows the name of the external load module the external link points to.
When the runtime environment loads a DLL that refers to an external load module it uses the following search order to locate the load module:
1. STEPLIB
2. Link Pack Area (LPA)
3. z/OS Linklist
To be able to use the JDBC type 2 driver we included the SDSNLOD2 library in the WebSphere Application Server STEPLIB library concatenation. When we listed the members of the SDSNLOD2 library as shown in Figure 4-59 we located the external load module names referred to in Figure 4-58.
BROWSE DB0ZT.SDSNLOD2
Command ===>
Name Size TTR AC AM RM
_________ DSNAJ3L2 00064F68 000010 00 31 ANY
_________ DSNAJ6L2 00082FB8 00000E 00 64 ANY
_________ DSNAQJL2 00064E40 00000F 00 31 ANY
_________ DSNAQ6L2 00082E48 00000D 00 64 ANY
Figure 4-59 JDBC type 2 DLL in SDSNLOD2
During DB2 installation SMP/E executes the UNIX System Services commands shown in Figure 4-60 to associate the SDSNLOD2 load modules shown in Figure 4-59 on page 164 with the UNIX System Services path names shown in Figure 4-58 on page 164.
ln -e DSNAQJL2 /usr/lpp/db2a10/jdbc/lib/libdb2jcct2zos.so
ln -e DSNAJ3L2 /usr/lpp/db2a10/jdbc/lib/libdb2jcct2zos4.so
ln -e DSNAJ6L2 /usr/lpp/db2a10/jdbc/lib/libdb2jcct2zos4_64.so
ln -e DSNAQ6L2 /usr/lpp/db2a10/jdbc/lib/libdb2jcct2zos_64.so
Figure 4-60 UNIX System Services SDSNLOD2 external link definition
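As a quick check outside of WebSphere Application Server, you can try to load one of the DLLs from a Java program started in a shell whose STEPLIB includes SDSNLOD2. This is a diagnostic sketch only; which library name to load depends on the driver version and JVM bitness:
public class CheckT2Dll {
    public static void main(String[] args) {
        // Directories searched for the .so external links.
        System.out.println("java.library.path = "
            + System.getProperty("java.library.path"));
        // Maps to libdb2jcct2zos4_64.so; the z/OS loader then resolves
        // the external link (DSNAJ6L2) through STEPLIB, LPA, or linklist.
        System.loadLibrary("db2jcct2zos4_64");
        System.out.println("JDBC type 2 DLL loaded successfully");
    }
}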
4.3.9 Bind JDBC packages
Upon successful DDF implementation and after we completed the JDBC driver configuration we use the DB2Binder utility to bind the JDBC packages in our DB2 for z/OS system into the following three package collection IDs:
NULLID - default collection ID used if no collection ID is set in the setCurrentPackagePath data source custom property.
JDBCHDBAT - collection ID for applications wanting to take advantage of DB2 High Performance DBAT. Packages in this collection ID are bound with bind parameter RELEASE(DEALLOCATE).
JDBCNOHDBAT - collection ID for applications not wanting to take advantage of DB2 High Performance DBAT. Packages in this collection ID are bound with bind parameter RELEASE(COMMIT).
By using dedicated JDBC collections we deliberately do not change the NULLID collection ID which is commonly used by the majority of DB2 remote applications. Globally rebinding packages belonging to the NULLID collection with RELEASE(DEALLOCATE) is not suitable, because some of your workload better qualifies for using bind options KEEPDYNAMIC(YES) and RELEASE(COMMIT). See 4.3.6, “High Performance DBATs” on page 160 where we discuss these bind options.
The DB2Binder invocation examples shown in Example 4-23, Example 4-24 on page 167, and in Example 4-25 on page 167 use a JDBC type 4 connection to bind the packages shown in Figure 4-61.
Binder performing action "add" to
"jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z"
under collection "JDBCHDBAT":
Package "SYSSTAT": Bind succeeded.
Package "SYSLH100": Bind succeeded.
Package "SYSLH200": Bind succeeded.
Package "SYSLH300": Bind succeeded.
Package "SYSLH400": Bind succeeded.
Package "SYSLN100": Bind succeeded.
Package "SYSLN200": Bind succeeded.
Package "SYSLN300": Bind succeeded.
Package "SYSLN400": Bind succeeded.
Package "SYSLH101": Bind succeeded.
Package "SYSLH201": Bind succeeded.
Package "SYSLH301": Bind succeeded.
Package "SYSLH401": Bind succeeded.
Package "SYSLN101": Bind succeeded.
Package "SYSLN201": Bind succeeded.
Package "SYSLN301": Bind succeeded.
Package "SYSLN401": Bind succeeded.
Package "SYSLH102": Bind succeeded.
Package "SYSLH202": Bind succeeded.
Package "SYSLH302": Bind succeeded.
Package "SYSLH402": Bind succeeded.
Package "SYSLN102": Bind succeeded.
Package "SYSLN202": Bind succeeded.
Package "SYSLN302": Bind succeeded.
Package "SYSLN402": Bind succeeded.
DB2Binder finished.
Figure 4-61 JDBC packages bound by DB2Binder utility
 
Important: The DB2Binder utility requires a JDBC type 4 connection. Binding the JDBC packages therefore requires the DB2 Distributed Data Facility (DDF) address space to be operational even if you only plan to use JDBC type 2 connections, which do not require DDF.
Package collection NULLID
To bind the JDBC packages into collection NULLID we executed the DB2Binder command shown in Example 4-23 under UNIX System Services.
Example 4-23 Bind NULLID package collection
java com.ibm.db2.jcc.DB2Binder -url
jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z
-user DB2R3 -password <password> -collection NULLID
-action replace
Package collection JDBCHDBAT
As recommended in “JDBC bind recommendation” on page 161 we create JDBC package collections to provide support for High Performance DBAT. JDBC applications potentially enable themselves for High Performance DBAT processing by including the JDBCHDBAT collection ID in their setCurrentPackagePath data source custom property setting.
To bind the JDBC packages into collection JDBCHDBAT we executed the DB2Binder command shown in Example 4-24 under UNIX System Services.
Example 4-24 Bind High Performance DBAT eligible package collection
java com.ibm.db2.jcc.DB2Binder -url
jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z
-user DB2R3 -password <password> -collection JDBCHDBAT
-release deallocate
For information about the execute privileges that we granted on these packages refer to “Grant execute privileges on JDBC packages” on page 171.
Package collection JDBCNOHDBAT
Our JDBC applications exclude themselves from High Performance DBAT processing by including the JDBCNOHDBAT collection ID in their setCurrentPackagePath data source custom property setting.
In z/OS UNIX System Services we ran the DB2Binder command shown in Example 4-25 to bind the JDBC packages into collection JDBCNOHDBAT.
Example 4-25 Bind High Performance DBAT ineligible package collection
java com.ibm.db2.jcc.DB2Binder -url
jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z
-user DB2R3 -password <password> -collection JDBCNOHDBAT
-release commit
For information about the execute privileges that we granted on these packages refer to “Grant execute privileges on JDBC packages” on page 171.
4.3.10 UNIX System Services command line processor configuration
DB2 for z/OS provides a Java based DB2 command line processor that runs under UNIX System Services. The command line processor (CLP) uses the IBM Data Server Driver for JDBC and SQLJ to connect to DB2 through a JDBC type 4 connection. Besides the SQL capabilities that you have with SPUFI, you can use CLP to bind packages using DBRMs that are stored in a UNIX System Services file system directory, invoke stored procedures, register and remove XML schemas, and issue DESCRIBE TABLE and SQL CALL statements.
Because CLP is a Java application that connects to DB2 using a JDBC type 4 connection, it is an excellent tool for verifying your local JDBC configuration. You can invoke CLP from a UNIX System Services shell, so it can be started from telnet, secure shell, under TSO from OMVS, from BPXBATCH, or from the JZOS batch launcher. Because CLP connects to DB2 through a JDBC type 4 connection, it furthermore triggers zIIP offload for local database connections.
For the CLP implementation we performed the following configuration tasks:
Change global profile in /etc/profile to include the clp.jar file in the CLASSPATH configuration and configure the CLPPROPERTIES variable
 – export CLPHOME=/usr/lpp/db2/d0zg/base
 – export CLASSPATH=$CLPHOME/lib/clp.jar
 – export CLPPROPERTIES=$HOME/clp.properties
Copy file /usr/lpp/db2/d0zg/base/samples/clp.properties into the home directory
Customize local clp.properties file
Define the following alias in the global profile (/etc/profile)
 – alias db2="java com.ibm.db2.clp.db2"
Invoke CLP from a UNIX System Services shell
 – db2
Within CLP we then ran the commands shown in Figure 4-62 to verify our local JDBC configuration.
db2 => connect to 9.12.4.153:39000/DB0Z user DB2R3 using <password>;
connect to 9.12.4.153:39000/DB0Z user DB2R3 using <password>
com.ibm.net.SocketKeepAliveParameters
Database Connection Information
Database server =DB2 DSN10015
SQL authorization ID =DB2R3
JDBC Driver =IBM Data Server Driver for JDBC and SQLJ 4.13.136
DSNC101I : The "CONNECT" command completed successfully.
db2 => select current server from sysibm.sysdummy1;
select current server from sysibm.sysdummy1
1
DB0Z
1 record(s) selected
db2 => terminate;
Figure 4-62 DB2 CLP to check out JDBC configuration
For more information about implementing and using the DB2 UNIX System Services CLP refer to:
GC19-2974-07, DB2 10 for z/OS, Installation and Migration Guide, Configuring the DB2 command line processor
SC19-2972-04, DB2 10 for z/OS, Command Reference, Chapter 9. Command line processor
4.3.11 Using the TestJDBC Java sample
DB2 for z/OS provides the TestJDBC Java sample program in the JDBC samples directory illustrated in Figure 4-63 on page 169.
DB2R3 @ SC64:/u/db2r3>ls -l /usr/lpp/db2/d0zg/jdbc/samples
total 66
drwxr-xr-x 2 HARJANS TTY 320 Nov 16 2010 IBM
-rw-r--r-- 2 HARJANS TTY 13783 Jun 12 15:06 TestJDBC.java
-rw-r--r-- 2 HARJANS TTY 11752 Jun 12 15:06 TestSQLJ.sqlj
Figure 4-63 TestJDBC samples directory
The TestJDBC application exercises basic JDBC functionality (by default through a type 2 z/OS connection) using the DB2 JDBC driver. TestJDBC receives its parameters (a JDBC connect URL in either type 2 or type 4 format) as input arguments. In Figure 4-64 we use the TestJDBC application to confirm appropriate driver installation. For more information about the TestJDBC application and the input parameters it supports, see the inline documentation of the TestJDBC Java program.
javac TestJDBC.java
java TestJDBC jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z db2r3 <password>
Loading DB2 JDBC Driver: com.ibm.db2.jcc.DB2Driver
com.ibm.net.SocketKeepAliveParameters
successful driver load, version 4.13.136
Establishing Connection to URL: jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z
successful connect
Acquiring DatabaseMetaData
successful... product version: DSN10015
Creating Statement
successful creation of Statement
About to execute SELECT
successful execution of SELECT
About to fetch from ResultSet, maxRows=10
CREATOR: <ATE> NAME: <DEPT>
...
Figure 4-64 Invoke TestJDBC application
4.3.12 DB2 security considerations
The WebSphere Application Server instances illustrated in Figure 4-2 on page 104 access our data sharing members from within the Parallel Sysplex and within a secure TCP/IP network. We therefore had no need to use SSL encryption in DB2 for z/OS. Beyond this we performed the following security configuration:
Permit users to create DB2 connections using RRSAF and DDF
Grant access to JDBC packages
Grant access to the DayTrader-EE6 tables
We did not use DB2 SSL encryption because our DB2 data sharing members were accessed only from within a secure TCP/IP network.
Allow users to connect to DB2 using RRSAF and DDF
In a WebSphere Application Server environment the following users need to be authorized to connect to DB2:
WebSphere Application Server Deployment Manager address space user for performing data source connection testing
WebSphere Application Server JAAS alias user names referred to in data source definitions
User IDs referred to in DB2 trusted context WITH USE FOR clauses
To allow these users to connect to DB2 through JDBC type 2 or JDBC type 4 we executed the RACF commands shown in Example 4-26.
Example 4-26 RACF DSNR class D0Z*.DIST, D0Z*.RRSAF
/* secure RRSAF and DIST connection creation */
RDEFINE DSNR (D0Z*.DIST) UACC(NONE) OWNER(DB2)
RDEFINE DSNR (D0Z*.RRSAF) UACC(NONE) OWNER(DB2)
/* PERMIT group D0ZGRRS to access D0Z*.RRSAF */
AG D0ZGRRS OWNER(DB2) SUPGROUP(DB2) /* RRSAF RACF GROUP*/
CO WASTEST GROUP(D0ZGRRS)
CO WASUSER GROUP(D0ZGRRS)
CO MZACRU GROUP(D0ZGRRS)
CO MZADMIN GROUP(D0ZGRRS)
CO MZASRU GROUP(D0ZGRRS)
CO WASCTX1 GROUP(D0ZGRRS)
CO WASCTX2 GROUP(D0ZGRRS)
CO WASCTX3 GROUP(D0ZGRRS)
PERMIT D0Z*.RRSAF CLASS(DSNR) ID(D0ZGRRS) ACCESS(READ)
/* PERMIT group D0ZGDIST to access D0Z*.DIST */
AG D0ZGDIST OWNER(DB2) SUPGROUP(DB2)
CO WASTEST GROUP(D0ZGDIST)
CO WASUSER GROUP(D0ZGDIST)
CO MZACRU GROUP(D0ZGDIST)
CO MZADMIN GROUP(D0ZGDIST)
CO MZASRU GROUP(D0ZGDIST)
CO WASCTX1 GROUP(D0ZGDIST)
CO WASCTX2 GROUP(D0ZGDIST)
CO WASCTX3 GROUP(D0ZGDIST)
PERMIT D0Z*.DIST CLASS(DSNR) ID(D0ZGDIST) ACCESS(READ)
SETROPTS RACLIST(DSNR) REFRESH
Plan authorization considerations
In our workload scenario we drive the DayTrader-EE6 workload using JDBC type 2 and JDBC type 4 connections.
JDBC type 4 does not use user-bound application plans; it uses DB2 packages.
With JDBC type 2 you can bind your own application plan with a package list referring to the JDBC package collection ID that you intend to use. If you intend to use an application plan for your JDBC type 2 connections you will have to take care of plan authorization. In our workload scenario we do not use an application plan for JDBC type 2 connections. Instead we use JDBC packages which we authorized as described in “Grant execute privileges on JDBC packages” on page 171. Plan authorization for our DayTrader-EE6 JDBC type 2 workload scenario was therefore not required.
Grant execute privileges on JDBC packages
We bound the JDBC packages into the package collections JDBCHDBAT and JDBCNOHDBAT. For these collections we ran the SQL data control language (DCL) statements shown in Example 4-27 to revoke the execute privilege from PUBLIC and to grant execute authorization to the packages of these collection IDs. The GRANT TO PUBLIC was implicitly performed by the DB2Binder utility that we explained in 4.3.9, “Bind JDBC packages” on page 165.
Example 4-27 Grant execute authorization on packages
revoke execute on package jdbchdbat.* from public;
revoke execute on package jdbcnohdbat.* from public;
grant execute on package NULLID.SYSSTAT to MZASRU;
grant execute on package jdbchdbat.* to role WASTESTROLE;
grant execute on package jdbchdbat.* to role DTRADEROLE;
grant execute on package jdbcnohdbat.* TO ROLE WASTESTROLE;
grant execute on package jdbcnohdbat.* TO ROLE DTRADEROLE;
WebSphere Application Server Deployment Manager package authorization
When we used the WebSphere Application Server Integrated Solutions Console (ISC) to perform a data source connection test as described in 6.6.1, “Data source connection tests on z/OS” on page 328, we obtained the error message shown in Figure 4-65 indicating a lack of package execute authorization on package NULLID.SYSSTAT.
Figure 4-65 WebSphere deployment manager data source test error message
DB2 performed authorization checking on package NULLID.SYSSTAT because we configured the JDBC type 2 data source to use a DB2 package list for locating the DB2 packages required for SQL execution. As a consequence WebSphere Application Server Deployment Manager created a DB2 RRSAF thread with an RRSAF default plan name of ?RRSAF using the NULLID collection ID for package allocation. DB2 package NULLID.SYSSTAT caused the error shown in Figure 4-65 because it was the first package the connection test tried to use.
The error message shown in Figure 4-65 refers to mzdmnode with user MZASRU not being authorized to execute package NULLID.SYSSTAT, which we did not expect because we had configured the data source to use package collection JDBCHDBAT and to use the JAAS alias user name for creating the connection to DB2. Instead, the data source connection request was performed by the WebSphere Application Server deployment manager address space user (MZASRU), which probed the DB2 connection using package NULLID.SYSSTAT.
Because we had activated the DB2 audit trace to keep track of authorization failures we could confirm this behavior through the audit report shown in Figure 4-66.
LOCATION: DB0Z OMEGAMON XE FOR DB2 PERFORMANCE EXPERT (V5R1M1)
GROUP: DB0ZG AUDIT REPORT - DETAIL
MEMBER: D0Z2
SUBSYSTEM: D0Z2 ORDER: PRIMAUTH-PLANNAME
DB2 VERSION: V10 SCOPE: MEMBER
PRIMAUTH CORRNAME CONNTYPE
ORIGAUTH CORRNMBR INSTNCE
PLANNAME CONNECT TIMESTAMP TYPE DETAIL
-------- -------- ----------- ----------- -------- ---------------------------------------------------
MZASRU MZDMGRS RRS 04:31:38.90 AUTHFAIL AUTHID CHECKED: MZASRU PRIVILEGE: EXECUTE
MZASRU 'BLANK' CA82947981B OBJECT TYPE : PACKAGE REASON: 0 RC: - 1
?RRSAF RRSAF SOURCE OBJECT : SYSSTAT SOURCE OWNER: NULLID
TARGET OBJECT : N/A TARGET OWNER: N/A
MLS RID : N/P SECLABEL: N/P
TEXT: N/P
Figure 4-66 WebSphere deployment manager AUTHFAIL audit report
After we had granted the deployment manager address space user the privilege to execute the NULLID.SYSSTAT package as shown in Example 4-28 we successfully completed the data source connection test using the ISC application.
Example 4-28 Grant NULLID.SYSSTAT privilege to deployment manager
grant execute on package NULLID.SYSSTAT to MZASRU ;
Upon data source connection test completion we received the ISC message box shown in Figure 4-67.
Figure 4-67 WebSphere deployment manager data source test successful
Grant access to the DayTrader-EE6 tables
We ran the SQL grant statements shown in Example 4-29 to grant the required table privileges to role DTRADEROLE. The role is assigned through a trusted context when running the DayTrader-EE6 application.
Example 4-29 Grant DayTrader-EE6 table privileges
GRANT ALL ON TABLE SG248074.HOLDINGEJB TO ROLE DTRADEROLE;
GRANT ALL ON TABLE SG248074.ACCOUNTPROFILEEJB TO ROLE DTRADEROLE;
GRANT ALL ON TABLE SG248074.QUOTEEJB TO ROLE DTRADEROLE;
GRANT ALL ON TABLE SG248074.KEYGENEJB TO ROLE DTRADEROLE;
GRANT ALL ON TABLE SG248074.ACCOUNTEJB TO ROLE DTRADEROLE;
GRANT ALL ON TABLE SG248074.ORDEREJB TO ROLE DTRADEROLE;
4.3.13 Trusted context
A trusted context object is entirely defined in DB2 and is used to establish a trusted relationship between DB2 and an external entity. An external entity includes the following types of DB2 for z/OS clients:
DB2 allied address spaces that locally connect to DB2 through the RRSAF, TSO, or CAF attachment interfaces. WebSphere Application Server instances connecting to DB2 through JDBC type 2 use the RRSAF attachment interface.
 
Note: APAR PM69429 adds support for Trusted Context calls for a CAF application.
DRDA application requesters connected to DB2 through database access threads (DBATs). WebSphere Application Server instances connecting to DB2 through JDBC type 4 use the DB2 Distributed Data Facility (DDF) interface.
During connection processing DB2 evaluates a set of trust attributes to determine if a specific context is to be trusted. The trust attributes specify a set of characteristics about a specific connection. These attributes include the IP address, domain name, or SERVAUTH security zone name for remote DRDA clients and the job or task name for local clients.
If the trusted context applies, DB2 performs all authorization checking using the authorization ID or database role that is assigned by the trusted context.
4.3.14 Trusted context application scenarios
Based on the information we provided in “Authentication in a three-tier architecture using DB2 trusted context” on page 128 we explain the use of a trusted context in a WebSphere Application Server environment for the following scenarios:
DayTrader-EE6 application workload using JDBC type 2 connections
DayTrader-EE6 application workload using JDBC type 4 connections
IBM Data Web Service servlet application D0ZG_QueriesWASTestTC1_war using a JDBC type 4 connection
4.3.15 DayTrader-EE6 application using JDBC connections
The DayTrader-EE6 scenario does not require any further configuration in WebSphere Application Server, because the trusted context definitions we use for that application do not perform an authorization ID switch.
A trusted context is based on the system authid (in WebSphere Application Server often referred to as the technical data source user) and a set of trust attributes. We describe the trust attributes that we used for running the DayTrader-EE6 application in “DayTrader-EE6 JDBC type 2 related trusted context attributes” on page 174 and “DayTrader-EE6 JDBC type 4 related trusted context attributes” on page 174.
DayTrader-EE6 JDBC type 2 related trusted context attributes
For JDBC type 2 connections the trusted context provides the job names (address space names) from which local database connections are established. An asterisk can be specified as the last character of the job name. For WebSphere Application Server on z/OS these are the address space names of the WebSphere Application Server servant regions establishing the JDBC type 2 connections. Example 4-30 shows a trusted context covering the attributes that we specified for our DayTrader-EE6 application server JDBC type 2 environment.
Example 4-30 JDBC type 2 trusted context with system authid and job name
CREATE TRUSTED CONTEXT CTXDTRADET2
BASED UPON CONNECTION USING SYSTEM AUTHID MZADMIN 1
ATTRIBUTES (JOBNAME 'MZSR01*') 2
DEFAULT ROLE DTRADEROLE 3
WITHOUT ROLE AS OBJECT OWNER
ENABLE
1. Data source JAAS alias user name. This user name is often referred to as the data source technical user.
2. Address space names of our WebSphere Application Server servant regions. Our STC names start with the characters MZSR01.
3. Optional: DB2 role to be assigned when the trusted context is applied
We use the trusted context shown in Example 4-30 to run the JDBC type 2 DayTrader-EE6 workload. Because database privileges are exercised by role DTRADEROLE the data source user MZADMIN does not need to hold any privileges in DB2. This solves an important audit concern as MZADMIN can no longer be used to access data in DB2 within or outside the trusted context. Granting privileges to a role increases data security further as a role is unusable outside a trusted context.
DayTrader-EE6 JDBC type 4 related trusted context attributes
For JDBC type 4 connections the trusted context contains the IP addresses or domain names from which DRDA connections are established. Generic names are not supported. For WebSphere Application Server instances the ADDRESS attribute includes the IP addresses or domain names of the IP hosts the application server instances run on. We recommend using domain names to avoid problems in case a server dynamically obtains a different IP address, because a domain name is resolved through the domain name service (DNS) at connection time. Example 4-31 shows a trusted context covering the attributes that we specified for the DayTrader-EE6 application server JDBC type 4 environment.
Example 4-31 JDBC type 4 trusted context with system authid and address
CREATE TRUSTED CONTEXT CTXDTRADET4
BASED UPON CONNECTION USING SYSTEM AUTHID MZADMIN 1
DEFAULT ROLE DTRADEROLE 3
WITHOUT ROLE AS OBJECT OWNER
ENABLE
NO DEFAULT SECURITY LABEL
ATTRIBUTES (
ENCRYPTION 'NONE',
ADDRESS 'wtsc64.itso.ibm.com', 2
ADDRESS 'd0z1.itso.ibm.com',
ADDRESS 'wtsc63.itso.ibm.com',
ADDRESS 'd0z2.itso.ibm.com'
) ;
1. Data source JAAS alias user name
2. Domain names the application server instance runs on
3. Optional: DB2 role to be assigned if the trusted context is to be applied
We use the trusted context shown in Example 4-31 on page 174 to run the JDBC type 4 DayTrader-EE6 workload. Because database privileges are exercised by role DTRADEROLE the data source user MZADMIN does not need to hold any privileges in DB2. This solves an important audit concern as MZADMIN can no longer be used to access data in DB2 within or outside the trusted context. Granting privileges to a role increases data security further as a role is unusable outside the trusted context.
4.3.16 Data Web Service servlet with trusted context AUTHID switch
The IBM Data Web Service servlet application that we use requires the application server data source configuration steps described in 5.9, “Enabling trusted context for applications that are deployed in WebSphere Application Server” on page 276, because this application has been configured to use HTTP basic authentication to enable the application server to pass the ID of the authenticated user to DB2 for the trusted context AUTHID switch.
Building the Data Web Service application
We use the IBM Data Studio full client Web Services feature to generate an IBM Data Web Services servlet application that provides Web Service operations for the SQL statements shown in Example 4-32 and in Example 4-33. For further reference, we use the name D0ZG_QueriesWASTestTC1_war when we refer to that application.
Example 4-32 Data Web Service query to select DB2 special registers
SELECT
CURRENT CLIENT_ACCTNG AS CLIENT_ACCTNG
,CURRENT CLIENT_APPLNAME AS CLIENT_APPLNAME
,CURRENT CLIENT_USERID AS CLIENT_USERID
,CURRENT CLIENT_WRKSTNNAME AS CLIENT_WRKSTNNAME
,CURRENT PATH AS PATH
,CURRENT SCHEMA AS SCHEMA
,CURRENT TIMESTAMP AS TIMESTAMP
,CURRENT TIMEZONE AS TIMEZONE
,CURRENT SERVER AS LOCATION
,GETVARIABLE('SYSIBM.DATA_SHARING_GROUP_NAME') AS GROUPNAME
,GETVARIABLE('SYSIBM.SSID') AS SSID
,GETVARIABLE('SYSIBM.SYSTEM_NAME') AS ATTACH
,GETVARIABLE('SYSIBM.VERSION') AS DB2VERSION
,GETVARIABLE('SYSIBM.PLAN_NAME') AS PLAN
,GETVARIABLE('SYSIBM.PACKAGE_NAME') AS PACKAGE
,GETVARIABLE('SYSIBM.PACKAGE_SCHEMA') AS COLLID
FROM SYSIBM.SYSDUMMY1;
Example 4-33 Data Web Service query to invoke the GRACFGRP external scalar UDF
SELECT T.* FROM XMLTABLE
('$d/GROUPS/GROUP'
PASSING XMLPARSE (DOCUMENT GRACFGRP()) AS "d"
COLUMNS
"RACF User" VARCHAR(08) PATH '../USER/text()',
"RACF Group" VARCHAR(08) PATH './text()'
) AS T
The external UDF GRACFGRP is an assembler program that extracts the RACF groups the current UDF caller is connected to from the RACF ACEE control block, which DB2 created because of the SECURITY USER UDF attribute. GRACFGRP then returns an XML document containing the RACF group names as a VARCHAR scalar value. Listing of DDL and ASM is provided in
For information about how to convert SQL statements into IBM Data Web Services, refer to IBM Data Studio V2.1: Getting Started with Web Services on DB2 for z/OS, REDP-4510.
Creating the trusted context
The D0ZG_QueriesWASTestTC1_war WebSphere Application Server servlet application has been configured to use a JDBC type 4 connection to connect to DB2. It invokes an IBM Data Web Service operation to obtain either the DB2 special registers referred to in Example 4-32 on page 175 or to invoke the external scalar UDF referred to in Example 4-33 on page 175. When you create the trusted context you can use one of the following options to control the user names that are to be enabled for trusted context switching:
Trusted context users hard coded in the DDL
We created the trusted context using the SQL DDL shown in Figure 4-68 on page 177. In that sample we hard code the list of trusted context users (WASCTX1 through WASCTX9) in the DDL. As a consequence you need to alter the trusted context if you want to change the list of trusted context users.
CREATE ROLE WASTESTDEFAULTROLE;
CREATE ROLE WASTESTROLE;
GRANT EXECUTE ON SPECIFIC FUNCTION DB2R3.GRACFGRP
TO ROLE WASTESTROLE;
 
CREATE TRUSTED CONTEXT CTXWASTESTT4 1
BASED UPON CONNECTION USING SYSTEM AUTHID WASTEST 2
ATTRIBUTES (ADDRESS 'wtsc63.itso.ibm.com', 3
ADDRESS 'wtsc64.itso.ibm.com',
ADDRESS 'd0z1.itso.ibm.com',
ADDRESS 'd0z2.itso.ibm.com')
DEFAULT ROLE WASTESTDEFAULTROLE 4
WITHOUT ROLE AS OBJECT OWNER WITH USE FOR
WASCTX1 ROLE WASTESTROLE 5,
WASCTX2 ROLE WASTESTROLE ,
WASCTX3 ROLE WASTESTROLE ,
WASCTX4 ROLE WASTESTROLE ,
WASCTX5 ROLE WASTESTROLE ,
WASCTX6 ROLE WASTESTROLE ,
WASCTX7 ROLE WASTESTROLE ,
WASCTX8 ROLE WASTESTROLE ,
WASCTX9 ROLE WASTESTROLE
WITHOUT AUTHENTICATION
ENABLE 6
 
Figure 4-68 JDBC type 4 trusted context
1. Trusted context name
2. SYSTEM AUTHID - the system user ID provided by the application server. This ID corresponds to the data source JAAS alias user name.
3. ADDRESS - IP addresses or domain names
We included the list of domain names that we observed during workload testing. After initial workload testing we used DB2 accounting traces to determine the IP addresses that had to be taken into account for trusted context creation. For convenience we loaded the DB2 accounting information into OMPE data warehouse tables and ran the query shown in Example 4-34 to determine these IP addresses that we had to consider for our JDBC type 4 trusted context definition.
Example 4-34 Determine trusted context IP addresses and domain names
---------+---------+---------+---------+---------+--------
SELECT COUNT(*) , REQ_LOCATION
FROM "DB2R3"."DB2PMFACCT_GENERAL"
GROUP BY REQ_LOCATION
---------+---------+---------+---------+---------+--------
REQ_LOCATION
---------+---------+---------+---------+---------+--------
231416 ::9.12.4.142
134789 ::9.12.4.138
664221 ::9.12.6.9
127981 ::9.12.6.70
DSNE610I NUMBER OF ROWS DISPLAYED IS 4
For each IP address shown in Example 4-34 on page 177 we ran the UNIX System Services command shown in Example 4-35 to determine the domain names that we had to consider in our trusted context definition.
Example 4-35 Determine domain names by IP address
host 9.12.4.142
EZZ8321I d0z2.itso.ibm.com has addresses 9.12.4.142
host 9.12.4.138
EZZ8321I d0z1.itso.ibm.com has addresses 9.12.4.138
host 9.12.6.9
EZZ8321I wtsc64.itso.ibm.com has addresses 9.12.6.9
host 9.12.6.70
EZZ8321I wtsc63.itso.ibm.com has addresses 9.12.6.70
4. DEFAULT ROLE - database role to be used if no role assignment is performed by the trusted context
5. Optionally user IDs and roles for authorization ID (AUTHID) switching
Trusted context users controlled by RACF profile
DB2 supports an option that allows you to control the list of trusted context users through a RACF profile.
To use this option we performed the following implementation tasks:
Create a RACF profile in the DSNR class as shown in Example 4-36. Notice that we permitted read access to the profile to RACF group WASCTX, to which we had connected the trusted context users WASCTX1 through WASCTX9. If you want to add or remove trusted context users there is no need to alter the trusted context in DB2; all you need to do is add or remove users from RACF group WASCTX.
Example 4-36 Create RACF DSNR trusted context profile
RDEFINE DSNR (D0ZG.TRUSTEDCTX.D0ZGWAS) UACC(NONE)
PERMIT D0ZG.TRUSTEDCTX.D0ZGWAS CLASS(DSNR) ACCESS(READ) -
ID(WASCTX)
SETROPTS RACLIST(DSNR) REFRESH
Create a trusted context that refers to the RACF profile created in Example 4-36 in its WITH USE FOR clause as shown in Example 4-37.
Example 4-37 Create trusted context using RACF DSNR trusted context profile
CREATE TRUSTED CONTEXT CTXWASTESTT5
BASED UPON CONNECTION USING SYSTEM AUTHID WASSRV
DEFAULT ROLE WASTESTDEFAULTROLE
WITHOUT ROLE AS OBJECT OWNER
ENABLE
NO DEFAULT SECURITY LABEL
ATTRIBUTES (
ENCRYPTION 'NONE',
ADDRESS 'wtsc64.itso.ibm.com',
ADDRESS 'd0z1.itso.ibm.com',
ADDRESS 'wtsc63.itso.ibm.com',
ADDRESS 'd0z2.itso.ibm.com'
)
WITH USE FOR
EXTERNAL SECURITY PROFILE "D0ZG.TRUSTEDCTX.D0ZGWAS"
ROLE WASTESTROLE 1
WITHOUT AUTHENTICATION
1. The trusted context DDL shown in Example 4-37 on page 178 uses the same attributes as the trusted context DDL shown in Figure 4-68 on page 177, except for the WITH USE FOR clause, which refers to the RACF DSNR profile D0ZG.TRUSTEDCTX.D0ZGWAS that we created in Example 4-36 on page 178.
Testing the trusted connection
After the trusted connection has been established, the trusted context allows the external entity to use the database connection under a different end-user ID without the database server having to authenticate that ID. This process, also known as authorization ID switching, calls RACF to check the authorization ID and, if provided by the trusted context, assigns a role that is used for authorization checking in DB2. A role disassociates DB2 privileges from the end user. DB2 privileges granted to a role can be acquired only through a trusted context and are thus unavailable outside of it.
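WebSphere Application Server performs this switch through the trusted connection API of the IBM Data Server Driver for JDBC and SQLJ. The following sketch outlines the flow for our CTXWASTESTT4 definition; the method signatures are taken from the driver documentation, and the null arguments (password, registry, security token, original user) reflect our WITHOUT AUTHENTICATION setting:
import java.sql.Connection;
import java.util.Properties;
import javax.sql.PooledConnection;
import com.ibm.db2.jcc.DB2ConnectionPoolDataSource;
import com.ibm.db2.jcc.DB2PooledConnection;

public class TrustedSwitch {
    public static void main(String[] args) throws Exception {
        DB2ConnectionPoolDataSource ds = new DB2ConnectionPoolDataSource();
        ds.setServerName("d0zg.itso.ibm.com");
        ds.setPortNumber(39000);
        ds.setDatabaseName("DB0Z");
        ds.setDriverType(4);

        // 1. Establish the trusted connection under the system AUTHID.
        Object[] objects = ds.getDB2TrustedPooledConnection(
            "WASTEST", args[0], new Properties());
        PooledConnection pooled = (PooledConnection) objects[0];
        byte[] cookie = (byte[]) objects[1];

        // 2. Reuse the connection for end user WASCTX1; DB2 switches the
        //    authorization ID and assigns role WASTESTROLE.
        Connection con = ((DB2PooledConnection) pooled).getDB2Connection(
            cookie, "WASCTX1", null, null, null, null, new Properties());
        // ... run SQL under WASCTX1 ...
        con.close();
    }
}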
During Data Web Service testing we collected the DB2 command output shown in Figure 4-69 which confirms trusted context usage with exactly the attributes we defined in Figure 4-68 on page 177.
DSNV473I -D0Z2 ACTIVE THREADS FOUND FOR MEMBER: D0Z1
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
SERVER SW * 7 db2jcc_appli WASCTX1 DISTSERV 0084 427
V485-TRUSTED CONTEXT=CTXWASTESTT4,
SYSTEM AUTHID=WASTEST,
ROLE=WASTESTROLE
V437-WORKSTATION=WTSC64, USERID=wastest,
APPLICATION NAME=dwsClientinformationDS
V441-ACCOUNTING=JCC03640WTSC64 dwsClientinformation
'
V429 CALLING FUNCTION=DB2R3.GRACFGRP,
PROC= , ASID=0000, WLM_ENV=DSNWLMDB0Z_GENERAL
V482-WLM-INFO=DDFONL:1:2:550
V445-G90C0609.M72E.CA71F39F97F6=427 ACCESSING DATA FOR
( 1)::9.12.6.9
Figure 4-69 DWS trusted context display thread output
Authorization failure during authid switch
The trusted context definition shown in Figure 4-68 on page 177 allows for an authorization switch to be performed to one of the users specified in the trusted context WITH USE FOR clause. We ran the Data Web Service application under user WASUSER which is not defined in the WITH USE FOR clause of the trusted context definition. WebSphere Application Server successfully authenticated WASUSER. The application server then tried to reuse the existing database connection and asked DB2 to perform an authorization ID switch on that connection to user WASUSER. Because WASUSER cannot be used by the trusted connection DB2 returned SQLCODE -20361 to the application.
The SQLCODE was confirmed by the IFCID 269 (audit trace class 10) record trace shown in Figure 4-70.
!-----------------------------------------------------------------------
!CONNECTION TYPE: REUSED STATUS: FAILED SQLCODE: -20361
!SECURITY LABEL : N/P
!
!TRUSTED CONTEXT NAME: CTXWASTESTT4
!SYSTEM AUTHID USED : WASTEST
!REUSE AUTHID : WASUSER
!-----------------------------------------------------------------------
Figure 4-70 Trusted context IFCID 269 record trace with SQLCODE -20361
The application server log provided the corresponding runtime message shown in Figure 4-71 indicating the auth ID switch failure.
J2CA0056I: The Connection Manager received a fatal connection error from the Resource Adapter for resource jdbc/Josef. The exception is:
com.ibm.db2.jcc.am.DisconnectRecoverableException:
[jcc][t4][2040][11215][3.64.82] An error occurred during a deferred
connect reset and the connection has been terminated. See chained
exceptions for details. ERRORCODE=-4499,
SQLSTATE=null:com.ibm.db2.jcc.am.SqlException:
[jcc][t4][20130][12466][3.64.82] Trusted user switch failed.
ERRORCODE=-4214, SQLSTATE=null:com.ibm.db2.jcc.am.SqlSyntaxErrorException:
DB2R3;CTXWASTESTT4
Figure 4-71 Failure of trusted user switch
4.3.17 Using DB2 profiles
DB2 for z/OS provides a profile table facility that you can use to:
Optimize the execution of particular SQL statements by setting or disabling DB2 subsystem parameters (DSNZPARM) for those statements. The DSNZPARMs you can control include:
 – NPGTHRSH
 – OPTIOWGT
 – STARJOIN
 – SJTABLES
Maintain copies of access paths by overriding the PLANMGMT and PLANMGMTSCOPE bind options and subsystem-wide parameter settings for particular collections and packages.
Create a test subsystem modelled on production environment CPU, memory and DB2 pool settings. Refer to “Simulate production like buffer pool sizes and catalog statistics” on page 153 for a discussion on this topic.
Set thresholds for query acceleration
Monitor database access threads and connections
In the workload scenario used in this book we focus on using profiles to monitor database access threads and connections. Other use cases for profiles are not discussed. If you need further information about these additional DB2 profile use cases, refer to “Using profiles to monitor and optimize performance” in DB2 10 for z/OS Managing Performance, SC19-2978.
4.3.18 Using profiles to optimize and monitor threads and connections
DB2 10 for z/OS provides a profile table monitoring facility to support the filtering and threshold monitoring for system related activities, such as the number of connections, the number of threads, and the period of time that a thread can stay idle.
This enhancement allows you to enforce the thresholds (limits) that were previously available only at the system level using DSNZPARM, such as CONDBAT, MAXDBAT, and IDTHTOIN, at a more granular level. Setting these limits allows you to control connections using the following categories:
IP Address (LOCATION)
Product Identifier (PRDID)
Role and Authorization Identifier (ROLE, AUTHID)
Collection ID and Package Name (COLLID, PKGNAME)
DB2 client information (CLIENT_APPLNAME, CLIENT_USERID, CLIENT_WRKSTNNAME)
This enhancement also provides the option to define the type of action to take after these thresholds are reached. You can display a warning message or an exception message when the connection, thread, and idle thread timeout thresholds are exceeded. If you choose a warning, message DSNT771I or DSNT772I is issued, depending on DIAGLEVEL, and processing continues. In the case of exception processing, a message is displayed on the console and the action (queuing, suspension, or rejection) is taken.
DB2 profile tables
Profile monitoring requires the following tables to be created:
SYSIBM.DSN_PROFILE_TABLE
SYSIBM.DSN_PROFILE_HISTORY
SYSIBM.DSN_PROFILE_ATTRIBUTES
SYSIBM.DSN_PROFILE_ATTRIBUTES_HISTORY
These tables are created by installation job DSNTIJSG. The profile history and attributes history tables have the same columns as their corresponding profile and profile attributes tables, except that they add a STATUS column to keep track of profile status information and do not have the REMARKS column. The STATUS column indicates whether a profile was accepted or why it was rejected during START PROFILE command execution.
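After you issue the -START PROFILE command, you can verify whether your profiles were accepted by querying the STATUS column of the history table. A small illustrative check through JDBC (connection details are from our environment):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ProfileStatus {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
            "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", "db2r3", args[0]);
        Statement st = con.createStatement();
        // STATUS shows whether each profile was accepted or rejected.
        ResultSet rs = st.executeQuery(
            "SELECT PROFILEID, STATUS FROM SYSIBM.DSN_PROFILE_HISTORY");
        while (rs.next()) {
            System.out.println(rs.getInt(1) + "  " + rs.getString(2));
        }
        rs.close();
        st.close();
        con.close();
    }
}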
Storing a profile in the DSN_PROFILE_TABLE
DSN_PROFILE_TABLE stores one row per monitoring or execution profile. Rows are inserted by authorized users using SQL. A row can apply either to statement monitoring or to system level activity monitoring, but not both. Monitoring can be performed based on options such as IP address, product ID, authid, role, collection ID, package name, and DB2 client information.
To monitor connections or threads, you need to insert a row into DSN_PROFILE_TABLE with the appropriate criteria. Valid filtering criteria for monitoring system activities can be organized into categories as shown in Table 4-5.
Table 4-5 Profile table filter criteria

Filter category                     Columns to specify
IP address or domain name           Specify only the LOCATION column
Client product identifier           Specify only the PRDID column
Role and/or authorization ID        Specify one or both of the ROLE and
                                    AUTHID columns
Collection ID and/or package name   Specify one or both of the COLLID and
                                    PKGNAME columns
DB2 client information              Specify one of the CLIENT_APPLNAME,
                                    CLIENT_USERID, and CLIENT_WRKSTNNAME
                                    columns
For connection monitoring you can only filter on IP address or domain name; you provide the filter value by populating the profile table LOCATION column.
You create a profile by inserting a row into SYSIBM.DSN_PROFILE_TABLE providing the column values that are required to implement one of the filter criteria referred to in Table 4-5. For illustration, see the list of profile table columns in Figure 4-72.
AUTHID 1 VARCHAR 128
PLANNAME 2 VARCHAR 24
COLLID 3 VARCHAR 128
PKGNAME 4 VARCHAR 128
LOCATION 5 VARCHAR 254
PROFILEID 6 INTEGER 4
PROFILE_TIMESTAMP 7 TIMESTMP 10
PROFILE_ENABLED 8 CHAR 1
GROUP_MEMBER 9 VARCHAR 24
REMARKS 10 VARCHAR 762
ROLE 11 VARCHAR 128
PRDID 12 CHAR 8
CLIENT_APPLNAME 13 VARCHAR 255
CLIENT_USERID 14 VARCHAR 255
CLIENT_WRKSTNNAME 15 VARCHAR 255
Figure 4-72 DSN_PROFILE_TABLE
Besides the PROFILEID column, which is the profile table primary key, the remaining columns provide the monitoring filter criteria that identify the thread, connection, or SQL statement you want to be monitored.
For MAXDBAT and IDTHTOIN monitoring you can enter the filter criteria using any of the combinations shown in Table 4-5.
For CONDBAT monitoring you can only specify an IP address or a domain name in the LOCATION column. Other combinations of criteria are not accepted for the CONDBAT monitoring function.
Storing profile attributes in the DSN_PROFILE_ATTRIBUTES table
After you have created your profile by inserting a row into DSN_PROFILE_TABLE, you need to provide profile attributes that define the monitoring thresholds and the actions to be taken when a threshold is exceeded.
To provide this information you insert a row into the profile attributes table (SYSIBM.DSN_PROFILE_ATTRIBUTES) to store the required threshold and action related information. For illustration purposes we provide a list of the profile attributes table columns in Figure 4-73. The table contains a PROFILEID column which corresponds to a profile table row with the same PROFILEID column value.
PROFILEID 1 INTEGER 4
KEYWORDS 2 VARCHAR 128
ATTRIBUTE1 3 VARCHAR 1024
ATTRIBUTE2 4 INTEGER 4
ATTRIBUTE3 5 FLOAT 8
ATTRIBUTE_TIMESTAM 6 TIMESTMP 10
REMARKS 7 VARCHAR 762
Figure 4-73 DSN_PROFILE_ATTRIBUTES table
For DBAT or remote connection monitoring you enter one of the attribute values shown in Table 4-6 to define the monitoring threshold and action, depending on the kind of thread, connection, or idle thread monitoring you want to perform. Profile attribute column ATTRIBUTE3 is not used for thread and connection monitoring.
Table 4-6 Profile attributes

Keywords: MONITOR IDLE THREADS (IDTHTOIN)
Attribute1: WARNING, WARNING_DIAGLEVEL1, WARNING_DIAGLEVEL2, EXCEPTION, EXCEPTION_DIAGLEVEL1, or EXCEPTION_DIAGLEVEL2
Attribute2: Maximum number of seconds that active server threads are allowed to remain idle. A value of 0 disables IDTHTOIN for this profile.

Keywords: MONITOR THREADS (MAXDBAT)
Attribute1: WARNING, WARNING_DIAGLEVEL1, WARNING_DIAGLEVEL2, EXCEPTION, EXCEPTION_DIAGLEVEL1, or EXCEPTION_DIAGLEVEL2
Attribute2: Insert a value to indicate the threshold for the maximum allowed number of server threads that meet the profile criteria. The value that you specify must be less than or equal to the value of the MAXDBAT subsystem parameter.

Keywords: MONITOR CONNECTIONS (CONDBAT)
Attribute1: WARNING, WARNING_DIAGLEVEL1, WARNING_DIAGLEVEL2, EXCEPTION, EXCEPTION_DIAGLEVEL1, or EXCEPTION_DIAGLEVEL2
Attribute2: Insert a value to indicate the threshold for the maximum allowed number of remote connections that meet the profile criteria. The value that you specify must be less than or equal to the value of the CONDBAT subsystem parameter.
Starting profiles
You start DB2 profiles by issuing the DB2 START PROFILE command:
-START PROFILE
When the START PROFILE command is issued, DB2 starts the profiles whose rows contain the value Y in the PROFILE_ENABLED column of SYSIBM.DSN_PROFILE_TABLE.
In data sharing the START and STOP PROFILE commands have member scope and affect only the data sharing member on which they are issued. You therefore need to issue these commands on each data sharing member for which you want profile monitoring started or stopped.
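For example, in our two-member data sharing group, profile monitoring is started on both members by entering the command once per member. A minimal sketch, using the command prefixes of our members D0Z1 and D0Z2:
-D0Z1 START PROFILE
-D0Z2 START PROFILE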
In our environment we use the administrative task scheduler to issue the START PROFILE command at DB2 startup time. In Appendix A, “DB2 administrative task scheduler” on page 483, we describe the administrative task scheduler (ADMT) setup to trigger batch jobs, DB2 commands, and autonomic statistics monitoring.
Stopping profiles
You stop profiles by issuing the STOP PROFILE command:
-STOP PROFILE
Monitoring for individual profiles can be stopped by updating the PROFILE_ENABLED column in SYSIBM.DSN_PROFILE_TABLE to N and issuing the START PROFILE command again.
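For example, the following SQL sketch disables only the profile with PROFILEID = 1; the change takes effect the next time the START PROFILE command is issued:
UPDATE SYSIBM.DSN_PROFILE_TABLE
SET PROFILE_ENABLED = 'N'
WHERE PROFILEID = 1;
COMMIT;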
Profile history tables
During START PROFILE command execution DB2 considers DSN_PROFILE_TABLE rows with the PROFILE_ENABLED column set to Y, together with their corresponding DSN_PROFILE_ATTRIBUTES rows, for profile activation. Before DB2 starts an individual profile it uses the information found in the profile and profile attributes tables to perform profile validation, and it externalizes the profile and profile attributes information, together with profile status information, into the following corresponding profile history and profile attributes history tables.
SYSIBM.DSN_PROFILE_HISTORY
SYSIBM.DSN_PROFILE_ATTRIBUTES_HISTORY
DSN_PROFILE_HISTORY table
During profile activation DB2 validates each profile to be started and documents its activation status by inserting one row into table SYSIBM.DSN_PROFILE_HISTORY. As shown in Figure 4-74 on page 185 the DSN_PROFILE_HISTORY table consists of column information of the DSN_PROFILE_TABLE (except for the REMARKS column) plus a STATUS column to provide information about the profile activation status.
AUTHID 1 VARCHAR 128
PLANNAME 2 VARCHAR 24
COLLID 3 VARCHAR 128
PKGNAME 4 VARCHAR 128
LOCATION 5 VARCHAR 254
PROFILEID 6 INTEGER 4
PROFILE_TIMESTAMP 7 TIMESTMP 10
PROFILE_ENABLED 8 CHAR 1
GROUP_MEMBER 9 VARCHAR 24
STATUS 10 VARCHAR 254
ROLE 11 VARCHAR 128
PRDID 12 CHAR 8
CLIENT_APPLNAME 13 VARCHAR 255
CLIENT_USERID 14 VARCHAR 255
CLIENT_WRKSTNNAME 15 VARCHAR 255
Figure 4-74 DSN_PROFILE_HISTORY table
The STATUS column contains one of the following values:
REJECTED - DUPLICATED SCOPE SPECIFIED
REJECTED - INVALID LOCATION SPECIFIED
REJECTED - INVALID SCOPE SPECIFIED
REJECTED - NO VALID RECORD FOUND IN ATTRIBUTE TABLE
REJECTED - INVALID SCOPE SPECIFIED. SYSTEM LEVEL MONITORING SCOPE CAN BE SPECIFIED ONLY IN NFM
REJECTED - INVALID SCOPE SPECIFIED. FOR SYSTEM LEVEL MONITORING, ONLY IP ADDR, PRDID, ROLE AND/OR AUTHID,COLLECTION ID AND/OR PACKAGE NAME CAN BE SPECIFIED
ACCEPTED - DOMAIN NAME IS RESOLVED INTO IP ADDRESS
ACCEPTED
DSN_PROFILE_ATTRIBUTES_HISTORY table
The profile activation that we describe in “DSN_PROFILE_HISTORY table” on page 184 also triggers profile attribute validation.
During START PROFILE execution DB2 externalizes the attribute status of each profile attribute involved by inserting corresponding rows into the profile attributes history table (SYSIBM.DSN_PROFILE_ATTRIBUTES_HISTORY).
As shown in Figure 4-75 the DSN_PROFILE_ATTRIBUTES_HISTORY table consists of the columns of the DSN_PROFILE_ATTRIBUTES table (except for the REMARKS column) plus a STATUS column that provides information about the profile attribute activation status.
PROFILEID 1 INTEGER 4
KEYWORDS 2 VARCHAR 128
ATTRIBUTE1 3 VARCHAR 1024
ATTRIBUTE2 4 INTEGER 4
ATTRIBUTE3 5 FLOAT 8
ATTRIBUTE_TIMESTAM 6 TIMESTMP 10
STATUS 7 VARCHAR 254
Figure 4-75 DSN_PROFILE_ATTRIBUTES_HISTORY table
The STATUS column indicates whether the profile attribute was accepted and, if it was rejected, contains information about the reason for the rejection.
Verify profile activation status
Each time DB2 attempts to start a profile a row is inserted into the DSN_PROFILE_HISTORY table. After we issued the START PROFILE command we ran the query shown in Example 4-38 to verify the status of the profile that we created for active thread monitoring.
Example 4-38 Verify DSN_PROFILE_TABLE status
SELECT PROFILEID,PROFILE_TIMESTAMP,STATUS
FROM "SYSIBM"."DSN_PROFILE_HISTORY"
WHERE PROFILEID = 1
ORDER BY PROFILE_TIMESTAMP DESC
FETCH FIRST ROW ONLY
---------+---------+---------+---------+---------+---------+-
PROFILEID PROFILE_TIMESTAMP STATUS
---------+---------+---------+---------+---------+---------+-
1 2012-10-25-14.18.04.615794 ACCEPTED BY D0Z2
To verify the profile activation status of the attributes that we defined for the profile we ran the query shown in Example 4-39.
Example 4-39 Verify DSN_PROFILE_ATTRIBUTES status
---------+---------+---------+---------+---------+---------+--------
SELECT
SUBSTR(KEYWORDS,1,14) AS KEYWORDS
,SUBSTR(ATTRIBUTE1,1,20) AS ATTRIBUTE1
,ATTRIBUTE2
,STATUS
FROM "SYSIBM"."DSN_PROFILE_ATTRIBUTES_HISTORY"
WHERE PROFILEID = 1
ORDER BY ATTRIBUTE_TIMESTAMP DESC
FETCH FIRST ROW ONLY
---------+---------+---------+---------+---------+---------+--------
KEYWORDS ATTRIBUTE1 ATTRIBUTE2 STATUS
---------+---------+---------+---------+---------+---------+--------
MONITOR THREAD WARNING_DIAGLEVEL2 7 ACCEPTED BY D0Z2
The status returned by the queries shown in Example 4-38 on page 186 and in Example 4-39 on page 186 confirms that the profile with PROFILEID = 1 was successfully activated on member D0Z2.
4.3.19 Configure thread monitoring for the DayTrader-EE6 application
In our application scenario we configure active thread monitoring for the DayTrader-EE6 application to determine the number of active threads the application consumes in DB2.
Creating the DayTrader-EE6 thread monitoring profile
We ran the SQL insert statement shown in Example 4-40 to populate table DSN_PROFILE_TABLE with the information required for thread monitoring. DayTrader-EE6 provides the clientApplicationInformation data source custom property value of TraderClientApplication when connecting to DB2. The PROFILE_ENABLED column is set to Y so that the profile is activated when a START PROFILE command is issued.
Example 4-40 DayTrader-EE6 DSN_PROFILE_TABLE row
INSERT INTO SYSIBM.DSN_PROFILE_TABLE ( "AUTHID" , "PLANNAME" ,
"COLLID" ,
"PKGNAME" , "LOCATION" , "PROFILEID" , "PROFILE_TIMESTAMP" ,
"PROFILE_ENABLED" , "GROUP_MEMBER" , "REMARKS" , "ROLE" , "PRDID" ,
"CLIENT_APPLNAME" , "CLIENT_USERID" , "CLIENT_WRKSTNNAME" )
VALUES (
NULL -- AUTHID
,NULL -- PLANNAME
,NULL -- COLLID
,NULL -- PKGNAME
,NULL -- LOCATION
, 1 -- PROFILEID
,CURRENT TIMESTAMP -- PROFILE_TIMESTAMP
,'Y' -- PROFILE_ENABLED
,'' -- GROUP_MEMBER
,'DayTrader profile' -- REMARKS
,NULL -- ROLE
,NULL -- PRDID
,'TraderClientApplication' -- CLIENT_APPLNAME
,NULL -- CLIENT_USERID
,NULL -- CLIENT_WRKSTNNAM
)
;
We then ran the SQL statement shown in Example 4-41 to insert a corresponding row into DSN_PROFILE_ATTRIBUTES table.
Example 4-41 DayTrader-EE6 DSN_PROFILE_ATTRIBUTES row
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
( "PROFILEID" , "KEYWORDS" , "ATTRIBUTE1" , "ATTRIBUTE2" ,
"ATTRIBUTE3" , "ATTRIBUTE_TIMESTAMP" , "REMARKS" )
VALUES (
1 -- PROFILEID
,'MONITOR THREADS' -- monitors number of concurrent active threads
,'WARNING_DIAGLEVEL2' -- DB2 issues DSNT772I when threshold exceeded
, 7 -- number of active threads allowed
, NULL -- ATTRIBUTE3
, CURRENT TIMESTAMP -- ATTRIBUTE_TIMESTAMP
,'DayTrader' -- REMARKS
);
The attributes shown in Example 4-41 on page 187 define active thread monitoring for PROFILEID 1, allowing for a maximum of seven active threads, causing DB2 to issue warning message DSNT772I in case this number of active threads is exceeded. Processing continues with no thread queuing or suspension.
Activating thread monitoring
We configured the administrative task scheduler (ADMT) to issue START PROFILE and DISPLAY PROFILE commands within DB2 subsystem startup processing. In Appendix A, “DB2 administrative task scheduler” on page 483, we describe the ADMT setup to trigger batch jobs, DB2 commands, and autonomic statistics monitoring.
The output of the ADMT initiated DB2 command processing is shown in Figure 4-76.
-START PROFILE
DSNT741I -D0Z1 DSNT1SDV START PROFILE IS COMPLETED.
DSN9022I -D0Z1 DSNT1STR 'START PROFILE' NORMAL COMPLETION
DSN
-DIS PROFILE
DSNT753I -D0Z2 DSNT1DSP DISPLAY PROFILE REPORT FOLLOWS:
STATUS = ON
TIMESTAMP = 2012-10-25-14.18.04.615794
PUSHOUTS = 0 OUT OF 10000
DISPLAY PROFILE REPORT COMPLETE.
DSN9022I -D0Z2 DSNT1DSP 'DISPLAY PROFILE' NORMAL COMPLETION
Figure 4-76 START PROFILE command
Verifying thread monitoring status
For each monitoring profile that is to be started (SYSIBM.DSN_PROFILE_TABLE, column PROFILE_ENABLED = Y) DB2 externalizes profile status information to the corresponding profile history tables. We ran the query shown in Example 4-42 to verify the status of the monitoring profile referred to in Example 4-40 on page 187 and Example 4-41 on page 187.
Example 4-42 verify DSN_PROFILE_TABLE status
SELECT PROFILEID,PROFILE_TIMESTAMP,STATUS,CLIENT_APPLNAME
FROM "SYSIBM"."DSN_PROFILE_HISTORY"
WHERE PROFILEID = 1
ORDER BY PROFILE_TIMESTAMP DESC
FETCH FIRST ROW ONLY
---------+---------+---------+---------+---------+---------+-
PROFILEID PROFILE_TIMESTAMP STATUS
---------+---------+---------+---------+---------+---------+-
1 2012-10-25-14.18.04.615794 ACCEPTED BY D0Z2
We then ran the query shown in Example 4-43 to verify the status of the monitoring attributes.
Example 4-43 Verify DSN_PROFILE_ATTRIBUTES status
---------+---------+---------+---------+---------+---------+--------
SELECT
SUBSTR(KEYWORDS,1,14) AS KEYWORDS
,SUBSTR(ATTRIBUTE1,1,20) AS ATTRIBUTE1
,ATTRIBUTE2
,STATUS
FROM "SYSIBM"."DSN_PROFILE_ATTRIBUTES_HISTORY"
WHERE PROFILEID = 1
ORDER BY ATTRIBUTE_TIMESTAMP DESC
FETCH FIRST ROW ONLY
---------+---------+---------+---------+---------+---------+--------
KEYWORDS ATTRIBUTE1 ATTRIBUTE2 STATUS
---------+---------+---------+---------+---------+---------+--------
MONITOR THREAD WARNING_DIAGLEVEL2 7 ACCEPTED BY D0Z2
The status returned by the queries shown in Example 4-42 on page 188 and Example 4-43 confirms that our thread monitoring profile was successfully activated.
DayTrader-EE6 active thread monitoring messages
When we ran the DayTrader-EE6 workload we observed the DB2 messages shown in Figure 4-77, issued by the DB2 master address space.
DSNT772I -D0Z1 DSNLQDIS A MONITOR PROFILE WARNING
CONDITION OCCURRED
1 TIME(S)
IN PROFILE ID=1
WITH PROFILE FILTERING SCOPE=CLIENT_APPLNAME
WITH REASON=00E30505
Figure 4-77 DSNT772I active thread monitoring warning message
4.3.20 Using profiles to keep track of DRDA client levels
In this scenario we show how to identify the DB2 clients that use certain levels of the DB2 client software. To perform this kind of monitoring we use profiles to track client threads that connect to DB2 for z/OS using a certain client level.
This monitoring function can assist you in identifying outdated levels of DB2 client software used in your environment. After you have identified the clients and remote locations, you can use profiles to issue warnings when such back-level clients are used and, finally, to disable the use of those client levels after a planned grace period has expired.
Use of DB2 Connect
Keeping track of DRDA client levels becomes especially important when your clients go through a DB2 Connect gateway to connect to DB2 for z/OS, because upgrading to a new version of DB2 for z/OS might force you to migrate your DB2 Connect gateways to the level that is supported by the new version of DB2. Using a new level of DB2 Connect in turn might trigger DB2 client migrations, as DB2 Connect itself only supports certain back-levels of clients.
With DB2 clients directly connecting to DB2 for z/OS servers this back-level consideration is no longer an issue, because during the DRDA handshake DB2 for z/OS and the DB2 client agree on the DRDA level to be used, which is the lower of the client and server DRDA levels. Supporting the lower DRDA level for application processing alleviates the requirement of having to upgrade your DB2 clients to the most recent level. However, keep in mind that upgrading your clients is recommended, especially if you want to take advantage of new functions provided by the DB2 for z/OS server.
DB2 for z/OS and DB2 Connect
Up to DB2 9 for z/OS the use of DB2 Connect was required in some situations due to capacity limits of those versions of DB2. For instance, due to virtual storage constraints a DB2 9 for z/OS server was only able to support a limited number of database access threads (DBATs).
DB2 10 for z/OS relieves this and many other constraints, which enables DB2 for z/OS to support a number of database access threads sufficient to replace existing DB2 Connect functionality with DB2 clients that directly connect to the DB2 for z/OS server. An illustration of that architecture is shown in Figure 4-78.
Figure 4-78 DB2 Client configuration to directly access DB2 for z/OS
Figure 4-78 shows Java clients directly connecting to DB2 for z/OS using JDBC type 4 connections while the DB2 Connect infrastructure still is in place. This approach allows for a staged migration of DB2 clients in which DB2 client access is redirected from using DB2 Connect to DB2 direct access by updating the DB2 client configuration as illustrated in Figure 4-79 on page 191.
Changing the DB2 client configuration in that situation enables you to make use of new DB2 10 for z/OS configuration options. For instance, you can perform online changes to dynamically activate DB2 location aliases, allowing you to direct workloads to the data sharing group, to a subset of data sharing members, or to a single data sharing member (see the command sketch after Figure 4-79).
Figure 4-79 DB2 client configuration for DB2 direct access
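As an example of such an online change, DB2 10 location aliases can be managed dynamically using the MODIFY DDF command. In the following sketch the alias name DB0ZWAS and port 39001 are hypothetical; the commands add and start an alias on member D0Z1, so that clients connecting through the alias are directed to the members on which the alias is started:
-D0Z1 MODIFY DDF ALIAS(DB0ZWAS) ADD
-D0Z1 MODIFY DDF ALIAS(DB0ZWAS) PORT(39001)
-D0Z1 MODIFY DDF ALIAS(DB0ZWAS) START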
Controlling database access threads
Because DB2 10 for z/OS is now able to serve large numbers of DB2 clients directly connected to the DB2 server, new monitoring functionality has been introduced to estimate the resource usage of DRDA applications and to avoid situations in which DRDA clients monopolize DB2 server resources.
Identify DRDA client levels
The product identifier (PRDID) is the product specific ID of the relational database product accessing the server; it represents the product ID of the DB2 client (also referred to as the application requester).
The DB2 client product ID has the format PPPVVRRM where
PPP is the product identifier. Possible values are
 – DSN - DB2 for z/OS
 – ARI - DB2 for VSE and VM
 – SQL - DB2 for Linux, UNIX, and Windows
 – JCC - IBM Data Server Driver for JDBC and SQLJ
 – QSQ - DB2 for IBM eServer iSeries®
VV is the version number
RR is the release number
M is the modification level
Identifying DRDA PRDIDs used in your system
You can use one of the following options to keep track of the DRDA PRDIDs used by your remote DB2 clients.
DB2 command DISPLAY LOCATION
DB2 command DISPLAY LOCATION returns the PRDID of DRDA clients connected to your system. The command output provided in Figure 4-80 shows two remote locations currently connected to member D0Z1: one with PRDID JCC04130 (JDBC driver version 4, release 13, modification level 0) and the other with PRDID SQL10010 (DB2 LUW version 10, release 01, modification level 0).
-dis location
DSNL200I -D0Z1 DISPLAY LOCATION REPORT FOLLOWS-
LOCATION PRDID T ATT CONNS
::9.12.6.9 JCC04130 S 1
::9.145.139.205 SQL10010 S 1
DISPLAY LOCATION REPORT COMPLETE
Figure 4-80 DISPLAY LOCATION with PRDID information
Field QLSTPRID of the statistics trace record (IFCID 0001)
Field QLACPRID of the accounting trace record (IFCID 0003)
You can use DB2 accounting reports to determine the product IDs used by your distributed clients. For convenience we ran a query against the PDB that we discuss in 4.4, “Tivoli OMEGAMON XE for DB2 Performance Expert for z/OS” on page 201 to obtain the different distributed product IDs used in our environment. The query result is shown in Figure 4-81.
SELECT
COUNT(*) AS NO
, SUBSTR(REQ_LOCATION ,01,15) AS REQ_LOCATION
, SUBSTR(CLIENT_TRANSACTION ,01,15) AS CLIENT_TRANSACTION
, REMOTE_PRODUCT_ID
FROM DB2PMSACCT_DDF
GROUP BY
REQ_LOCATION
, CLIENT_TRANSACTION
, REMOTE_PRODUCT_ID
---------+---------+---------+---------+---------+---------+---------+-
NO REQ_LOCATION CLIENT_TRANSACTION REMOTE_PRODUCT_ID
---------+---------+---------+---------+---------+---------+---------+-
11 ::9.12.4.142 TraderClientApp JCC03640
2 ::9.12.6.9 db2jcc_applicat JCC03630
2 ::9.12.6.9 db2jcc_applicat JCC03640
15 ::9.12.6.9 TraderClientApp JCC03640
11 ::9.12.6.9 TraderClientApp JCC03630
1 ::9.30.28.118 db2jcc_applicat JCC04130
DSNE610I NUMBER OF ROWS DISPLAYED IS 6
Figure 4-81 Use PDB to query PRDIDs
Activate PRDID based thread monitoring
In our scenario we illustrate how to use profiles to monitor DB2 clients that use a certain DB2 client level. The profile table changes we performed for this kind of monitoring are shown in Example 4-44.
Example 4-44 Profile table changes for PRDID monitoring
-- --------------------------------------------------------------
-- DSN_PROFILE_TABLE
-- --------------------------------------------------------------
INSERT INTO SYSIBM.DSN_PROFILE_TABLE ( "AUTHID" , "PLANNAME" ,
"COLLID" ,
"PKGNAME" , "LOCATION" , "PROFILEID" , "PROFILE_TIMESTAMP" ,
"PROFILE_ENABLED" , "GROUP_MEMBER" , "REMARKS" , "ROLE" , "PRDID" ,
"CLIENT_APPLNAME" , "CLIENT_USERID" , "CLIENT_WRKSTNNAME" )
VALUES (
NULL -- AUTHID
,NULL -- PLANNAME
,NULL -- COLLID
,NULL -- PKGNAME
,NULL -- LOCATION
, 4 -- UNIQUE PROFILEID
,CURRENT TIMESTAMP -- PROFILE_TIMESTAMP
,'Y' -- PROFILE_ENABLED
,'' -- GROUP_MEMBER
,'THREAD MONITORING PRDID' -- REMARKS
,NULL -- ROLE
,'SQL10010' -- PRDID
,NULL -- CLIENT_APPLNAME
,NULL -- CLIENT_USERID
,NULL -- CLIENT_WRKSTNNAM
)
;
-- --------------------------------------------------------------
-- DSN_PROFILE_ATTRIBUTES
-- --------------------------------------------------------------
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
( "PROFILEID" , "KEYWORDS" , "ATTRIBUTE1" , "ATTRIBUTE2" ,
"ATTRIBUTE3" , "ATTRIBUTE_TIMESTAMP" , "REMARKS" )
VALUES (
4 -- PROFILEID
,'MONITOR THREADS' -- monitors number of concurrent active threads
,'WARNING_DIAGLEVEL2' -- DB2 issues DSNT772I when threshold exceeded
, 1 -- NUMBER OF ACTIVE THREADS ALLOWED
, NULL -- ATTRIBUTE3
, CURRENT TIMESTAMP -- ATTRIBUTE_TIMESTAMP
,'PRDID' -- REMARKS
);
In Example 4-44 we configure DB2 profile monitoring to issue warning message DSNT772I when the number of threads using the DB2 client level indicated by product ID SQL10010 exceeds the threshold (a maximum of 1 in our example, chosen for illustration). The application itself continues processing, because we configured the profile attribute to issue a warning when the threshold is exceeded. If we wanted the application to receive a negative SQLCODE, we would have set profile attribute ATTRIBUTE1 to the value EXCEPTION.
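As a sketch of that change, the following SQL updates the profile attribute created in Example 4-44 from warning to exception processing; the change becomes effective after the profile is started again:
UPDATE SYSIBM.DSN_PROFILE_ATTRIBUTES
SET ATTRIBUTE1 = 'EXCEPTION'
WHERE PROFILEID = 4
AND KEYWORDS = 'MONITOR THREADS';
COMMIT;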
We then created multiple DB2 connections using DB2 clients of product ID SQL10010 to cause DB2 to issue message DSNT772I. The message that we received is shown in Figure 4-82.
DB2 reason code 00E30505 indicates that a warning occurred because the number of concurrent active threads exceeded the warning setting for the MONITOR THREADS keyword in a monitor profile for the PRDID filtering scope.
DSNT772I -D0Z1 DSNLTACC A MONITOR PROFILE WARNING 421
CONDITION OCCURRED
1 TIME(S)
IN PROFILE ID=4
WITH PROFILE FILTERING SCOPE=PRDID
WITH REASON=00E30505
Figure 4-82 DSNT772I PRDID monitoring
4.3.21 Using profiles to disable idle thread timeout at application level
We explain the subsystem-wide setting for idle thread timeout in “IDLE THREAD TIMEOUT field (IDTHTOIN)” on page 139. IDTHTOIN controls the idle thread timeout interval at subsystem level, which affects all database access threads (DBATs) served by the subsystem or data sharing member. In case an application misbehaves (for instance, it holds locks because of missing commit processing, or declared temporary tables are not explicitly dropped at the end of the application), IDTHTOIN might need to be set to 0 to keep your production up and running. Setting the parameter to 0 disables idle thread timeout processing for the entire subsystem or data sharing member, affecting not only the misbehaving application.
You can use profiles to control IDTHTOIN processing at application level which gives you the option to disable idle thread timeout processing just for the application you have to disable timeout processing for. The subsystem wide setting for IDTHTOIN still applies to all the DBATs not qualifying for idle thread timeout profile processing.
For instance, to disable idle thread timeout processing for the client application name NonCommittingProgram, you would run the SQL insert statements shown in Example 4-45 and subsequently issue the command shown in “Starting profiles” on page 184 to activate the profile. In this example the misbehaving application set its clientApplicationInformation to the value NonCommittingProgram.
Example 4-45 Profile sample disable IDTHTOIN
-- SYSIBM.DSN_PROFILE_TABLE
INSERT INTO SYSIBM.DSN_PROFILE_TABLE ( "AUTHID" , "PLANNAME" ,
"COLLID" ,
"PKGNAME" , "LOCATION" , "PROFILEID" , "PROFILE_TIMESTAMP" ,
"PROFILE_ENABLED" , "GROUP_MEMBER" , "REMARKS" , "ROLE" , "PRDID" ,
"CLIENT_APPLNAME" , "CLIENT_USERID" , "CLIENT_WRKSTNNAME" )
VALUES (
NULL -- AUTHID
,NULL -- PLANNAME
,NULL -- COLLID
,NULL -- PKGNAME
,NULL -- LOCATION
, 1 -- unique PROFILEID
,CURRENT TIMESTAMP -- PROFILE_TIMESTAMP
,'Y' -- PROFILE_ENABLED
,'' -- GROUP_MEMBER
,'Disable IDTHTOIN timeout' -- REMARKS
,NULL -- ROLE
,NULL -- PRDID
,'NonCommittingProgram' -- CLIENT_APPLNAME
,NULL -- CLIENT_USERID
,NULL -- CLIENT_WRKSTNNAM
)
;
-- ---------------------------------------------------------------------
-- SYSIBM.DSN_PROFILE_ATTRIBUTES table
-- ---------------------------------------------------------------------
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
( "PROFILEID" , "KEYWORDS" , "ATTRIBUTE1" , "ATTRIBUTE2" ,
"ATTRIBUTE3" , "ATTRIBUTE_TIMESTAMP" , "REMARKS" )
VALUES (
1 -- unique PROFILEID
,'MONITOR IDLE THREADS' -- IDTHTOIN monitoring
,'WARNING_DIAGLEVEL2' -- DB2 issues DSNT772I when threshold exceeded
, 0 -- IDTHTOIN = 0
, NULL -- ATTRIBUTE3
, CURRENT TIMESTAMP -- ATTRIBUTE_TIMESTAMP
,'disable IDTHTOIN' -- REMARKS
);
4.3.22 Using profiles for remote connection monitoring
You can use profiles to monitor the number of concurrent inbound DDF connections at the requesting location level. This monitoring function helps you to keep track of the number of remote connections used by a particular remote location. To use this monitoring function you need to provide the requestor’s IP address or domain name as filter criteria in the profile table LOCATION column. To activate connection monitoring for IP address 9.146.231.122 we created a profile using the SQL statements shown in Example 4-46 and issued a START PROFILE command.
Example 4-46 Sample of profile for remote connection monitoring
-- --------------------------------------------------------------
-- DSN_PROFILE_TABLE
-- --------------------------------------------------------------
INSERT INTO SYSIBM.DSN_PROFILE_TABLE ( "AUTHID" , "PLANNAME" ,
"COLLID" ,
"PKGNAME" , "LOCATION" , "PROFILEID" , "PROFILE_TIMESTAMP" ,
"PROFILE_ENABLED" , "GROUP_MEMBER" , "REMARKS" , "ROLE" , "PRDID" ,
"CLIENT_APPLNAME" , "CLIENT_USERID" , "CLIENT_WRKSTNNAME" )
VALUES (
NULL -- AUTHID
,NULL -- PLANNAME
,NULL -- COLLID
,NULL -- PKGNAME
,'9.146.231.122' -- LOCATION
, 5 -- UNIQUE PROFILEID
,CURRENT TIMESTAMP -- PROFILE_TIMESTAMP
,'Y' -- PROFILE_ENABLED
,'' -- GROUP_MEMBER
,'Connection Monitoring ' -- REMARKS
,NULL -- ROLE
,NULL -- PRDID
,NULL -- CLIENT_APPLNAME
,NULL -- CLIENT_USERID
,NULL -- CLIENT_WRKSTNNAM
)
;
-- --------------------------------------------------------------
-- DSN_PROFILE_ATTRIBUTES
-- --------------------------------------------------------------
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
( "PROFILEID" , "KEYWORDS" , "ATTRIBUTE1" , "ATTRIBUTE2" ,
"ATTRIBUTE3" , "ATTRIBUTE_TIMESTAMP" , "REMARKS" )
VALUES (
5 -- PROFILEID
,'MONITOR CONNECTIONS' -- monitors number of connections
,'WARNING_DIAGLEVEL2' -- DB2 issues DSNT772I when threshold exceeded
, 1 -- NUMBER OF CONNECTIONS ALLOWED
, NULL -- ATTRIBUTE3
, CURRENT TIMESTAMP -- ATTRIBUTE_TIMESTAMP
,'Connection monitoring' -- REMARKS
);
From the requesting location (in our test scenario this was a DB2 LUW client machine) we used multiple instances of the DB2 command line processor to create the desired number of DB2 connections. After the profile threshold entered in Example 4-46 on page 195 was exceeded we observed the DB2 message shown in Figure 4-83.
DB2 reason code 00E30503 indicates that a warning occurred because the number of connections exceeded the warning setting for the MONITOR CONNECTIONS keyword in a monitor profile for the LOCATION filtering scope.
DSNT772I -D0Z1 DSNLILNR A MONITOR PROFILE WARNING
CONDITION OCCURRED
1 TIME(S)
IN PROFILE ID=5
WITH PROFILE FILTERING SCOPE=IPADDR
WITH REASON=00E30503
Figure 4-83 Message DSNT772I for threshold exceeded
Additional information
For information about managing and implementing DB2 profile monitoring, refer to Chapter 45, “Using profiles to monitor and optimize performance”, of DB2 10 for z/OS Managing Performance, SC19-2978.
4.3.23 SYSPROC.ADMIN_DS_LIST stored procedure
The SYSPROC.ADMIN_DS_LIST stored procedure invokes the z/OS Catalog Search Interface (CSI) to obtain information about data sets contained in integrated catalog facility (ICF) catalogs. Data set entries are selected using a generic data set filter. The data set filter can be a fully qualified name, in which case one entry is returned, or a generic filter key containing wildcards, in which case multiple entries can be returned on a single invocation. The syntax for providing a generic filter key is similar to providing the dsname level information in the ISPF data set list utility.
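For example (the data set names below are hypothetical), the first of the following calls uses a fully qualified name and returns a single entry, while the second uses a generic filter key in which % matches a single character and * matches any number of characters within a qualifier:
CALL SYSPROC.ADMIN_DS_LIST('DB0ZD.DSNDBD.DBTR8074.TSACCEJB.I0001.A001', 'N', 'N', 99999, 'N', ?, ?);
CALL SYSPROC.ADMIN_DS_LIST('DB0ZD.DSNDBD.DBTR8074.*.I%%%%.A%%%', 'N', 'N', 99999, 'N', ?, ?);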
You can use the SYSPROC.ADMIN_DS_LIST stored procedure to perform regular monitoring of data set extents, DASD usage, and the VSAM high allocated and high used RBA (relative byte address). SYSPROC.ADMIN_DS_LIST returns the data set information through a result set cursor that it opens on the temporary table SYSIBM.DSLIST. A list of columns returned by the result set cursor is shown in Example 4-47.
Example 4-47 SYSPROC.ADMIN_DS_LIST result set
DSNAME 1 VARCHAR 44
CREATE_YEAR 2 INTEGER 4
CREATE_DAY 3 INTEGER 4
TYPE 4 INTEGER 4
VOLUME 5 CHAR 6
PRIMARY_EXTENT 6 INTEGER 4
SECONDARY_EXTENT 7 INTEGER 4
MEASUREMENT_UNIT 8 CHAR 9
EXTENTS_IN_USE 9 INTEGER 4
DASD_USAGE 10 CHAR 8
HARBA 11 CHAR 6
HURBA 12 CHAR 6
ERRMSG 13 VARCHAR 256
A sample on how to invoke the stored procedure to retrieve data set related information for table and index space related VSAM LDS data sets for database DBTR8074 is provided in Example 4-48.
Example 4-48 SYSPROC.ADMIN_DS_LIST stored procedure invocation
CALL SYSPROC.ADMIN_DS_LIST('DB0ZD.DSNDBD.DBTR8074.*.I%%%%.A%%%', 'N', 'N', 99999,'N',?,?)
The DASD usage (DASD_USAGE), high used RBA (HURBA), and high allocated RBA (HARBA) values are returned as binary character strings, which is not useful when it comes to performing computations on this information. For instance, you might want to subtract the high used RBA from the high allocated RBA to calculate the real DASD usage in bytes, or to determine table or index space over- or under-allocation. To cast the binary character string information to a big integer value we use the DB2 UNIX System Services command line processor to run the SQL shown in Example 4-49.
Example 4-49 SYSPROC.ADMIN_DS_LIST cast to BIGINT
UPDATE COMMAND OPTIONS using c OFF ; 1
CONNECT TO localhost:39000/DB0Z USER DB2R3 USING <password>; 2
--
CALL SYSPROC.ADMIN_DS_LIST( 3
'DB0ZD.DSNDBD.DBTR8074.*.I%%%%.A%%%', 'N',
'N', 99999,'N',?,?);
--
SELECT 4
DSNAME
, db2r3.bigint(DASD_USAGE) AS DASD_USAGE 5
, db2r3.bigint(HARBA) AS HARBA 5
, db2r3.bigint(HARBA)
- db2r3.bigint(HURBA) AS HARBA_HURBA 5
, CREATE_YEAR , CREATE_DAY , TYPE , VOLUME , PRIMARY_EXTENT
, SECONDARY_EXTENT , MEASUREMENT_UNIT , EXTENTS_IN_USE
FROM SYSIBM.DSLIST ;
--
terminate;
The SQL shown in Example 4-49 on page 197 performs the following processing steps:
1. SYSPROC.ADMIN_DS_LIST stores its result in the temporary table SYSIBM.DSLIST. The temporary table is dropped at commit. The update command in Example 4-49 on page 197 deactivates auto commit to make the temporary table available for processing across the current commit scope.
2. Next we connect to DB2 using the data sharing group IP address, the SQL port and the DB2 location name.
3. We then call the SYSPROC.ADMIN_DS_LIST stored procedure. We ignore the procedure result set, because we query the temporary table directly in the next step.
4. We query the SYSIBM.DSLIST temporary table that was created and populated by the stored procedure.
5. In the SQL select list we use the BIGINT user defined scalar function (scalar UDF) to cast the binary character values to BIGINT, which enables us to use SQL to calculate the difference between the high allocated RBA and the high used RBA. This calculation determines the amount of table or index space over- or under-allocation.
We provided the program source and DDL for implementing and defining the DB2R3.BIGINT scalar UDF in Appendix G, “External user-defined functions” on page 563.
4.3.24 DB2 real time statistics
DB2 RTS provides another powerful tool that you can use in your daily DB2 object maintenance strategy. DB2 provides the following RTS tables that you can query:
SYSIBM.SYSTABLESPACESTATS - RTS for table spaces and partitions
SYSIBM.SYSINDEXSPACESTATS - RTS for index spaces and partitions
You can query DB2 RTS to obtain the following information at table space, index space, and partition level.
SQL DELETE, INSERT and UPDATE frequency since the last LOAD, RUNSTATS or COPY utility. You can use this information to determine how frequently table spaces and indexes are accessed for DELETE, INSERT and UPDATE DML operations.
Number of active pages
Number of allocated pages
Number of data set extents
Whether you should run the REORG, RUNSTATS or COPY utility.
Total number of rows stored in the table space
Total number of index entries in the index space
Size of data occupied by rows. You can compare this information with the number of active pages to review page usage efficiency.
Type of the disk (HDD or SSD) the table or index space VSAM data set resides on
High performance list prefetch facility capability indicator of the disk the VSAM data set resides on
Number of index levels in the index tree
Number of pages containing pseudo deleted index entries
The date when the index was last used for SELECT, FETCH, searched UPDATE, searched DELETE, or used to enforce referential integrity constraints. This information can be useful to determine unused indexes (see the sample query after this list).
The number of times the index was used for SELECT, FETCH, searched UPDATE, searched DELETE, or to enforce referential integrity constraints since the object was created.
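As a sketch of the unused-index use case, the following query (assuming our DayTrader database DBTR8074) lists index spaces whose LASTUSED date lies more than 90 days in the past:
SELECT DBNAME, CREATOR, NAME, LASTUSED
FROM SYSIBM.SYSINDEXSPACESTATS
WHERE DBNAME = 'DBTR8074'
AND LASTUSED < CURRENT DATE - 90 DAYS
ORDER BY LASTUSED;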
RTS snapshot tables
In our workload scenario we take RTS snapshots to determine the number of DELETE, INSERT, and UPDATE statements. For index spaces we use RTS to identify unused indexes. We use the SQL statements shown in Example 4-50 to create our RTS shadow table used to store the RTS snapshots taken before and after workload execution.
Example 4-50 Create RTS snapshot table
CREATE TABLE TABLESPACESTATS LIKE SYSIBM.SYSTABLESPACESTATS;
COMMIT;
ALTER TABLE TABLESPACESTATS ADD COLUMN SNAPSHOTTS TIMESTAMP;
COMMIT;
CREATE TABLE INDEXSPACESTATS LIKE SYSIBM.SYSINDEXSPACESTATS;
COMMIT;
ALTER TABLE INDEXSPACESTATS ADD COLUMN SNAPSHOTTS TIMESTAMP;
COMMIT;
Populating the RTS snapshot tables
Before and after workload execution we stopped and started the DayTrader database table spaces to trigger the externalization of the RTS information (the command pattern is shown after Example 4-51) and ran the SQL statements shown in Example 4-51 to take the RTS snapshots.
Example 4-51 Take RTS snapshot information
-- ------------------------------
-- snapshot indexspace RTS
-- ------------------------------
INSERT INTO INDEXSPACESTATS
SELECT A.* , CURRENT TIMESTAMP
FROM SYSIBM.SYSINDEXSPACESTATS A
WHERE DBNAME = 'DBTR8074';
-- ------------------------------
-- snapshot tablespace RTS
-- ------------------------------
INSERT INTO TABLESPACESTATS
SELECT A.* , CURRENT TIMESTAMP
FROM SYSIBM.SYSTABLESPACESTATS A
WHERE DBNAME = 'DBTR8074';
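The externalization step that precedes each snapshot follows this command pattern, shown here as a sketch using the DayTrader database name:
-STOP DATABASE(DBTR8074) SPACENAM(*)
-START DATABASE(DBTR8074) SPACENAM(*)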
Querying the RTS snapshot tables
You can use SQL queries on the RTS snapshot tables to determine the number of table or index changes that occurred during workload testing. All you need to do is to take the actions described in “Populating the RTS snapshot tables” on page 199 and run SQL queries to determine the difference between the SQL DML counters that you stored in your RTS snapshot tables before and after workload execution.
In the SQL sample query shown in Example 4-52 we query the RTS table space snapshot table to determine the number of inserts, updates, and deletes performed on the DayTrader database during the workload execution that we performed between 2012-08-17-22.57.57.673670 and 2012-08-17-23.08.49.191718.
Example 4-52 Query RTS snapshot table
WITH
Q1 AS
( SELECT
DBNAME,NAME,PARTITION,
NACTIVE,NPAGES,REORGINSERTS,REORGDELETES,REORGUPDATES,
REORGMASSDELETE,TOTALROWS,SNAPSHOTTS
FROM TABLESPACESTATS
WHERE SNAPSHOTTS = '2012-08-17-22.57.57.673670'
AND DBNAME = 'DBTR8074'
ORDER BY DBNAME, NAME, SNAPSHOTTS),
Q2 AS
( SELECT
DBNAME,NAME,PARTITION,
NACTIVE,NPAGES,REORGINSERTS,REORGDELETES,REORGUPDATES,
REORGMASSDELETE,TOTALROWS,SNAPSHOTTS
FROM TABLESPACESTATS
WHERE SNAPSHOTTS = '2012-08-17-23.08.49.191718'
AND DBNAME = 'DBTR8074'
ORDER BY DBNAME, NAME, SNAPSHOTTS)
SELECT
SUBSTR(Q1.DBNAME,1,8) AS DBNAME,
SUBSTR(Q1.NAME ,1,8) AS NAME,
Q1.PARTITION,
Q2.TOTALROWS - Q1.TOTALROWS AS #ROWS,
Q2.REORGINSERTS - Q1.REORGINSERTS AS INSERTS ,
Q2.REORGDELETES - Q1.REORGDELETES AS DELETES ,
Q2.REORGUPDATES - Q1.REORGUPDATES AS UPDATES ,
Q2.REORGMASSDELETE - Q1.REORGMASSDELETE AS MASSDELETE
FROM Q1,Q2
WHERE
(Q1.DBNAME,Q1.NAME,Q1.PARTITION) = (Q2.DBNAME,Q2.NAME,Q2.PARTITION)
---------+---------+---------+---------+---------+---------+---------+---
DBNAME NAME PARTITION #ROWS INSERTS DELETES UPDATES MASSDELETE
---------+---------+---------+---------+---------+---------+---------+---
DBTR8074 TSACCEJB 0 11489 11489 0 133129 1
DBTR8074 TSACPREJ 0 11489 11489 0 21603 1
DBTR8074 TSHLDEJB 0 2476 24037 21561 21561 1
DBTR8074 TSKEYGEN 0 3 3 0 83 0
DBTR8074 TSORDEJB 0 45598 45598 0 158305 1
DBTR8074 TSQUOEJB 0 1000 1000 0 45580 1
DSNE610I NUMBER OF ROWS DISPLAYED IS 6
4.3.25 Using RTS to obtain COPY, REORG and RUNSTATS recommendations
Rather than querying the RTS tables yourself, we recommend using the SYSPROC.DSNACCOX stored procedure to obtain COPY, REORG, and RUNSTATS utility recommendations. DSNACCOX combines your input parameters and filters with built-in intelligence and with data from the DB2 catalog and RTS to determine whether table space or index space reorganizations, RUNSTATS, or COPY utilities are due for execution. For instance, DSNACCOX with DB2 10 for z/OS implements specific code to reduce the REORG requirements for table spaces residing on SSD volumes.
For information about DB2 REORG and SSD disks refer to the developerWorks article Solid-state drives: Changing the data world at http://www.ibm.com/developerworks/data/library/dmmag/DMMag_2011_Issue3/Storage/index.html.
As for RUNSTATS recommendations, we use the administrative task scheduler to make use of the autonomic statistics maintenance feature. Autonomic statistics maintenance internally calls the DSNACCOX procedure to obtain its RUNSTATS recommendations. See Appendix A, “DB2 administrative task scheduler” on page 483.
Additional information
For additional information about using the DSNACCOX stored procedure refer to Chapter 34, “Setting up your system for real-time statistics”, of DB2 10 for z/OS Managing Performance, SC19-2978.
4.4 Tivoli OMEGAMON XE for DB2 Performance Expert for z/OS
We use the OMEGAMON performance database (PDB) tables to store historical DB2 accounting and statistics information in DB2 tables. For information about how we create and load the PDB tables refer to Appendix D, “IBM OMEGAMON XE for DB2 performance database” on page 527.
4.4.1 Extract, transform, and load DB2 accounting FILE and statistics information
The processing flow shown in Figure 4-84 illustrates the major processing steps that are required to extract and load non-aggregated accounting and statistics information into OMPE PDB tables.
Figure 4-84 ETL accounting FILE and statistics data
1. The OMEGAMON XE Performance Expert batch utility executes ACCOUNTING and STATISTICS FILE commands to convert the information provided by SMF, GTF or OMEGAMON formatted DB2 trace data into the OMEGAMON XE Performance Expert FILE format output.
2. The DB2 load utility loads the OMEGAMON XE Performance Expert formatted FILE data set into the PDB accounting and statistics tables.
4.4.2 Extract, transform and load DB2 accounting SAVE information
The processing flow shown in Figure 4-85 on page 203 illustrates the major processing steps that are required to extract and load aggregated accounting information into the OMPE PDB accounting SAVE tables.
Figure 4-85 ETL accounting SAVE data
1. The OMEGAMON XE Performance Expert DB2PM batch utility executes an ACCOUNTING SAVE command to provide the requested one minute interval aggregated accounting data. The aggregated data is written to a VSAM KSDS data set.
2. OMPE utility DGOPMICO converts the information provided by the VSAM KSDS into a loadable sequential data set.
3. The DB2 load utility loads the OMEGAMON XE Performance Expert formatted accounting SAVE data set into its corresponding PDB accounting tables.
4.4.3 Querying the performance database tables
After the PDB accounting and statistics tables are in place and regularly populated you can create and run your own queries to profile and monitor your applications. In our scenario we use the DB2 clientApplicationInformation provided by the DayTrader application for application profiling and monitoring. To encapsulate query complexity we created an SQL table UDF to allow others to reuse the PDB query just by referencing the UDF as shown in Example 4-53.
Example 4-53 Using the PDB table UDF
---------+---------+---------+---------+---------+---------+---------+--
select * from table(accounting_profile('TraderClientApplication')) a;
---------+---------+---------+---------+---------+---------+---------+--
DATETIME CLIENT_TRANSACTION ELAPSED
---------+---------+---------+---------+---------+---------+---------+--
2012-08-14-22.48 TraderClientApplication 250
2012-08-14-22.47 TraderClientApplication 3242
4.4.4 Additional information
For additional information about implementing and using the OMPE performance database you might want to refer to the following manuals:
A Deep Blue View of DB2 Performance: IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS, SG24-7224
Chapter 5. Advanced reporting concepts, The Performance Database and the Performance Warehouse of IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS, SH12-6927
4.5 DB2 database and application design considerations
There are design and implementation standards to be considered for application and database resilience in a DB2 for z/OS environment. Such standards provide recommendations to help you with database design, to support you in implementing your backup, recovery, and reorganization strategy, and to provide application design and coding guidelines.
Design and implementation best practice recommendations are extensively discussed in the DB2 for z/OS documentation and in the DB2 for z/OS Best Practices web site. For further information, refer to the following documentation:
Achieving the Highest Levels of Parallel Sysplex Availability in a DB2 Environment, IBM REDP-3960.
DB2 10 for z/OS, Managing Performance, SC19-2978.
 – Part 4, Improving concurrency
 – Part 6, Programming applications for performance
 – Part 7, Maintaining data organization and statistics
 – Part 8, Managing query access paths
Database migration projects do not always apply best practice recommendations. This can lead to SLA violations because of elongated application response times, which often has a negative impact on application availability and scalability. To bring the most commonly observed issues to your attention, we provide the following list of database and application design pitfalls that can cause such undesired application behavior:
There is a tendency to accept the default configuration properties for WebSphere Application Server data sources, which can be extremely painful. Always review the data source custom properties to make sure the recommended settings in 5.11, “Configuring data source properties (webSphereDefaultIsolationLevel, currentPackagePath, pkList, and keepDynamic)” on page 288 are being used.
 – AutoCommit
The default setting switches autocommit to ON. For read-only SQL, this can cause high CPU on the DB2 server because of connection and DBAT management. This can happen especially when the application designer believes no unit of work is necessary. The attitude is “I don't care about the unit of work - all I want is the data. Why does the database impose a unit of work on me by asking me to choose a commit point?”
 – CursorHold
The default, again, is to turn this on. If the application fails to close a cursor, the connection cannot go inactive. This can inflate the number of threads required. Prior to DB2 10 the major concern is virtual storage; from DB2 10 onwards it is real storage.
 – Default isolation level
TRANSACTION_REPEATABLE_READ (that is, RS), with obvious consequences for concurrency: locking conflicts, timeouts, and deadlocks.
Where AutoCommit has been turned off, there can be a problem where read-only applications fail to commit. This can cause an increase in the number of threads, and can also make it difficult for utilities to execute concurrently with application workloads.
Some update transactions commit too infrequently. This usually happens where data volumes exceed the application design expectations (or the design lacked any) and can have detrimental effects, especially in data sharing, as it affects the global commit log sequence number (CLSN), making the lock avoidance mechanism ineffective. This also occurs where the Java object is represented in a hierarchy of tables, meaning a large number of locks might have to be taken.
As well as update transactions, there is the impact of Java batch, where the mechanism for calculating commit frequency either is not present or results in commits that are too infrequent. The biggest challenge is those Java batch applications that contain no restart logic and those where the batch process is an all-or-nothing process. The latter often occur where the data volumes exacerbate the duration of the batch window; such processes can linger on into the online day and cause severe problems.
Unrestricted use of KEEPDYNAMIC(YES) can prevent threads from going inactive.
Numerous tables are often stored in the same table space. In DB2 for z/OS each table should be stored in its own table space, because important tasks such as DB2 utilities, I/O tuning, and table space tuning can only be performed at table space level. For instance, you cannot back up or restore individual tables within the same table space. Instead, you run the COPY or RECOVER utility at table space level, which copies or recovers all tables in the table space. The same applies to the other utilities and to table space tuning parameters. Creating one table per table space enables you to perform such tasks at table level.
DB2 large object (LOB) auxiliary table spaces are often defined with LOG NO. This setting, while saving log space and improving performance for very large LOBs, might compromise data integrity in case of rollback or point-in-time recovery processing.
The number of indexes can run out of control. For instance, an application might depend on DB2 for Linux, UNIX, and Windows to detect and eliminate duplicate index specifications. The DDL, as a result, has a significant number of duplicate index specifications which are not eliminated by DB2 for z/OS. As well as impacting INSERT and UPDATE performance, this also increases PREPARE time, and can in some cases make access path selection less effective because of the number of choices available.
In some cases, the installation DDL allows no customization of buffer pool assignment. This means that, when installing into an environment that supports multiple applications, the applications can impact each other. The DBA has to find out about this by experience and then has to perform post-installation customization, which is likely to be undone when a fix pack is applied.
There is also a tendency toward almost random buffer pool assignment, meaning indexes, data pages, and LOBs are all staged in the same buffer pool, with inevitable consequences. As well as separating these out, the application designers should have some understanding of random versus sequential objects and assign them to separate buffer pools where appropriate.
Careless use of page sizes. Some Java application tables can have large row sizes, meaning the page size can be significant in terms of space usage and performance. This is perhaps most true of indexes which are susceptible to leaf page splits because of INSERT patterns, though the most frequent problem is with LOBs which are often assigned less-than-optimal page sizes. Another point about index page sizes is the sensitivity of the Optimizer to the number of index levels.
Blanket use of row-level locking, even where not needed. This can have a significant impact in data sharing because of page P-lock propagation.
No provision of appropriate RUNSTATS advice. In many cases, it is assumed the customer will collect the correct statistics. DBA standards vary, of course, which is the first drawback, but the main problem is that a lot of these applications depend on optimal access path selection to achieve maximum concurrency. Without the correct statistics, of course, this is difficult, and there are often key tables which require specific statistics to be collected. The most frequent complaint is that this information is missing and that educated guesses have to be made post-installation.
Most difficult to manage are the work-flow tables, which can grow and shrink rapidly and frequently. These often require statistics to be collected at the correct time, and then the DBA has to ensure these statistics are not overwritten. Solving this problem after migration requires a lot of DBA experience and application knowledge, and it is extremely difficult to identify and solve because one team on its own often lacks the required knowledge, skills, or experience.
Applications are often not designed with high transaction volumes in mind, and as such tend to perform poorly and are likely to have contention problems.
Some application designs cause the DB2 thread never to become inactive, which in turn causes the idle thread timeout to be triggered. In some cases this causes the application to fail. Until DB2 9 for z/OS the idle thread timeout (IDTHTOIN) DSNZPARM had to be disabled at subsystem level to avoid such application failures. In DB2 10 for z/OS, profiles can be used to disable IDTHTOIN at application level. For a discussion on this topic refer to 4.3.21, “Using profiles to disable idle thread timeout at application level” on page 194.
 

1 Maximum number of database access threads as determined by the “MAX REMOTE ACTIVE” value in the DSNTIPE installation panel.
2 dynamic link library