IBM OMEGAMON XE for DB2 performance database
OMEGAMON XE for DB2 provides a performance database (PDB), which you can use to store historical information in DB2 tables. These tables are useful for problem determination, application profiling, KPI monitoring, and capacity planning.
In this appendix, we introduce the OMEGAMON PDB, and outline how to create the PDB database, and extract, transform, and load (ETL) DB2 trace information into the PDB tables. We used this functionality to implement the activity that is described in 4.4, “Tivoli OMEGAMON XE for DB2 Performance Expert for z/OS” on page 201.
The appendix covers the following topics:
D.1, “Introduction”
D.2, “Creating the performance database”
D.3, “Extracting, transforming, and loading accounting and statistics data”
D.4, “Sample query for application profiling”
D.5, “Using the UDF for application profiling”
D.6, “Additional information”
D.1 Introduction
The PDB consists of a set of tables that you can populate with information from DB2 statistics, accounting, performance, locking, and audit traces. The population process is also referred to as extract, transform, and load (ETL). We provide an overview of the PDB ETL process in Figure D-1.
Figure D-1 OMEGAMON PDB ETL overview
As indicated in Figure D-1, ETL processes non-aggregated (FILE format) and aggregated (SAVE format) information.
Aggregated information
Several records are summarized by specific identifiers. In a report, each entry represents aggregated data. You run the SAVE subcommand to generate a VSAM data set that contains the aggregated data. After the data is saved, you use the Save-File utility to generate a DB2-loadable data set. As Figure D-1 shows, this format is supported only for statistics and accounting trace information. This option is useful if you must process large volumes of accounting information.
Non-aggregated information
For non-aggregated data, each record is listed in the order of occurrence. In a trace, each entry represents non-aggregated data. You run the FILE subcommand to generate a data set that contains non-aggregated data. This format is supported for all DB2 trace information. Analyzing non-aggregated accounting information can be useful if you want to use the report capabilities of SQL to drill down on thread-level accounting information. In our scenario, the volume of DB2 trace information is not expected to be large. We therefore decided to load the PDB tables with non-aggregated information.
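After the FILE accounting tables are loaded (see D.2 and D.3), ordinary SQL can surface thread-level outliers. The following query is a minimal sketch; it assumes that the FILE general table carries the CLASS1_ELAPSED and CLIENT_TRANSACTION columns that we also use against the SAVE tables in Example D-11, and that the tables are created with qualifier PDB:

-- List the 20 longest-running threads recorded in the FILE general table
SELECT CLIENT_TRANSACTION
     , DECIMAL(CLASS1_ELAPSED, 9, 2) AS ELAPSED
FROM PDB.DB2PMFACCT_GENERAL
ORDER BY CLASS1_ELAPSED DESC
FETCH FIRST 20 ROWS ONLY;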
With PDB ETL, you can process DB2 trace data of the following input formats:
System Measurement Facility (SMF) record types 100 (statistics), 101 (accounting), and 102 (performance and audit).
Generalized Trace Facility (GTF).
OMPE ISPF interface (collect report data).
Batch program FPEZCRD. For an example of how to run program FPEZCRD in batch, see the JCL sample that is provided in the RKO2SAMP library, member FPEZCRDJ.
Near term history sequential data sets.
In our DB2 environment, we processed DB2 traces that we collected through SMF and GTF.
D.1.1 Performance database structure
The PDB database design is provided by OMEGAMON and comes with a set of tables to store DB2 trace data of the following information categories:
Accounting
Audit
Exceptions
Locking
Record trace
Statistics
System parameters
For this book, we focused on using non-aggregated accounting and statistics information. If you need details about using the PDB for the other information categories, see Chapter 5, “Advanced reporting concepts. The Performance Database and the Performance Warehouse”, in IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS Reporting User's Guide, SH12-6927.
Accounting tables
Figure D-2 shows the accounting table categories that are provided by the performance database. PDB stores each data type in its own DB2 table.
Figure D-2 PDB structure accounting tables
OMPE provides two sets of accounting tables:
FILE accounting tables, which store detailed information so that you can use SQL to query accounting information on a thread level.
SAVE accounting tables, which store aggregated data so that you can use SQL to query accounting information that is summarized at aggregation interval boundaries (see the sample query after this list).
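For example, the following sketch summarizes committed work per aggregation interval from the SAVE general table. It reuses the INTERVAL_TIME, CLIENT_TRANSACTION, and COMMIT columns that appear later in Example D-11 and assumes qualifier PDB:

-- Committed units of work per aggregation interval and client transaction
SELECT SUBSTR(CHAR(INTERVAL_TIME), 1, 16) AS INTERVAL_START
     , CLIENT_TRANSACTION
     , SUM(COMMIT) AS COMMITS
FROM PDB.DB2PMSACCT_GENERAL
GROUP BY SUBSTR(CHAR(INTERVAL_TIME), 1, 16), CLIENT_TRANSACTION
ORDER BY 1, 2;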
FILE accounting tables
Each table type that is shown in Figure D-2 stores the following information:
General data: General accounting information (one row per thread)
Group buffer pool: For each thread, one row per group buffer pool that is being used
Package data: For each thread, one row per package that is being used
Buffer pool: For each thread, one row per buffer pool that is being used
Resource limit facility: One row per resource limit type that is encountered
SAVE accounting tables
Each table type that is shown in Figure D-2 on page 529 stores the following aggregated information:
General data: General accounting information, one row per aggregation interval
Group buffer pool: For each aggregation interval, one row per group buffer pool that is being used
Package data: For each aggregation interval, one row per package that is being used
Buffer pool: For each aggregation interval, one row per buffer pool that is being used
Accounting table DDL and load statements
OMPE provides sample create table DDL, load utility control statement templates, and table metadata descriptions in the RKO2SAMP library members that are shown in Table D-1 and Table D-2. We used these templates to create and load these accounting tables.
Table D-1 FILE accounting table DDL and load statements (all members in RKO2SAMP)

Table name           Type                Create table DDL  Load utility statements  Table metadata documentation
DB2PMFACCT_BUFFER    Buffer pool data    DGOACFBU          DGOALFBU                 DGOABFBU
DB2PMFACCT_GENERAL   General data        DGOACFGE          DGOALFGE                 DGOABFGE
DB2PMFACCT_GBUFFER   Group buffer pool   DGOACFGP          DGOALFGP                 DGOABFGP
DB2PMFACCT_PROGRAM   Package data        DGOACFPK          DGOALFPK                 DGOABFPK
DB2PMFACCT_DDF       DDF data            DGOACFDF          DGOALFDF                 DGOABFDF
Table D-2 SAVE accounting table DDL and load statements (all members in RKO2SAMP)

Table name           Type                Create table DDL  Load utility statements  Table metadata documentation
DB2PMSACCT_BUFFER    Buffer pool data    DGOACSBU          DGOALSBU                 DGOABSBU
DB2PMSACCT_GENERAL   General data        DGOACSGE          DGOALSGE                 DGOABSGE
DB2PMSACCT_GBUFFER   Group buffer pool   DGOACSGP          DGOALSGP                 DGOABSGP
DB2PMSACCT_PROGRAM   Package data        DGOACSPK          DGOALSPK                 DGOABSPK
DB2PMSACCT_DDF       DDF data            DGOACSDF          DGOALSDF                 DGOABSDF
Statistics tables DDL and load statements
Figure D-3 shows the structure of the statistics tables in the performance database. The PDB stores each data type in its own DB2 table.
Figure D-3 PDB structure statistics tables
In our environment, we generate loadable input records in the FILE data format. In that format, each table type that is shown in Figure D-3 stores the following information (a sample query follows this list):
General data: One row for each statistics delta record, containing data from IFCID 1 and IFCID 2. A delta record describes the activity between two consecutive statistics record pairs.
Group buffer pool data: One row per group buffer pool that is active at the start of the corresponding delta record.
DDF data: One row per remote location that is participating in distributed activity by using the DB2 private protocol, and one row for all remote locations that used DRDA.
Buffer pool data: One row per buffer pool that is active at the start of the corresponding delta record.
Buffer pool data set data: One row for each open data set that has an I/O event rate of at least one event per second during the reporting interval. To obtain that statistics trace information, you must activate statistics trace class 9.
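As a quick plausibility check after loading, getpage activity can be summarized per delta record interval. The following is a sketch only: we borrow the INTERVAL_TIME and BP_GETPAGES column names from the accounting tables in Example D-11, and the actual column names of the statistics buffer pool table may differ (check the DGOSBBUF metadata documentation):

-- Total getpage requests per statistics interval (column names assumed)
SELECT SUBSTR(CHAR(INTERVAL_TIME), 1, 16) AS INTERVAL_START
     , SUM(BP_GETPAGES) AS GETPAGES
FROM PDB.DB2PM_STAT_BUFFER
GROUP BY SUBSTR(CHAR(INTERVAL_TIME), 1, 16)
ORDER BY 1;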
OMEGAMON provides sample create table DDL, load utility control statement templates, and table metadata descriptions in the RKO2SAMP library members that are shown in Table D-3. We used these templates to create and load these statistics tables.
Table D-3 Statistics table DDL and load statements (all members in RKO2SAMP)

Table name           Type                     Create table DDL  Load utility statements  Table metadata documentation
DB2PM_STAT_GENERAL   General data             DGOSCGEN          DGOSLGEN                 DGOSBGEN
DB2PM_STAT_GBUFFER   Group buffer pool data   DGOSCGBP          DGOSLGBP                 DGOSBGBP
DB2PM_STAT_DDF       DDF data                 DGOSCDDF          DGOSLDDF                 DGOSBDDF
DB2PM_STAT_BUFFER    Buffer pool data         DGOSCBUF          DGOSLBUF                 DGOSBBUF
D.2 Creating the performance database
We used the create table DDL members of the RKO2SAMP library that are described in D.1.1, “Performance database structure” on page 529 to create the PDB accounting and statistics tables. To create the PDB, we performed the following activities:
Create a DB2 for z/OS database to store the PDB tables.
Customize PDB create table DDL.
Create PDB tables.
D.2.1 Creating a DB2 z/OS database
We ran the SQL shown in Example D-1 to create the DB2 for z/OS database that we used to create the PDB tables. In our PDB environment, table spaces use buffer pool BP1, and index spaces use BP2.
Example D-1 PDB create DB2 z/OS database
CREATE DATABASE PMPDB
BUFFERPOOL BP1
INDEXBP BP2
CCSID EBCDIC
STOGROUP SYSDEFLT;
D.2.2 Customizing the PDB create table DDL
The OMEGAMON-provided PDB create table DDL statements require customization because OMEGAMON provides no interface for specifying the PDB table qualifier, database, and table space names. In addition, the provided PDB database design includes no create table space DDL and no indexes to ensure uniqueness of data and to support query performance. To perform this customization, we performed the following tasks (a sample index definition follows the task list):
Generate a create table DDL data set that contains all DDL statements.
Modify a create table DDL to reflect the PDB database name and table qualifier.
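Because the sample DDL ships without index definitions, we added our own where uniqueness or query performance required them. The following is a minimal sketch, not OMEGAMON-supplied DDL: the index name is ours, and the column choice (INTERVAL_TIME and CLIENT_TRANSACTION, both used in Example D-11) is an assumption that you would adapt to your own queries:

-- Hypothetical index supporting interval/transaction lookups;
-- BP2 matches the INDEXBP of database PMPDB (Example D-1)
CREATE INDEX PDB.IXSACCGE
  ON PDB.DB2PMSACCT_GENERAL
     (INTERVAL_TIME, CLIENT_TRANSACTION)
  USING STOGROUP SYSDEFLT
  BUFFERPOOL BP2
  CLOSE YES;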
Generating a create table DDL data set
We ran the JCL that is shown in Example D-2 on page 533 to merge the accounting and statistics create DDL statements that are shown in Table D-1 on page 530, Table D-2 on page 530, and Table D-3 on page 531 into one data set. For application profiling, we run queries on the aggregated accounting information in the SAVE accounting tables that are described in “SAVE accounting tables” on page 530.
Example D-2 PDB generate create table DDL data set
//S1GEN EXEC PGM=IEBGENER
//SYSUT1 DD *
SET CURRENT SCHEMA = 'PDB';
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACFBU)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACFDF)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACFGE)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACFGP)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACFPK)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSBU)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSDF)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSGE)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSGP)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSPK)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSRF)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOSCBUF)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOSCDDF)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOSCGBP)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOSCGEN)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOSCSET)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWCSFP)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWC106)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWC201)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWC202)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWC230)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWC256)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCBND)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCBRD)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCCHG)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCCNT)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCDDL)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCDML)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCFAI)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCSQL)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCUTI)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
//SYSUT2 DD DISP=SHR,DSN=DB2R3.PM.CNTL($04DDLTB)
//SYSPRINT DD SYSOUT=*
//SYSIN DD DUMMY
Customizing a create table DDL
Next, we customize the create table DDL data set that we generated in Example D-2 on page 533. Notice that we set the current schema to control the table qualifier and that we inserted semicolons to separate the create table statements for SQL batch processing. We ran the ISPF edit command that is illustrated in Figure D-4 to modify the DDL so that the tables are created in the database that we created in D.2.1, “Creating a DB2 z/OS database” on page 532.
File Edit Edit_Settings Menu Utilities Compilers Test Help
-----------------------------------------------------------------------------
EDIT DB2R3.PM.CNTL($03DDL) - 01.00 Columns 0010 0072
Command ===> c 'IN DB2PM.' 'IN PMPDB.' ALL Scroll ===> CSR
****** ***************************** Top of Data *****************************
000001 SET CURRENT SCHEMA = 'PDB';
000002 --**Start of Specifications********************************************
000003 --* *
000004 --* MODULE-NAME = DGOSCBUF *
000005 --* DESCRIPTIVE-NAME = SQL for creating Statistics Buffer Pool Table *
Figure D-4 Customize a create table DDL
Creating table spaces
The create table DDL statements reference the following table spaces in the table space clause:
PMPDB.TSPAFBU
PMPDB.TSPAFDF
PMPDB.TSPAFGE
PMPDB.TSPAFGP
PMPDB.TSPAFPK
PMPDB.TSPASBU
PMPDB.TSPASDF
PMPDB.TSPASGE
PMPDB.TSPASGP
PMPDB.TSPASPK
PMPDB.TSPSBUF
PMPDB.TSPSDDF
PMPDB.TSPSGBP
PMPDB.TSPSGEN
PMPDB.TSPSSET
As these table spaces do not yet exist, we used the create table space DDL template that is shown in Example D-3 to create these table spaces. The template supports table space compression and uses the primary and secondary space quantity sliding scale feature to take advantage of autonomic space management.
Example D-3 Create table space template
CREATE TABLESPACE <tsname>
IN PMPDB
USING STOGROUP SYSDEFLT
PRIQTY -1 SECQTY -1
ERASE NO
FREEPAGE 0 PCTFREE 5
GBPCACHE CHANGED
TRACKMOD YES
LOGGED
SEGSIZE 64
BUFFERPOOL BP1
LOCKSIZE ANY
LOCKMAX SYSTEM
CLOSE YES
COMPRESS YES
CCSID EBCDIC
DEFINE YES
MAXROWS 255;
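For example, substituting TSPAFBU (the table space of the FILE accounting buffer pool table) for <tsname> yields the following statement; the remaining 14 table spaces are created the same way:

CREATE TABLESPACE TSPAFBU
IN PMPDB
USING STOGROUP SYSDEFLT
PRIQTY -1 SECQTY -1
ERASE NO
FREEPAGE 0 PCTFREE 5
GBPCACHE CHANGED
TRACKMOD YES
LOGGED
SEGSIZE 64
BUFFERPOOL BP1
LOCKSIZE ANY
LOCKMAX SYSTEM
CLOSE YES
COMPRESS YES
CCSID EBCDIC
DEFINE YES
MAXROWS 255;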
D.2.3 Creating the PDB accounting and statistics tables
At this point, the DB2 for z/OS database PMPDB and the required table spaces exist, and a data set with customized create table DDL statements has been generated. Next, we ran the batch JCL that is shown in Example D-4 to run the create table DDL statements that we customized in “Customizing a create table DDL” on page 534.
Example D-4 Batch JCL PDB accounting and statistics table creation
//S10TEP2 EXEC PGM=IKJEFT1B,DYNAMNBR=20,TIME=1440
//STEPLIB DD DISP=SHR,DSN=DB0ZT.SDSNEXIT
// DD DISP=SHR,DSN=DB0ZT.SDSNLOAD
// DD DISP=SHR,DSN=DB0ZT.RUNLIB.LOAD
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(D0ZG)
RUN PROGRAM(DSNTEP2) PLAN(DSNTEP10)
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD DISP=SHR,DSN=DB2R3.PM.CNTL($04DDLTB)
D.3 Extracting, transforming, and loading accounting and statistics data
Next, we extract, transform, and load (ETL) DB2 accounting and statistics trace information in to the PDB tables that we created in D.2.3, “Creating the PDB accounting and statistics tables” on page 535. The ETL process consists of the following processing steps:
1. Extract and transform DB2 trace data into an OMEGAMON Performance Expert (OMPE) FILE formatted data set.
2. Load the OMPE FILE formatted data set into DB2 tables.
3. Extract and transform DB2 trace data into an OMPE SAVE formatted data set.
4. Load the OMPE SAVE data into DB2 tables.
D.3.1 Extracting and transforming DB2 trace data into the FILE format
We ran the batch JCL that is shown in Example D-5 to extract and to transform SMF DB2 accounting and statistics data into the OMEGAMON XE for DB2 PE FILE format. We obtained the accounting and statistics data in sequential data sets that we later use as input for the DB2 LOAD utility to load the data into the PDB accounting and statistics tables.
Example D-5 OMPE extract and transform DB2 trace data into FILE format
//* --------------------------------------------------------------
//*DOC Extract and transform accounting and statistics trace data
//*DOC into Omegamon PE FILE format
//* --------------------------------------------------------------
//DB2PM1 EXEC PGM=DB2PM,REGION=0M
//STEPLIB DD DISP=SHR,DSN=<omhlq>.RKANMOD
//INPUTDD DD DISP=SHR,DSN=SMF.DUMP.G0033V00
//STFILDD1 DD DISP=(NEW,CATLG,DELETE),DSN=DB2R3.PM.STAT.FILE,
// SPACE=(CYL,(050,100),RLSE),
// UNIT=SYSDA,
// DATACLAS=COMP /*trigger DFSMS compression */
//ACFILDD1 DD DISP=(NEW,CATLG,DELETE),DSN=DB2R3.PM.ACCT.FILE,
// SPACE=(CYL,(050,100),RLSE),
// UNIT=SYSDA,
// DATACLAS=COMP /*trigger DFSMS compression */
//JOBSUMDD DD SYSOUT=A
//DPMLOG DD SYSOUT=A
//SYSOUT DD SYSOUT=A
//SYSIN DD *
GLOBAL
INCLUDE(SSID(DB1S)) TIMEZONE(-1)
STATISTICS
FILE DDNAME(STFILDD1)
ACCOUNTING
FILE DDNAME(ACFILDD1)
EXEC
D.3.2 Extracting and transforming DB2 trace data into the SAVE format
We ran the batch JCL that is shown in Example D-6 to extract and to transform SMF DB2 accounting data into the OMEGAMON XE for DB2 PE accounting SAVE format. We obtained the accounting data in a sequential data set, which we later use as input for the DB2 LOAD utility to load the data into the PDB SAVE accounting tables.
Example D-6 Extract and transform accounting SAVE format
//IDC01 EXEC PGM=IDCAMS
//* =============================================================
//* Def Cluster source: RKO2SAMP(DGOPJAMI)
//* =============================================================
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DELETE (DB2SMF.WASRB.ACCTLOAD) NONVSAM
SET MAXCC = 0
DELETE (DB2SMF.WASRB.ACCTSAVE ) CLUSTER
SET MAXCC = 0
DEFINE CLUSTER -
(NAME(DB2SMF.WASRB.ACCTSAVE ) -
CYL(100,40) -
BUFFERSPACE(40960) -
KEYS(255 0) -
REUSE -
RECORDSIZE(2800 4600) -
) -
DATA (CISZ(8192)) -
INDEX (CISZ(4096))
//SAVE02 EXEC PGM=DB2PM,REGION=0M
//STEPLIB DD DISP=SHR,DSN=OMEGA5RT.SC63.RKANMOD
//INPUTDD DD DISP=SHR,DSN=DB2SMF.WASRB.SC63.T4.SMFDB2
//ACSAVDD DD DISP=SHR,DSN=DB2SMF.WASRB.ACCTSAVE
//DPMLOG DD SYSOUT=A
//JOBSUMDD DD SYSOUT=A
//SYSOUT DD SYSOUT=A
//SYSIN DD *
GLOBAL
INCLUDE(SUBSYSTEMID(D0Z*))
TIMEZONE(+4)
ACCOUNTING
/* 1 minute interval */
REDUCE INTERVAL(1) BOUNDARY(60)
SAVE
EXEC
//CONV03 EXEC PGM=DGOPMICO,PARM=CONVERT,COND=(0,NE)
//STEPLIB DD DISP=SHR,DSN=OMEGA5RT.SC63.RKANMOD
//SYSPRINT DD SYSOUT=*
//INPUT DD DSN=DB2SMF.WASRB.ACCTSAVE,DISP=SHR
//OUTPUT DD DSN=DB2SMF.WASRB.ACCTLOAD,
// DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(200,10),RLSE),
// UNIT=SYSDA,
// DCB=(RECFM=VB,LRECL=9072,BLKSIZE=9076)
D.3.3 Preparing a load job
Loading data into DB2 tables requires that a DB2 load utility batch JCL be available for batch job submission. To prepare the required batch JCL, we performed the following tasks:
Consolidate and customize load utility control statements for loading PDB accounting and statistics data.
Provide batch JCL for DB2 load utility job submission.
Load utility control statements
We ran the batch JCL that is shown in Example D-7 and Example D-8 to merge the load utility control statements that we referenced in “Accounting table DDL and load statements” on page 530 and in “Statistics tables DDL and load statements” on page 531 into a consolidated data set.
Example D-7 Merge statistics and accounting file load utility control statements
//S1GEN EXEC PGM=IEBGENER
//SYSUT1 DD *
--OPTIONS PREVIEW
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOSLBUF)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOSLDDF)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOSLGBP)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOSLGEN)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOSLSET)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOALFBU)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOALFDF)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOALFGE)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOALFGP)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOALFPK)
//SYSUT2 DD DISP=SHR,DSN=DB2R3.PM.CNTL($08LOATB)
//SYSPRINT DD SYSOUT=*
//SYSIN DD DUMMY
Example D-8 Merge accounting save load utility control statements
//COPY1 EXEC PGM=IEBGENER,DYNAMNBR=20,TIME=1440
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD DUMMY
//SYSUT2 DD DISP=SHR,DSN=DB2R3.SG.PM.LOAD(LOADACCS)
//SYSUT1 DD *
-- OPTIONS PREVIEW
// DD DISP=SHR,DSN=<LOADHLQ>.RKO2SAMP(DGOALSBU)
// DD DISP=SHR,DSN=<LOADHLQ>.RKO2SAMP(DGOALSDF)
// DD DISP=SHR,DSN=<LOADHLQ>.RKO2SAMP(DGOALSGE)
// DD DISP=SHR,DSN=<LOADHLQ>.RKO2SAMP(DGOALSGP)
// DD DISP=SHR,DSN=<LOADHLQ>.RKO2SAMP(DGOALSPK)
// DD DISP=SHR,DSN=<LOADHLQ>.RKO2SAMP(DGOALSRF)
We then modified the generated data set to reflect the table qualifier and the appropriate input DD statement, and implemented the load utility options that we needed. Here are the load options that we used (a sketch of a customized control statement follows this list):
RESUME YES
LOG NO
KEEPDICTIONARY
NOCOPYPEND
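Put together, the customized control statement for one FILE accounting table might begin as follows. This is a sketch only: in practice, the INTO TABLE clause is followed by the WHEN condition and field specifications that the RKO2SAMP template (DGOALFGE in this case) generates, which we do not repeat here:

-- Sketch of a customized LOAD statement (field specifications from
-- the DGOALFGE template omitted; qualifier changed to PDB)
LOAD DATA INDDN(ACFILDD1)
  RESUME YES
  KEEPDICTIONARY
  LOG NO
  NOCOPYPEND
  INTO TABLE PDB.DB2PMFACCT_GENERAL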
D.3.4 Loading accounting and statistics tables
We used the DB2 load utility to load the data that is created in D.3.1, “Extracting and transforming DB2 trace data into the FILE format” on page 536 and in D.3.2, “Extracting and transforming DB2 trace data into the SAVE format” on page 537 into the PDB accounting and statistics tables.
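The load job followed the DSNUPROC convention of Example D-9. The following sketch assumes that the consolidated control statements in DB2R3.PM.CNTL($08LOATB) reference the ACFILDD1 and STFILDD1 DD names for the input data sets; adapt the DD names to what your templates generate:

//LOADPDB EXEC DSNUPROC,SYSTEM=DB1S,
//         LIB='SYS1.DSNDB1S.SDSNLOAD',
//         UID='PDBLOAD'
//*  FILE format input data sets created in Example D-5
//DSNUPROC.ACFILDD1 DD DISP=SHR,DSN=DB2R3.PM.ACCT.FILE
//DSNUPROC.STFILDD1 DD DISP=SHR,DSN=DB2R3.PM.STAT.FILE
//*  Consolidated load utility control statements from Example D-7
//DSNUPROC.SYSIN DD DISP=SHR,DSN=DB2R3.PM.CNTL($08LOATB)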
D.3.5 Maintaining PDB tables
Ensure that your DB2 installation regularly performs Image Copy, Runstats, and Reorg on the PDB tables to comply with your recovery requirements and to support good query performance.
Image copy
We ran the batch JCL that is shown in Example D-9 to perform image copy on PDB accounting and statistics tables.
Example D-9 Image copy batch JCL
//COPY EXEC DSNUPROC,SYSTEM=DB1S,
// LIB='SYS1.DSNDB1S.SDSNLOAD',
// UID='PDBCOPY'
//DSNUPROC.SYSIN DD *
--OPTIONS PREVIEW
TEMPLATE TPDB DSN DB1SIC.IC.&DB..&TS..D&DATE..T&TIME.
DATACLAS COMP
LISTDEF LPDB INCLUDE TABLE PDB.*
COPY LIST LPDB COPYDDN(TPDB) CHANGELIMIT(0) PARALLEL
Runstats
Because we configured the administrative scheduler to perform autonomic statistics maintenance on non-catalog table spaces, there was no need to plan any further Runstats activity.
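If autonomic statistics maintenance is not available at your site, a conventional Runstats job can follow the same LISTDEF pattern as Example D-9 and Example D-10. A sketch:

//RUNST EXEC DSNUPROC,SYSTEM=DB1S,
//        UID='PDBRUN'
//DSNUPROC.SYSIN DD *
LISTDEF LPDB INCLUDE TABLESPACES DATABASE PMPDB
RUNSTATS TABLESPACE LIST LPDB TABLE(ALL) INDEX(ALL)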
Reorg
We ran the batch JCL that is shown in Example D-10 to perform Reorg on PDB accounting and statistics tables.
Example D-10 Reorg batch JCL
//REORG1 EXEC DSNUPROC,SYSTEM=DB1S,
// UID='PDBREO'
//DSNUPROC.SYSIN DD *
--OPTIONS PREVIEW
LISTDEF LPDB INCLUDE TABLE PDB.*
TEMPLATE TCOPY DSN DB1SIC.IC.&DB..&TS..D&DATE..T&TIME.
DATACLAS COMP
TEMPLATE TSYSUT1 DSN(DB1SIC.&DB..&TS..&UTILID..SYSUT1)
DISP(NEW,DELETE,KEEP)
DATACLAS COMP
TEMPLATE TSORTOUT DSN(DB1SIC.&DB..&TS..&UTILID..SORTOUT)
DISP(NEW,DELETE,KEEP)
DATACLAS COMP
TEMPLATE TPUNCH DSN(DB1SIC.&DB..&TS..&UTILID..PUNCH )
DISP(NEW,DELETE,KEEP)
DATACLAS COMP
TEMPLATE TSYSREC DISP(NEW,DELETE,KEEP)
DSN(DB1SIC.&DB..&TS..&UTILID..SYSREC)
DATACLAS COMP
 
REORG TABLESPACE LIST LPDB
LOG NO
SHRLEVEL REFERENCE
SORTDATA
SORTDEVT SYSDA
SORTNUM 4
UNLDDN TSYSREC
WORKDDN(TSYSUT1,TSORTOUT)
STATISTICS
COPYDDN(TCOPY)
PUNCHDDN(TPUNCH)
D.4 Sample query for application profiling
We created the DB2 SQL table UDF that is shown in Example D-11 to provide an interface for querying the DB2PMSACCT_GENERAL and DB2PMSACCT_BUFFER PDB tables for application profiling. The UDF receives two input parameters and joins DB2 general and buffer pool accounting information. The result is filtered by the DB2 client application information and the connection type (RRS or DRDA) to provide profiling information for a particular clientApplicationInformation for JDBC type 2 (connection type RRS) or for JDBC type 4 (connection type DRDA) applications.
Example D-11 OMPE SQL table UDF
CREATE FUNCTION ACCOUNTING
(CLIENTAPPLICATION VARCHAR(128),
CONNTYPE CHAR(8) )
RETURNS TABLE (
"DateTime" VARCHAR(16)
, "ClientApplication" VARCHAR(40)
, "Elapsed" DECIMAL(9,2)
, "TotCPU" DECIMAL(9,2)
, "TotzIIP" DECIMAL(9,2)
, DB2CPU DECIMAL(9,2)
, "DB2zIIP" DECIMAL(9,2)
, "Commit" INTEGER
, SQL INTEGER
, "Locks" INTEGER
, "RowsFetched" INTEGER
, "RowsInserted" INTEGER
, "RowsUpdated" INTEGER
, "RowsDeleted" INTEGER
, "GetPage" INTEGER
,"AVG-Time" DECIMAL(15, 6)
,"AVG-CPU" DECIMAL(15, 6)
,"Time/SQL" DECIMAL(15, 6)
,"CPU/SQL" DECIMAL(15, 6)
,"AVG-SQL" DECIMAL(15, 6)
,"LOCK/Tran" DECIMAL(15, 6)
,"LOCK/SQL" DECIMAL(15, 6)
,"GETP/Tran" DECIMAL(15, 6)
,"GETP/SQL" DECIMAL(15, 6)
)
LANGUAGE SQL READS SQL DATA NO EXTERNAL ACTION
DETERMINISTIC
RETURN
WITH
Q1 AS
(SELECT
substr(char(INTERVAL_TIME),1,16 ) AS DATETIME
, CLIENT_TRANSACTION
, DECIMAL(CLASS1_ELAPSED,9,2 ) AS ELAPSED
, DECIMAL(CLASS1_CPU_NNESTED+CLASS1_CPU_STPROC+CLASS1_CPU_UDF
+CLASS1_IIP_CPU,9,2 ) AS CPU
, DECIMAL(CLASS1_IIP_CPU,9,2 ) AS ZIIP
, DECIMAL(CLASS2_CPU_NNESTED+CLASS2_CPU_STPROC+CLASS2_CPU_UDF
+CLASS2_IIP_CPU,9,2 ) AS DB2CPU
, DECIMAL(CLASS2_IIP_CPU,9,2 ) AS DB2ZIIP
, DECIMAL(COMMIT,9,2 ) AS COMMIT
, DECIMAL(SELECT+INSERT+UPDATE+DELETE+FETCH+MERGE,9,2) AS SQL
, DECIMAL(LOCK_REQ,9,2 ) AS LOCKS
, INTEGER(ROWS_FETCHED ) AS ROWS_FETCHED
, INTEGER(ROWS_INSERTED ) AS ROWS_INSERTED
, INTEGER(ROWS_UPDATED ) AS ROWS_UPDATED
, INTEGER(ROWS_DELETED ) AS ROWS_DELETED
FROM DB2PMSACCT_GENERAL
WHERE CONNECT_TYPE = ACCOUNTING.CONNTYPE
AND CLIENT_TRANSACTION = ACCOUNTING.CLIENTAPPLICATION
AND COMMIT > 0 ),
Q2 AS
(SELECT
substr(char(INTERVAL_TIME),1,16 ) AS DATETIME
, CLIENT_TRANSACTION
, decimal(SUM(BP_GETPAGES),9,2 ) AS GETPAGE
FROM DB2PMSACCT_BUFFER
WHERE CONNECT_TYPE = ACCOUNTING.CONNTYPE
AND CLIENT_TRANSACTION = ACCOUNTING.CLIENTAPPLICATION
GROUP BY substr(char(INTERVAL_TIME),1,16), CLIENT_TRANSACTION ),
Q3 AS
(SELECT Q1.*, Q2.GETPAGE FROM Q1, Q2 WHERE
(Q1.DATETIME,Q1.CLIENT_TRANSACTION) =
(Q2.DATETIME,Q2.CLIENT_TRANSACTION) AND Q1.SQL > 0),
Q4 AS
(SELECT Q3.*,
ELAPSED/COMMIT as "AVG-Time",
CPU/COMMIT as "AVG-CPU",
ELAPSED/SQL as "Time/SQL",
CPU/SQL as "CPU/SQL",
SQL/COMMIT as "AVG-SQL",
LOCKS/COMMIT as "LOCK/Tran",
LOCKS/SQL as "LOCK/SQL",
GETPAGE/COMMIT as "GETP/Tran",
GETPAGE/SQL as "GETP/SQL"
FROM Q3)
SELECT * FROM Q4
For each interval, the UDF returns the following information:
DateTime: Interval date and time
ClientApplication: Client application name
Elapsed: Total elapsed time
TotCPU: Total CPU time, including the time that was processed on a zIIP processor
TotzIIP: Total zIIP processor time
DB2CPU: DB2 part of the total CPU time
DB2zIIP: DB2 part of the zIIP processor time
Commit: Total number of commits
SQL: Total number of SQL SELECT, INSERT, UPDATE, DELETE, FETCH, and MERGE statements
Locks: Total number of lock requests
RowsFetched: Number of rows that were fetched
RowsInserted: Number of rows that were inserted
RowsUpdated: Number of rows that were updated
RowsDeleted: Number of rows that were deleted
GetPage: Number of getpage requests
AVG-Time: Average elapsed time
AVG-CPU: Average CPU time, including zIIP time
Time/SQL: Average elapsed time per SQL
CPU/SQL: Average CPU time per SQL
AVG-SQL: Average number of SQL per commit
LOCK/Tran: Average number of lock requests per commit
LOCK/SQL: Average number of locks per SQL
GETP/Tran: Average number of getpage requests per commit
GETP/SQL: Average number of getpage requests per SQL
D.5 Using the UDF for application profiling
We used the query that is shown in Example D-12 to start the UDF for JDBC type 2 (connection type RRS) application profiling.
Example D-12 Starting the UDF for JDBC type 2 (connection type RRS)
select * from
table(accounting('TraderClientApplication','RRS')) a
order by "DateTime" ;
---------+---------+---------+---------+---------+---------+---------+---------+
DateTime ClientApplication Elapsed To
---------+---------+---------+---------+---------+---------+---------+---------+
2012-08-14-22.39 TraderClientApplication 11.96
2012-08-14-22.41 TraderClientApplication 2417.46 3
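To profile a JDBC type 4 application instead, start the same UDF with connection type DRDA:

select * from
table(accounting('TraderClientApplication','DRDA')) a
order by "DateTime" ;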
For more information about how we used the UDF in our application scenario, see Chapter 8, “Monitoring WebSphere Application Server applications” on page 361.
D.6 Additional information
For more information about using the OMEGAMON Performance Expert PDB, see the following resources:
Chapter 5, “Advanced reporting concepts”, in IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS, Reporting User's Guide, SH12-6927
A Deep Blue View of DB2 Performance: IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS, SG24-7224