CHAPTER 10


Tracing SQL Execution

Tracing session activity is at the heart of most SQL performance tuning exercises. Oracle provides a rich set of tools to trace SQL activity. This chapter introduces the Oracle SQL trace facility and shows you how to set up SQL tracing in your environment. Oracle provides numerous “events” that help you perform various types of traces.

Although there are several tracing methods available, Oracle now recommends the DBMS_MONITOR package for most types of tracing. The chapter contains several recipes that explain how to use this package to generate traces. In addition, we show how to trace sessions by setting various Oracle events, something Oracle Support often asks you to do. You'll learn how to trace a single SQL statement, a session, or an entire instance, as well as how to trace parallel queries. There are recipes that show how to trace another user's session and how to use a trigger to start a session trace. You'll also learn how to trace the Oracle optimizer's execution path.

Oracle provides the TKPROF utility as well as the freely downloadable profiler named Oracle Trace Analyzer. This chapter shows how to use both of these profilers to analyze the raw trace files you generate.

10-1. Preparing Your Environment

Problem

You want to make sure your database is set up correctly for tracing SQL sessions.

Solution

You must do three things before you can start tracing SQL statements:

  1. Enable timed statistics collection.
  2. Specify a destination for the trace dump file.
  3. Adjust the trace dump file size.

You can enable the collection of timed statistics by setting the timed_statistics parameter to true. Check the current value of this parameter first:

SQL> sho parameter statistics
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------
statistics_level                     string      TYPICAL
timed_statistics                     boolean     TRUE
SQL>

If the value of the timed_statistics parameter is false, you set it to true with the following statement.

SQL> alter system set timed_statistics=true scope=both;
 
System altered.
 
SQL>

You can also set this parameter at the session level with the following statement:

SQL> alter session set timed_statistics=true;

You can find the location of the trace directory with the following command:

SQL> select name,value from v$diag_info
  2* where name='Diag Trace'
SQL> /
 
NAME                                              VALUE
-----------------------------    ----------------------------------------
 
Diag Trace                      c:\app\ora\diag\rdbms\orcl1\orcl1\trace
 
 
SQL>

The default value of the max_dump_file_size parameter is unlimited, as you can verify by issuing the following command:

SQL> sho parameter dump
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------
...
max_dump_file_size                   string      unlimited
...
SQL>

An unlimited dump file size means that the file can grow as large as the operating system permits.

How It Works

Before you can trace any SQL sessions, ensure that you’ve set the timed_statistics initialization parameter to true. If the value for this parameter is false, SQL tracing is disabled. Setting the timed_statistics parameter to true enables the database to collect statistics such as the CPU and elapsed times and store them in various dynamic performance tables. The default value of this parameter, starting with the Oracle 11.1.0.7.0 release, depends on the value of the initialization parameter statistics_level (default value is TYPICAL). If you set the statistics_level parameter to basic, the default value of the timed_statistics parameter is false. If you set statistics_level to the value typical or all, the default value of the timed_statistics parameter is true. The timed_statistics parameter is dynamic, meaning you don’t have to restart the database to turn it on—you can turn this parameter on for the entire database without a significant overhead. You can also turn the parameter on only for an individual session.
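
You can verify this dependency in your own session. In the following illustrative example (your output may differ), lowering statistics_level to basic implicitly turns timed statistics off:

SQL> alter session set statistics_level=basic;
 
Session altered.
 
SQL> sho parameter timed_statistics
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------
timed_statistics                     boolean     FALSE
SQL>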

When you trace a SQL session, Oracle generates a trace file that contains diagnostic data that’s very useful in troubleshooting SQL performance issues. Starting with Oracle Database 11g, the database stores all diagnostic files under a dedicated diagnostic directory that you specify through the diagnostic_dest initialization parameter. The structure of the diagnostic directory is as follows:

<diagnostic_dest>/diag/rdbms/<dbname>/<instance>

The diagnostic directory is called the ADR Home. If your database name is prod1 and the instance name is prod1 as well, then the ADR home directory will be the following:

<diagnostic_dest>/diag/rdbms/prod1/prod1

The ADR home directory contains trace files in the <ADR Home>/trace subdirectory. Trace files usually have the extension .trc. You’ll notice that several trace files have a corresponding trace map file with the .trm extension. The .trm files contain structural information about trace files, which the database uses for searching and navigation. You can view the diagnostic directory setting for a database with the following command:

SQL> sho parameter diagnostic_dest
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------
diagnostic_dest                      string      C:\APP\ORA
SQL>

The V$DIAG_INFO view shows the location of the various diagnostic directories, including the trace directory, which is listed in this view under the name Diag Trace. Although the new database diagnosability infrastructure introduced in Oracle Database 11g ignores the user_dump_dest initialization parameter, the parameter still exists and points to the same directory as the $ADR_BASE/diag/rdbms/<database>/<instance>/trace directory, as the following command shows:

SQL> show parameter user_dump_dest
 
NAME                           TYPE     VALUE
------------------------       ------   ------------------------------------
user_dump_dest                 string   /u01/app/oracle/diag/rdbms/orcl
                                        /orcl/trace
SQL>

In Oracle Database 12g, you don’t have to set the max_dump_file_size parameter to specify the maximum size of a trace file.
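
In earlier releases, if you do want to cap the size of trace files (to keep a runaway trace from filling a file system, for example), you can set the parameter explicitly; the 100M limit here is just an illustrative value:

SQL> alter system set max_dump_file_size='100M' scope=both;
 
System altered.
 
SQL>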

10-2. Tracing a Specific SQL Statement

Problem

You want to trace a specific SQL statement, in order to find out where the database is spending its time during the execution of the statement.

Solution

In an Oracle 11.1 or higher release, you can use the enhanced SQL tracing interface to trace one or more SQL statements. Here are the steps for tracing a set of SQL statements.

  1. Issue the alter session set events statement, as shown here, to set up the trace.
    SQL> alter session set events 'sql_trace level 12';
     
    Session altered.
    SQL>
  2. Execute the SQL statements.
    SQL> select count(*) from sales;
  3. Set tracing off.
    SQL> alter session set events 'sql_trace off';
     
    Session altered.
     
    SQL>

The previous example showed how to trace a SQL statement as a one-off operation. Alternatively, you can set up a trace that fires each time the instance executes a specific SQL statement, by specifying the SQL ID of that statement in the alter session set events statement. Here are the steps:

  1. Find the SQL ID of the SQL statement by issuing this statement:
     SQL> select sql_id,sql_text
          from v$sql
          where sql_text='select sum(quantity_sold) from sales';
     
    SQL_ID                       SQL_TEXT
    ----------------           ------------------------------------
    fb2yu0p1kgvhr              select sum(quantity_sold) from sales
     
    SQL>
  2. Set tracing on for the specific SQL statement whose SQL ID you’ve retrieved.
    SQL> alter session set events 'sql_trace [sql:fb2yu0p1kgvhr] level 12';
     
    Session altered.
     
    SQL>
  3. Execute the SQL statement.
    SQL> select sum(quantity_sold) from sales;
     
    SUM(QUANTITY_SOLD)
    ------------------
                918843
  4. Turn off tracing.

    SQL> alter session set events 'sql_trace[sql:fb2yu0p1kgvhr] off';
     
    Session altered.
     
    SQL>

You can trace multiple SQL statements by separating the SQL IDs with the pipe (|) character, as shown here:

SQL> alter session set events 'sql_trace [sql:fb2yu0p1kgvhr|4v433su9vvzsw]';

You can trace a specific SQL statement running in a different session by issuing an alter system set events statement:

SQL> alter system set events 'sql_trace[sql:fb2yu0p1kgvhr] level 12';
 
System altered.
SQL>

You can get the SQL ID for the statement by querying the V$SQL view as shown earlier in this recipe, or you can get it through the Oracle Enterprise Manager. Once the user in the other session completes executing the SQL statement, turn off tracing with the following command:

SQL> alter system set events 'sql_trace[sql:fb2yu0p1kgvhr] off';
 
System altered.
 
SQL>

How It Works

In Oracle Database 11g, you can set the Oracle event SQL_TRACE to trace the execution of one or more SQL statements. You can issue either an alter session or an alter system statement for tracing a specific SQL statement. Here’s the syntax of the command:

alter session/system set events 'sql_trace [sql:<sql_id>|<sql_id>] ... event specification';

Even if you execute multiple SQL statements before you turn the tracing off, the trace file will show just the information pertaining to the SQL_ID or SQL_IDs you specify.
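
The level you specify controls how much detail the trace captures. As with the traditional 10046 event, level 4 is commonly understood to add bind variable values, level 8 to add wait events, and level 12 to capture both. For example, to capture bind values but not waits for the statement traced earlier in this recipe, you could use:

SQL> alter session set events 'sql_trace [sql:fb2yu0p1kgvhr] level 4';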

10-3. Enabling Tracing in Your Own Session

Problem

You want to trace your own session.

Solution

Ordinary users can use the DBMS_SESSION package to trace their sessions, as shown in this example:

SQL>execute dbms_session.session_trace_enable(waits=>true, binds=> false);

To disable tracing, the user must execute the session_trace_disable procedure, as shown here:

SQL> execute dbms_session.session_trace_disable();

How It Works

The DBMS_MONITOR package, which Oracle recommends for all tracing, is by default executable only by a user with the DBA role. As a DBA, you can grant the execute privilege on the DBMS_MONITOR package to a user, which lets that user trace their own sessions with DBMS_MONITOR instead of DBMS_SESSION. In any case, ordinary users can always use the dbms_session.session_trace_enable procedure to trace their own sessions.
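
For example, as a DBA you might grant the privilege and then let the user trace their own session with DBMS_MONITOR; the user HR here is just an illustration. Calling session_trace_enable without a session ID traces the caller's own session:

SQL> grant execute on dbms_monitor to hr;
 
Grant succeeded.
 
SQL> connect hr/hr
Connected.
SQL> execute dbms_monitor.session_trace_enable(waits=>true, binds=>false);
 
PL/SQL procedure successfully completed.
 
SQL> execute dbms_monitor.session_trace_disable();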

10-4. Finding the Trace Files

Problem

You’d like to find a way to easily identify your trace files.

Solution

Issue the following statement to set an identifier for your trace files, before you start generating the trace:

SQL> alter session set tracefile_identifier='MyTune1';

To view the most recent trace files the database has created, in Oracle Database 11.1 and newer releases, you can query the Automatic Diagnostic Repository (ADR) by executing the following command (see Chapter 5 for details on the adrci utility):

adrci> show tracefile -t
05-NOV-13 11:44:54  diag/rdbms/orcl/orcl/trace/orcl_ckpt_3047.trc
   05-NOV-13 11:50:06  diag/rdbms/orcl/orcl/trace/alert_orcl.log
   05-NOV-13 11:56:03  diag/rdbms/orcl/orcl/trace/orcl_tmon_3085.trc
   05-NOV-13 12:22:23  diag/rdbms/orcl/orcl/trace/orcl_vktm_3024.trc
   05-NOV-13 12:36:26  diag/rdbms/orcl/orcl/trace/orcl_mmon_3055.trc
adrci>

To find out the path to your current session’s trace file, issue the following command:

SQL> select value from v$diag_info
      where name = 'Default Trace File';
 
VALUE
-------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4287.trc

 
SQL>

To find all trace files for the current instance, issue the following query:

SQL> select value from v$diag_info where name = 'Diag Trace'

How It Works

Often, it’s hard to find the exact trace file you’re looking for, because there may be a bunch of other trace files in the trace directory, all with similar-looking file names. A best practice during SQL tracing is to associate your trace files with a unique identifier. Setting an identifier for the trace files you’re going to generate makes it easy to identify the SQL trace files from among the many trace files the database generates in the trace directory.

You can confirm the value of the trace identifier with the following command:

SQL> sho parameter tracefile_identifier
NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------
tracefile_identifier                 string      MyTune1
SQL>

The column TRACEID in the V$PROCESS view shows the current value of the tracefile_identifier parameter as well. The trace file identifier you set becomes part of the trace file name, making it easy to pick the correct file name for a trace from among a large number of trace files in the trace directory. You can modify the value of the tracefile_identifier parameter multiple times for a session. The trace file names for a process will contain information to indicate that they all belong to the same process.

Once you set the tracefile_identifier parameter, the trace files will have the following format, where sid is the Oracle SID, pid is the process ID, and traceid is the value you’ve set for the tracefile_identifier initialization parameter.

sid_ora_pid_traceid.trc
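
For example, if your instance name is orcl, your server process ID is 4287, and you set tracefile_identifier to MyTune1 as shown in the solution, the resulting trace file name would be:

orcl_ora_4287_MyTune1.trc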

10-5. Examining a Raw SQL Trace File

Problem

You want to examine a raw SQL trace file.

Solution

Open the trace file in a text editor to inspect the tracing information. Here are portions of a raw SQL trace generated by executing the dbms_monitor.session_trace_enable procedure:

PARSING IN CURSOR #3 len=490 dep=1 uid=85 oct=3 lid=85 tim=269523043683 hv=672110367 ad='7ff18986250' sqlid='bqasjasn0z5sz'
 
PARSE #3:c=0,e=647,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=0,tim=269523043680
EXEC #3:c=0,e=1749,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=3969568374,tim=269523045613
WAIT #3: nam='Disk file operations I/O' ela= 15833 FileOperation=2 fileno=4 filetype=2 obj#=-1 tim=269523061555
FETCH #3:c=0,e=19196,p=0,cr=46,cu=0,mis=0,r=1,dep=1,og=1,plh=3969568374,tim=269523064866
STAT #3 id=3 cnt=12 pid=2 pos=1 obj=0 op='HASH GROUP BY (cr=46 pr=0 pw=0 time=11 us cost=4 size=5317 card=409)'
STAT #3 id=4 cnt=3424 pid=3 pos=1 obj=89079 op='TABLE ACCESS  FULL DEPT (cr=16 pr=0 pw=0 time=246 us cost=3 size=4251 card=327)'

As you can see from this excerpt of the raw trace file, you can glean useful information, such as parse misses, waits, and the execution plan of the SQL statement.

How It Works

The usual practice after getting a session trace file is to analyze it using a tool such as TKPROF. However, you can examine a trace file by visually reading the trace output. The main benefit in doing so is having the actual sequence of all the events that have happened during the course of the execution of a query. The raw trace files capture information for each of the following three steps of SQL statement processing:

Parse: During this stage, the database converts the SQL statement into an execution plan and checks for authorization and the existence of tables and other objects.

Execute: The database executes the SQL statement during this phase. For a SELECT statement, the execute phase identifies the rows the database must retrieve. The database modifies the data for DML statements such as insert, update, and delete.

Fetch: This step applies only for a SELECT statement. During this phase, the database retrieves the selected rows.

A SQL trace file will contain detailed statistics for each of the three phases of execution, in addition to wait event information. You usually format the raw trace files with a utility such as TKPROF. However, there are times when a raw trace file can show you useful information very quickly, with a simple scroll through the file. A locking situation is a good example of where you can visually inspect a raw trace file. TKPROF doesn't provide details about latches and locks (enqueues). If you suspect that a query was waiting on a lock, digging into the raw trace file shows you exactly where and why the query was waiting. In the WAIT line, the elapsed time (ela) shows the amount of time waited (in microseconds). In our example, the elapsed wait time for "Disk file operations I/O" is 15,833 microseconds. Since 1 second = 1,000,000 microseconds, this is not a significant wait time. The raw trace file clearly shows whether an I/O wait event (as in this case) or another type of wait event held up the query. If the query was waiting on a lock, you'll see something similar to the following: WAIT #2: nam='enqueue' ela= 300....

We’ve purposefully kept the discussion short in this recipe, because tools such as TKPROF and the Oracle Trace Analyzer provide you sophisticated diagnostic information by profiling the raw trace files.

10-6. Analyzing Oracle Trace Files

Problem

You want to know how to analyze an Oracle trace file.

Solution

There are multiple ways to interpret a SQL trace file. Here are the different approaches:

  • Read the raw SQL trace file in a text editor.
  • Use the Oracle-provided TKPROF (Trace Kernel Profiler) utility.
  • Use Oracle Trace Analyzer, a free product you can download from Oracle Support.
  • Use third-party tools.

How It Works

Getting a SQL trace is often the easy part; analyzing it is more work than collecting it. If you're particularly adept at it, you can certainly read the raw trace file directly, but in most cases you need a tool to interpret and profile the huge amount of data a trace file can contain.

You can easily read certain trace files, such as the trace file for event 10053 (which details the choices made by the optimizer in evaluating the execution path of a query), since that file doesn't contain SQL execution statistics (such as parsing, executing, and fetching statistics) or wait event analysis; it mostly shows how the cost-based optimizer (CBO) selected a particular execution plan. However, for SQL execution trace files, such as those you generate with the dbms_monitor.session_trace_enable procedure, a visual inspection, while technically possible, is time-consuming: the raw data isn't summarized, and key events are often described in obscure ways. Therefore, using a profiler such as the TKPROF utility is really your best option.

The TKPROF utility is an Oracle-supplied profiling tool that most Oracle DBAs use on a routine basis. Recipes 10-7 and 10-8 show how to use TKPROF.

Oracle’s Trace Analyzer is free (you have to download it from Oracle Support), easy to install and use, and produces clear reports with plenty of useful diagnostic information. You do have to install the tool first, but it takes only a few minutes to complete the installation. Thereafter, you just pass the name of the trace file to a script to generate the formatted output. Recipe 10-9 shows how to install and use the Oracle Trace Analyzer.

There are also third-party profiling tools that offer features not found in the TKPROF utility. Some of these tools generate pretty HTML trace reports and some include charts as well to help you visually inspect the details of the execution of the SQL statement that you’ve traced. Note that in order to use some of these products, you’ll have to upload your trace files for analysis. If your trace files contain sensitive data or security information, this may not work for you.

10-7. Formatting Trace Files with TKPROF

Problem

You’ve traced a session, and you want to use TKPROF to format the trace file.

Solution

You run the TKPROF utility from the command line. Here’s an example of a typical tkprof command for formatting a trace file.

$ tkprof user_sql_001.trc user1.prf explain=hr/hr table=hr.temp_plan_table_a sys=no
  sort=exeela,prsela,fchela

In the example shown here, the tkprof command takes the user_sql_001.trc trace file as input and generates an output file named user1.prf. The "How It Works" section of this recipe explains the key optional arguments of the TKPROF utility.

How It Works

TKPROF is a utility that lets you format any extended trace files that you generate with the dbms_monitor.session_trace_enable procedure. You can use this tool to generate reports for analyzing results of the various types of SQL tracing explained in this chapter. You can run TKPROF on a single trace file or a set of trace files that you’ve concatenated with the trcsess utility. TKPROF shows details of various aspects of SQL statement execution, such as:

  • SQL statement text
  • SQL trace statistics
  • Number of library cache misses during the parse and execute phases
  • Execution plans for all SQL statements
  • Recursive SQL calls
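
If the activity you traced spans multiple trace files (as can happen with shared server configurations or connection pooling), you can first consolidate them with the trcsess utility and then run TKPROF on the combined file. The session identifier (in sid.serial# form) and file names below are hypothetical:

$ trcsess output=combined.trc session=196.3253 orcl_ora_*.trc
$ tkprof combined.trc combined.prf sys=no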

You can view a list of all the arguments by issuing the tkprof command without any arguments, as shown here:

$ tkprof
Usage: tkprof tracefile outputfile [explain= ] [table= ]
              [print= ] [insert= ] [sys= ] [sort= ]
...

Here’s a brief explanation of the important arguments you can specify with the tkprof command:

filename1: Specifies the name of the trace file

filename2: Specifies the formatted output file

waits: Specifies whether the output file should record a summary of the wait events; default is yes.

sort: By default, TKPROF lists the SQL statements in the trace file in the order they were executed. You can specify various options with the sort argument to control the order in which TKPROF lists the various SQL statements.

  • prscpu: CPU time spent parsing
  • prsela: Elapsed time spent parsing
  • execpu: CPU time spent executing
  • exeela: Elapsed time spent executing
  • fchela: Elapsed time spent fetching

print: By default TKPROF will list all traced SQL statements. By specifying a value for the print option, you can limit the number of SQL statements listed in the output file.

sys: By default TKPROF lists all SQL statements issued by the user SYS, as well as recursive statements. Specify the value no for the sys argument to make TKPROF omit these statements.

explain: Writes execution plans to the output file; TKPROF connects to the database and issues explain plan statements using the username and password you provide with this parameter.

table: By default, TKPROF uses a table named PLAN_TABLE in the schema of the user specified by the explain parameter, to store the execution plans. You can specify an alternate table with the table parameter.

width: This is an integer that determines the output line widths of some types of output, such as the explain plan information.
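
Putting several of these arguments together, the following hypothetical invocation sorts statements by elapsed execution time, limits the report to the five most expensive statements, omits SYS-issued and recursive SQL, and skips the wait event summary:

$ tkprof orcl_ora_4287_MyTune1.trc mytune1.prf sort=exeela print=5 sys=no waits=no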

10-8. Analyzing TKPROF Output

Problem

You’ve formatted a trace file with TKPROF, and you now want to analyze the TKPROF output file.

Solution

Invoke the TKPROF utility with the tkprof command as shown here:

c:>tkprof orcl1_ora_6448_mytrace1.trc ora6448.prf explain=hr/hr sys=no sort=prsela,exeela,fchela
 
TKPROF: Release 11.2.0.1.0 - Development on Sat May 14 11:36:35 2011
 
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
 
c:\app\ora\diag\rdbms\orcl1\orcl1\trace>

In this example, orcl1_ora_6448_mytrace1.trc is the trace file you want to format. The ora6448.prf file is the TKPROF output file. The "How It Works" section that follows shows how to interpret a TKPROF output file.

How It Works

In our example, there's only a single SQL statement. Thus, the sort parameters (prsela, exeela, fchela) don't really matter, because they come into play only when TKPROF needs to list multiple SQL statements. Here's a brief description of the key sections in a TKPROF output file.

Header

The header section shows the trace file name, the sort options, and a description of the terms used in the output file.

Trace file: orcl1_ora_6448_mytrace1.trc
Sort options: prsela  exeela  fchela
*******************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
*************************************************************************

Execution Statistics

TKPROF lists execution statistics for each SQL statement in the trace file, broken down by the three steps that are part of SQL statement processing: parse, execute, and fetch.

call     count      cpu     elapsed      disk       query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.01       0.03          0         64          0         0
Execute      1      0.00       0.00          0          0          0         0
Fetch     5461      0.29       0.40          0       1299          0      81901
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total     5463      0.31       0.43          0       1363          0     81901

The following is what the SQL execution statistics in the table stand for:

count: The number of times the database parsed, executed, or fetched this statement

cpu: The CPU time used for the parse/execute/fetch phases

elapsed: Total elapsed time (in seconds) for the parse/execute/fetch phases

disk: Number of physical block reads for the parse/execute/fetch phases

query: Number of data blocks read with logical reads from the buffer cache in consistent mode for the parse/fetch/execute phases (for a select statement)

current: Number of data blocks read and retrieved with logical reads from the buffer cache in current mode (for insert, update, delete, and merge statements)

rows: Number of fetched rows for a select statement or the number of rows inserted, deleted, or updated, respectively, for an insert, delete, update, or merge statement

Row Source Operations

The next section of the report shows the number of misses in the library cache, the current optimizer mode, and the row source operations for the query. Row source operations show the number of rows that the database processes for each operation, such as joins or full table scans. It's important to note that the row source operation information shows the actual execution plan used by the SQL statement. The actual execution plan may differ from the one shown by an explain plan statement for the following reasons:

  • Optimizer stats, cursor sharing, bind variable peeking, and dynamic instance parameters all impact execution plans.
  • EXPLAIN PLAN doesn’t do “bind peeking” to glean the values of the bind variables used in a SQL statement.
  • EXPLAIN PLAN doesn’t check the library cache for an already parsed version of the SQL statement.

That’s why it’s often said that an explain plan may “lie,” but the actual trace of the execution doesn’t.

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 85  (HR)
 
Rows     Row Source Operation
-------  ----------------------------------------------------------------------
  81901  HASH JOIN  (cr=1299 pr=0 pw=0 time=3682295 us cost=22 size=41029632 card=217088)
   1728   TABLE ACCESS FULL DEPT (cr=16 pr=0 pw=0 time=246 us cost=6 size=96768 card=1728)
   1291   TABLE ACCESS FULL EMP (cr=1283 pr=0 pw=0 time=51213 us cost=14 size=455392 card=3424)

The “Misses in library cache during parse” indicates the number of hard parses during the parse and execute database calls. In the Row Source Operation column, the output includes several statistics for each row source operation. These statistics quantify the various types of work performed during a row source operation. The following is what the different statistics stand for (you may not see all of these for every query):

cr: Blocks retrieved through a logical read in consistent mode (consistent read mode, also known as “query mode,” means the data is read as of a fixed point in time—more specifically, as it existed when the query began)

pr: Number of blocks read through a physical disk read

pw: Number of blocks written with a physical write to a disk

time: Total time (in microseconds) spent processing the operation

cost: Estimated cost of the operation

size: Estimated amount of data (bytes) returned by the operation

card: Estimated number of rows returned by the operation

The Execution Plan

If you specified the explain parameter when issuing the tkprof command, you’ll find an execution table showing the execution plan for each SQL statement.

Rows     Execution Plan
-------  ---------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
  81901   HASH JOIN
   1728    TABLE ACCESS (FULL) OF 'DEPT' (TABLE)
   1291    TABLE ACCESS (FULL) OF 'EMP' (TABLE)

In our example, the execution plan shows two full table scans followed by a hash join. It’s important to understand that the execution plan displayed here may not be the plan actually used to execute the statement. This is just like the execution plans generated by autotrace and explain plan, which also might not be the actual plan used to execute the statement.

Wait Events

You’ll see the wait events section only if you’ve specified waits=>true in your trace command. The wait events table summarizes waits during the trace period:

Elapsed times include waiting on following events:
  Event waited on                            Times     Max. Wait   Total Waited
                                             Waited
 ----------------------------------------   --------- -----------  -----------
SQL*Net message from client                      5461      2.95        62.81
db file sequential read                             1      0.05         0.05
*******************************************************************************

In this example, the SQL*Net message from client waits account for most of the wait time, but these are usually idle waits. The “SQL*Net message from client” event means that the server has completed its work by returning the results requested by the client and is waiting for the client to give the server more work to perform. Even though “SQL*Net message from client” is classified as an idle wait, do pay attention to the average wait time for this client-side (and, in theory, network) wait. If the average duration for this wait event is high, it could mean that the network link is slow. You should usually expect to see average network wait times between 0.00001 and 0.002 seconds.

A long average wait time for “SQL*Net message from client” can also mean that the client receiving the data was very busy. If you see wait events such as “db file sequential read” (due to indexed reads) or “db file scattered read” (due to full table scans) with a significant number of waits (and/or total wait time), you need to investigate those wait events further.

Note that the TKPROF output doesn’t show you information about any bind variables.

10-9. Analyzing Trace Files with Oracle Trace Analyzer

Problem

You want to use the Oracle Trace Analyzer to analyze trace files.

Solution

The Oracle Trace Analyzer, also known as TRCANLZR or TRCA, is a SQL trace profiling tool that’s an alternative to the TKPROF utility. You must download TRCA from Oracle Support. Once you download it, unzip the files and install TRCA by executing the /trca/install/tacreate.sql script.

You must log in as a user with the SYSDBA privilege to execute the tacreate.sql script. The script asks you for information relating to the location of the trace files, the output files, and the tablespace where you want TRCA to store its data.

Here are the steps for installing and running TRCA.

  1. Installing TRCA is straightforward, so we just show you a summary of the installation here:
    SQL> @tacreate.sql
    Uninstalling TRCA, please wait
    TADOBJ completed.
    SQL>
    SQL> WHENEVER SQLERROR EXIT SQL.SQLCODE;
    SQL> REM If this DROP USER command fails that means a session is connected with this user.
    SQL> DROP USER trcanlzr CASCADE;
    SQL> WHENEVER SQLERROR CONTINUE;
    SQL>
    SQL> SET ECHO OFF;
    TADUSR completed.
    TADROP completed.
     
    Creating TRCA$ INPUT/BDUMP/STAGE Server Directories
    ...
     
    TACREATE completed. Installation completed successfully.
    SQL>
  2. Set up tracing.
    SQL> alter session set events '10046 trace name context forever, level 12';
     
    Session altered.
     
    SQL>
  3. Execute the SQL statement you want to trace.
    SQL> select ...
  4. Turn off tracing.
    SQL> alter session set events '10046 trace name context off';
    Session altered.
     
    SQL>
  5. Run the /trca/run/trcanlzr.sql script (START trcanlzr.sql) to profile the trace you’ve just generated. You must pass the trace file name as input to this script:
    c:\trace\trca\trca\run>sqlplus hr/hr
    SQL> START trcanlzr.sql orcl1_ora_7460_mytrace7.trc
    Parameter 1:
    Trace Filename or control_file.txt (required)
    Value passed to trcanlzr.sql:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    TRACE_FILENAME: orcl1_ora_7460_mytrace7.trc
    Analyzing orcl1_ora_7460_mytrace7.trc
    ... analyzing trace(s) ...
    Trace Analyzer completed.
    Review first trcanlzr_error.log file for possible fatal errors.
      ...
       233387  08/14/2013 15:59   trca_e21106.html
       115885  08/14/2013 15:59   trca_e21106.txt
    File trca_e21106.zip has been created
    TRCANLZR completed.
    SQL>
    c:\trace\trca\trca\run>

You can now view the profiled trace data in text or HTML format—TRCA provides both of these in the ZIP file that it creates when it completes profiling the trace file. TRCA places the ZIP file in the directory from which you run the /trca/run/trcanlzr.sql script.

How It Works

Oracle Support Center of Expertise (CoE) provides the TRCA diagnostic tool. Although many DBAs are aware of the TRCA, few use it on a regular basis. Some of us have used it in response to a request by Oracle Support. As you learned in the “Solution” section, the TRCA tool accepts a SQL trace you’ve generated and outputs a diagnostic report in both text and HTML formats. The TRCA tool also provides you with a TKPROF report (it executes the tkprof command as part of its diagnostic data collection). Since TRCA provides a rich set of diagnostic information, consider using it instead of TKPROF.

Apart from the data normally collected by the TKPROF utility, TRCA also identifies expensive SQL statements and gathers their explain plans. It also shows the optimizer statistics and configuration parameters that have a bearing on the performance of the SQL statements in the trace.

image Tip  Use TRCA instead of TKPROF for analyzing your trace files—it provides you a wealth of diagnostic information, besides giving you a TKPROF output file as part of the bargain.

In our example, we used TRCA to format a trace file on the same system where we generated the trace. However, if you can’t install TRCA in a production system, not to worry. TRCA can also analyze production traces using a different system. The details are in the trca_instructions HTML document, which is part of the TRCA download ZIP file.

Here are the major sections of a TRCA report; as you can see, the report offers a richer set of diagnostic information than that offered by TKPROF.

Summary: Provides a breakdown of elapsed time, response time broken down into CPU and non-idle wait time, and other response time-related information

Non-Recursive Time and Totals: Provides a breakdown of response time and elapsed time during the parse, execute, and fetch steps; the report also contains a table that provides total and average waits for each idle and non-idle wait event.

Top SQL: Provides detailed information about SQL statements that account for the most response time, elapsed time, and CPU time, as shown in the following extract from the report:

There are 2 SQL statements with "Response Time Accounted-for" larger than
threshold of 10.0% of the "Total Response Time Accounted-for".

These combined 2 SQL statements are responsible for a total of 99.3% of the "Total Response Time Accounted-for".
 
There are 3 SQL statements with "Elapsed Time" larger than threshold of 10.0%
of the "Total Elapsed Time".

These combined 3 SQL statements are responsible for a total of 75.5% of the
"Total Elapsed Time".


There is only one SQL statement with "CPU Time" larger than threshold of 10.0% of the "Total CPU Time".

Individual SQL: This is a highly useful section, as it lists all SQL statements and shows their elapsed time, response time, and CPU time. It provides the hash values and SQL IDs of each statement.

SQL Self - Time, Totals, Waits, Binds and Row Source Plan: Shows parse, execute, and fetch statistics for each statement, similar to the TKPROF utility; it also shows the wait event breakdown (average and total times) for each statement. There’s also a very nice explain plan for each statement, which shows the time and the cost of each execution step.

Tables and Indexes: Shows the number of rows, partitioning status, the sample size, and the last time the object was analyzed; for indexes, it additionally shows the clustering factor and the number of keys.

Summary: Shows I/O related wait (such as the db file sequential read event) information including average and total waits, for tables and indexes

Hot I/O Blocks: Shows the list of blocks with the largest wait time or times waited

Non-default Initialization Parameters: Lists all non-default initialization parameters

As this brief review of TRCA shows, it’s a far superior tool to TKPROF. Besides, if you happen to love TKPROF reports, TRCA includes them as well in its ZIP file. So, what are you waiting for? Download the TRCA and benefit from its rich diagnostic profiling of problem SQL statements.

10-10. Tracing a Parallel Query

Problem

You’d like to trace a parallel query.

Solution

You can get an event 10046 trace for a parallel query in the same way as you would for any other query. The only difference is that the 10046 event will generate as many trace files as the number of parallel query servers. Here’s an example:

SQL>alter session set tracefile_identifier='MyTrace1';
 
SQL> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
SQL> select /*+ full(sales) parallel (sales 6) */ count(quantity_sold) from
     sales;
 
COUNT(QUANTITY_SOLD)
--------------------
              918843
 
SQL> alter session set events '10046 trace name context off';
 
Session altered.
 
SQL>

You’ll now see a total of seven trace files with the trace file identifier MyTrace1 in the trace directory. Depending on what you’re looking for, you can analyze each of the trace files separately or consolidate them into one big trace file with the trcsess utility before analyzing it with TKPROF or another profiler such as the Oracle Trace Analyzer. You’ll also find several files with the suffix .trm in the trace directory—you can ignore these files, as they are for use by the database.

How It Works

The only real difference between getting an extended trace for a single query and one for a parallel query is that you’ll have multiple trace files, one for each parallel query server. When a user executes a parallel query, Oracle creates multiple parallel query processes to process the query, with each process getting its own session. That’s the reason Oracle creates multiple trace files for a parallel query.

Once you turn off the trace, go to the trace directory and execute the following command to find all the trace files for the parallel query:

$ find . -name '*MyTrace1*'

The find command lists all the trace files for your parallel query (ignore the files ending with .trm in the trace directory). You can move the trace files to another directory and use the trcsess utility to consolidate those files, as shown here:

$ trcsess output=MyTrace1.trc clientid='px_test1' orcl1_ora_8432_mytrace1.trc orcl1_ora_8432_mytrace2.trc

You’re now ready to use the TKPROF utility to profile the parallel query.
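
For example, assuming you’ve consolidated the parallel trace files into MyTrace1.trc, a typical TKPROF invocation might look like this (the file names are illustrative, and the options shown are optional):

```shell
$ tkprof MyTrace1.trc MyTrace1_report.txt sort=exeela waits=yes sys=no
```

The sort=exeela option lists the statements with the highest elapsed execution time first, waits=yes includes wait event summaries, and sys=no suppresses recursive SQL executed as the SYS user.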

When you issue a parallel query, the parallel execution coordinator/query coordinator (QC) controls the execution of the query. The parallel execution servers/slaves (QS) do the actual work. The parallel execution server set is the set of all the query servers that execute an operation. The query coordinator and each of the execution servers generate their own trace files. For example, if one of the slave processes waits for a resource, the database records the resulting wait events in that slave process’s trace file but not in the query coordinator’s trace file.

Note that if you’re using a 10.2 or an older release, the trace files for the user process will be created in the user dump directory and the background processes (slaves) will generate trace files in the background dump directory. In 11.1 and newer databases, the trace files for both background and user processes are in the same directory (trace). You can find the trace file names by issuing the show tracefile -t command after invoking ADRCI or by querying the V$DIAG_INFO view from SQL*Plus.
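
In 11.1 and newer databases, you can also locate the trace directory and your own session’s trace file directly from SQL*Plus:

```sql
SQL> select name, value from v$diag_info
     where name in ('Diag Trace', 'Default Trace File');
```

The Diag Trace row shows the directory holding the trace files, and the Default Trace File row shows the full path of the trace file for your current session.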

10-11. Tracing Specific Parallel Query Processes

Problem

You want to trace one or more specific parallel query processes.

Solution

Identify the parallel query processes you want to trace with the following command.

SQL> select inst_id,p.server_name,
     p.status as p_status,
     p.pid as p_pid,
     p.sid as p_sid
     from  gv$px_process p
     order by p.server_name;

Let’s say you decide to trace the processes p002 and p003. Issue the following alter system set events command to trace just these two parallel processes.

SQL> alter system set events 'sql_trace {process: pname = p002 | p003}';

Once you’re done tracing, turn off the trace by issuing the following command:

SQL> alter system set events 'sql_trace {process: pname = p002 | p003} off';

How It Works

Tracing parallel processes is always tricky. One of the improvements made to the tracing infrastructure in the Oracle Database 11g release is the capability to direct a trace at specific processes, identified by process name. This capability comes in handy with parallel queries, where several parallel execution servers do the work and you’re interested in the behavior of only one or two of them.

10-12. Tracing Parallel Queries in a RAC System

Problem

You’re tracing a parallel query in a RAC environment but aren’t sure in which instance the trace files are located.

Solution

Finding the trace files for the server (or thread or slave) processes is sometimes difficult in a RAC environment, because you aren’t sure on which node or node(s) the database has created the trace files. Here are the steps to follow to make it easier to find the trace files on the different nodes.

  1. Set the px_trace with an alter session command, to help identify the trace files, as shown here:
    SQL> alter session set tracefile_identifier='10046';
    SQL> alter session set "_px_trace" = low, messaging;
    SQL> alter session set events '10046 trace name context forever,level 12';
  2. Execute your parallel query.
    SQL> alter table bigsales parallel 4;
    SQL> select count(*) from bigsales;
  3. Turn all tracing off.
    SQL> alter session set events '10046 trace name context off';
    SQL> alter session set "_px_trace" = none;

Specifying px_trace will cause the query coordinator’s trace file to include information about the slave processes that are part of the query and the instance each slave process belongs to. You can then retrieve the trace files from the instances listed in the query coordinator’s trace file.

How It Works

The _px_trace (px trace) parameter is an undocumented, internal Oracle parameter that has existed since the 9.2 release. Once you run the trace commands as shown in the “Solution” section of this recipe, the trace file for the query coordinator (QC) process will show within it the name of each of the slave processes and the instances the processes have run on—for example:

Acquired 4 slaves on 1 instances avg height=4 in 1 set q serial:2049
        P000 inst 1 spid 7512
        P001 inst 1 spid 4088
        P002 inst 1 spid 7340
        P003 inst 1 spid 9256

In this case, you know that Instance 1 is where you must look to get the trace files for the slave processes P000, P001, P002, and P003. On Instance 1, in the ADR trace subdirectory, look for file names that contain the words P000 (or P001/P002/P003), to identify the correct trace files.
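
For example, on Instance 1 you might locate the slave trace files with a command such as the following (the exact file name pattern depends on your instance name and ADR layout):

```shell
$ find . -name '*p00[0-3]*.trc'
```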

10-13. Consolidating Multiple Trace Files

Problem

You have generated multiple trace files for a session in order to tune performance, and you want to consolidate those files into a single trace file.

Solution

Use the trcsess command to merge multiple trace files into a single trace file. Here’s a simple example:

c:\trace> trcsess output=combined.trc session=196.614 orcl1_ora_8432_mytrace1.trc orcl1_ora_8432_mytrace2.trc
C:\trace>

The trcsess command shown here combines two trace files generated for a session into a single trace file. The session parameter identifies the session with a session identifier, consisting of the session index and session serial number, which you can get from the V$SESSION view.

How It Works

The trcsess utility is part of the Oracle database and helps by letting you consolidate multiple trace files during performance tuning and debugging exercises. Here’s the syntax of the trcsess command:

trcsess [output=output_file_name]
           [session=session_id]
           [client_id=client_id]
           [service=service_name]
           [action=action_name]
           [module=module_name]
           [trace_files]

You must specify one of these five options when issuing the trcsess command: session, client_id, service, action, and module. For example, if you issue the command in the following manner, the command includes all the trace files in the current directory for a session and combines them into a single file:

$ trcsess output=main.trc session=196.614

In our example, we specified the name of the consolidated trace file with the output option. If you don’t specify the output option, trcsess prints the output to standard out. Once you use trcsess to combine the output of multiple trace files into one consolidated file, you can use the TKPROF utility to analyze the file, just as you’d do in the case of a single trace file.
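
Putting it together, a typical consolidate-then-profile workflow might look like this (file names illustrative):

```shell
$ trcsess output=main.trc session=196.614
$ tkprof main.trc main_report.txt waits=yes sys=no
```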

10-14. Finding the Correct Session for Tracing

Problem

You want to initiate a session trace for a user from your own session, and you would like to find out the correct session to trace.

Solution

You must have the SID and the serial number for the user whose session you want to trace. You can find these from the V$SESSION view, of course, once you know the user’s name. However, you must get several other details about the user’s session to identify the correct session, since the user may have multiple sessions open. Use the following query to get the user’s information:

SQL> select a.sid, a.serial#, b.spid, b.pid,
     a.username, a.osuser, a.machine
     from
     v$session a,
     v$process b
     where a.username IS NOT NULL
     and a.paddr=b.addr;

The query provides several attributes such as USERNAME, OSUSER, and MACHINE, which help you unambiguously select the correct session.

How It Works

You can’t always rely on the first set of SID and serial number you manage to find for the user whose session you want to trace. Together, the SID and serial number uniquely identify a session. However, you may find multiple SID and serial number combinations for the same user, because your database may be using common user logins. Therefore, querying the V$SESSION view for other information such as OSUSER and MACHINE besides the SID and serial number helps to identify the correct user session.

V$SESSION view columns such as COMMAND, SERVER, LOGON_TIME, PROGRAM, and LAST_CALL_ET help identify the correct session to trace. The LAST_CALL_ET column stands for Elapsed Time of the Last Call, and following is how Oracle documentation describes this column:

  • If the session STATUS is currently ACTIVE, then the value represents the elapsed time (in seconds) since the session has become active.
  • If the session STATUS is currently INACTIVE, then the value represents the elapsed time (in seconds) since the session has become inactive.

If you still can’t find the correct session, you may want to join the V$SESSION and V$SQLAREA views to identify the correct session.
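
For example, a join along the following lines shows the SQL each of a user’s sessions is currently executing, which often pinpoints the session you’re after (the username is illustrative):

```sql
SQL> select s.sid, s.serial#, s.osuser, s.machine, q.sql_text
     from v$session s, v$sqlarea q
     where s.sql_address = q.address
     and   s.sql_hash_value = q.hash_value
     and   s.username = 'HR';
```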

10-15. Tracing a SQL Session

Problem

You want to turn on SQL tracing for a session to diagnose a performance problem.

Solution

There are multiple ways to trace a session, but the Oracle-recommended approach is to use the DBMS_MONITOR package to access the SQL tracing facility. To trace a session, first identify the session using the SQL statement shown in Recipe 10-14 (the previous recipe).

Once you get the SID and SERIAL# from the query shown in Recipe 10-14, invoke the session_trace_enable procedure of the DBMS_MONITOR package, as shown here:

SQL> execute dbms_monitor.session_trace_enable(session_id=>138,serial_num=>242,
waits=>true,binds=>false);
PL/SQL procedure successfully completed.
SQL>

image Caution  SQL tracing does impose an overhead on the database—you need to be very selective in tracing sessions in a production environment, as a trace can fill up a disk or affect CPU usage adversely.

In this example, we chose to trace the wait information as well, but it’s optional. Once you execute this command, have the user execute the SQL statements that you’re testing (in a dev or test environment). In a production environment, wait for a long enough period to make sure you’ve captured the execution of the SQL statements, before turning the tracing off. Invoke the session_trace_disable procedure to disable the SQL tracing for the session, as shown here:

SQL> execute dbms_monitor.session_trace_disable(138,242);
PL/SQL procedure successfully completed.
SQL>

Once you complete tracing the session activity, you can get the trace file for the session from the trace directory and use the TKPROF utility (or a different profiler) to get a report. To trace the current user session, use the following pair of commands:

SQL> execute dbms_monitor.session_trace_enable();
SQL> execute dbms_monitor.session_trace_disable();
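
Once you turn tracing off, you can find the name of your own session’s trace file with a query such as this:

```sql
SQL> select value from v$diag_info where name = 'Default Trace File';
```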

How It Works

Tracing an entire session is expensive in terms of resource usage and you must do so only when you haven't identified a poorly performing SQL statement already. A session trace gathers the following types of information.

  • Physical and logical reads for each statement that's running in the session
  • CPU and elapsed times
  • Number of rows processed by each statement
  • Misses in the library cache
  • Number of commits and rollbacks
  • Row operations that show the actual execution plan for each statement
  • Wait events for each SQL statement

You can specify the following parameters for the session_trace_enable procedure:

session_id: Identifies the session you want to trace (SID); if you omit this, your own session will be traced.

serial_num: Serial number for the session

waits: Set it to true if you want to capture wait information (default = false).

binds: Set it to true to capture bind information (default=false).

plan_stat: Determines the frequency with which the row source statistics (execution plan and execution statistics) are dumped

All the parameters for the session_trace_enable procedure are self-evident, except the plan_stat parameter. You can set the following values for this parameter:

never: The trace file won’t contain any information about row source operations.

first_execution (same as setting the plan_stat parameter to the value null): Row source information is written once, after the first execution of a statement.

all_executions: Execution plan and execution statistics are written for each execution of the cursor, instead of only when the cursor is closed.

Since an execution plan for a statement can change during the course of a program run, you may want to set the plan_stat parameter to the value all_executions if you want to capture all possible execution plans for a statement.
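
For example, to capture waits, binds, and the row source statistics for every execution, you might invoke the procedure as follows (the SID and serial number are from the earlier example):

```sql
SQL> execute dbms_monitor.session_trace_enable(session_id=>138, serial_num=>242, waits=>true, binds=>true, plan_stat=>'all_executions');
```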

10-16. Tracing a Session by Process ID

Problem

You want to identify and trace a session using an operating system process ID.

Solution

Execute the alter session (or alter system) set events command to trace a session by its operating system process ID, which is shown by the SPID column in the V$PROCESS view. The general format of this command is as follows:

alter system set events 'sql_trace {process:pid}'

Here are the steps to tracing a session by its OS PID.

  1. Get the OS process ID by querying the V$PROCESS view.
    SQL> select spid,pname from v$process;
  2. Once you identify the SPID of the user, issue the following statement to start the trace for that session:
    SQL> alter system set events 'sql_trace {process:2714}';
    System altered.
    SQL>
  3. Turn off tracing the following way:
    SQL> alter system set events 'sql_trace {process:2714} off';
    System altered.
    SQL>
  4. You can also execute the set events command in the following manner, to trace two processes at once:
    SQL> alter system set events 'sql_trace {process:2714|2936}';
    System altered.
    SQL> alter system set events 'sql_trace {process:2714|2936} off';
    System altered.
    SQL>

When you trace two processes simultaneously, Oracle generates two separate trace files, one for each process, as shown here:

orcl1_ora_2714.trc
orcl1_ora_2936.trc

How It Works

The alter system set events command allows you to trace a process by specifying the process ID (PID), process name (PNAME), or the Oracle Process ID (ORAPID). Here’s the syntax of the command:

alter session set events 'sql_trace {process : pid = <pid>, pname = <pname>,
orapid = <orapid>} rest of event specification'

The V$PROCESS view contains information about all currently active processes. In the V$PROCESS view, the following columns help you identify the three process-related values:

PID: the Oracle process identifier
SPID: the Operating System process identifier
PNAME: name of the process

In this recipe, we showed how to generate a trace file using the OS process identifier (SPID column in the V$PROCESS view). You can use the general syntax shown here to generate a trace using the PID or the process name.
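
For example, to trace by the Oracle process identifier (the PID column) rather than the OS process ID, you could issue commands along these lines (the PID value of 18 is illustrative):

```sql
SQL> alter system set events 'sql_trace {process: orapid = 18}';
SQL> alter system set events 'sql_trace {process: orapid = 18} off';
```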

10-17. Tracing Multiple Sessions

Problem

You want to trace multiple SQL sessions that belong to a single user.

Solution

You can trace multiple sessions that belong to a user by using the client_id_trace_enable procedure from the DBMS_MONITOR package. Before you can execute the dbms_monitor.client_id_trace_enable procedure, you must set the client_identifier for the session by using the DBMS_SESSION package, as shown here:

SQL> execute dbms_session.set_identifier('SH')

Once you set the client identifier as shown here, the client_identifier column in the V$SESSION view is populated. You can confirm the value of the client_identifier column by executing the following statement:

SQL> select sid, serial#, username from v$session where client_identifier='SH';

Now you can execute the dbms_monitor.client_id_trace_enable procedure:

SQL> execute dbms_monitor.client_id_trace_enable(client_id=>'SH', waits=>true, binds=>false);

You can disable the trace with the following command:

SQL> execute dbms_monitor.client_id_trace_disable(client_id=>'SH');

How It Works

Setting the client_identifier column lets you enable the tracing of multiple sessions, when several users may be connecting as the same Oracle user, especially in applications that use connection pools. The client_id_trace_enable procedure collects statistics for all sessions with a specific client ID. Note that the client_id that you must specify doesn’t have to belong to a currently active session. By default, the waits and binds parameters are set to false and you can set the tracing of both waits and binds by adding those parameters when you execute the client_id_trace_enable procedure:

 SQL> exec dbms_monitor.client_id_trace_enable('SH',true,true);
 
PL/SQL procedure successfully completed.

You can query the DBA_ENABLED_TRACES view to find the status of a trace that you executed with a client identifier. In this view, the column TRACE_TYPE shows the value CLIENT_ID and the PRIMARY_ID shows the value of the client identifier.

SQL> select trace_type, primary_id,waits,binds from dba_enabled_traces;
 
TRACE_TYPE       PRIMARY_ID        WAITS     BINDS
-------------    ---------------   --------  --------------
CLIENT_ID        SH                TRUE      TRUE

10-18. Tracing an Instance or a Database

Problem

You want to trace the execution of all SQL statements in the entire instance or database.

Solution

Use the dbms_monitor.database_trace_enable procedure to trace a specific instance or an entire database. Issue the following pair of commands to start and stop tracing for an individual instance.

SQL> execute dbms_monitor.database_trace_enable(instance_name=>'instance1');
SQL> execute dbms_monitor.database_trace_disable(instance_name=>'instance1');

You can optionally specify the waits and binds attributes. The following commands enable and disable SQL tracing at the database level:

SQL> execute dbms_monitor.database_trace_enable();
SQL> execute dbms_monitor.database_trace_disable();

You can also set the sql_trace initialization parameter to true to turn on and turn off SQL tracing, but this parameter is deprecated. Oracle recommends that you use the dbms_monitor (or the dbms_session) package for SQL tracing.

How It Works

Obviously, instance-level and database-level SQL tracing is going to impose a serious overhead and may well turn out to be another source of performance problems! It’s possible for background processes to continue writing to their trace files until the trace files reach their maximum size, the directory containing the trace files exhausts all the space allocated to it, or until you bounce the database. This is so because even after you disable this type of tracing, background processes may keep writing to the trace files. You normally don’t ever have to do this—use the session-level tracing instead to identify performance problems. If you must trace an entire instance, because you don’t know from which session a query may be executed, turn off tracing as soon as possible to reduce the overhead.

10-19. Generating an Event 10046 Trace for a Session

Problem

You want to get an Oracle event 10046 trace for a session.

Solution

You can get an Oracle event 10046 trace, also called an extended trace, by following these steps:

  1. Set your trace file identifier.
    SQL> alter session set tracefile_identifier='My_Trace';
  2. Issue the following statement to start the trace.
    SQL> alter session set events '10046 trace name context forever, level 12';
  3. Execute the SQL statement(s) that you want to trace.
    SQL> select sum(amount_sold) from sales;
  4. Turn tracing off with the following command:
    SQL> alter session set events '10046 trace name context off';

You’ll find the trace dump file in the trace directory that’s specified by the diagnostic_dest parameter ($DIAG_HOME/rdbms/db/inst/trace). You can analyze this trace file with TKPROF or another utility such as the Oracle Trace Analyzer.

How It Works

Here’s what the various keywords in the syntax for setting a 10046 trace mean:

set events: Sets a specific Oracle event, in this case, the event 10046

10046: The Oracle event number; the specified action is taken when this event occurs.

trace: The database must take this action when the event (10046) occurs.

name: Indicates the type of dump or trace

context: Specifies that Oracle should generate a context-specific trace; if you replace context with errorstack, the database will not trace the SQL statement. It dumps the error stack when it hits the 10046 event.

forever: Specifying the keyword forever tells the database to invoke the action (trace) every time the event (10046) is invoked, until you disable the 10046 trace. If you omit the keyword forever, the action is invoked just once, following which the event is automatically disabled.

level 12: Specifies the trace level—in this case, it captures both bind and wait information.

While the Oracle event 10046 has existed for several years, this trace is identical to the trace you can generate with the session_trace_enable procedure of the DBMS_MONITOR package:

SQL> execute dbms_monitor.session_trace_enable(session_id=>99,    serial_num=>88,waits=>true,binds=>true);

Both the 10046 event and the dbms_monitor.session_trace_enable procedure shown here generate identical tracing information, called extended tracing because the trace includes wait and bind variable data.

If you aren’t using the new diagnostic infrastructure (ADR) introduced in Oracle Database 11g, make sure you set the dump file size to the value unlimited, as the 10046 trace often produces very large trace files, and the database will truncate the dump file if there isn’t enough space in the trace dump directory.

The level of tracing you specify for the 10046 trace determines which types of information are gathered by the trace. The default level is 1, which collects basic information. Level 4 allows you to capture the bind variable values, which are shown as :b1, :b2, and so on. You can see the actual values that Oracle substitutes for each of the bind variables. Level 8 provides all the information from a Level 1 trace plus details about all the wait events during the course of the execution of the SQL query. A Level 12 trace is a combination of the Level 4 and Level 8 traces, meaning it’ll include both bind variable and wait information. Level 16 is new in Oracle Database 11g and provides STAT line dumps for each execution of the query. Note that this is the same as setting the value all_executions for the plan_stat parameter when you trace with the dbms_monitor.session_trace_enable procedure.
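
For example, if you want wait event details but don’t need bind values, a level 8 trace keeps the trace file smaller than a full level 12 trace:

```sql
SQL> alter session set events '10046 trace name context forever, level 8';
SQL> -- run the statements of interest, then:
SQL> alter session set events '10046 trace name context off';
```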

image Tip  If the session doesn’t close cleanly before you disable tracing, your trace file may not include important trace information.

While getting a 10046 trace and analyzing it does provide valuable information regarding the usage of bind variables and the wait events, you need to be careful about when to trace sessions. If the instance as a whole is poorly performing, your tracing might even make performance worse due to overhead imposed by running the trace. In addition, it takes time to complete the trace, run it through TKPROF or some other profiler, and go through the dozens of SQL statements in the report. Here is a general strategy that has worked for us in our own work:

If you’re diagnosing general performance problems, a good first step would be to get an AWR report, ideally taking several snapshots spaced 1–15 minutes apart. Often, you can identify the problem from a review of the AWR report. The report will highlight things such as inefficient SQL statements, contention of various types, memory issues, latching, and full table scans that are affecting performance. You can get all this information without running a 10046 trace.

Run a 10046 trace when a user reports a problem and you can’t identify a problem through the AWR reports. If you’ve clearly identified a process that is performing poorly, you can trace the relevant session. You can also run this trace in a development environment to help developers understand the query execution details and tune their queries.

10-20. Generating an Event 10046 Trace for an Instance

Problem

You want to trace a problem SQL query, but you can’t identify the session in advance. You would like to trace all SQL statements executed by the instance.

Solution

You can turn on tracing at the instance level with the following alter system command, after connecting to the instance you want to trace.

SQL> alter system set events '10046 trace name context forever,level 12';

The previous command enables the tracing of all sessions that start after you issue the command—it won’t trace sessions that are already connected.

You disable the trace by issuing the following command:

SQL> alter system set events '10046 trace name context off';

This command disables tracing for all sessions.

How It Works

Instance-wide tracing helps in cases where you know a problem query is running, but there’s no way to identify the session ahead of time. Make sure that you enable instance-wide tracing only when you have no other alternative, and turn it off as soon as you capture the necessary diagnostic information. In a busy environment, instance-wide tracing not only generates very large trace files but also adds significant overhead to the system workload.

As in the case of Recipe 10-18, background processes may keep writing to their trace files even after you disable this type of tracing. They can continue writing until the trace files reach their maximum size, until the directory containing the trace files exhausts the space allocated to it, or until you bounce the database.

10-21. Setting a Trace in a Running Session

Problem

You want to set a trace in a session, but the session has already started.

image Note  A user who phones to ask for help with a long-running query is a good example of a case in which you might want to initiate a trace in a currently executing session. Some business-intelligence queries, for example, run for dozens of minutes, even hours, so there is time to initiate a trace mid-query and diagnose a performance problem.

Solution

You can set a trace in a running session using the operating system process ID (SPID), with the help of the oradebug utility. Once you identify the SPID of the session you want to trace, issue the following commands to trace the session.

SQL> connect / as sysdba
SQL> oradebug setospid <SPID>
SQL> oradebug unlimit
SQL> oradebug event 10046 trace name context forever,level 12
SQL> oradebug event 10046 trace name context off

In the example shown here, we specified Level 12, but as with the 10046 trace you set with the alter session command, you can specify the lower tracing levels 4 or 8.
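Once tracing is active, you can also ask oradebug for the name and location of the trace file it is writing:

SQL> oradebug tracefile_name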

How It Works

The oradebug utility comes in handy when you can’t access the session you want to trace, or when the session has already started before you can set tracing. oradebug lets you attach to the session and start the SQL tracing. If you aren’t sure about the operating system PID (or SPID) associated with an Oracle session, you can find it with the following query.

SQL> select p.PID,p.SPID,s.SID
  2  from v$process p,v$session s
  3  where s.paddr = p.addr
  4* and s.sid = &SESSION_ID

oradebug is only a facility that allows you to set tracing—it’s not a tracing procedure by itself. The results of the 10046 trace you obtain with oradebug are identical to those you obtain with a normal event 10046 trace command.

In the example shown in the “Solution” section, we use the OS process ID (SPID) of the Oracle user’s session. You can instead specify the Oracle process identifier (PID) to trace a session:

SQL> connect / as sysdba
SQL> oradebug setorapid 9834
SQL> oradebug unlimit
SQL> oradebug event 10046 trace name context forever,level 12

In an Oracle RAC environment, as is the case with all other types of Oracle tracing, make sure you connect to the correct instance before starting the trace. As an alternative to using oradebug, you can use the dbms_system.set_sql_trace_in_session procedure to set a trace in a running session. Note that DBMS_SYSTEM is an older package, and the recommended way to trace sessions starting with the Oracle Database 10g release is to use the DBMS_MONITOR package.
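For completeness, here is a sketch of the older DBMS_SYSTEM approach; the SID and serial# values (123 and 4567) are placeholders you would look up in V$SESSION:

SQL> exec dbms_system.set_sql_trace_in_session(123, 4567, true);
SQL> -- ... let the session run the statements of interest ...
SQL> exec dbms_system.set_sql_trace_in_session(123, 4567, false);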

10-22. Enabling Tracing in a Session After a Login

Problem

You want to trace a user’s session, but that session starts executing queries immediately after it logs in.

Solution

If a session immediately begins executing a query after it logs in, it doesn’t give you enough time to get the session information and start tracing the session. In cases like this, you can create a logon trigger that automatically starts tracing the session once the session starts. Here is one way to create a logon trigger to set up a trace for sessions created by a specific user:

SQL> create or replace trigger trace_my_user
  2    after logon on database
  3    begin
  4     if user='SH' then
  5     dbms_monitor.session_trace_enable(null,null,true,true);
  6     end if;
  7*   end;
SQL> /
 
Trigger created.
 
SQL>

How It Works

Often, you find it hard to trace session activity because the session already starts executing statements before you can set up the trace. This is especially so in a RAC environment, where it is harder for the DBA to identify the instance and quickly set up tracing for a running session. A logon trigger is the perfect solution for such cases. Note that in a RAC environment, the database generates the trace files in the trace directory of the instance to which a user connected.

A logon trigger for tracing sessions is useful for tracing SQL statements issued by a specific user, by setting the trace as soon as the user logs in. From that point on, the database traces all SQL statements issued by that user. Make sure you disable the tracing and drop the logon trigger once you complete tracing the SQL statements you are interested in. If you granted the user the alter session privilege so the user could control tracing, remember to revoke that privilege as well.
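The cleanup itself is a one-liner (the trigger name matches the earlier example):

SQL> drop trigger trace_my_user;
 
Trigger dropped.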

10-23. Tracing the Optimizer’s Execution Path

Problem

You want to trace the cost-based optimizer (CBO) to examine the execution path for a SQL statement.

Solution

You can trace the optimizer’s execution path by setting the Oracle event 10053. Here are the steps.

  1. Set the trace identifier for the trace file.
    SQL> alter session set tracefile_identifier='10053_trace1';
    Session altered.
    SQL>
  2. Issue the alter session set events statement to start the trace.
    SQL> alter session set events '10053 trace name context forever,level 1';
    Session altered.
    SQL>
  3. Execute the SQL statement whose execution path you want to trace.
    SQL> select * from users
      2  where user_id=88 and
      3  account_status='OPEN'
      4  and username='SH';
    ...
    SQL>
  4. Turn the tracing off.
    SQL> alter session set events '10053 trace name context off';
    Session altered.
    SQL>

You can examine the raw trace file directly to learn how the optimizer went about its business in selecting the execution plan for the SQL statement.
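If you aren’t sure where the database wrote the trace file, you can ask the session itself; in Oracle Database 11g, the V$DIAG_INFO view shows the current session’s trace file name:

SQL> select value from v$diag_info
  2  where name = 'Default Trace File';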

How It Works

An event 10053 trace gives you insight into the way the optimizer does its job in selecting what it estimates to be the optimal execution plan for a SQL statement. For example, you may wonder why the optimizer didn’t use an index in a specific case—the event 10053 trace shows you the logic used by the optimizer in skipping that index. The optimizer considers the available statistics for all objects in the query and evaluates various join orders and access paths. The event 10053 trace also reveals all the evaluations performed by the optimizer and how it arrived at the best join order and the best access path to use in executing a query.

You can set either Level 1 or Level 2 for the event 10053 trace. Level 2 captures the following types of information:

  • Column statistics
  • Single access paths
  • Table joins considered by the optimizer
  • Join costs
  • Join methods considered by the optimizer

A Level 1 trace includes all the foregoing, plus a listing of all the default initialization parameters used by the optimizer. You’ll also find detailed index statistics used by the optimizer in determining the best execution plan. The trace file captures the amazing array of statistics considered by the cost optimizer and explains how the CBO creates the execution plan. Here are some of the important things you’ll find in the CBO trace file.

  • List of all internal optimizer-related initialization parameters
  • Peeked values of the binds in the SQL statement
  • Final query after optimizer transformations
  • System statistics (CPUSPEEDNW, IOTFRSPEED, IOSEEKTIM, MBRC)
  • Access path analysis for all objects in the query
  • Join order evaluation

Unlike a raw 10046 event trace file, a 10053 event trace file is quite easy (and interesting) to read. You must understand here that a 10053 trace will be generated only when a hard parse is required. Bear in mind that the trace files might at times be hard to follow when several tables with multiple indexes are analyzed during optimization. Also, the contents of a 10053 event trace file are subject to change from one Oracle Database release to the next.

Here are key excerpts from our trace file. The trace file shows the cost-based query transformations applied by the optimizer:

OBYE:   Considering Order-by Elimination from view SEL$1 (#0)
OBYE:     OBYE performed.

In this case, the optimizer eliminated the order by clause in our SQL statement. After performing all its transformations, the optimizer arrives at the “final query after transformations,” which is shown here:

select channel_id,count(*)
from sh.sales
group by channel_id

Next, the output file shows the access path analysis for each of the tables in your query.

Access path analysis for SALES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for SALES[SALES]
  Table: SALES  Alias: SALES
    Card: Original: 918843.000000  Rounded: 918843  Computed: 918843.00  Non Adjusted: 918843.00
  Access Path: TableScan
    Cost:  495.47  Resp: 495.47  Degree: 0
      Cost_io: 481.00  Cost_cpu: 205554857
      Resp_io: 481.00  Resp_cpu: 205554857
  Access Path: index (index (FFS))
    Index: SALES_CHANNEL_BIX
    resc_io: 42.30  resc_cpu: 312277
    ix_sel: 0.000000  ix_sel_with_filters: 1.000000
  Access Path: index (FFS)
    Cost:  42.32  Resp: 42.32  Degree: 1
      Cost_io: 42.30  Cost_cpu: 312277
      Resp_io: 42.30  Resp_cpu: 312277
  ****** trying bitmap/domain indexes ******
  Access Path: index (FullScan)
    Index: SALES_CHANNEL_BIX
    resc_io: 75.00  resc_cpu: 552508
    ix_sel: 1.000000  ix_sel_with_filters: 1.000000
    Cost: 75.04  Resp: 75.04  Degree: 0
  Access Path: index (FullScan)
    Index: SALES_CHANNEL_BIX
    resc_io: 75.00  resc_cpu: 552508
    ix_sel: 1.000000  ix_sel_with_filters: 1.000000
    Cost: 75.04  Resp: 75.04  Degree: 0
  Bitmap nodes:
    Used SALES_CHANNEL_BIX
      Cost = 75.038890, sel = 1.000000
  Access path: Bitmap index - accepted
    Cost: 75.038890 Cost_io: 75.000000 Cost_cpu: 552508.000000 Sel: 1.000000
    Believed to be index-only
  ****** finished trying bitmap/domain indexes ******
******** Begin index join costing ********
******** End index join costing ********
  Best:: AccessPath: IndexFFS
  Index: SALES_CHANNEL_BIX
         Cost: 42.32  Degree: 1  Resp: 42.32  Card: 918843.00  Bytes: 0

In this case, the optimizer evaluates various access paths and shows the optimal access path as an Index Fast Full Scan (IndexFFS).

The optimizer then considers various permutations of join orders and estimates the cost for each join order it considers:

Considering cardinality-based initial join order.
Join order[1]:  SALES[SALES]#0
GROUP BY sort
GROUP BY adjustment factor: 1.000000
      Total IO sort cost: 0      Total CPU sort cost: 834280255
      Best so far:  Table#: 0  cost: 101.0459  card: 918843.0000  bytes: 2756529
Number of join permutations tried: 1
GROUP BY adjustment factor: 1.000000
GROUP BY cardinality:  4.000000, TABLE cardinality:  918843.000000
     Total IO sort cost: 0      Total CPU sort cost: 834280255
  Best join order: 1
  Cost: 101.0459  Degree: 1  Card: 918843.0000  Bytes: 2756529

As our brief review of the 10053 trace output shows, you can get answers to puzzling questions such as why exactly the optimizer chose a certain join order or an access path, and why it ignored an index. The answers are all there!

10-24. Generating Automatic Oracle Error Traces

Problem

You want to create an automatic error dump file when a specific Oracle error occurs.

Solution

You can create error dumps to diagnose various problems in the database by specifying the error number in a hanganalyze or systemstate command. For example, diagnosing the causes for deadlocks is often tricky. You can ask the database to dump a trace file when it hits the ORA-00060: Deadlock detected error. To do this, specify the event number 60 with the hanganalyze or the systemstate command:

SQL> alter session set events '60 trace name hanganalyze level 4';
 
Session altered.
 
SQL> alter session set events '60 trace name systemstate level 266';
 
Session altered.
 
SQL>

Both of these commands will trigger the automatic dumping of diagnostic data when the database next encounters the ORA-00060 error. You can use the same technique in an Oracle RAC database. For example, you can issue the following command to generate automatic hanganalyze dumps:

SQL> alter session set events '60 trace name hanganalyze_global level 4';

This alter session statement invokes the hanganalyze command in any instance in which the database encounters the ORA-00060 error.

Although we showed how to set the error trace event at the session level, for hard-to-catch error conditions it’s better to set the trace event at the system level, because it isn’t always easy to reproduce a problem in one’s own session.
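The system-level version of the command simply substitutes alter system for alter session:

SQL> alter system set events '60 trace name hanganalyze level 4';
 
System altered.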

How It Works

Setting event numbers for an error will ensure that when the specified error occurs the next time, Oracle automatically dumps the error information for you. This comes in very handy when you’re diagnosing an error that occurs occasionally and getting a current systemstate dump or a hanganalyze dump is unhelpful. Some events such as deadlocks have a text alias, in which case you can specify the alias instead of the error number. For the ORA-00060 error, the text alias is deadlock, and so you can issue the following command for tracing the error:

SQL> alter session set events 'deadlock trace name systemstate level 266';
 
Session altered.
 
SQL>

Similarly, you can use text aliases wherever they’re available for other error events.

10-25. Tracing a Background Process

Problem

You want to trace a background process.

Solution

If you aren’t sure of the correct name of the background process you want to trace, you can list the exact names of all background processes by issuing the following command:

SQL> select name,description from v$bgprocess;

Suppose you want to trace the dbw0 process. Issue the following commands to start and stop the trace.

SQL> alter system set events 'sql_trace {process:pname=dbw0}';
System altered.
SQL> alter system set events 'sql_trace {process:pname=dbw0} off';
System altered.
SQL>

You can trace two background processes at the same time by specifying the pipe (|) character to separate the process names:

SQL> alter system set events 'sql_trace {process:pname=dbw0|dbw1}';
System altered.
 
SQL> alter system set events 'sql_trace {process:pname=dbw0|dbw1} off';
System altered.
SQL>

How It Works

Oracle lets you issue an alter system set events command to trace a process (or a set of processes) or a specific SQL statement (or a set of statements). This recipe shows how to trace a background process by specifying the process name.

As in the case of Recipes 10-18 and 10-20, background processes may keep writing to their trace files even after you disable this type of tracing. They can continue writing until the trace files reach their maximum size, until the directory containing the trace files exhausts the space allocated to it, or until you bounce the database.

10-26. Enabling Oracle Listener Tracing

Problem

You want to trace the Oracle listener, in order to diagnose and troubleshoot user connection issues.

Solution

To generate a trace for the Oracle listener, add the following two lines in the listener.ora file, which is by default located in the $ORACLE_HOME/network/admin directory.

trace_level_listener=support
trace_timestamp_listener=true

You can optionally specify a file name for the listener trace by adding the line trace_file_listener=<file_name>. Reload the listener with the lsnrctl reload command, and then check the status of the listener with the lsnrctl status command. You should see the trace file listed in the output:

C:\>lsnrctl reload
 
The command completed successfully
 
C:\>lsnrctl status
...
 
Listener Parameter File   C:\app\ora\product\11.2.0\dbhome_1\network\admin\listener.ora
Listener Log File         c:\app\ora\diag\tnslsnr\MIROPC61\listener\alert\log.xml
Listener Trace File       c:\app\ora\diag\tnslsnr\MIROPC61\listener\trace\ora_8640_9960.trc
...
  Instance "orcl1", status READY, has 1 handler(s) for this service...
Service "orcl1XDB.miro.local" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
The command completed successfully
 
C:\>

The output shows the listener trace file you’ve just configured.

image Note  This and the next recipe don't have anything to do with SQL tracing, which is the focus of this chapter. However, we thought we'd add these two recipes because they're useful, and there's no better chapter to put them in!

How It Works

You can specify various levels for listener tracing. You can specify Level 4 (user) for user trace information and Level 10 (admin) for administrative trace information. Oracle Support may request Level 16 (support) for troubleshooting Oracle listener issues.

If you can’t reload or restart the listener, you can configure the tracing dynamically by issuing the following commands:

C:\>lsnrctl
 
LSNRCTL> set current_listener listener
Current Listener is listener
LSNRCTL> set trc_level 16
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
LISTENER parameter "trc_level" set to support
The command completed successfully
LSNRCTL>

You can turn off listener tracing by issuing the command set trc_level off, which is the default value for this parameter:

LSNRCTL> set trc_level off
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
LISTENER parameter "trc_level" set to off
The command completed successfully
LSNRCTL>

You’ll notice that the lsnrctl status command doesn’t show the listener trace file any longer. Note that the trace file for the listener will not be in the $ORACLE_HOME\network\admin directory. Rather, the database stores the trace file in the trace directory of the Automatic Diagnostic Repository (ADR), under the tnslsnr directory, as shown here:

Listener Trace File      c:\app\ora\diag\tnslsnr\myhost\listener\trace\ora_8640_9960.trc

10-27. Setting Archive Tracing for Data Guard

Problem

You want to trace the creation and transmission of the archive logs in a Data Guard environment.

Solution

You can trace the archive logs on either the primary or the standby database by setting the log_archive_trace initialization parameter:

log_archive_trace=trace_level(integer)

For example, if you want to trace the archive log destination activity, you set tracing at level 8, as shown here:

SQL> alter system set log_archive_trace=8;

By default the log_archive_trace parameter is set to zero, meaning archive log tracing is disabled.
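You can check the current setting before changing it:

SQL> show parameter log_archive_trace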

How It Works

Although archive log tracing is disabled when you leave the log_archive_trace parameter at its default level of zero, the database will still record error conditions in the trace files and the alert log. When you set the log_archive_trace parameter to a non-zero value, Oracle writes the trace output generated by the archiving processes to the same trace directory as the SQL trace files—the trace directory in the ADR (this is the same as the user dump directory specified by the user_dump_dest initialization parameter).

On the primary database, the log_archive_trace parameter controls the output of the ARCn (archiver), FAL (fetch archived log), and the LGWR (log writer) background processes. On the standby databases, it traces the work of the ARCn, RFS (remote file server), and the FAL processes.

You can specify any of 17 levels of archive log tracing. Here’s what the tracing levels mean:

0: Disables archivelog tracing (default)

1: Tracks archival of redo log file

2: Tracks archival status of each archivelog destination

4: Tracks archival operational phase

8: Tracks archivelog destination activity

16: Tracks detailed archivelog destination activity

32: Tracks archivelog destination parameter modifications

64: Tracks ARCn process state activity

128: Tracks FAL (fetch archived log) server related activities

256: Tracks RFS Logical Client

512: Tracks LGWR redo shipping network activity

1024: Tracks RFS Physical Client

2048: Tracks RFS/ARCn Ping Heartbeat

4096: Tracks Real Time Apply

8192: Tracks Redo Apply (Media Recovery or Physical Standby)

16384: Tracks redo transport buffer management

32768: Tracks LogMiner dictionary

When you specify a higher level of tracing, the trace will include information from all lower tracing levels. For example, if you specify Level 15, the trace file will include trace information from Levels 1, 2, 4, and 8.
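Because the levels are binary flags, you can also combine non-adjacent levels by adding their values; for example, to track both FAL server activity (128) and archivelog destination activity (8), set the parameter to 136:

SQL> alter system set log_archive_trace=136;
 
System altered.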
