Capture process—what are we supposed to stream?

The Capture process uses both LogMiner and Advanced Queuing to accomplish its task (note: Synchronous Capture uses internal triggers instead of LogMiner). The Capture process employs a LogMiner process to examine the database redo logs for changes, and references a set of user-defined rules that determine exactly what needs to be captured for the Stream. These Capture rules identify the specific changes to be captured from the redo logs.

These changes are then formatted into Logical Change Records (LCRs) and placed (enqueued) into an advanced queue. In most cases, the queue is a buffered queue (more about LCRs and buffered and persistent queues a little later). This method of capture and enqueuing is called "Implicit Capture" and is the one most often used in a Streams environment. The following image shows the process:

[Figure: Capture process—what are we supposed to stream?]

The other method of capturing involves user generation and enqueuing of a message directly into a buffered or persistent queue. This method is called "Explicit Capture" and is usually performed by application software. These explicit messages can be either user-defined messages or LCRs. For a more detailed explanation of Explicit Capture, refer to the Oracle Streams Concepts and Administration Guide.
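An Implicit Capture process and its rule set are typically created with the DBMS_STREAMS_ADM package. The following is a minimal sketch; the capture name, queue name, and table name are placeholders for your own environment:

```sql
-- Sketch only: the queue, capture, and table names are hypothetical.
-- ADD_TABLE_RULES creates the Capture process (if it does not already
-- exist) along with a positive rule set that captures both DML and DDL
-- changes for the HR.EMPLOYEES table.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'capture',
    streams_name => 'capture_hr',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => TRUE);
END;
/
```

Run as the Streams administrator; the procedure also adds the rules to the Capture process's positive rule set so only matching changes are enqueued.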

A Capture process can capture the majority of database transactions; specifically, it captures DML and DDL changes. The Streams Capture process can capture DML on columns of the following datatypes:

VARCHAR2

NVARCHAR2

FLOAT

NUMBER

LONG

DATE

BINARY_FLOAT

BINARY_DOUBLE

TIMESTAMP

TIMESTAMP WITH TIME ZONE

TIMESTAMP WITH LOCAL TIME ZONE

INTERVAL YEAR TO MONTH

INTERVAL DAY TO SECOND

RAW

LONG RAW

CHAR

NCHAR

UROWID

CLOB with BASICFILE storage

NCLOB with BASICFILE storage

BLOB with BASICFILE storage

XMLType stored as CLOB

In turn, the Capture process can capture the following DDL:

Tables

Indexes

Views

Sequences

Synonyms

PL/SQL packages, procedures, and functions

Triggers

Changes to users or roles

GRANT or REVOKE on users or roles

There are limitations to the Capture process. The following DDL commands are not captured:

ALTER SESSION

ALTER SYSTEM

CALL or EXECUTE for PL/SQL procedures

EXPLAIN PLAN

LOCK TABLE

SET ROLE

NOLOGGING or UNRECOVERABLE operations

FLASHBACK DATABASE

If you take a careful look at the list above, you may notice that these are instance-specific DDL commands. You want to avoid replicating them so that you do not end up corrupting the target instance.

In addition, there are object-specific DDL commands that are not supported by Streams:

CREATE CONTROLFILE

CREATE or ALTER DATABASE

CREATE, ALTER, or DROP MATERIALIZED VIEW LOG

CREATE, ALTER, or DROP MATERIALIZED VIEW

CREATE, ALTER, or DROP SUMMARY

CREATE SCHEMA

CREATE PFILE

CREATE SPFILE

RENAME (Use ALTER TABLE instead.)

Looking at the lists above, one might ask, "Is there a quick way to tell if my environment can be streamed?" Yes, Oracle Development did provide a quick way to find out. Simply query the DBA_STREAMS_UNSUPPORTED view to find out why a particular table cannot be streamed. We suggest that you query this view as part of planning your Streams environment.

SELECT * FROM DBA_STREAMS_UNSUPPORTED;

Pay particular attention to the REASON and AUTO_FILTERED columns. The REASON column is self-explanatory. As for AUTO_FILTERED, a value of YES means that Streams automatically filters the object out of the stream.
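For example, you can narrow the check to the schemas you intend to replicate. The schema name below is only a placeholder:

```sql
-- OWNER, TABLE_NAME, REASON, and AUTO_FILTERED are the view's key
-- columns; the HR filter is illustrative.
SELECT owner, table_name, reason, auto_filtered
  FROM dba_streams_unsupported
 WHERE owner = 'HR'
 ORDER BY table_name;
```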

Possible reasons include:

Index Organized Table (IOT)

Column with user-defined type

Unsupported column exists

Object table

AQ queue table

Temporary table

Sub object

External table

Materialized view

FILE column exists

Materialized view log

Materialized view container table

Streams unsupported object

Domain index

IOT with overflow

IOT with LOB

IOT with physical Rowid mapping

Mapping table for physical row id of IOT

IOT with row movement

Summary container table

The Capture process is the first Streams-specific process. However, if you look again at the diagram, you will see that LogMiner is also in the picture. The Capture process does not do everything by itself; it uses LogMiner to do the "heavy lifting", taking advantage of LogMiner's ability to mine the database redo logs.

In 9i, the LogMiner tablespace defaulted to the SYSTEM tablespace. As of 10g, it defaults to the SYSAUX tablespace. As there will be additional use of LogMiner in a Streams environment, we recommend that you isolate the tables related to LogMiner in their own tablespace. This can be accomplished with the following statements.

CREATE TABLESPACE LOGMNRTS DATAFILE '/u05/oracle/data/logmnrtbs.dbf'
SIZE 100M AUTOEXTEND ON MAXSIZE UNLIMITED;

BEGIN
DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
END;
/

This can help eliminate possible fragmentation in the SYSTEM or SYSAUX tablespace, where the LogMiner tables are created by default. Depending on your datafile-to-disk distribution, it can also help with performance. If your database has been upgraded from an earlier version, the LogMiner tablespace may well still be set to the SYSTEM tablespace. If it is, you are strongly cautioned to use the above method to move the LogMiner tables to a non-system tablespace.

To identify the Capture and LogMiner processes running on the source database, look on the host for background processes named CPnn for Capture and MSnn for LogMiner, where nn is a combination of letters and numbers. Neither of these processes necessarily runs constantly, so they should be monitored over time. Also, there may be multiple Capture and/or LogMiner processes running.
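From inside the database, the V$STREAMS_CAPTURE dynamic performance view is a convenient way to match a running Capture process to its session and check what it is doing. A quick sketch:

```sql
-- Each row represents a running Capture process; STATE shows its
-- current activity (for example, CAPTURING CHANGES).
SELECT capture_name, state, sid, serial#, total_messages_captured
  FROM v$streams_capture;
```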

Downstream Capture

The Capture process usually resides on the Source database. This configuration is called Local Capture (and sometimes Upstream Capture). The Source database is defined as containing both the Capture process and the tables being captured. There is another Capture configuration that can be used called Downstream Capture. For now, we will just give a quick example of when and why a Downstream Capture would be configured.

[Figure: Downstream Capture]

The Capture process consumes resources (memory and CPU) from the host. This may not be optimal in a high-volume production environment (this is but one case where Downstream Capture comes into play). Downstream Capture allows the Capture process and queue to be moved to another staging node. That staging node is the "worker" that can afford the additional overhead of Capture. Downstream Capture uses standby archived log destinations (just like those used by Data Guard) defined at the source database to direct a copy of the redo to the staging node. The Capture process at the staging node then mines those redo copies and enqueues the necessary LCRs. Propagation processes on the staging node then send the LCRs to the appropriate destination database sites. We will cover Downstream Capture and other advanced configurations in more detail in Chapter 6.
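On the source database, the redo copy is directed to the staging node with an archive log destination parameter, much as you would configure for Data Guard. A sketch for a real-time downstream configuration; the service name and DB_UNIQUE_NAME are placeholders for your staging node:

```sql
-- dwnstrm_db is a hypothetical TNS alias / DB_UNIQUE_NAME for the
-- staging node that will run the downstream Capture process.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=dwnstrm_db ASYNC NOREGISTER
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=dwnstrm_db';
```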

Synchronous Capture

To be clear: Synchronous Capture is not synchronous replication.

Where regular Implicit Capture depends on LogMiner to extract data changes from the redo, Synchronous Capture actually enqueues the data change to its Capture queue directly when the change is committed at the source.

Synchronous Capture (SC) does have some limitations and differences from Implicit Capture. They are as follows:

  • SC can only be created at the Table or Subset level, not at the Schema or Global level
  • SC cannot be created using the DBMS_STREAMS_ADM.MAINTAIN_*_SCRIPTS procedures
  • SC uses a persistent queue (queue data is stored on disk), so it requires a slightly different configuration than normal Implicit Capture
  • SC only captures DML, no DDL
  • SC does not capture changes for the following datatypes:
    • LONG
    • LONG RAW
    • CLOB
    • NCLOB
    • BLOB
    • BFILE
    • ROWID
    • User-defined types (including object types, REFs, varrays, and nested tables)
    • Oracle-supplied types (including ANY types, XML types, spatial types, and media types)
  • SC can only capture changes for an Index Organized Table (IOT) if it does not contain any of the above listed datatypes
  • SC is not a valid configuration for Combined Capture and Apply (this requires a buffered (in memory) capture queue)
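Given these restrictions, a Synchronous Capture is created at the table level with the same DBMS_STREAMS_ADM.ADD_TABLE_RULES procedure used for Implicit Capture, but with streams_type set to 'sync_capture'. A sketch with placeholder names:

```sql
-- The queue referenced here must have a persistent (on-disk) portion
-- to hold the captured LCRs; all names are illustrative.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture_hr',
    queue_name   => 'strmadmin.sync_queue');
END;
/
```

Note that only DML rules are allowed here, consistent with the "DML only" limitation above.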

We will cover Synchronous Capture and other advanced configurations in more detail in Chapter 6.
