The Capture process uses both LogMiner and Advanced Queuing to accomplish its task (note: Synchronous Capture uses internal triggers instead of LogMiner). The Capture process uses a LogMiner process to examine the database redo logs for changes. A Capture process references a set of user-defined rules that determines exactly what needs to be captured for the Stream. These Capture rules identify the specific changes to be captured from the redo logs.
These changes are then formatted into Logical Change Records (LCRs) and placed (enqueued) into an advanced queue. In most cases, the queue is a buffered queue (more about LCRs and buffered and persistent queues a little later). This method of capture enqueueing is called "Implicit Capture" and is most often used in a Streams environment. The following image shows the process:
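To give a concrete sense of how these Capture rules are defined, here is a minimal sketch using the DBMS_STREAMS_ADM package (the strmadmin queue, Capture name, and HR.EMPLOYEES table are hypothetical placeholders):

```sql
-- Create a Capture process (if one does not already exist) and add rules
-- instructing it to capture DML and DDL for a single table.
-- Run as the Streams administrator.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',            -- hypothetical table
    streams_type => 'capture',                 -- these are Capture rules
    streams_name => 'capture_stream',          -- hypothetical Capture name
    queue_name   => 'strmadmin.streams_queue', -- queue the LCRs are enqueued into
    include_dml  => TRUE,                      -- capture DML changes
    include_ddl  => TRUE);                     -- capture DDL changes
END;
/
```

If the named Capture process does not already exist, ADD_TABLE_RULES creates it along with its rule set.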
The other method of capturing involves user generation and enqueuing of a message directly into a buffered or persistent queue. This method is called "Explicit Capture" and is usually done by application software. These explicit messages can be either a user defined message or an LCR. For a more detailed explanation on Explicit Capture, refer to the Oracle Streams Concepts and Administration Guide.
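As an illustration of Explicit Capture, an application can wrap its own message in an ANYDATA payload and enqueue it directly; the following is a sketch based on the DBMS_STREAMS_MESSAGING package (the queue name and message content are hypothetical):

```sql
-- Explicitly enqueue a user-defined message into a Streams queue.
DECLARE
  payload SYS.ANYDATA;
BEGIN
  payload := SYS.ANYDATA.ConvertVarchar2('order 42 shipped');
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.streams_queue',  -- hypothetical queue
    payload    => payload);
  COMMIT;
END;
/
```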
A Capture process can capture the majority of database transactions. The Capture process specifically captures DML and DDL. The Streams Capture process can capture DML on columns of the standard Oracle scalar datatypes, such as CHAR, NCHAR, VARCHAR2, NVARCHAR2, NUMBER, DATE, the TIMESTAMP and INTERVAL types, RAW, and the LOB types.
In turn, the Capture process can capture the following DDL:

- Tables
- Indexes
- Views
- Sequences
- Synonyms
- PL/SQL packages, procedures, and functions
- Triggers
- Changes to users or roles
There are limitations with the Capture process. The following DDL commands are not captured:

- CREATE DATABASE
- ALTER DATABASE
- CREATE CONTROLFILE
- CREATE PFILE
- CREATE SPFILE
If you take a careful look at the list above, you may notice that these commands are DDL that are instance specific. You want to avoid replicating them, so that you do not end up corrupting the target instance.
In addition, there are object-specific DDL statements that are not supported by Streams.
Looking at the lists above, one can start to think, "Is there a quick way to tell if my environment can be streamed?" Yes, Oracle Development did provide a quick way to find out. Simply query the DBA_STREAMS_UNSUPPORTED view to find out why a particular table cannot be streamed. We suggest that you query this view as part of planning your Streams environment.
SELECT * FROM DBA_STREAMS_UNSUPPORTED;
Pay particular attention to the REASON and AUTO_FILTERED columns. The REASON column is self-explanatory. As for AUTO_FILTERED, if you see a YES value, Streams automatically filters the object out of the stream.
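For example, to narrow the check to a single schema (HR is just an illustrative owner):

```sql
SELECT table_name, reason, auto_filtered
  FROM dba_streams_unsupported
 WHERE owner = 'HR'
 ORDER BY table_name;
```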
Possible reasons include:
- Index Organized Table (IOT)
- Column with user-defined type
- Unsupported column exists
- Object table
- AQ queue table
- Temporary table
- Sub object
- External table
- Materialized view
- FILE column exists
- Materialized view log
- Materialized view container table
- Streams unsupported object
- Domain index
- IOT with overflow
- IOT with LOB
- IOT with physical ROWID mapping
- Mapping table for physical ROWID of IOT
- IOT with row movement
- Summary container table
The Capture process is the first Streams-specific process. However, if you look again at the diagram, you will see that LogMiner is also in the picture. The Capture process does not do everything by itself; it uses LogMiner to do all the "heavy lifting", taking advantage of LogMiner's ability to mine the database redo logs.
In 9i, the LogMiner tablespace defaulted to the SYSTEM tablespace. As of 10g, it defaults to the SYSAUX tablespace. As there will be additional use of LogMiner in a Streams environment, we recommend that you isolate the LogMiner tables in their own tablespace. This can be accomplished with the following script:

CREATE TABLESPACE LOGMNRTS DATAFILE '/u05/oracle/data/logmnrtbs.dbf'
  SIZE 100M AUTOEXTEND ON MAXSIZE UNLIMITED;

BEGIN
  DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
END;
/
This can help eliminate possible fragmentation in the SYSTEM or SYSAUX tablespace, where the LogMiner tables are created by default. Depending on your tablespace file-to-disk distribution, it can also help with performance. If your database has been upgraded from an earlier version, the LogMiner tablespace may well be set to the SYSTEM tablespace. If it is, you are strongly cautioned to use the above method to reset the LogMiner tablespace to a non-system tablespace.
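If you are unsure where LogMiner currently resides, the V$SYSAUX_OCCUPANTS view (available as of 10g) can confirm it before you move anything:

```sql
-- Shows LogMiner's current schema, its space usage, and the documented
-- procedure for relocating it out of SYSAUX.
SELECT occupant_name, schema_name, move_procedure, space_usage_kbytes
  FROM v$sysaux_occupants
 WHERE occupant_name = 'LOGMNR';
```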
To actually identify the Capture and LogMiner processes that are running on the source database, look on the host for the background processes named CPnn for Capture and MSnn for LogMiner, where nn is a combination of letters and numbers. Neither of these processes is necessarily running at all times, so they should be monitored over a period of time. Also, there may be multiple Capture and/or LogMiner processes running.
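From inside the database, the V$STREAMS_CAPTURE view offers a complementary check; its session columns can then be matched to the CPnn operating system processes on the host:

```sql
-- One row per running Capture process, with its current state.
SELECT capture_name, state, sid, serial#, total_messages_captured
  FROM v$streams_capture;
```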
The Capture process usually resides on the Source database. This configuration is called Local Capture (and sometimes Upstream Capture). The Source database is defined as containing both the Capture process and the tables being captured. There is another Capture configuration that can be used called Downstream Capture. For now, we will just give a quick example of when and why a Downstream Capture would be configured.
The Capture process consumes resources (memory and CPU) from the host. This may not be optimal in a high-volume production environment (this is but one case where Downstream Capture comes into play). Downstream Capture allows the Capture process and queue to be moved to another staging node. That staging node is the "worker" that can afford the additional overhead of Capture. Downstream Capture uses standby archived log destinations (just like those used by Data Guard) defined at the source database to direct a copy of the redo to the staging node. The Capture process at the staging node then mines those redo copies and enqueues the necessary LCRs. Propagation processes on the staging node then send the LCRs to the appropriate destination database sites. We will cover Downstream Capture and other advanced configurations in more detail in Chapter 6.
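The redo transport piece of that configuration is simply an archived log destination defined at the source database. The following is a sketch only; the destination number, service name, and DB_UNIQUE_NAME are hypothetical, and Chapter 6 covers the full setup:

```sql
-- On the source database: ship a copy of the redo to the staging node.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=staging_db ASYNC NOREGISTER
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=staging_db';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
```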
We need to be clear on this: Synchronous Capture is not synchronous replication.
Where regular Implicit Capture depends on LogMiner to extract data changes from the redo, Synchronous Capture actually enqueues the data change to its Capture queue directly when the change is committed at the source.
Synchronous Capture (SC) does have some limitations and differences from Implicit Capture. Notably, Synchronous Capture cannot be configured with the DBMS_STREAMS_ADM.MAINTAIN_*_SCRIPTS procedures. We will cover Synchronous Capture and other advanced configurations in more detail in Chapter 6.
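As a preview, a Synchronous Capture is created with table-level rules much like Implicit Capture, but with a streams_type of 'sync_capture' (the table, Capture, and queue names below are hypothetical):

```sql
-- Create a Synchronous Capture for a single table.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',             -- hypothetical table
    streams_type => 'sync_capture',             -- Synchronous Capture, not LogMiner-based
    streams_name => 'sync_capture_stream',      -- hypothetical Capture name
    queue_name   => 'strmadmin.streams_queue'); -- persistent queue used by SC
END;
/
```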