Chapter 4. Single-Source Configuration

The stream flows one way: Downhill

In this chapter, we will look at configuring single-source streams replication using Enterprise Manager (EM) DB Console (this can also be applied if you are using Enterprise Manager Grid Control). We will also provide the PL/SQL API calls being issued behind the scenes. The PL/SQL for these calls can be copied to a script and used to configure Streams from the command line as well. This chapter is organized as follows:

  • The Enterprise Manager: This takes you step-by-step through the EM Streams setup wizard, using a schema-level replication example
  • The code behind the curtain: A step-by-step walk-through of the PL/SQL code that configures the Stream
  • Sequences and triggers and Apply: Points out the expected behavior of sequences and triggers in our example, as well as special case considerations
  • Other levels at which to replicate: Provides a brief description of the other replication levels that can be used and the beauty of the DBMS_STREAMS_ADM.MAINTAIN_* scripts

The pre-requisite for the examples in this chapter is to include the demo schemas in your source database. Please see Chapter 3, Prepare the Rafts and Secure Your Gear: The pre-work before configuring Oracle 11g Streams, for instructions on including the demo schemas in your database if you wish to "practice along" with our example.

For our example, we are going to set up Streams for the HR Schema between two instances. STRM1 will be our source master database and STRM2 will be our destination target database.

The single-source model is uni-directional replication. This means that changes flow only one way: from the source master to the destination target. Thus, we will create a capture queue and process at the source master, and a propagation job and process at the source master pointing to the destination target (queue to queue). We then create an Apply queue and process at the destination target.

The Enterprise Manager

Log into the EM DBConsole for the source master database.

At the homepage, click on the Data Movement sub-tab.

The Enterprise Manager

Click on the Setup link under Streams.

The Enterprise Manager

Tip

The EM has a 15-minute idle time-out by default. If you are not able to complete the configuration pages within that time period, you will need to start over. So, it doesn't hurt to read through this section first, determine what you will need to enter where, and then start the setup.

Setup options

Select Streams Global, Schema, Table and Subset Replication Wizard and click Continue.

Setup options

Specify the Streams Administration username and password.

OR

If your designated Streams Administration user does not already exist in the source master database, click the Create Streams Administrator button.

Setup options

Enter a SYSDBA username and password, and the username and password for the Streams Administration user. Take note of the privileges that will be granted to the new user.

Setup options

Click OK.

This will return you to the Specify Streams Administrator screen. Click Next.

Setup options

Next, you will specify information for the destination or target database.

Setup options

Again, if the Streams Administrator user does not exist on the target database, click the Create Streams Administrator button.

Specify the appropriate information and click OK.

Setup options

This will take you back to the configure destination database page. Click Next.

Setup options

Configure replication

This page allows us to specify our replication options. First, we specify what level of replication we want to configure. The levels are as follows:

  1. Global Rule means we want to replicate all the schemas and their objects in the database (of course, this does not include schemas proprietary to the database, such as SYS, SYSTEM, and so on). We will have the option of excluding specific schemas and/or tables in a later step.
  2. Schema Rule means we want to replicate all objects in one or more specific schemas. We will have the option of excluding specific tables in the schemas specified in a later step.
  3. Table Rule means we want to replicate only specific tables. This can be all the data in the table or a subset of data in the table through a where clause specified in a later step. The tables do not need to belong to the same schema.

In this example, we want to replicate the full HR schema. So, we will select Schema Rule.

Processes

Here we specify the names for each of the processes that we will want to use/create.

Tip

Specify a name that is specific to the process configuration. Why? You can have multiple capture, propagate, and Apply processes in an instance. Different rules can be associated with these different processes. Using meaningful names provides a quick visual aid to differentiate between the processes.

In our example, we are going to use the following naming to help us identify which processes work together to support this configuration. We will use SCHEMA_HR and forego using STREAMS since we already know it is Streams. Keep in mind that you do have a length limitation of 30 characters here. Keep it short. Feel free to abbreviate. KISS it.

Directory objects

The Streams setup wizard will use Data Pump to instantiate the destination/target database. This is where you specify where the resulting dump and log files will be written/read. You can use an existing directory (click on the flashlight to select from a list), or create one. To make life easy, let's use one that already exists for Data Pump purposes. It's the DATA_PUMP_DIR (convenient isn't it?)
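If you would rather use a dedicated directory object instead, a minimal sketch of creating one from SQLPLUS looks like the following (the path, directory name, and the strm_admin grantee are assumptions for illustration; run it as a privileged user such as SYSTEM):

-- Hypothetical dedicated directory object for the Streams instantiation dump files
CREATE OR REPLACE DIRECTORY streams_dp_dir AS '/u01/app/oracle/streams_dpdump';
GRANT READ, WRITE ON DIRECTORY streams_dp_dir TO strm_admin;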

Directory objects

Options

This is where we specify what changes get sent (DML and/or DDL) and the direction in which they are sent. By default, the page selects only DML and bi-directional (master-to-master). In our example, we want a single-source master. So, we will uncheck Setup Bi-directional replication. We want to replicate DDL changes to any of our HR schema objects as well as DML (data) changes, so we will check Capture, Propagate, and Apply data definition language (DDL) changes.

Options

Click Next.

Object selection

This is where we specify any or all objects in the database schema that we do or do not want to replicate. What you see presented on this page will depend on what replication rule you selected on the previous page (global, schema, table). You can select any or all of the schemas listed to replicate. Please note that the table shows only 10 at a time, so if you don't see the schema you want at first, use the table navigation buttons to traverse the list. If you wish to exclude any tables that belong to your selected schemas, you can do so by adding them to the Exclude Tables list in the next section (click Add).

Object selection

In our case, we want just the HR schema with all its tables.

Object selection

Click NEXT.

This brings us to our Review summary sheet.

Object selection

Review

The Summary of Operations lists the tasks in the order that the Setup wizard will accomplish them, based on the configuration parameters you have specified. A nice feature here is the Save Scripts button. This will allow you to save a copy of the generated SQL code to a file (default name is setup.sql).

In this example, the saved script will contain the PL/SQL to create the STRM_ADMIN user on both STRM1 and STRM2 instances and grant the necessary privileges, create the necessary database links, and then call the DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS procedure passing in the parameters you defined in the wizard. It is this DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS procedure that actually generates and executes the PL/SQL to build streams. We will see more on this in our "The code behind the curtain" section.

Review

The Save operation will prompt you for the save confirmation and location. It then returns you to the Review page.

Review

Click Finish.

Schedule Streams setup job

You can choose to either set up streams immediately or schedule the setup for a later time (perhaps at night when users are not using the system).

In our case, we will run the setup Immediately. Note that you will also need to specify Host Credentials. This would be an O/S user and password. Please note that the user must have appropriate privileges to run jobs through the EM. For more information on the required privileges, please refer to your O/S specific Oracle maintenance manual. Click Submit.

Schedule Streams setup job

This will bring you to the job confirmation page. It provides a link to the job monitoring page if you wish to monitor the progress of the job.

Schedule Streams setup job

Click OK, and you will return to the initial Streams setup page.

Verify

When the job is complete, you can verify your Streams environment from the Streams Management page. Navigate to the Data Movement page and select Manage under Streams.

Verify

This will take you to the Overview page. Click on Capture.

Verify

Notice that the Capture process is enabled. This means your process is currently capturing changes. To stop/start (disable/enable) the process, click the radio button under the select column for the Capture process, and then click Start or Stop.
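If you prefer the command line, the equivalent stop/start can be done as the Streams Administrator with DBMS_CAPTURE_ADM. A quick sketch, assuming the SCHEMA_HR_CAPTURE process name from our example:

-- Stop and restart the Capture process from SQLPLUS as STRM_ADMIN
exec DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'SCHEMA_HR_CAPTURE');
exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'SCHEMA_HR_CAPTURE');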

Verify

You can view process statistics by clicking on Statistics.

Verify

Click on the Streams breadcrumb link to return to the Capture page.

Click on the Propagation link. Verify the status is Enabled.

Verify

You can view the information and propagation rule sets by clicking on View.

Verify

Click on the Streams breadcrumb link to return to the Propagation page.

We then go to our destination site EM DBConsole and we see our Apply process and verify that it is enabled.

Verify

At our destination site (STRM2), the absence of a Capture process for the schema and a Propagation process back to STRM1 ensures uni-directional single-source master replication. However, it does not keep changes from being made directly to the destination site. So, it is still possible for data to diverge, and lead to data conflicts. To avoid this, we recommend implementing site priority conflict resolution error handling at the destination site (see Chapter 5, N-Way Replication, for conflict resolution).

The code behind the curtain

This section addresses the PL/SQL package procedures and functions that are executed behind the scenes by EM to actually create the Streams replication. We have included these procedures in PL/SQL blocks that can be run individually to accomplish the process one step at a time, or combined into a "master" script (with some minor modifications). Please note that you may need to make minor modifications to accommodate your specific environment setup (such as machine names, connections, database link names, and so on).

Checking the waters

When you use the EM Streams setup wizard to generate the code to configure your single-source replication, EM generates and runs PL/SQL that does the following:

  • Creates the Streams Administrator user account (if not already done) on both the source and destination instances
  • Grants the necessary privileges to the Streams Administrator account (if not already done) on both the source and destination instances
  • Creates database links for the Streams Administrator user (first, dropping the link if it already exists)
  • Calls the appropriate DBMS_STREAMS_ADM.MAINTAIN_* procedure to configure Streams between the two databases

The call to a DBMS_STREAMS_ADM.MAINTAIN_* procedure sets parameters that tell the procedure what capture, propagation, and apply queues and processes to create and where to create them, as well as the script name and location. You can also direct the procedure to only generate the necessary scripts and not run them. For specific details on the DBMS_STREAMS_ADM.MAINTAIN_* procedures, please refer to your Oracle PL/SQL Packages and Types Reference Manual.

In our example, it is the DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS procedure.

Note

At the time of publishing, the EM Streams setup wizard has an undocumented feature: even though you uncheck "Setup Bi-directional replication" in your configuration, it may still pass a "TRUE" value to the DBMS_STREAMS_ADM.MAINTAIN_* procedure bi_directional parameter. A "FALSE" value for the bi_directional parameter tells the procedure to set up single-source replication; a "TRUE" value tells the procedure to set up master-to-master replication. Check the script before running it by doing a "Save Scripts" on the Review page. If the script has the bi_directional parameter set to "TRUE", you can cancel out of the setup wizard, edit the saved script to set the value to "FALSE", and run the script from SQLPLUS.

The DBMS_STREAMS_ADM.MAINTAIN_* procedure in turn will generate and run scripts that set up single-source replication by doing the following:

  • Add supplemental logging to the necessary tables on the source database
  • Create the capture queue at the source database
  • Create the Propagation process at the source database
  • Create the schema Capture process at the source database
  • Configure and execute Data Pump schema export/import to/from the source/destination database
  • Instantiate the capture SCN
  • Create the apply queue and process on the destination database
  • Instantiate the apply SCN
  • Start the capture, apply, and Propagation processes

It is possible to run the DBMS_STREAMS_ADM.MAINTAIN_* procedure to generate the PL/SQL to build the Streams, but not actually run it. This is done by setting the perform_actions parameter to "FALSE".

To generate a full set of scripts, you can begin by running through the EM Streams setup wizard, save the script at the Review page, and then cancel out of the wizard without finishing. You can edit the saved script, setting the perform_actions parameter to "FALSE", and run the script from SQLPLUS. The procedure will create the SQL script that sets up the Streams in the directory specified by the script_directory_object parameter, with the filename specified by the script_name parameter. This script can then be opened and viewed with a standard text editor, and run as a standalone script. Keeping a copy of these scripts also provides a way to quickly and accurately recreate the Streamed configuration between the two databases from the command line, should the need arise.
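As a rough sketch, a generate-only call for our HR example might look like the following (run as STRM_ADMIN at the source; the directory object, dump file, script name, and process/queue names are illustrative assumptions based on this chapter's naming):

-- Generate (but do not run) the Streams setup script for the HR schema
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                  => 'HR',
    source_directory_object      => 'DATA_PUMP_DIR',
    destination_directory_object => 'DATA_PUMP_DIR',
    source_database              => 'STRM1',
    destination_database         => 'STRM2',
    capture_name                 => 'SCHEMA_HR_CAPTURE',
    capture_queue_name           => 'SCHEMA_HR_CAPTURE_Q',
    propagation_name             => 'SCHEMA_HR_PROPAGATION',
    apply_name                   => 'SCHEMA_HR_APPLY',
    apply_queue_name             => 'SCHEMA_HR_APPLY_Q',
    dump_file_name               => 'hr_instantiation.dmp',
    include_ddl                  => TRUE,
    instantiation                => DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA,
    bi_directional               => FALSE,  -- single-source
    perform_actions              => FALSE,  -- generate the script only
    script_directory_object      => 'DATA_PUMP_DIR',
    script_name                  => 'setup_schema_hr.sql');
END;
/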

Diving in

In this section, we are going to look at the PL/SQL commands needed to configure single-source Streams replication between our two databases. The scripts created by the EM and the DBMS_STREAMS_ADM.MAINTAIN_* procedure do pretty much the same thing, though there may be some minor differences in the order of operations and in secondary version checks, which we chose to skip here. The PL/SQL presented here is formatted to be run stand-alone for each step. This will allow you to "play" with each step individually. You can download these scripts from the Packt website.

Note

If you would like more information on the PL/SQL packages and SQL commands issued in these scripts you can find it in your Oracle PL/SQL Packages and Types Reference, and the Oracle SQL Language Reference manuals respectively.

For PL/SQL block structure and coding, please reference your Oracle PL/SQL Language Reference manual.

The first step is the creation of the Streams Administrator user and granting of privileges at both the source master and destination target databases. This must be done by a SYSDBA user. The code is in the create_strmadmin.sql script and does the following:

  • Connects to the source database as SYS
  • Creates the STRM_ADMIN user
  • Grants DBA, IMP_FULL_DATABASE, EXP_FULL_DATABASE privileges to STRM_ADMIN
  • Executes DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE for STRM_ADMIN
  • Connects to the destination database as SYS
  • Creates the STRM_ADMIN user
  • Grants DBA, IMP_FULL_DATABASE, EXP_FULL_DATABASE privileges to STRM_ADMIN
  • Executes DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE for STRM_ADMIN
  • Connects to the source database as the new STRM_ADMIN user
  • Creates a database link from the source database to the destination database.

For more detailed information on the privileges for the Streams Administrator, reference the "Configure an Oracle Streams Administrator" section of the Oracle Streams Concepts and Administration manual for 11gR1, or the Oracle Streams Replication Administrator's Guide for 11gR2.

--create_strmadmin.sql…
set echo on;
ACCEPT src PROMPT 'Enter tnsalias for the source database:'
ACCEPT dba_pwd_src PROMPT 'Enter Password of user "sys" to create Streams Admin at Source : ' HIDE
ACCEPT strm_pwd_src PROMPT 'Enter Password of Streams Admin "strm_admin" to be created at Source : ' HIDE
ACCEPT dest PROMPT 'Enter tnsalias for the target database:'
ACCEPT dba_pwd_dest PROMPT 'Enter Password of user "sys" to create Streams Admin at Destination : ' HIDE
ACCEPT strm_pwd_dest PROMPT 'Enter Password of Streams Admin "strm_admin" to be created at Destination : ' HIDE
PROMPT connecting as sys at &src
connect sys/&dba_pwd_src@&src as SYSDBA;
PROMPT
PROMPT creating strm_admin user
create user strm_admin identified by &strm_pwd_src;
PROMPT granting privs
grant DBA, IMP_FULL_DATABASE, EXP_FULL_DATABASE to strm_admin;
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strm_admin',
grant_privileges => true);
END;
/
COMMIT;
/
PROMPT connecting as sys at &dest
connect sys/&dba_pwd_dest@&dest as SYSDBA;
PROMPT
PROMPT creating strm_admin user
create user strm_admin identified by &strm_pwd_dest;
PROMPT granting privs
grant DBA, IMP_FULL_DATABASE, EXP_FULL_DATABASE to strm_admin;
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strm_admin',
grant_privileges => true);
END;
/
PROMPT strm_admin create and privs complete
COMMIT;

Next, we connect as strm_admin and create a database link from our source database to our target database.

connect strm_admin/&strm_pwd_src;
--if the STRM2 dblink already exists you can drop it.
--DROP DATABASE LINK STRM2;
CREATE DATABASE LINK STRM2 connect to strm_admin identified by &strm_pwd_dest using '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS= (PROTOCOL=TCP)(HOST=strm_target)
(PORT=1521)))(CONNECT_DATA=(SID=strm2)(server=DEDICATED)))';
COMMIT;
/
--end code

Notice that we used the full TNS connect description rather than an alias for the using parameter. Using the full connect description instead of a TNS alias avoids issues with tnsnames.ora file configurations. This removes any dependency of the database link on tnsnames.ora files, which can be changed, moved, or deleted.

Next, we need to add supplemental logging to the tables in the HR schema at the source master database. This needs to be done for each table.

The command to do this is:

ALTER TABLE "HR"."<table_name>" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, -FOREIGN KEY, UNIQUE INDEX) COLUMNS';

In the setup script generated by the DBMS_STREAMS_ADM.MAINTAIN_* procedure, you will see the command repeated and hard-coded for each table in the schema. The following code block accomplishes the same actions, but dynamically, so it does not need to be edited if table names change, or if tables are added or dropped. This code is found in the add_supplog_schema.sql script and does the following:

  • Sets up a loop for each table name in the specified schema
  • Builds and executes the statement to add supplemental logging for each table
  • Reports the results of each statement
    --add_supplog_schema.sql..
    ACCEPT sowner PROMPT 'Enter schema owner:'
    set serveroutput on
    BEGIN
    For tn in (select table_name from dba_tables where owner = upper('&sowner')) loop
    BEGIN
    EXECUTE IMMEDIATE 'ALTER TABLE "&sowner"."' || tn.table_name ||
    '" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, FOREIGN KEY, UNIQUE INDEX) COLUMNS';
    dbms_output.put_line('added supplemental logging for ' || tn.table_name);
    EXCEPTION WHEN OTHERS THEN
    IF sqlcode = -32588 THEN
    dbms_output.put_line('supplemental logging already exists for ' || tn.table_name);
    ELSE
    RAISE;
    END IF;
    END;
    end loop;
    END;
    /
    --end code
    

Next we set up the capture queue on our source master (STRM1). This code is found in the create_capture_queue.sql script. The script should be run as the STRM_ADMIN user and does the following:

  • Calls dbms_streams_adm.set_up_queue to create the queue
  • Verifies queue creation
    --create_capture_queue.sql...
    ACCEPT cqname PROMPT 'Enter Capture Queue Name: ex: SCHEMA_HR'
    DECLARE
    uname varchar2(50);
    BEGIN
    select user into uname from dual;
    dbms_streams_adm.set_up_queue(
    queue_table => uname ||'.&cqname._CAPTURE_QT',
    storage_clause => NULL,
    queue_name => uname ||'.&cqname._CAPTURE_Q',
    queue_user => uname);
    END;
    /
    column object_name format a30
    select object_name, object_type, created from user_objects
    where object_name like upper('%&cqname%')
    /
    --end code
    

Next, create the propagation rules for the schema. Note that the propagation job is created in an enabled state, so we want to disable it until we are finished setting up the apply queue on the destination site. This code is found in the add_schema_propagation.sql script. The script should be run as the STRM_ADMIN user and does the following:

  • Calls dbms_streams_adm.add_schema_propagation_rules to create the Propagation process and job
  • Calls dbms_aqadm.disable_propagation_schedule to disable the propagation
  • Verifies the creation of the Propagation process and rules
    --add_schema_propagation.sql…
    ACCEPT sname PROMPT 'Enter Schema Name:'
    ACCEPT qname PROMPT 'Enter Queue Name prefix: (ex: SCHEMA_HR) '
    ACCEPT destlink PROMPT 'Enter the DB Link name for the destination DB: (ex: STRM2) '
    DECLARE
    uname varchar2(50);
    gsid varchar2(10) ;
    BEGIN
    select user into uname from dual;
    select upper(instance) into gsid from v$thread;
    dbms_streams_adm.add_schema_propagation_rules(
    schema_name => upper('&sname'),
    streams_name => '&qname._PROPAGATION',
    source_queue_name => uname||'.&qname._CAPTURE_Q',
    destination_queue_name => uname||'.&qname._APPLY_Q@&destlink',
    include_dml => TRUE,
    include_ddl => TRUE,
    include_tagged_lcr => TRUE,
    source_database => gsid,
    inclusion_rule => TRUE,
    and_condition => NULL,
    queue_to_queue => TRUE);
    --disable propagation until we are finished
    BEGIN
    dbms_aqadm.disable_propagation_schedule(
    queue_name => uname||'.&qname._CAPTURE_Q',
    destination => '&destlink',
    destination_queue => uname||'.&qname._APPLY_Q');
    EXCEPTION WHEN OTHERS THEN
    IF sqlcode = -24065 THEN NULL; -- propagation already disabled
    ELSE RAISE;
    END IF;
    END;
    END;
    /
    --let's verify
    set pagesize 150
    set linesize 100
    select * from dba_propagation
    where propagation_name like upper('%&qname.%')
    /
    select * from dba_streams_schema_rules
    where streams_type = 'PROPAGATION'
    /
    --end code
    

Next we add the schema capture rule. This code is found in the add_schema_capture.sql script. The script should be run as the STRM_ADMIN user and does the following:

  • Calls dbms_streams_adm.add_schema_rules to create the schema level Capture process and rules
  • Verifies the creation of the Capture process and rules
    --add_schema_capture.sql…
    ACCEPT sname PROMPT 'Enter Schema Name:'
    ACCEPT qname PROMPT 'Enter Queue Name prefix: (ex: SCHEMA_HR) '
    DECLARE
    uname varchar2(50);
    gsid varchar2(10) ;
    BEGIN
    select user into uname from dual;
    select upper(instance) into gsid from v$thread;
    dbms_streams_adm.add_schema_rules(
    schema_name => '&sname',
    streams_type => 'CAPTURE',
    streams_name => '&qname._CAPTURE',
    queue_name => uname ||'.&qname._CAPTURE_Q',
    include_dml => TRUE,
    include_ddl => TRUE,
    include_tagged_lcr => TRUE,
    source_database => gsid,
    inclusion_rule => TRUE,
    and_condition => NULL);
    END;
    /
    --let's verify
    set pagesize 150
    set linesize 100
    select * from dba_capture
    where capture_name like upper('%&qname.%')
    /
    select * from dba_streams_schema_rules
    where streams_type = 'CAPTURE'
    /
    --end code
    

    Note

    We do not want to actually start the Capture process until we have everything else in place. This we will do at the end. The script generated by the DBMS_STREAMS_ADM.MAINTAIN_* procedure starts the Capture process here. However, Oracle documentation, and Oracle Streams course material both recommend starting the capture as the last step, as do these authors. In this case, we will do as "they say" not as "they do".

Next, we will want to instantiate the HR schema. We do this using Data Pump to export the HR schema to a dump file that will then be imported into the destination target database. In our example, the HR schema does not exist at our destination site. If it did exist, we would want to drop the HR schema at the destination site so that the Data Pump import is able to identically reproduce it at the destination site.
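For example, if HR already existed at STRM2 and you were certain it could be rebuilt from the source, a minimal sketch of clearing it (run as a DBA at the destination) would simply be:

-- Only if the schema already exists at the destination and can be rebuilt from the source
DROP USER hr CASCADE;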

Note

For detailed information on instantiation, reference the "Instantiation and Oracle Streams Replication" chapter of the Oracle Streams Replication Administrators Guide.

First, we will do the Data Pump export. This code is found in the instantiate_exp_schema.sql script. The script will connect as the STRM_ADMIN user and do the following:

  • Opens a Data Pump job handle with dbms_datapump.open
  • Sets the metadata for the Data Pump job handle with dbms_datapump.metadata_filter
  • Sets the filenames to be generated using dbms_datapump.add_file
  • Starts the Data Pump export job with dbms_datapump.start_job
  • Monitors the job and reports status and completion
    --instantiate_exp_schema.sql…
    ACCEPT sname PROMPT 'Enter Schema Name to instantiate: '
    ACCEPT src_dirobj PROMPT 'Enter Source directory Object name: (DATA_PUMP_DIR:) '
    ACCEPT expdump PROMPT 'Enter Export dump file Name (file extension .dmp will be appended): '
    ACCEPT stmadm PROMPT 'Enter Stream Admin username: '
    ACCEPT stmadmpwd PROMPT 'Enter Stream Admin Password: '
    ACCEPT srcsdb PROMPT 'Enter Source SID: '
    connect &stmadm/&stmadmpwd@&srcsdb
    set serveroutput on
    PROMPT 'Opening DataPump Export Job at &srcsdb'
    DECLARE
    -- data pump job handle
    H1 NUMBER;
    srcsid varchar2(10);
    job_state VARCHAR2(30) := 'UNDEFINED'; -- job state
    status ku$_Status; -- data pump status
    no_job exception;
    pragma exception_init(no_job, -40717);
    BEGIN
    H1 := dbms_datapump.open(
    operation=>'EXPORT',
    job_mode=>'SCHEMA',
    remote_link=>'',
    job_name=>NULL, version=>'COMPATIBLE');
    dbms_output.put_line('setting metadata filter with handle: ' ||H1);
    dbms_datapump.metadata_filter(
    handle=>H1,
    name=>'SCHEMA_EXPR',
    value=>'IN (''&sname'')');
    --This command specifies the export dump file
    dbms_output.put_line('Adding export dump file name to handle: ' ||H1);
    dbms_datapump.add_file(
    handle=>H1,
    filename=>'&expdump..dmp',
    directory=>'&src_dirobj',
    filetype=>dbms_datapump.ku$_file_type_dump_file);
    --This command specifies the export log file
    dbms_output.put_line('Adding export log file name');
    dbms_datapump.add_file(
    handle=>H1,
    filename=>'&expdump._exp.log',
    directory=>'&src_dirobj',
    filetype=>dbms_datapump.ku$_file_type_log_file);
    --This command starts the export
    dbms_output.put_line('starting DataPump Export Job');
    dbms_datapump.start_job(H1);
    commit;
    --monitor export job status
    job_state := 'STARTING';
    BEGIN
    WHILE (job_state != 'COMPLETING') AND (job_state != 'STOPPED') LOOP
    dbms_output.put_line ('job is: ' || job_state ||' ' || to_char(sysdate,'HH24:MI:SS'));
    status := dbms_datapump.get_status(
    handle => h1,
    mask => dbms_datapump.ku$_status_job_error +
    dbms_datapump.ku$_status_job_status +
    dbms_datapump.ku$_status_wip,
    timeout => -1);
    job_state := status.job_status.state;
    dbms_lock.sleep(5);
    END LOOP;
    EXCEPTION
    WHEN no_job THEN
    dbms_output.put_line('job finished');
    END;
    DBMS_DATAPUMP.DETACH(handle => h1);
    END;
    /
    PROMPT DataPump Export Operation is finished
    --end code
    

Now, we move to the destination site.

First, we need to instantiate the HR schema, and then create the Apply process. We use Data Pump import to instantiate the HR schema and data. It is important to remember that Data Pump will create the objects in the same tablespace in which they reside at the source database. So, our import is going to need the EXAMPLE tablespace. Make sure you create the EXAMPLE tablespace in the destination database if you have not already done so. You can do this through the EM DBConsole. Or, you can issue the create tablespace command.

For example:

CREATE SMALLFILE TABLESPACE "EXAMPLE" DATAFILE 'C:\ORACLE11GR2\ORADATA\STRM1\EXAMPLE01.DBF' SIZE 100M AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

Now that we have our tablespace in place, we can begin the import. This code is found in the instantiate_imp_schema.sql script. This will connect as the STRM_ADMIN user and do the following:

Checks to see if it needs to transport the dump file, based on the source and destination directory locations you specify. If these are different, dbms_file_transfer.put_file is called.

Note some caveats here:

  • Make sure that the file specified for the import file exists in the location defined for the source directory in the destination database.

    If it does not, the dbms_file_transfer.put_file will fail with:

    ORA-19505: failed to identify file "<pathfilename>"
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    
  • Make sure that the file specified for the import file does not already exist in the destination directory.

    If it does, the dbms_file_transfer.put_file will fail with:

    ORA-19504: failed to create file "<pathfilename>"
    ORA-27038: created file already exists
    

    The script will then use the existing file for the import instead.

  • Opens a Data Pump job handle with dbms_datapump.open.
  • Sets the filenames to be used with dbms_datapump.add_file.
  • Starts the Data Pump import job with dbms_datapump.start_job.
  • Monitors the job and reports status and completion.
  • Checks the object count for the schema.
    --instantiate_imp_schema.sql…
    ACCEPT sname PROMPT 'Enter Schema Name to instantiate: '
    ACCEPT src_dirobj PROMPT 'Enter Source directory Object name: (DATA_PUMP_DIR): '
    ACCEPT dest_dirobj PROMPT 'Enter Destination directory Object name: (DATA_PUMP_IMP_DIR): '
    ACCEPT expdump PROMPT 'Enter Import dump file Name (file extension .dmp will be appended): '
    ACCEPT destdb PROMPT 'Enter Import Destination DB Sid: (ex: STRM2): '
    ACCEPT stmadm PROMPT 'Enter Stream Admin username: '
    ACCEPT stmadmpwd PROMPT 'Enter Stream Admin Password: '
    connect &stmadm/&stmadmpwd@&destdb
    set serveroutput on
    --we check to see if the source and destination directory objects are different.
    --If so we need to transfer the dump file from the source to the destination target database directory Object
    PROMPT 'Checking if dump file requires transport from &src_dirobj to &dest_dirobj'
    DECLARE
    dest_sid varchar2(10) := upper('&&destdb');
    BEGIN
    --dbms_output.put_line('dest_sid is: ' || dest_sid);
    IF '&src_dirobj' != '&dest_dirobj' THEN
    dbms_file_transfer.put_file(
    source_directory_object => '&src_dirobj',
    source_file_name => '&expdump..dmp',
    destination_directory_object => '&dest_dirobj',
    destination_file_name => '&expdump..dmp',
    destination_database => '&destdb');
    commit;
    END IF;
    END;
    /
    PROMPT 'Opening DataPump Import Job at &destdb'
    DECLARE
    H1 NUMBER; -- data pump job handle
    job_state VARCHAR2(30) := 'UNDEFINED'; -- job state
    status ku$_Status; -- data pump status
    no_job exception;
    pragma exception_init(no_job, -40717);
    BEGIN
    H1 := dbms_datapump.open(operation=>'IMPORT',job_mode=>'SCHEMA',
    remote_link=>'',
    job_name=>NULL, version=>'COMPATIBLE');
    --This command specifies the import dump file location
    dbms_output.put_line('Adding import dump file name to handle: ' ||H1);
    dbms_datapump.add_file(
    handle=>H1,
    filename=>'&expdump..dmp',
    directory=>'&dest_dirobj',
    filetype=>dbms_datapump.ku$_file_type_dump_file);
    --This command specifies the import log file location
    dbms_output.put_line('Adding import log file name to handle: ' ||H1);
    dbms_datapump.add_file(
    handle=>H1,
    filename=>'&expdump._imp.log',
    directory=>'&dest_dirobj',
    filetype=>dbms_datapump.ku$_file_type_log_file);
    --This command starts the import job.
    dbms_output.put_line('starting import job with handle: ' ||H1);
    dbms_datapump.start_job(H1);
    --monitor export job status
    job_state := 'STARTING';
    BEGIN
    WHILE (job_state != 'COMPLETING') AND (job_state != 'STOPPED') LOOP
    dbms_output.put_line ('job is: ' || job_state ||' ' || to_char(sysdate,'HH24:MI:SS'));
    status := dbms_datapump.get_status(
    handle => h1,
    mask => dbms_datapump.ku$_status_job_error +
    dbms_datapump.ku$_status_job_status +
    dbms_datapump.ku$_status_wip,
    timeout => -1);
    job_state := status.job_status.state;
    dbms_lock.sleep(5);
    END LOOP;
    EXCEPTION
    WHEN no_job THEN
    dbms_output.put_line('job finished');
    WHEN OTHERS THEN
    RAISE;
    END;
    --dbms_lock.sleep(10); --give it time to create objs before checking count
    DBMS_DATAPUMP.DETACH(handle => h1);
    END;
    /
    --Once the import is complete, verify the HR schema objects exist.
    PROMPT DataPump Import operation is finished
    exec dbms_output.put_line('check object count at &destdb')
    Select count(*) from dba_objects where owner = upper('&sname')
    /
    --end code
    

    Note

    If the OE user does not exist on the destination database and it did on the source database, you will see grant permission errors in the import log when the import brings in the HR schema object permissions. This is okay. You just want to remember to make sure that the OE user gets the grants if you ever decide to create that schema on the destination in the future.

Next, create the apply queue at the destination target. This code is found in the add_apply.sql script. The script should be run as the STRM_ADMIN user and does the following:

  • Creates the Apply queue with the dbms_streams_adm.set_up_queue procedure
  • Adds the Apply rule for the schema with the dbms_streams_adm.add_schema_rules procedure
  • Verifies the creation of the Apply process and rules
    --add_apply.sql…
    ACCEPT sname PROMPT 'Enter Schema Name:'
    ACCEPT qname PROMPT 'Enter Queue Name prefix: (ex: SCHEMA_HR) '
    ACCEPT srcsdb PROMPT 'Enter the Source Database SID: (STRM1) '
    ACCEPT destdb PROMPT 'Enter the Destination Database SID: (STRM2) '
    ACCEPT stmadm PROMPT 'Enter Stream Admin username: '
    ACCEPT stmadmpwd PROMPT 'Enter Stream Admin Password: '
    connect &stmadm/&stmadmpwd@&destdb
    set serveroutput on
    DECLARE
    uname varchar2(50);
    BEGIN
    select user into uname from dual;
    --Create the apply queue
    dbms_output.put_line('Creating &qname._apply queue');
    dbms_streams_adm.set_up_queue(
    queue_table => uname || '.&qname._APPLY_QT',
    storage_clause => NULL,
    queue_name => uname || '.&qname._APPLY_Q',
    queue_user => uname);
    dbms_output.put_line('sleep 10 secs');
    dbms_lock.sleep(10);
    --give it time to create the queue before we create rules
    --Add the schema apply rules
    dbms_output.put_line('adding apply rules for source &srcsdb');
    dbms_streams_adm.add_schema_rules(
    schema_name => '&sname',
    streams_type => 'APPLY',
    streams_name => '&qname._APPLY',
    queue_name => uname || '.&qname._APPLY_Q',
    include_dml => TRUE,
    include_ddl => TRUE,
    include_tagged_lcr => TRUE,
    source_database => '&srcsdb',
    inclusion_rule => TRUE,
    and_condition => NULL);
    END;
    /
    --verify the queue and rule creation
    set pagesize 150
    set linesize 100
    select * from dba_apply
    where apply_name like upper('%&qname.%')
    /
    select * from dba_streams_schema_rules
    where streams_type = 'APPLY'
    /
    

By using the dbms_streams_adm.add_schema_rules procedure to add our Capture process at the source master database, we automatically prepared our schema object SCN instantiation value. This means that the dbms_streams_adm.add_schema_rules procedure already made a call to the dbms_capture_adm.prepare_schema_instantiation procedure for us.

If we had used the dbms_capture_adm package to create the Capture process instead, we would need to call the dbms_capture_adm.prepare_schema_instantiation procedure explicitly.

As we used Data Pump export/import to instantiate our schema at the destination target database, the instantiation SCN has also been set for us.

If we had used other instantiation methods, we would need to set the instantiation SCN manually with DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN and/or DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN.
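For reference, a rough sketch of doing both steps manually, run from the source as STRM_ADMIN and assuming the STRM2 database link and STRM1 global name from our example, might look like this:

-- Manually prepare the schema and set the apply instantiation SCN at the destination
DECLARE
  inst_scn NUMBER;
BEGIN
  -- Normally done for us by dbms_streams_adm.add_schema_rules
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name => 'HR');
  -- Capture the current source SCN and set it at the destination over the db link
  inst_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@STRM2(
    source_schema_name   => 'HR',
    source_database_name => 'STRM1',
    instantiation_scn    => inst_scn,
    recursive            => TRUE);
END;
/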

Note

For more information on instantiation methods and setting the capture and apply instantiation SCN, please refer to the "Instantiation and Oracle Streams Replication" chapter in your Oracle Streams Replication Administrator's Guide.

With our instantiation SCNs synchronized, we are now ready to start our Apply process at our destination database, our Propagation process at our source database, and finally our Capture process at our source database; in that order. We do it in this order to allow the apply and Propagation processes time to "ramp up" before getting work from the Capture process. This code is found in the start_it_all_up.sql script. The script runs as the STRM_ADMIN user and does the following:

  • Connects to the destination database
  • Starts the Apply process with dbms_apply_adm.start_apply
  • Connects to the source database
  • Starts the Propagation process with dbms_aqadm.enable_propagation_schedule
  • Starts the Capture process with dbms_capture_adm.start_capture
  • Verifies the status of the processes

First, we start at our destination database and start the Apply process.

--start_it_all_up.sql…
ACCEPT destdb PROMPT 'Enter Destination DB tnsalias: (ex: STRM2): '
ACCEPT stmadm PROMPT 'Enter Stream Admin username: '
ACCEPT stmadmpwd PROMPT 'Enter Stream Admin Password: '
ACCEPT srcsdb PROMPT 'Enter Source tnsalias: (ex: STRM1): '
ACCEPT qname PROMPT 'Enter Queue Name prefix: (ex: SCHEMA_HR) '
--First connect to the destination database and start the Apply process.
PROMPT 'connecting to &destdb as &stmadm and starting &qname._APPLY'
connect &stmadm/&stmadmpwd@&destdb
BEGIN
dbms_apply_adm.start_apply(
apply_name => '&qname._APPLY');
dbms_lock.sleep(10); --give it time to start
EXCEPTION WHEN OTHERS THEN
IF sqlcode = -26666 THEN NULL; -- APPLY process already running
ELSE RAISE;
END IF;
END;
/

Next, connect to the source database and start the Propagation process.

--NOTE: this code assumes the db link name is the same as the tnsalias
-- if this is not the case, you will need to accommodate for the
-- destination parameter in the enable_propagation_schedule call
PROMPT 'connecting to &srcsdb as &stmadm'
connect &stmadm/&stmadmpwd@&srcsdb
PROMPT 'starting &qname._PROPAGATION'
BEGIN
dbms_aqadm.enable_propagation_schedule(
queue_name => '&stmadm..&qname._CAPTURE_Q',
destination => '&destdb',
destination_queue => '&stmadm..&qname._APPLY_Q');
dbms_lock.sleep(10); --give it time to start
EXCEPTION WHEN OTHERS THEN
IF sqlcode = -24064 THEN NULL; -- propagation already enabled
ELSE RAISE;
END IF;
END;
/
--Finally, start the Capture process
PROMPT 'starting &qname._CAPTURE'
BEGIN
dbms_capture_adm.start_capture(
capture_name => '&qname._CAPTURE');
dbms_lock.sleep(10); --give it time to start
EXCEPTION WHEN OTHERS THEN
IF sqlcode = -26666 THEN NULL; -- CAPTURE process already running
ELSE RAISE;
END IF;
END;
/
select apply_name || ' is ' ||status apply_status from dba_apply@&destdb
where apply_name like upper('%&qname.%')
/
select propagation_name || ' is ' ||status propagation_status from dba_propagation
where propagation_name like upper('%&qname.%')
/
select capture_name || ' is ' ||status capture_status from dba_capture
where capture_name like upper('%&qname.%')
/
--end code

The process for setting up single-source replication from STRM1 to STRM2 is now complete.

The proof is in the pudding (or propagation in this case)

Now that we have created our single source environment, let us see if it works!

We will log into SQLPlus as the HR user at our source master (STRM1) and create a record in the employee table.

We will then log into SQLPlus as the HR user at our destination target (STRM2) and see if the new record appears.

In this screenshot for our STRM1 SQLPlus session, we will do the following:

  • We first show the current user and the global name of our instance to verify where we are
  • We then show that there are no records in the employee table for last name "McKinnell"
  • We then insert a "McKinnell" record into the employee table at STRM1 and commit
  • We then show that there is now a record in the employee table for last name "McKinnell"
The proof is in the pudding (or propagation in this case)

In this screenshot for our STRM2 SQLPlus session, we first show the current user and the global name of our instance to verify where we are.

We then show that there are no records in the employee table that has "McKinnell" for the last_name.

After we insert the "McKinnell" record into the employee table at STRM1 and commit, we rerun the select at STRM2 to see if the record is now there.
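If you are following along without the screenshots, the sessions look roughly like the following (the first_name, email, and job values are made up for illustration; the employee_id of 208 comes from HR's employees_seq at STRM1):

-- At STRM1, connected as HR
select employee_id, last_name from employees where last_name = 'McKinnell';
-- no rows selected
insert into employees (employee_id, first_name, last_name, email, hire_date, job_id)
values (employees_seq.nextval, 'Rob', 'McKinnell', 'RMCKINNE', sysdate, 'IT_PROG');
commit;
select employee_id, last_name from employees where last_name = 'McKinnell';

-- At STRM2, connected as HR (allow a few seconds for propagation and apply)
select employee_id, last_name from employees where last_name = 'McKinnell';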

The proof is in the pudding (or propagation in this case)

Voila! Our record has been replicated.

Sequences and triggers and Apply

Have you noticed there is a sequence being used for the employee ID? Aren't there special rules for replicating with sequences? There are indeed. However, in our example, they don't affect us. The LCR is created with the value generated by the sequence, not the call to the sequence itself. So, the insert statement actually sent to STRM2 has "208" hard coded as the employee ID. As this is a single-source, uni-directional configuration, we don't need to worry about what the sequence at the destination is doing, as it will/should never be used to generate values for this table. If you wish to be safe, you can even drop the table PK sequences on the destination site. However, as you will see in the next chapter, be very careful with sequences in a multi-master configuration.

What would happen if we had a before insert trigger on the table that populates the employee_id value from the local employees_seq? Would the trigger end up overwriting the "208" with the local sequence next value? In this case, it would not. The reason is that the Apply process will not fire triggers on the receiving site table by default. This keeps us from potentially duplicating changes that will be generated, captured, and propagated to the destination as a result of the same triggers firing at the source site.

Let's take this one step further. What if we are not replicating the tables that are updated by the triggers at the source site? How do those changes get made at the receiving site if the triggers aren't fired by the Apply process? To force our Apply process to fire table triggers on apply, we have to specify a special "fire flag". To do this we use DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY. This procedure accepts a trigger owner and name, and a fire_once Boolean indicating whether the trigger should fire only once. Be careful with how you interpret this Boolean. It logically acts as a double negative and can be confusing. If the value is TRUE (the default), the trigger will only fire once, and thus does NOT fire if the action is the result of an Apply process. If the value is FALSE, the trigger will fire more than once, allowing the Apply process to fire the trigger. So, in this particular case, we want to set the fire_once value to FALSE so the Apply process will fire the trigger.

BEGIN
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
    trig_owner => 'HR',
    trig_name => 'HRID_TRIGGER',
    fire_once => FALSE);
END;
/

Tip

This example references a fictitious trigger. It really doesn't exist in the schema.
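If you want to check the current firing property of a trigger, DBMS_DDL also provides the IS_TRIGGER_FIRE_ONCE function. A quick sketch, using the same fictitious trigger:

-- Returns TRUE if the trigger fires only once (Apply will NOT fire it)
set serveroutput on
BEGIN
  IF DBMS_DDL.IS_TRIGGER_FIRE_ONCE('HR', 'HRID_TRIGGER') THEN
    dbms_output.put_line('HRID_TRIGGER fires once: Apply will NOT fire it');
  ELSE
    dbms_output.put_line('HRID_TRIGGER fires more than once: Apply WILL fire it');
  END IF;
END;
/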

Other levels at which to replicate

In the example that we have used in this chapter, we replicated at the schema level. As mentioned earlier, it is possible to replicate at the table level, the tablespace level, and the entire database (also known as global).

The principles for creating the different levels of replication are pretty much the same as those we used in the schema-level setup. You just use different subprograms of the DBMS_STREAMS_ADM package, as summarized in the following list:

  • ADD_GLOBAL_PROPAGATION_RULES Procedure: Adds global rules to the appropriate positive/negative rule set for the specified Propagation process. The rules propagate changes for the entire database. Creates the specified Propagation process if it does not exist.
  • ADD_GLOBAL_RULES Procedure: Adds global rules to the appropriate positive/negative rule set for the specified Capture/Apply process. The rules capture/apply changes for the entire database. Creates the specified Capture/Apply process if it does not exist.
  • MAINTAIN_GLOBAL Procedure: Configures Streams replication at the database level between two databases. Calls ADD_GLOBAL_PROPAGATION_RULES and ADD_GLOBAL_RULES. Uses Data Pump full export/import to instantiate.
  • ADD_SCHEMA_PROPAGATION_RULES Procedure: Adds schema-level rules to the appropriate positive/negative rule set for the specified Propagation process. The rules propagate changes for the specified schema. Creates the specified Propagation process if it does not exist.
  • ADD_SCHEMA_RULES Procedure: Adds schema-level rules to the appropriate positive/negative rule set for the specified Capture/Apply process. The rules capture/apply changes to the specified schema. Creates the specified Capture/Apply process if it does not exist.
  • MAINTAIN_SCHEMAS Procedure: Configures Streams replication for the specified schema(s) between two databases. Calls ADD_SCHEMA_PROPAGATION_RULES and ADD_SCHEMA_RULES. Uses Data Pump schema export/import to instantiate.
  • ADD_SUBSET_PROPAGATION_RULES Procedure: Adds table row-level subset rules to the appropriate positive/negative rule set for the specified Propagation process. The rules propagate changes for the specified table row-level subset data. Creates the specified Propagation process if it does not exist.
  • ADD_SUBSET_RULES Procedure: Adds table row-level subset rules to the appropriate positive/negative rule set for the specified Capture/Apply process. The rules capture/apply changes to the specified table row-level subset data. Creates the specified Capture/Apply process if it does not exist.
  • ADD_TABLE_PROPAGATION_RULES Procedure: Adds table rules to the appropriate positive/negative rule set for the specified Propagation process. The rules propagate changes to the specified table. Creates the specified Propagation process if it does not exist.
  • ADD_TABLE_RULES Procedure: Adds table rules to the appropriate positive/negative rule set for the specified Capture/Apply process. The rules capture/apply changes to the specified table. Creates the specified Capture/Apply process if it does not exist.
  • MAINTAIN_TABLES Procedure: Configures Streams replication for the specified table(s) between two databases. Calls ADD_TABLE_PROPAGATION_RULES and ADD_TABLE_RULES. Uses Data Pump table export/import to instantiate.
  • MAINTAIN_SIMPLE_TTS Procedure: Clones a single tablespace from the source database. Uses Data Pump to instantiate the tablespace at the destination database. Calls ADD_TABLE_PROPAGATION_RULES and ADD_TABLE_RULES.

Note

For more information on the DBMS_STREAMS_ADM package subprograms, refer to your Oracle PL/SQL Packages and Types Reference manual.
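For instance, a hedged sketch of a generate-only, table-level build between our STRM1 and STRM2 instances using MAINTAIN_TABLES (the table list, file names, and directory object are illustrative assumptions) could look like this:

-- Generate (but do not run) a table-level Streams setup script
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_TABLES(
    table_names                   => 'HR.EMPLOYEES, HR.DEPARTMENTS',
    source_directory_object      => 'DATA_PUMP_DIR',
    destination_directory_object => 'DATA_PUMP_DIR',
    source_database              => 'STRM1',
    destination_database         => 'STRM2',
    dump_file_name               => 'hr_tables.dmp',
    include_ddl                  => TRUE,
    bi_directional               => FALSE,
    perform_actions              => FALSE,
    script_directory_object      => 'DATA_PUMP_DIR',
    script_name                  => 'setup_tables_hr.sql');
END;
/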

A note about the DBMS_STREAMS_ADM.MAINTAIN_* scripts

These scripts can be used to generate setup scripts similar to those we have already seen in this chapter. The EM DBConsole actually makes a call to these subprograms to accomplish the Streams configuration tasks. These subprograms each go about the configuration differently, depending on parameter values and the replication level being configured. Therefore, we recommend that, before using these subprograms to configure your Streamed environment, you first run the subprogram with the perform_actions parameter set to FALSE, so that you can generate and review the scripts and familiarize yourself with what they will do before actually doing it.

The beauty of DBMS_STREAMS_ADM.MAINTAIN_*

As mentioned earlier, the EM setup wizard generates a PL/SQL block with a call to the appropriate DBMS_STREAMS_ADM.MAINTAIN_* subprogram. This PL/SQL block can be run manually from SQLPLUS if you wish. These scripts can save you a lot of time and headaches.

What happens if the DBMS_STREAMS_ADM.MAINTAIN_* scripts fail? How do you recover? The beauty of the DBMS_STREAMS_ADM.MAINTAIN_* scripts is that they record their progress in the database and "know" where they left off. If the script fails, it will raise an error and add an entry to the DBA_RECOVERABLE_SCRIPT view (and the related DBA_RECOVERABLE_SCRIPT_* views). You can then use the DBMS_STREAMS_ADM.RECOVER_OPERATION procedure to do the following:

  • Rollback the build: This undoes the build and purges the script metadata.
  • Purge the build without rolling back: Leaves everything as it is, just purges the script metadata (you will need to clean up manually: See DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION).
  • Forward the build: You will first need to correct the error condition. The forward will start with the command that erred, and if the command is successful, it will continue the build from that point.

If you want to use the forward recover, do not rollback or purge.

The caveat here is that DBMS_STREAMS_ADM.RECOVER_OPERATION will only recover a Streams build done through a DBMS_STREAMS_ADM.MAINTAIN_* procedure.
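As a rough sketch, a forward recovery after you have corrected the underlying error might look like this (run as STRM_ADMIN on the database where the MAINTAIN_* procedure was invoked; the script_id value comes from the query):

-- Find the failed script, then resume it from the point of failure
column script_id format a34
select script_id, invoking_procedure, status from dba_recoverable_script;

BEGIN
  DBMS_STREAMS_ADM.RECOVER_OPERATION(
    script_id      => '&script_id',   -- value returned by the query above
    operation_mode => 'FORWARD');     -- or 'ROLLBACK' / 'PURGE'
END;
/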

The DBMS_STREAMS_ADM.MAINTAIN_* scripts also provide an excellent baseline with which to begin scripting a custom Streams topology.
