14. Strategies for Migrating Data Quickly between Databases

Migrating data that has been stored in an existing source database to a new destination database is probably one of the most nerve-wracking experiences any Oracle DBA will encounter during her career. (Granted, patching a database is no less tedious; but if it’s been done correctly, it’s possible to rewind a database from the results of a failed patching attempt, and that’s not always the case for a failed data migration attempt.) The good news is that Oracle offers numerous utilities and methodologies to successfully migrate all or part of an Oracle database’s data to a new destination with virtually no service interruptions, but the bad news is that there are so many different methods that it can be difficult to decide exactly which migration path is the most appropriate for a given scenario.

This chapter therefore attempts to answer various frequently asked questions, including the following:

Image Why might you consider moving all or part of your database’s data between the original source database and a different destination database?

Image How do you perform the migration between source and destination databases with a keen regard to the service level agreements (SLAs) of the application workloads affected by the migration?

Image Why is Transportable Tablespace Sets (TTS) probably your best choice for the majority of situations to migrate data between source and destination databases?

Image When would you consider alternatives to TTS such as Data Pump Export and Import, Oracle Streams, Oracle GoldenGate, and Recovery Manager to migrate data?

Image What do the newest features of Oracle Database 12c portend for migration of data between source and destination databases, especially during upgrades from pre-12c databases?

Why Bother Migrating?

While there’s no doubt that data migration can be a tedious process to plan and execute, there are also some excellent reasons to perform a data migration.

One of the most obvious reasons for data migration is to upgrade an existing Oracle database to a later (perhaps the latest) Oracle database release. While it’s certainly possible to perform either an in-place or out-of-place database upgrade, there may be severe restrictions on the amount of time that a database may remain in nontransactional or read-only mode as the upgrade completes.

Also, IT organizations periodically make strategic decisions to move to a new storage platform—say, from one vendor’s storage area network (SAN) to another—and the storage administrators must then present their DBA colleagues with an ultimatum: Migrate or perish! In this situation, an Oracle DBA will have little bargaining power because often this mandate is driven by a need to economize on storage costs, improve the reliability of existing storage components, or possibly even make a move to reduce the human capital costs of storage administration.

Finally, many IT organizations in the middle of the 2010s have been forced to confront the licensing costs of having multiple Oracle databases spread across dozens or hundreds of dedicated individual servers and SANs. Organizations are therefore slowly but surely moving away from this “siloed” approach, and many have begun adopting a strategy of consolidation—that is, hosting multiple databases on an extremely robust enterprise computing system such as an Oracle Exadata Database Machine. Since Exadata runs a little-endian operating system (Linux) at its core, this transition often requires not just the migration of databases and their data to the new environment but also the conversion of data from big-endian operating systems such as Solaris and HP-UX.

Determining the Best Strategy

Ironically, the most important consideration when contemplating a database migration project has nothing to do with the database itself or even the data stored in the database. The crucial driving factor that determines the best migration strategy is the SLA for the application workloads that are accessing the data to be migrated within the source database—what can be thought of as an application’s window of inopportunity. Understanding the expected SLA for an application workload is the key to the success of any migration, and there are several subfactors to be considered as well.

Real-Time versus Near Real-Time Migration

A mission-critical application may still be able to tolerate a reasonable amount of downtime while the migration completes. If downtime is not tolerable at all, then the only alternative is to consider employing the methods that initially transfer data between the source and destination databases and then keep newly added or modified data in synch between source and destination until it’s time to effect a cutover to the destination database.

Read-Only Tolerance

Equally important, some applications can tolerate at least some and perhaps even considerable time during which the application can only read (but not modify) the data. Real-time systems such as electronic trading systems, banking transactional financial applications, and traffic control applications are obvious exceptions to this rule; however, consider an order entry system that operates at peak volume for only a relatively narrow window during weekdays. It may be possible to ask end users to record new orders via alternative electronic or manual methods and then simply rekey those transactions just a few minutes later, after all data has been migrated and validated as successfully transferred.

Reversibility

Finally, it’s important to consider a fallback plan in case the data migration effort must be aborted shortly after switchover to the new destination database. In many cases, this is simply due to a failure of forethought while planning the data migration project. For example:

Image What if data migration accidentally caused unexpected data corruption?

Image What if transaction data has been captured successfully at the source, but the delivery of that changed data failed at the destination and the failure went undetected before the switchover?

Image What if unexpected character-set conversion issues are encountered because insufficient test data was available during final system testing?

It’s therefore crucial to imagine the unimaginable, draw up plans for a “go/no go” decision point during the migration, and at least consider the likelihood of having to revert to the original production source.

Considering What Data to Migrate

When contemplating what to migrate, it’s also helpful to consider just how much historical data needs to be migrated and how much of it will be needed immediately after the completion of the data migration process.

One of the most valuable questions to ask during a migration planning process is, Do we need all this data? Many times, application users expect their IT team to tell them when they should consider archiving at least some of their data. The good news here is that data migration is the perfect time to ask this question and perhaps complete the purging process that application data owners may have been contemplating for a long time but were hesitant to ask their Oracle DBA to assist with.

Even if the application data owners are thoroughly convinced that all of their data is absolutely golden and must be migrated in one fell swoop, pressing this question a bit more strenuously may force them to admit that at least some data is outmoded and perhaps even essentially useless. Once the data owners realize this, it’s important to reassure them that they are making the right choice to eliminate at least some of their data. And if the Oracle DBA is fortunate enough to be migrating application data to an Oracle 12c database environment, it’s important to discuss the new Information Lifecycle Management (ILM) features of that release and explain how Oracle 12c’s Automatic Data Optimization (ADO) and In-Database Archiving (IDA) might work to their advantage.
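As a purely illustrative sketch (the sales table, its order_date column, and the retention thresholds are hypothetical, and note the licensing requirement described in the note that follows), an ADO compression policy and In-Database Archiving could be enabled along these lines:

-- Automatic Data Optimization: compress rows untouched for 90 days
ALTER TABLE sales ILM ADD POLICY
    ROW STORE COMPRESS ADVANCED ROW
    AFTER 90 DAYS OF NO MODIFICATION;

-- In-Database Archiving: enable row archival, then mark rows older than five years as archived
ALTER TABLE sales ROW ARCHIVAL;
UPDATE sales
   SET ora_archive_state = '1'
 WHERE order_date < ADD_MONTHS(SYSDATE, -60);
COMMIT;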


Note

Remember that ILM, ADO, and IDA features do require licensing the Oracle Database 12c Advanced Compression option.


Even if the application data owners insist that all their data must be moved during the initial migration, it’s quite possible that not all the historical data in the source database will be immediately necessary to conduct business on the destination database. For example, sales orders from the past few days, weeks, or months may still be quite valuable, but will application users need to actively search for 5-year-old data just moments after the initial migration is complete? Since it’s improbable that such a query will need to be answered on day one, that may limit the scope of the initial data migration to only a few million rows; then, while the application is accessing the most recent data, the remainder of the historical data can be migrated during a later off-peak period.

Data Migration Methods

Once the window of inopportunity—the SLA of the application(s) accessing the database to be migrated—has been identified, it is relatively simple to choose an appropriate migration method. The migration methodologies discussed in the remainder of this chapter are therefore divided between those that permit the capture of transaction activity—which allows application users to continue to modify data in the source system without any interruptions during the migration—and those that require the source database to be either partially or completely quiesced until the migration to the destination database is complete.

Transactional Capture Migration Methods

A database that supports applications whose data simply must remain accessible until the very last second before the applications are repointed to the new destination database requires special concern during data migration. Migration methods for these databases must leverage a tool or technique that permits data to flow between source and destination databases while transactions continue on the source database until the very last second. Table 14.1 summarizes these migration methods.


Table 14.1 Transactional Migration Methods

Logical Standby Database

When transactional integrity must be preserved between the source and destination systems, a simple method involves using Oracle Data Guard to create a physical standby database on the desired destination platform:

1. That physical standby database can then be transformed into a logical standby database.

2. The destination database can then continue to receive the source database’s transactional data manipulation language (DML) changes, which are applied via SQL Apply.

3. Once the data migration is complete, SQL Apply is halted on the destination (logical standby) database, and a failover to that database is performed.

4. Finally, all application workloads are simply pointed to the new destination database.

The use of database services is strongly recommended to ease the transition between source and destination systems. Note that if database services have been registered with the Oracle Data Guard Broker, then a failover operation will automatically point application workloads to the destination database.
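As a rough sketch of steps 1 through 3 (the destination database name db12010 is assumed here; the complete procedure, including prerequisites for unsupported datatypes, is in the Data Guard documentation), the conversion and cutover look something like this:

-- On the physical standby (destination): stop Redo Apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- On the primary (source): build the LogMiner dictionary into the redo stream
EXECUTE DBMS_LOGSTDBY.BUILD;

-- On the standby: convert it to a logical standby, reopen it, and start SQL Apply
ALTER DATABASE RECOVER TO LOGICAL STANDBY db12010;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;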

Oracle Streams

Oracle Streams—essentially, database replication using LogMiner techniques—is another option that is included in the licensing costs for Oracle Database Enterprise Edition. Streams does support almost all Oracle datatypes, but it is notoriously complex to set up, and many Oracle DBAs eschew this method in favor of other technologies because it’s difficult to tune its performance effectively. However, as a one-time mechanism for migrating data between source and destination databases, it may be sufficient for a reasonably sized data migration.


Note

As of Oracle Database 12c, Oracle Streams has been deprecated in favor of Oracle XStreams, which is based on Oracle GoldenGate technology. For more information, see MOS Note 1644090.1, “12c streams and change data capture (CDC) support.”


Oracle GoldenGate

Oracle GoldenGate (OGG) is perhaps the easiest way to migrate data between source and destination databases while also keeping that data synchronized between the two databases.

An in-depth discussion of the myriad features of OGG would fill a book by itself, but in essence, OGG uses two distinct sets of processes to perform data migration:

Image EXTRACT processes are responsible for change data capture. They mine the transaction logs of the source database for committed transactions, transmit those changes across the network to the destination system, and write the changes in a proprietary format to a series of trail files.

Image REPLICAT processes are responsible for change data delivery. They read the trail files on the destination system and then apply those changes directly to the destination database.

OGG offers several methods for initially populating data between source and destination databases, and it supports transferring of data between dozens of commonly used relational database management systems (RDBMSs), including DB2, SQL Server, Sybase, and Teradata. OGG also offers extreme flexibility for filtering of data, robust functions for modifying the data on the fly for loading into different data structures and tables, and even the capability to replicate data bidirectionally between two systems.
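To give a flavor of the configuration involved, the following minimal classic-capture parameter files sketch a simple one-way setup; the process names, the tpch schema, the destination host, the manager port, and the trail prefix are all assumptions for illustration, and a production configuration would typically add a data pump process, checkpoint tables, and error handling:

-- EXTRACT parameter file (source system): change data capture
EXTRACT exttpch
USERID ogg_admin, PASSWORD ********
RMTHOST desthost, MGRPORT 7809
RMTTRAIL ./dirdat/tp
TABLE tpch.*;

-- REPLICAT parameter file (destination system): change data delivery
REPLICAT reptpch
USERID ogg_admin, PASSWORD ********
ASSUMETARGETDEFS
MAP tpch.*, TARGET tpch.*;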

The list price for OGG licensing can sometimes appear to be quite daunting, but a savvy Oracle DBA can often negotiate a term license for OGG just for the duration of the migration, especially if the migration is from a non-Oracle database to an Oracle database environment.

Nontransactional Migration Methods

These migration methods, most of which are based on Recovery Manager (RMAN), require that application workloads be prevented from applying changes to the source database for some period of time—usually determined by how long it will take to complete the final migration of data between the source and destination databases—to insure against the loss of any transactional data. A brief summary of these migration methods is provided in Table 14.2.


Table 14.2 Nontransactional Migration Methods

Cloning Databases with Recovery Manager

In many cases, Oracle Recovery Manager (RMAN) can be leveraged to clone the source database as a new physical database at its new destination. This technique leverages the well-known DUPLICATE DATABASE command, which has grown in power and flexibility in each subsequent Oracle database release since its introduction in Oracle 8.1.7.

In essence, cloning the source database to its new destination with DUPLICATE DATABASE comprises a few simple steps that are handled automatically within the command structure itself:

1. The source database’s SPFILE is cloned to its new destination, and then the destination database’s instance is restarted using the new SPFILE.

2. The source database’s control files are cloned to their new destination in the locations specified by DB_CREATE_ONLINE_LOG_DEST, and then the destination database is mounted using the new control files.

3. All the required datafiles for specified tablespaces are cloned to the new destination. Note that it’s possible to migrate only a limited number of tablespace sets during the initial migration to the new destination database, and other techniques—for example, Transportable Tablespaces—can be used to migrate the remaining tablespace sets from source to destination at a later time.

4. The destination database is opened, and all online redo logs are re-created at the locations specified by DB_CREATE_ONLINE_LOG_DEST.

5. TEMPFILEs are re-added for all existing temporary tablespaces. (Note that prior to Oracle Database 11g Release 1, TEMPFILEs had to be re-created manually after cloning was completed.)

DUPLICATE DATABASE also offers some additional advantages over other data migration methods:

Image If wide network bandwidth is available between the source and destination platforms, RMAN’s ability to leverage multiple parallel channels can take dramatic advantage of that bandwidth to quickly transfer data between the source and destination platforms.

Image If limited storage space is available on the destination platform, RMAN can leverage image copies and backup sets already present on the source platform as the source for cloning the data.

Image Starting with Oracle Database release 12.1.0.1, DUPLICATE DATABASE automatically decides whether to use backup sets or image copies when transmitting data directly to the destination database. (Prior releases always used image copies of all datafiles, which tended to consume significant network bandwidth during cloning.)

The biggest disadvantage of using DUPLICATE DATABASE to migrate data to its new destination is that the source database cannot process any transactions while the cloning is in progress. In addition, it’s important to remember that the new destination database will be assigned a new DBID after a successful cloning operation, so it will in essence be a completely new database.
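As a hedged illustration of the command itself (the srcdb and dstdb connection identifiers and the +DATA disk group are placeholders for this example), an active duplication to the destination might be invoked as follows:

RMAN> CONNECT TARGET sys@srcdb
RMAN> CONNECT AUXILIARY sys@dstdb

RMAN> DUPLICATE TARGET DATABASE TO dstdb
        FROM ACTIVE DATABASE
        SPFILE
          SET db_create_file_dest '+DATA'
        NOFILENAMECHECK;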


Note

For more information on the latest features of DUPLICATE DATABASE in Oracle 12c, see MOS Note 369644.1, “Frequently asked questions about restoring or duplicating between different versions and platforms.”


Data Guard: Physical Standby Database

One of the most effective ways to quickly migrate an Oracle database from one platform to another is to leverage Oracle Data Guard to create a new physical standby database at the desired destination. The destination database will continue to receive online redo from the source database until the DBA decides that it’s finally time to perform a switchover to the physical standby, which then becomes the new primary database.

Just as for logical standby databases, database services are strongly recommended for routing application connectivity in the case of a physical standby database. Because a database service can be registered to start only when its database is running in a specific role (primary or physical standby), application workload activity can immediately resume against the new destination database as soon as the switchover to it is complete.
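For example, on a Grid Infrastructure-managed destination, a role-based service created with srvctl (the database and service names here are hypothetical) starts only when its database runs in the specified role, so application connections follow the primary role automatically after a switchover:

$> srvctl add service -db db12010 -service sales_svc -role PRIMARY
$> srvctl start service -db db12010 -service sales_svc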


Note

It’s a little-known fact that Oracle Data Guard can be used for transforming and migrating data between heterogeneous platforms. See MOS Note 413484.1, “Data Guard support for heterogeneous primary and physical standbys in same Data Guard configuration,” for complete information.


Transportable Database

If applications can tolerate a potentially lengthy transactional downtime, or if the database to be migrated is relatively small, another option is transportable database (TDB). TDB has been available since Oracle release 10.2.0.1 and offers a simple, relatively painless method to migrate an entire database in one fell swoop between source and destination platforms.

Moving a database from one platform to another using TDB essentially involves the following steps:

1. The entire source database must be shut down and then reopened in read-only mode.

2. TDB is invoked, which generates a script that handles the migration of the entire database to its new destination platform. As part of that migration, a new SPFILE is generated on the destination platform, and the transport script re-creates the new database’s control files and datafiles in their proper new locations.

3. Image-copy backups of the source database’s control file and datafiles are shipped automatically to the destination database platform.

4. TDB starts up a new database instance using the new SPFILE.

5. The database’s control files are opened on the destination database, bringing the database into MOUNT mode.

6. The database accepts its datafiles in their new physical location; they are automatically renamed via the corresponding TDB script.

7. The database is opened in read/write mode on the destination platform. As part of this final step, new online redo log groups are re-created at the locations specified by the DB_CREATE_ONLINE_LOG_DEST parameter and new TEMPFILEs are also created in their proper locations.

However, TDB does have some prerequisites that must be considered:

Image First and foremost, the entire source database must be open in read-only mode for the duration of the migration. While this may be a small concern for a read-mostly application that can tolerate a brief amount of time without transaction processing, it is anathema for a database that supports an OLTP application.

Image The determining factor for how long the source database must remain in read-only mode is the time it will take to transfer image copies of all nonsystem datafiles from the source to the destination platform.

Image An important consideration not to be missed is that endian boundaries cannot be crossed during database migration via TDB. For example, TDB cannot be used to migrate a source database on an HP-UX (big-endian) platform to a Linux (little-endian) platform.

A simple query against the view V$TRANSPORTABLE_PLATFORM shows the list of OS platforms that Oracle Database supports for TDB operations. And when V$TRANSPORTABLE_PLATFORM is joined to V$DATABASE, it becomes simple to verify the endianness of the current database’s OS platform, as Listings 14.1 and 14.2 show.

Listing 14.1 Determining Platform Support for Database Migration Operations


SET LINESIZE 80
COL endian_format   FORMAT A12      HEADING "Endian|Format"
COL platform_name   FORMAT A60      HEADING "Platform Name" WRAP
TTITLE "Platforms Currently Supported for TTS / TDB Operations|(from
V$TRANSPORTABLE_PLATFORM)"
SELECT
    endian_format
   ,platform_name
  FROM v$transportable_platform
 ORDER BY endian_format, platform_name;
TTITLE OFF

            Platforms Currently Supported for TTS / TDB Operations
                        (from V$TRANSPORTABLE_PLATFORM)

Endian
Format       Platform Name
------------ ----------------------------------------------------------
Big          AIX-Based Systems (64-bit)
Big          Apple Mac OS
Big          HP-UX (64-bit)
Big          HP-UX IA (64-bit)
Big          IBM Power Based Linux
Big          IBM zSeries Based Linux
Big          Solaris[tm] OE (32-bit)
Big          Solaris[tm] OE (64-bit)
Little       Apple Mac OS (x86-64)
Little       HP IA Open VMS
Little       HP Open VMS
Little       HP Tru64 UNIX
Little       Linux IA (32-bit)
Little       Linux IA (64-bit)
Little       Linux x86 64-bit
Little       Microsoft Windows IA (32-bit)
Little       Microsoft Windows IA (64-bit)
Little       Microsoft Windows x86 64-bit
Little       Solaris Operating System (x86)
Little       Solaris Operating System (x86-64)

20 rows selected.


Listing 14.2 Discovering Current Platform Details


SET LINESIZE 60
TTITLE "Current Database Platform Endianness|(from V$DATABASE +
V$TRANSPORTABLE_PLATFORM)"
COL name            FORMAT A16      HEADING "Database Name"
COL endian_format   FORMAT A12      HEADING "Endian|Format"
SELECT
     D.name
    ,TP.endian_format
  FROM
      v$transportable_platform TP
     ,v$database D
 WHERE TP.platform_name = D.platform_name;
TTITLE OFF

            Current Database Platform Endianness
        (from V$DATABASE + V$TRANSPORTABLE_PLATFORM)

                 Endian
Database Name    Format
---------------- ------------
DB11203          Little


Listing 14.3 shows how the DBMS_TDB package can be used to verify if the source database is ready for transport to its destination.

Listing 14.3 Verifying Database Transportability


-----
-- Using DBMS_TDB to validate if a database is ready for transport
-----
SET SERVEROUTPUT ON
DECLARE
    db_check BOOLEAN;
BEGIN

    -----
    -- Can this database be transported to
    -- the specified destination platform?
    -----
    db_check :=
        DBMS_TDB.CHECK_DB(
             target_platform_name => 'Linux IA (32-bit)'
            ,skip_option => DBMS_TDB.SKIP_OFFLINE
        );
    IF db_check
        THEN DBMS_OUTPUT.PUT_LINE('Database can be transferred to destination platform.');
        ELSE DBMS_OUTPUT.PUT_LINE('Warning!!! Database CANNOT be transported to destination platform.');
    END IF;

    -----
    -- Are there any directories or external objects that need to be
    -- transferred separately after these tablespace(s) have been
    -- transported to the destination platform?
    -----
    db_check := DBMS_TDB.CHECK_EXTERNAL;
    IF db_check
        THEN DBMS_OUTPUT.PUT_LINE('External tables, directories, or BFILEs exist and must be transferred separately.');
        ELSE DBMS_OUTPUT.PUT_LINE('No external tables, directories, or BFILEs need to be transferred.');
    END IF;

END;
/


For a complete discussion on how to leverage TDB as a database migration methodology, see MOS Note 1401921.1, “Cross-platform database migration (across same endian) using RMAN transportable database.”
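The heart of step 2 above is RMAN’s CONVERT DATABASE command; the following is only a hedged sketch (the database name, transport script location, target platform, and file name conversion pattern are all assumptions), run while the source database is open read-only:

RMAN> CONVERT DATABASE
        NEW DATABASE 'db12010'
        TRANSPORT SCRIPT '/home/oracle/TDB/transport_db12010.sql'
        TO PLATFORM 'Linux x86 64-bit'
        DB_FILE_NAME_CONVERT '/u02/app/oracle/oradata/db10205','/home/oracle/TDB';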

Transportable Tablespace Sets

Perhaps the most flexible toolset for any database migration effort is the Transportable Tablespace Set (TTS) utility. First introduced in Oracle 8i and extended with cross-platform support in Oracle Database 10g Release 1, it offers the capability to transfer just what’s needed from a source Oracle database to its new destination database. Even better, TTS permits crossing endian boundaries during data migration, and the conversion from little-endian to big-endian format (or vice versa) can take place on either the source or destination database via the RMAN CONVERT command.
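When an endian conversion is actually required, the RMAN CONVERT command can be run on whichever side is more convenient. The following commands are illustrative sketches only; the tablespace names, platform names, and paths are assumptions borrowed from the examples later in this section:

-- On the source: convert and write datafile copies ready for the destination platform
RMAN> CONVERT TABLESPACE tpch_data, tpch_idx
        TO PLATFORM 'Linux x86 64-bit'
        FORMAT '/home/oracle/TTS/%U';

-- Or on the destination: convert datafile copies received from the source
RMAN> CONVERT DATAFILE '/home/oracle/TTS/tpch_data.dbf'
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        FORMAT '+DATA/ORCL/DATAFILE/%U';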

TTS, however, does have some limitations:

Image TTS boundaries must be respected. For example, if a table in the TPCH_DATA tablespace has a related index in the TPCH_IDX tablespace, both tablespaces must be moved as a set during their migration. (Remember that the DBMS_TTS utility can be used to determine whether the tablespaces to be moved are fully self-contained.)

Image While the migration takes place, the tablespace set’s tablespaces must be open in read-only mode on the source system. While this might appear to be a disadvantage, it’s actually an advantage because any other application activity against other tablespace sets continues unabated.

TTS data migration typically involves a few basic steps: first, an appropriate tablespace set in the source database (DB10205)—in this case, tablespaces TPCH_DATA and TPCH_IDX—is identified and verified for completeness, as shown in Listing 14.4.

Listing 14.4 Verifying Tablespace Set Transportability


BEGIN
    DBMS_TTS.TRANSPORT_SET_CHECK(
         ts_list => 'TPCH_DATA,TPCH_IDX'
        ,incl_constraints => TRUE
        ,full_check => TRUE
    );
END;
/
SELECT * FROM transport_set_violations;
no rows selected


To simplify the transfer of datafiles from the source database (DB10205) to the selected destination database (DB12010) using DBMS_FILE_TRANSFER, Listing 14.5 shows how to prepare the necessary directory objects and database link.

Listing 14.5 Creating a Transportability Infrastructure


-----
-- Prepare the source database
-----

$> mkdir /home/oracle/TTS

-- Database link to the destination; DBMS_FILE_TRANSFER.PUT_FILE (Listing 14.9)
-- pushes datafiles and the dump file through this link
DROP DATABASE LINK db12010;
CREATE DATABASE LINK db12010
    CONNECT TO system IDENTIFIED BY "oracle_4U"
    USING 'db12010';

CREATE OR REPLACE DIRECTORY db10205_dbf
    AS '/u02/app/oracle/oradata/db10205';
CREATE OR REPLACE DIRECTORY ttsfiles
    AS '/home/oracle/TTS';
GRANT READ, WRITE ON DIRECTORY ttsfiles TO PUBLIC;

-----
-- Prepare the destination database
-----
$> mkdir /home/oracle/TTS

CREATE OR REPLACE DIRECTORY db12010_data
    AS '+DATA/ORCL/DATAFILE';
CREATE OR REPLACE DIRECTORY ttsfiles
    AS '/home/oracle/TTS';
GRANT READ, WRITE ON DIRECTORY ttsfiles TO PUBLIC;


The first step in the migration process is to restrict the accessibility of the selected tablespace sets by bringing them into read-only mode on the source database, as Listing 14.6 shows.

Listing 14.6 Switching Tablespaces to Read-Only Mode


ALTER TABLESPACE tpch_data READ ONLY;
Tablespace altered.
ALTER TABLESPACE tpch_idx READ ONLY;
Tablespace altered.


Data Pump Export is used to capture the metadata for all objects within the tablespace sets on the source database into a Data Pump dump set. Listing 14.7 shows the Data Pump Export parameter file used to capture just the desired metadata, and Listing 14.8 shows the invocation of the Data Pump Export operation.

Listing 14.7 Data Pump Export Parameter File


JOB_NAME = TTS_1
DIRECTORY = TTSFILES
DUMPFILE = tts_1.dmp
LOGFILE = tts_1.log
TRANSPORT_TABLESPACES = "tpch_data,tpch_idx"
TRANSPORT_FULL_CHECK = TRUE


Listing 14.8 Results of Data Pump Export Invocation


$> expdp system/oracle_4U PARFILE=/home/oracle/TTS/TTS_1.dpectl

Export: Release 10.2.0.1.0 - 64bit Production on Friday, 16 January, 2015 18:01:30

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01":
userid="/********@(DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/app/oracle/product
  /10.2.0/db_1/bin/oracle)(ARGV0=oraclefxkl)(ARGS=(DESCRIPTION=(LOCAL=YES)(ADDRESS=
  (PROTOCOL=beq))))(ENVS=ORACLE_SID=fxkl))(CONNECT_DATA=(SID=fxkl)))
  AS SYSDBA" transport_tablespaces=
  TPCH_DATA, TPCH_IDX dumpfile=tts_2.dmp directory=TTSFILES logfile=tts_2.log
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded


Image copies of all datafiles for the corresponding tablespace sets are copied from the source to the destination database using either OS utilities or Oracle’s DBMS_FILE_TRANSFER utility, as Listing 14.9 shows.

Listing 14.9 Invoking DBMS_FILE_TRANSFER.PUT_FILE


BEGIN
    DBMS_FILE_TRANSFER.PUT_FILE(
         source_directory_object => 'DB10205_DBF'
        ,source_file_name => 'tpch_data.dbf'
        ,destination_directory_object => 'DB12010_DATA'
        ,destination_file_name => 'TPCH_DATA.DBF'
        ,destination_database => 'DB12010'
    );
    DBMS_FILE_TRANSFER.PUT_FILE(
         source_directory_object => 'DB10205_DBF'
        ,source_file_name => 'tpch_idx.dbf'
        ,destination_directory_object => 'DB12010_DATA'
        ,destination_file_name => 'TPCH_IDX.DBF'
        ,destination_database => 'DB12010'
    );
    DBMS_FILE_TRANSFER.PUT_FILE(
         source_directory_object => 'TTSFILES'
        ,source_file_name => 'tts_1.dmp'
        ,destination_directory_object => 'TTSFILES'
        ,destination_file_name => 'tts_1.dmp'
        ,destination_database => 'DB12010'
    );
END;
/


On the destination database, Data Pump Import is used to load the TTS metadata contained in the Data Pump dump into the destination database, thus “plugging in” the TTS to that database, as shown in Listings 14.10 and 14.11.

Listing 14.10 Data Pump Import Parameter File


JOB_NAME = TTS_1
DIRECTORY = TTSFILES
DUMPFILE = tts_1.dmp
LOGFILE = tts_1.log
TRANSPORT_DATAFILES = '+DATA/ORCL/DATAFILE/TPCH_DATA.DBF','+DATA/ORCL/DATAFILE/TPCH_IDX.DBF'


Listing 14.11 Plugging Transported Tablespaces into Destination Database


$> impdp system/oracle_4U PARFILE=tts_1.dpictl

>> Results of successful transportable tablespace import at destination
>> (from destination database's alert log)

. . .
Fri Jan 16 18:32:18 2015
DM00 started with pid=36, OS id=2269, job SYSTEM.TTS_1
Fri Jan 16 18:32:18 2014
DW00 started with pid=37, OS id=2271, wid=1, job SYSTEM.TTS_1
Plug in tablespace TPCH_IDX with datafile
  '+DATA/ORCL/DATAFILE/TPCH_IDX.DBF'
Plug in tablespace TPCH_DATA with datafile
  '+DATA/ORCL/DATAFILE/TPCH_DATA.DBF'
. . .


Finally, the newly migrated TTSs are brought into read/write mode on the destination database, and application workloads can now start processing against the transported tablespaces in the destination database, as Listing 14.12 shows.

Listing 14.12 Switching Transported Tablespaces to Read/Write Mode


SQL> ALTER TABLESPACE tpch_data READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE tpch_idx READ WRITE;
Tablespace altered.



Note

It’s also possible to transport tablespaces using the RMAN TRANSPORT TABLESPACE command, which also leverages RMAN’s tablespace point-in-time recovery (TSPITR) capability so that the selected tablespace sets can be transported from the source to the destination as of an earlier system change number (SCN). See MOS Note 1461278.2, “Information Center: Transportable tablespaces (TTS) for Oracle databases,” for detailed information on this methodology.


Cross-Platform Transportable Tablespaces

Available when the destination database is running Oracle 11.2.0.3 or later, cross-platform transportable tablespaces (XTTS) is an intriguing variation on the TTS methodology. XTTS permits the source database’s TTSs to remain open in read/write mode for almost the entire duration of the data migration effort except for a brief outage period at the very end.

There are several similarities between TTS and XTTS:

Image TTS boundaries must still be respected, and Data Pump Export is still used to capture TTS metadata on the source.

Image Endian boundaries can be crossed, and conversion to different endian-ness can be requested at either the source or the destination database.

Image Data Pump Import is still used to plug in TTS metadata to the destination database.

However, the main processing flow is significantly different. There are some prerequisites to accomplish, as shown in Listings 14.13 and 14.14.

Listing 14.13 XTTS: Prerequisites to Initial Execution


-----
-- On source database:
-----
SQL> CREATE OR REPLACE DIRECTORY ora10g_dbf
    AS '+DATA/ORA10G/DATAFILE';
SQL> GRANT READ, WRITE ON DIRECTORY ora10g_dbf TO PUBLIC;

SQL> CREATE OR REPLACE DIRECTORY xttsfiles
    AS '/home/oracle/XTTSFILES';
SQL> GRANT READ, WRITE ON DIRECTORY xttsfiles TO PUBLIC;

-----
-- On destination database:
-----
SQL> CREATE OR REPLACE DIRECTORY ora12c_dbf
    AS '+DATA/ORA12010/DATAFILE';
SQL> GRANT READ, WRITE ON DIRECTORY ora12c_dbf TO PUBLIC;

SQL> CREATE OR REPLACE DIRECTORY xttsfiles
    AS '/home/oracle/XTTSFILES';
SQL> GRANT READ, WRITE ON DIRECTORY xttsfiles TO PUBLIC;
SQL> DROP DATABASE LINK ora10g;
SQL> CREATE DATABASE LINK ora10g
    CONNECT TO system IDENTIFIED BY oracle_4U
    USING 'ORA10G';


Listing 14.14 XTTS: Extracting XTT Programs and Configuring xtt.properties


-- Extract the XTTS helper scripts on both the source and destination platforms
$> unzip -d /home/oracle/XTTS rman_xttconvert_1.4.zip

-- Edit the xtt.properties configuration file on both the source and destination platforms:

# xtt.properties
# Parameters for Cross-Platform Transportable Tablespace (XTTS) demonstrations
# Tablespace(s) for XTTS transport:
tablespaces=AP_DATA,AP_IDX
# Source database parameters:
platformid=2
srcdir=ORA10G_DBF
srclink=ORA10G
dfcopydir=/home/oracle/XTTSFILES
backupformat=/home/oracle/XTTSFILES
# Destination database parameters:
dstdir=ORA12C_DBF
stageondest=/home/oracle/XTTSFILES
storageondest=+DATA
backupondest=+FRA
asm_home=/u01/app/oracle/product/12.1.0/grid
asm_sid=+ASM
parallel=4
rollparallel=2


XTTS requires capture, transport, and restoration of an initial set of incremental level 0 image copy backups of all datafiles for each TTS between the source and destination databases, as shown in Figure 14.1. Note that the endian conversion—if it is indeed required—is performed automatically during this step.


Figure 14.1 Initial extraction in cross-platform transportable tablespaces

Once the initial image copy backups have been transferred from source to destination, however, the XTTS methodology continues to take incremental level 1 backups on the source database. These backup sets are then transported to the destination database and applied against the previously transferred incremental level 0 image copies to recover them forward, as Figure 14.2 shows.


Figure 14.2 Incremental synchronization in cross-platform transportable tablespaces

When the files at the destination database are as close as desired to the source database and the DBA decides to complete the migration process, all TTSs are brought into read-only mode just once. This freezes any transactional activity on the source until one final incremental level 1 backup can be captured, transported to the destination, and then applied against the incremental level 0 image copies to recover them forward one last time, as shown in Figure 14.3.


Figure 14.3 Final synchronization in cross-platform transportable tablespaces

For an excruciatingly detailed discussion of how to download and deploy XTTS for a data migration effort, see MOS Note 1389592.1, “Reduce transportable tablespace downtime using cross platform incremental backup.”
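The mechanics are driven by the xttdriver.pl Perl script supplied in the rman_xttconvert package. The outline below is only a sketch of the workflow in that MOS note; the command-line options have changed between versions of the scripts, so treat the flags shown here as assumptions to be verified against the note before use:

# On both platforms: point TMPDIR at the directory holding xtt.properties
$> export TMPDIR=/home/oracle/XTTS

# On the source: prepare the initial level 0 datafile copies for transport
$> $ORACLE_HOME/perl/bin/perl xttdriver.pl -p

# Repeat as often as needed to keep the destination copies nearly current
$> $ORACLE_HOME/perl/bin/perl xttdriver.pl -i     # source: take a level 1 incremental backup
$> $ORACLE_HOME/perl/bin/perl xttdriver.pl -r     # destination: roll the copies forward

# Final pass: place the tablespaces in read-only mode on the source, run one last
# incremental/roll-forward cycle, then plug in the metadata with Data Pump as in the TTS examples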

Cross-Platform Transport

New in Oracle Database Release 12.1.0.1, the cross-platform transport (CPT) utility method uses a combination of RMAN, DBMS_FILE_TRANSFER, and Data Pump to perform data migration extremely similar to the methodology that XTTS provides.

As with XTTS, CPT offers similar advantages and restrictions:

Image TTS boundaries must still be respected.

Image Endian boundaries can be crossed, and conversion to different endian-ness can be requested at either the source or the destination database.

Image The tablespace sets selected for migration remain in read/write mode for the duration of the migration except for a short period at the end of the migration process when they must be briefly brought into read-only mode.

However, CPT also offers a much simpler methodology than XTTS, as shown in Figure 14.4.


Figure 14.4 Cross-platform transport workflow

The RMAN BACKUP FOR TRANSPORT command is used to initiate the migration. Note that this command now automatically calls Data Pump Export to capture the tablespace set’s metadata on the source database.

DBMS_FILE_TRANSFER (or any OS utility) transports all backup sets to the destination system, following which the final phase of the migration may commence.

At the destination database, the RMAN RESTORE FOREIGN command is invoked. Data Pump Import locates the specified dump set and uses it to load TTS metadata into the destination database, thus plugging in all TTSs.
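Put together, the CPT flow can be sketched as follows; the file locations are placeholders, and the tablespace names are carried over from the earlier TTS examples:

-- On the source database (tablespaces briefly read-only at this point):
RMAN> BACKUP FOR TRANSPORT
        FORMAT '/home/oracle/CPT/tpch_ts.bck'
        TABLESPACE tpch_data, tpch_idx
        DATAPUMP FORMAT '/home/oracle/CPT/tpch_meta.bck';

-- On the destination, after the backup sets have been transferred:
RMAN> RESTORE FOREIGN TABLESPACE tpch_data, tpch_idx
        TO NEW
        FROM BACKUPSET '/home/oracle/CPT/tpch_ts.bck'
        DUMP FILE FROM BACKUPSET '/home/oracle/CPT/tpch_meta.bck';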


Note

As of this book’s publication date, CPT is available only for migration between source and destination databases at Oracle Database release 12.1.0.1 or higher; however, it may be back-ported to earlier releases at a future date.


Piecemeal Migration Methods

It’s also possible to migrate data between source and destination databases using a more piecemeal approach that ranges from database utilities such as Data Pump Export and Import to completely manual methods, as Table 14.3 summarizes.


Table 14.3 Manual Migration Methods

Data Pump Export/Import

If the application can tolerate a potentially lengthy outage, then using Oracle Data Pump Export and Import utilities may offer an excellent solution to complete a data migration.

If disk space on the source and/or destination platforms is low or unavailable, but there is relatively wide network bandwidth between both platforms, then consider performing Data Pump Import operations directly over the network from the destination database by setting the NETWORK_LINK parameter to an appropriate database link to the source database.

Consider using the FLASHBACK_SCN parameter to isolate transactions to a particular point in time during data export. However, be aware that this may require an extremely large UNDO tablespace to provide sufficient read consistency for the objects being exported.
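A hedged example that combines both techniques follows; the connect string, database link, schema, and SCN are placeholders, and because NETWORK_LINK is used, no dump file is written (the directory object is needed only for the log file):

$> impdp system/oracle_4U@db12010 \
     NETWORK_LINK=db10205 \
     SCHEMAS=tpch \
     FLASHBACK_SCN=1234567890 \
     DIRECTORY=DATA_PUMP_DIR \
     LOGFILE=tpch_net_import.log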

Finally, remember that there is no way to transactionally resynchronize tables that have been loaded via Data Pump Import; this requires a utility like Oracle GoldenGate, Oracle Streams, or other third-party tools that employ change data capture and delivery methods.

Data Pump Full Transportable Export/Import

If the source database is at least Oracle Database release 11.2.0.3 and the destination database that’s resident on Exadata is at least Oracle Database release 12.1.0.1, then Data Pump full transportable export (FTE) may be an attractive option to complete the migration between source and destination platforms. Figure 14.5 shows the basic strategy behind this method.


Figure 14.5 Data Pump full transportable export workflow

First, the desired non-system tablespace sets are identified and verified for completeness, as shown in Listing 14.15.

Listing 14.15 Verifying Self-Contained Tablespace Sets


EXEC DBMS_TTS.TRANSPORT_SET_CHECK('TPCH_DATA,TPCH_IDX',TRUE,TRUE);
PL/SQL procedure successfully completed.

SELECT * FROM transport_set_violations;
no rows selected


All non-system tablespaces about to be transported are brought into read-only mode, as shown in Listing 14.16.

Listing 14.16 Bringing Transportable Tablespaces into Read-Only State


SQL> ALTER TABLESPACE tpch_data READ ONLY;
Tablespace altered.

SQL> ALTER TABLESPACE tpch_idx READ ONLY;
Tablespace altered.


Data Pump Export is used to capture the metadata for all objects within the tablespace sets as well as all database objects that have been stored in the source database’s data dictionary (for example, packages, stored procedures) into a Data Pump dump set, as shown in Listings 14.17 and 14.18. To execute the Data Pump Export, issue the following command:

$> expdp system/oracle_4U parfile=/home/oracle/fte_db11203.dpectl

Listing 14.17 Contents of fte_db11203.dpectl Parameter File


DIRECTORY=DATA_PUMP_DIR
DUMPFILE=fte_ora11203.dmp
LOGFILE=fte_ora11203.log
FULL=Y
TRANSPORTABLE=ALWAYS
VERSION=12.0


Listing 14.18 Results of FTE Export


Export: Release 11.2.0.4.0 - Production on Fri Jan 16 20:23:34 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** parfile=/home/oracle/fte_ora11203.dpectl
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64.93 MB
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
...
<< many lines omitted for sake of brevity >>
...
. . exported "SYSTEM"."REPCAT$_TEMPLATE_SITES"               0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_TARGETS"             0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_USER_AUTHORIZATIONS"          0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_USER_PARM_VALUES"             0 KB       0 rows
. . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE"              0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
***********************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /u01/app/oracle/admin/ora11g/dpdump/fte_db11203.dmp
***********************************************************************
Datafiles required for transportable tablespace TPCH_DATA:
  +DATA/ora11g/datafile/tpch_data.258.858108115
Datafiles required for transportable tablespace TPCH_IDX:
  +DATA/ora11g/datafile/tpch_idx.257.858108117
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed ...


Next, the image copies of all datafiles for the corresponding tablespace sets are copied from the source to the destination database using either OS utilities or Oracle’s DBMS_FILE_TRANSFER utility.

On the destination database, Data Pump Import is used to load the TTS metadata contained in the Data Pump dump set into the destination database, thus plugging the TTS into that database. The first step in this process is to create a directory object, as shown in Listing 14.19.

Listing 14.19 Creating Directory Object for Pluggable Database Access to Data Pump Directory


CREATE OR REPLACE DIRECTORY dpdir
    AS '/u01/app/oracle/admin/cdb122/dpdump';
GRANT READ ON DIRECTORY dpdir TO PUBLIC;
GRANT WRITE ON DIRECTORY dpdir TO PUBLIC;


Finally, it’s time to import the metadata into the DB12010 database. Data Pump Import is invoked as follows to complete the transfer of all TTS metadata as well as all database objects that have been stored in the database’s data dictionary:

$> impdp system/oracle_4U@db12010 parfile=/home/oracle/fti_db12010.dpictl

Listing 14.20 shows the Data Pump Import parameter file, and Listing 14.21 displays abbreviated results of the Data Pump Import process.

Listing 14.20 Contents of fti_db12010.dpictl Parameter File


DIRECTORY=DPDIR
DUMPFILE=fte_db11203.dmp
LOGFILE=fti_db12010.log
FULL=Y
TRANSPORT_DATAFILES='/u01/app/oracle/oradata/db12010/tpch_data.dbf','/u01/app/oracle/oradata/db12010/tpch_idx.dbf'


Listing 14.21 Results of FTE Import


Import: Release 12.1.0.1.0 - Production on Fri Jan 16 21:14:03 2015

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Source timezone version is +00:00 and target timezone version is -07:00.
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@db12010 parfile=/home/oracle/fti_db12010.dpictl
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/TABLESPACE
. . .
<< many lines omitted for sake of brevity >>
. . .


Partition Migration

If the destination database is leveraging Oracle Partitioning features, then one of the fastest ways to load data into a partitioned table is to not load it at all but to use partition exchange mechanisms instead to exchange partitions between the two databases. Because partition exchange simply modifies the underlying metadata for the partitioned objects—essentially, plugging in a nonpartitioned table as a new partition in an existing partitioned table—it is one of the fastest and easiest ways to migrate data.

This data migration technique uses the following strategy:

1. The metadata for the tablespaces that comprise the partitioned tables and indexes is extracted via Data Pump Export.

2. The tablespace sets that comprise just the older, least frequently used partitions of the partitioned tables and their indexes are brought into read-only mode and are then transported to the destination system and placed in the appropriate storage locations. (By the way, this is also an excellent mechanism for migrating data from non-ASM to ASM storage.)

3. Using Data Pump Import, the tablespaces’ metadata is plugged into the destination database to complete the data transfer, and then these new tablespaces are placed into read/write mode.
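Once a transported staging table is available on the destination, the exchange itself is a metadata-only operation. The following statement is purely hypothetical (the table, partition, and staging-table names are invented for illustration) and simply swaps one quarter of data into the partitioned table:

ALTER TABLE sales_hist
  EXCHANGE PARTITION sales_2010_q1
  WITH TABLE sales_2010_q1_stage
  INCLUDING INDEXES
  WITHOUT VALIDATION;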

Manual Methods

Finally, what if none of these methods and utilities provides sufficient flexibility for loading data, or what if an application’s window of inopportunity must be somehow reduced to satisfy that application’s SLA? The only alternative may be to write custom SQL or PL/SQL code to complete data migration in a timely fashion between the source and destination databases.

These manual methods can definitely leverage database links from the destination to the source database, and they can take advantage of the implicit cursors provided by INSERT INTO [destination table] SELECT * FROM [source table]@source_database_link statements or CREATE TABLE [destination table] AS SELECT * FROM [source table]@source_database_link statements. These two methods allow the use of parallelism at the source and/or destination database for fast data extraction and loading. If only a handful of database objects must be handled manually, this approach may actually offer exactly the speed and flexibility that much more expensive tools provide.

If you are fortunate enough that the DDL for the objects to be loaded matches precisely between the source and destination databases, consider using either Data Pump Export/Import or the DBMS_METADATA package to extract and then construct empty objects on the destination database before loading any data. Finally, remember to use the proper hint to invoke direct-path loading for the INSERT INTO ... SELECT FROM SQL statement: /*+ APPEND PARALLEL(n) */. Note that NOLOGGING is a table-level attribute rather than a hint. (Recall that CREATE TABLE AS SELECT [CTAS] statements automatically leverage direct-path loading and, when a PARALLEL clause is specified, parallelism as well, without any extra hints.)
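For instance, a direct-path, parallel load across a database link might be sketched as follows; the tpch schema, the orders table, and the db10205 database link are illustrative only:

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t, 4) */ INTO tpch.orders t
SELECT /*+ PARALLEL(s, 4) */ *
  FROM tpch.orders@db10205 s;

COMMIT;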


My Oracle Support Reference Documents

The following My Oracle Support (MOS) documents contain valuable primary and ancillary information for additional reading and strengthening understanding of the myriad topics discussed in this chapter:

Image 369644.1: “Frequently asked questions about restoring or duplicating between different versions and platforms”

Image 413484.1: “Data Guard support for heterogeneous primary and physical standbys in same Data Guard configuration”

Image 831223.1: “Using RMAN incremental backups to update transportable tablespaces”

Image 1389592.1: “Reduce transportable tablespace downtime using cross-platform incremental backup”

Image 1401921.1: “Cross-platform database migration (across same endian) using RMAN transportable database”

Image 1472959.1: “Moving user datafiles between ASM disk groups using incrementally updated backups”

Image 1644090.1: “12c streams and change data capture (CDC) support”

Additionally, Oracle Database Upgrade, Migration and Transformation Tips and Techniques (Oracle Press, June 2015) by Edward Whalen and Jim Czuprynski is a useful resource.


Summary

You learned in this chapter that you shouldn’t be afraid to ask your users if they really do need all their data at the conclusion of the initial migration. It’s very likely that the applications accessing that data can tolerate a relatively short window of inopportunity during which data is accessible in read-only mode.

Of all the tools on an Oracle DBA’s tool belt, this chapter demonstrated that Transportable Tablespace Sets (TTS) is the closest thing to a Swiss Army knife because it makes short work of migrating extremely large volumes of data between source and destination databases—especially when an endian boundary needs to be crossed.

No single tool is appropriate for all migration scenarios. In fact, it usually makes more sense to leverage multiple tools—Data Pump Export/Import for more static data, TTS for data that can tolerate short periods of read-only access, and Oracle Streams or Oracle GoldenGate for the most volatile data—to accomplish the ultimate goals of the migration.
