Clusterware and Database Upgrades
by Syed Jaffar Hussain
Whenever Oracle announces a new version, the million-dollar question facing every organization and every DBA is whether to upgrade their current (and mostly stable) environment. There is little doubt that the prospect of an upgrade gives you, as a DBA, butterflies in your stomach and the feeling that it will take a lot of hard work to make the process successful. It’s hardly surprising that many organizations are afraid to accept the risks of an upgrade.
The objective of this chapter is to explain the key elements that drive an organization to consider upgrading. It also offers the best practices, tips, and tricks to upgrade your current Oracle Grid Infrastructure (GI) and databases to the new Oracle 12c.
Typically, organizations shy away from the risks of an upgrade for the following main reasons:
Here are a few key factors that drive an organization or DBA to consider an upgrade of their existing environment nevertheless:
This chapter used the following setup configuration to demonstrate an Oracle Clusterware 11gR2 (11.2.0.3) and Database upgrade to Oracle 12c (12.1.0.0.2):
Every successful deployment needs careful planning and must go through prerequisite verification before upgrading an Oracle environment. You are therefore advised to work through some of the important pre-upgrade tasks mentioned in the subsequent sections to ensure a smooth and successful upgrade.
Before any upgrade, it is essential to have an up-to-date backup copy of the existing Oracle software: GI and RDBMS, and also the Cluster’s critical components (OCR and OLR [Oracle Local Registry]) in order to secure yourself from any sort of disaster situations that might arise before and after the upgrade. We recommend that you consider the following backup actions:
Back up your current GI and RDBMS homes, preferably the entire Oracle software filesystem.
Perform a manual physical OCR/OLR backup.
Perform a full Oracle Database backup of the source databases.
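As a rough illustration of the first backup item, the following sketch tars up a software home under a timestamped name. The `backup_home` helper and the paths are hypothetical; the `ocrconfig` commands shown in the comments are the documented way to take OCR/OLR manual backups and must be run as root on a cluster node:

```shell
#!/bin/sh
# Sketch only: create a timestamped tar.gz backup of a software home
# (GI or RDBMS). Paths are placeholders for the homes in this chapter.
backup_home() {
  home="$1"; dest="$2"
  name="$(basename "$home")_$(date +%Y%m%d%H%M%S).tar.gz"
  tar -czf "$dest/$name" -C "$(dirname "$home")" "$(basename "$home")" \
    && echo "$dest/$name"
}

# On a real cluster node, the physical OCR/OLR backups are taken as root:
#   $GRID_HOME/bin/ocrconfig -manualbackup          # OCR manual backup
#   $GRID_HOME/bin/ocrconfig -local -manualbackup   # OLR manual backup
```

Keeping the timestamp in the file name lets you retain several generations of home backups side by side.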
Validating node (server) readiness prior to the upgrade procedure is one of the key steps of a successful upgrade. It is mandatory to verify all the prerequisites, such as storage, OS kernel parameters, OS patches, etc., on the current nodes to ensure that they meet the minimal requirements to proceed with the upgrade.
Prerequisite validations can be performed in two ways: with the Cluster Verification Utility (runcluvfy.sh) or by letting the Oracle Universal Installer (OUI) perform the verification as part of its prerequisite checking during the upgrade process. The following command demonstrates an example of how to use the cluvfy utility to perform the verification prior to installation of the Clusterware:
./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -fixup \
  -src_crshome /u01/app/11203/grid -dest_crshome /u01/app/12.1.0/grid \
  -dest_version 12.1.0 -verbose
Note The runcluvfy.sh must be executed from the 12c software staging location.
In the event of any validation failures produced by the preceding command, you should fix the issues and re-execute the command to ensure that all the prerequisite checks have been met.
Unsetting Oracle Variables
Ensure the Oracle variables for GRID and RDBMS homes on the local node are unset prior to cluster upgrade. The following example demonstrates how to unset Oracle variables on the Unix/Linux platforms:
$ unset ORACLE_HOME ORA_CRS_HOME TNS_ADMIN ORA_NLS10
The $ORACLE_HOME/bin path should not be part of the current PATH variable settings on the local node. If it is, remove the path from the current PATH settings.
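That PATH cleanup can be done with a small helper such as the following sketch (`strip_from_path` is a hypothetical function, and the Grid home path in the usage comment is a placeholder):

```shell
#!/bin/sh
# Remove one directory (e.g. $ORACLE_HOME/bin) from a colon-separated PATH.
strip_from_path() {
  dir="$1"; path="$2"
  # Split on ':', drop exact matches of the directory, rejoin with ':'.
  printf '%s\n' "$path" | tr ':' '\n' | grep -F -v -x "$dir" | paste -sd ':' -
}

# Usage: PATH=$(strip_from_path "$ORACLE_HOME/bin" "$PATH")
```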
Upgrading Flexibility and Restrictions
A few important points to keep in mind before proceeding with an upgrade process:
Cluster and ASM Oracle 12c Upgrade Compatibility Matrix
Before upgrading the current environment, review the compatibility matrix to determine whether it is capable of a direct upgrade to Oracle 12c. You also need to be prepared for the situation in which the current environment does not support a direct upgrade: how would you proceed then?
Table 13-1 depicts a compatibility matrix for performing direct Oracle 12c upgrades from earlier Oracle versions.
Table 13-1. Cluster Upgrade Compatibility Matrix
| Oracle Version | Compatibility |
|---|---|
| Oracle 10gR1 (10.1.0.5) | Direct upgrade possible |
| Oracle 10gR2 (10.2.0.3) | Direct upgrade possible |
| Oracle 11gR1 (11.1.0.6) | Direct upgrade possible |
| Oracle 11gR2 (11.2.0.2) | Direct upgrade possible: patch set 11.2.0.2.3 (PSU 3) or later must be applied |
If your current Oracle environment doesn’t match any of the Oracle versions listed in Table 13-1, you will have to perform an indirect upgrade, in which you need to first either apply a patch or upgrade it to the version specified in the table before you upgrade it to 12c.
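The Table 13-1 check can be scripted. The following sketch does a simple exact match against the listed releases; `direct_upgrade_to_12c` is a hypothetical helper, and a real check would treat these as minimum patch levels (and remember that 11.2.0.2 additionally requires PSU 3):

```shell
#!/bin/sh
# Return "yes" when the given Clusterware release appears in the
# direct-upgrade matrix of Table 13-1, "no" otherwise.
direct_upgrade_to_12c() {
  case "$1" in
    10.1.0.5|10.2.0.3|11.1.0.6|11.2.0.2) echo yes ;;
    *) echo no ;;
  esac
}
```

For example, `direct_upgrade_to_12c 11.2.0.1` prints `no`, telling you an intermediate patch or upgrade is needed first.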
Launching OUI
When all the prerequisites are met, and all upgrade best practices have been performed, you are good to go with the upgrade process. So, let’s get into the real action. Navigate through the Oracle 12c GI software staging location and launch OUI by running the following command:
./runInstaller
Initiating an Oracle Clusterware Upgrade
The guidelines provided here, with the help of the screenshots, will demonstrate a two-node Oracle 11gR2 (11.2.0.3) GI upgrade to Oracle 12c (12.1.0.0.2) on Linux 5 OS.
Figure 13-1. Select Installation Option screenshot
Figure 13-2. GI Node Selection screenshot
Figure 13-3. GI Management Repository Option screenshot
Figure 13-4. Specify Installation Location screenshot
Figure 13-5. Root script execution configuration screenshot
Figure 13-6. Select batches for nodes screenshot
Figure 13-7. Install Product screenshot
Figure 13-8. Install Product root execution screenshot
If you have opted to run the root.sh/rootupgrade.sh script manually, the screen shown in Figure 13-9 will appear.
Figure 13-9. rootupgrade script execution screenshot
Based on your response, the Installer then proceeds with running the script on the nodes, per the batch instructions specified previously. While the script is being executed, the Install Product screen shows on which node it is currently running.
After the script completes on the local node, you can query the CRS active and software versions from another command prompt on that node. You will notice that the CRS software version reports the new release, whereas the CRS active version still reports the previous release. This is expected: until the script completes on all nodes, the CRS active version remains at the previous release.
$ ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
$ ./crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [12.1.0.0.2]
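When scripting this check, the bracketed version can be pulled out of the crsctl output with a one-line sed filter (`crs_version` is a hypothetical helper):

```shell
#!/bin/sh
# Extract the version inside square brackets from crsctl query output, e.g.
# "Oracle Clusterware active version on the cluster is [11.2.0.3.0]"
crs_version() {
  sed -n 's/.*\[\(.*\)\].*/\1/p'
}

# Usage: ./crsctl query crs activeversion | crs_version
```

Comparing the filtered active and software versions tells you at a glance whether the rolling upgrade has finished on all nodes.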
After rootupgrade.sh completes successfully on all nodes, the Installer proceeds with configuring the GI Management Repository database, if that option was selected. This can be observed in Figure 13-10.
Figure 13-10. Progress bar screenshot
$ ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.0.2]
$ ./crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [12.1.0.0.2]
The Importance of the Rootupgrade.sh Script
Until the rootupgrade.sh script is executed, only the software binaries are deployed under the respective homes on the local and remote nodes. It is rootupgrade.sh that actually performs the core Clusterware stack configuration and upgrade. The script performs the following core duties:
On the local node (first node):
The following output is expected when the rootupgrade.sh script is executed on the local and other nodes of the cluster, except the last node in the cluster:
[root@rac1 oracle]# /u01/app/12.1.0/grid_1/rootupgrade.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0/grid_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid_1/crs/install/crsconfig_params
2013/04/12 10:07:18 CLSRSC-363: User ignored prerequisites during installation
ASM upgrade has started on first node.
OLR initialization - successful
2013/04/12 10:12:29 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
2013/04/12 10:17:24 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2013/04/12 10:20:34 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
On the last node:
The following output is expected when the rootupgrade.sh script is executed on the last node in the cluster:
[root@rac2 ∼]# /u01/app/12.1.0/grid_1/rootupgrade.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0/grid_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid_1/crs/install/crsconfig_params
2013/04/12 10:22:05 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2013/04/12 10:25:18 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
2013/04/12 10:29:28 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.0.2
2013/04/12 10:35:20 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
After a Clusterware upgrade, quickly go through the following post-upgrade checklist to confirm success:
Figure 13-11. 12c cluster output screenshot
$ ./crsctl stat res -t -init
------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
------------------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE rac1 Started
ora.cluster_interconnect.haip
1 ONLINE ONLINE rac1
ora.crsd
1 ONLINE ONLINE rac1
ora.cssd
1 ONLINE ONLINE rac1
ora.cssdmonitor
1 ONLINE ONLINE rac1
ora.ctssd
1 ONLINE ONLINE rac1 ACTIVE:0
ora.diskmon
1 OFFLINE OFFLINE
ora.drivers.acfs
1 ONLINE ONLINE rac1
ora.evmd
1 ONLINE ONLINE rac1
ora.gipcd
1 ONLINE ONLINE rac1
ora.gpnpd
1 ONLINE ONLINE rac1
ora.mdnsd
1 ONLINE ONLINE rac1
ora.storage
1 ONLINE ONLINE rac1
Whenever you prepare to introduce a change to an existing environment, it is always a recommended, safe approach to have a complete back-out plan to recover from any disaster that may follow the change. If you work for a reputable organization, management will naturally expect a back-out plan to accompany every change to the environment. The back-out plan is the procedure for rolling back the applied changes if the change didn't go well or created new problems.
The objective of this section is to summarize all the necessary steps that are required to downgrade from a successfully upgraded or from a failed Clusterware/ASM to earlier versions.
Oracle’s out-of-place upgrade option makes the downgrade procedure faster by keeping the old and new versions of Oracle Clusterware software in different homes/locations. This makes the downgrade procedure a very light and quick process, in contrast with pre-11gR2 versions.
Initiating Downgrade Procedure
In order to kick-start the downgrade procedure, perform the following steps:
From the new 12c Grid Home, run the following command on all nodes of the cluster sequentially, preferably starting from the first node to the last node:
$GRID_HOME/crs/install/rootcrs.pl -downgrade
Use the -force option if you encounter any issues with the preceding command due to a failed upgrade.
The command will stop all currently running resources on the server and shut down the 12c cluster stack subsequently.
After successfully executing the preceding command across all nodes in the cluster, execute the following command on the first node:
$GRID_HOME/crs/install/rootcrs.pl -downgrade -lastnode
This will downgrade the OCR to the previous release.
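The node ordering of the steps above can be captured as a dry-run plan. Everything here is an assumption for illustration: `plan_downgrade` is a hypothetical helper, the node names are placeholders, and a real run would execute the printed commands as root:

```shell
#!/bin/sh
# Print the per-node downgrade commands in the required order:
# rootcrs.pl -downgrade on every node, then -downgrade -lastnode on node 1.
plan_downgrade() {
  grid_home="$1"; shift
  for node in "$@"; do
    echo "ssh root@$node $grid_home/crs/install/rootcrs.pl -downgrade"
  done
  # The -lastnode pass runs once, on the first node, after all others finish.
  echo "ssh root@$1 $grid_home/crs/install/rootcrs.pl -downgrade -lastnode"
}
```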
As GI installation owner, run the following commands from the 12c $GRID_HOME/oui/bin location on every successfully upgraded node to complete the downgrade procedure:
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/12.1.0/grid
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/11.2.0/grid
Note You need to provide the 12c Grid Home location in the first command, and in the second command, you need to provide the previous release Grid Home location.
For Oracle version 11gR2 or later, after successfully executing the preceding steps, as the root user, bring up the previous release cluster stack using the following command on each node in the cluster:
$/u01/app/11.2.0/grid/bin/crsctl start crs
For pre–Oracle 11gR2 versions, as root user, execute the root.sh script manually from the earlier Cluster release on each node in sequence to bring up the previous cluster stack.
Oracle 12c GI supports downgrades to Oracle 11gR2 and Oracle 11gR1.
Note Keep in mind that any pre-/post-upgrade configuration modifications performed will be removed and unrecoverable after the downgrade.
Forcing Upgrade—When Some Nodes Are Inaccessible
Before Oracle 12c, if some nodes were inaccessible or became unreachable before or during the upgrade, the Clusterware upgrade could not be completed because of the missing/inactive nodes in the cluster. With the introduction of the -force option for the rootupgrade.sh script, you can now perform a forced upgrade despite missing/inactive nodes. After a forced upgrade, when you query the CRS active version to confirm the result, you will see that it is set to the new Clusterware version, in contrast to the pre-Oracle 12c behavior.
Here is an example to run the rootupgrade.sh script with the –force option:
$ /u01/app/12.1.0/grid/rootupgrade.sh -force
When the inactive nodes become accessible after a forced upgrade, you can join them to the cluster to either upgrade or delete them from the cluster. Execute the following examples on the first node of the cluster to join the node that was not accessible during the course of the upgrade:
$ /u01/app/12.1.0/grid/crs/install/rootcrs.pl -join -existingNode rac1 -upgrade_node rac2
In the preceding example, existingNode is a node on which the upgrade succeeded, and upgrade_node is the node that was unreachable during the upgrade and now needs to join the cluster after the forced upgrade.
Installing Oracle 12c RDBMS Software
Right after your Clusterware upgrade, you will have to install the new Oracle 12c RDBMS binaries into a new RDBMS home to complete the upgrade of the environment as a whole.
As stated earlier, from 11gR2 onward all upgrades are out-of-place: you will have to create a new home for the Oracle 12c binaries and install the software there. This procedure will have no impact on the active databases on the respective nodes, provided you select the “Install database software only” option. In other words, we advise you not to select the “Upgrade an existing database” option on the Select Installation Option screen, as shown in Figure 13-12.
Figure 13-12. Database Select Installation Option screenshot
Follow the typical RDBMS software installation procedure for a RAC environment and complete the software installation, keeping in mind the preceding recommendations.
Once you successfully complete the software installation, you may then proceed with the database upgrade process as per your schedule and convenience. Manual and Database Upgrade Assistant (DBUA) upgrade methods will be demonstrated in more detail in the subsequent section.
Although there are various methods, processes, and tools to perform an upgrade of an existing database to the new Oracle release, this section mainly focuses on the manual and DBUA upgrade methods. Just before the upgrade, it is important for you as a DBA to become familiar with those key enhancements brought into the upgrade procedure of the new release for a successful database upgrade. The new enhancements with regard to the database upgrade include the following:
Pre-upgrade tool: The new preupgrd.sql script in Oracle Database 12c replaces the previous release's pre-upgrade tool, the utlu112i.sql script. It provides recommendations for fixing any pre- and post-upgrade issues that might emerge and disrupt the upgrade process. Before the database upgrade, you must execute the script from the Oracle 12c home against the source database to gather these recommendations. The DBUA tool runs this script by default.
Parallel Upgrade Utility (catctl.pl): Unlike earlier Oracle releases, with the new Parallel Upgrade Utility in Oracle 12c you can make full use of the system's CPU capacity by running the database upgrade script, catupgrd.sql, in parallel. This greatly improves the database upgrade runtime. The DBUA tool takes full advantage of the utility by running it by default, as it is integrated with DBUA. The following example summarizes the use of the utility in a manual database upgrade:
$ cd $ORACLE_HOME/rdbms/admin
$ $ORACLE_HOME/perl/bin/perl catctl.pl -n 2 -l /tmp catupgrd.sql
The -n parameter sets the degree of parallelism; the value must be between 0 and 8.
The -l parameter specifies the directory where the logs are spooled.
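A trivial guard for the -n argument can be wrapped around the catctl.pl invocation in a driver script (`valid_parallel_degree` is a hypothetical helper, not part of catctl.pl itself):

```shell
#!/bin/sh
# Succeed only when the catctl.pl -n value is a single digit from 0 to 8.
valid_parallel_degree() {
  case "$1" in
    [0-8]) return 0 ;;
    *) return 1 ;;
  esac
}

# Usage sketch:
#   valid_parallel_degree "$N" || { echo "bad -n value: $N" >&2; exit 1; }
```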
Restartable DBUA database upgrades: Unlike earlier Oracle releases, the DBUA tool in Oracle 12c can resume a database upgrade from the point where it failed, without starting over. A Guaranteed Restore Point, created in this context, is what enables DBUA to recover a failed upgrade attempt.
Database upgrade compatibility matrix: Before you start upgrading the existing database to the new release, it is essential to determine whether the new release will support a direct upgrade of an existing database version or not. Table 13-2 outlines the database upgrade compatibility matrix of various Oracle releases that support direct and indirect database upgrades to Oracle 12c.
Table 13-2. Database Upgrade Compatibility Matrix
| Direct Database Upgrade | Indirect Database Upgrade |
|---|---|
| 10.2.0.5; 11.1.0.7; 11.2.0.2 or later | 9.2.0.8 or earlier; 10.1.0.5; 10.2.0.2, 10.2.0.3, 10.2.0.4; 11.1.0.6; 11.2.0.1 |
For example, if the existing Database version is 11.2.0.1, you need to upgrade it to 11.2.0.2 or higher first and then proceed with 12c upgrade.
Deploying Manual Database Upgrade
In this section, you will learn all the mandatory steps that are required to perform a manual database upgrade to Oracle 12c. The following demonstrates the procedure to upgrade an Oracle 11.2.0.3 database, named PRDDB, to Oracle 12c:
Back up the source database, if no recent backup exists.
Connect to the source database as sysdba and execute the pre-upgrade script from the Oracle 12c home. This is a mandatory step in a manual database upgrade; if the script is not run, the post-upgrade utlrp.sql script will fail. Figure 13-13 shows how to run the pre-upgrade script on the source database:
export ORACLE_SID=PRDDB1
SQL> sqlplus / as sysdba
SQL> @/u01/app/oracle/product/12.1.0/db_1/rdbms/admin/preupgrd.sql
You are likely to see the following output on running the pre-upgrade tool:
Figure 13-13. Pre-upgrade screenshot
Unlike earlier Oracle Database releases, the pre-upgrade output is written to a log file. Review the preupgrade.log file generated by the pre-upgrade tool for warnings and recommendations. Similarly, review and apply the recommendations in the preupgrade_fixups.sql and postupgrade_fixups.sql scripts to fix any pre- and post-upgrade issues flagged by the tool on the source database: for example, collecting dictionary and fixed-objects statistics, database initialization parameter adjustments, invalid object details, and tablespace size adjustments. The log and script files are typically created under $ORACLE_BASE/cfgtoollogs/PRDDB/preupgrade. If no $ORACLE_BASE variable is set on the server, the files are created under $ORACLE_HOME/cfgtoollogs/PRDDB/preupgrade instead. The DBUA tool runs the pre-upgrade tool by default. The following shows the files in that location:
-rw-r--r-- 1 oracle dba 1444 Apr 13 09:34 postupgrade_fixups.sql
-rw-r--r-- 1 oracle dba 2491 Apr 13 09:34 preupgrade_fixups.sql
-rw-r--r-- 1 oracle dba 6397 Apr 13 09:34 preupgrade.log
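The location logic described above can be captured in a tiny helper (`preupgrade_log_dir` is hypothetical; the directory layout follows the description in the text):

```shell
#!/bin/sh
# Resolve the pre-upgrade tool's output directory for a given database:
# prefer $ORACLE_BASE, fall back to $ORACLE_HOME when ORACLE_BASE is unset.
preupgrade_log_dir() {
  db="$1"
  base="${ORACLE_BASE:-$ORACLE_HOME}"
  echo "$base/cfgtoollogs/$db/preupgrade"
}

# Usage: less "$(preupgrade_log_dir PRDDB)/preupgrade.log"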
Another important task in a manual database upgrade is to copy the source database password file and init/spfile from the Oracle 11g home to the Oracle 12c database home. Also, ensure that the TNS entries referenced by the local/remote listener parameters are added to the new Oracle 12c tnsnames.ora file to avoid startup failures when the database is started from the Oracle 12c home. For a RAC database, you must do this for all instances. The following example demonstrates the copy command on Linux:
$cp /u01/app/oracle/product/11.2.0/db_1/dbs/orapwPRDDB1 /u01/app/oracle/product/12.1.0/db_1/dbs
If required, run the preupgrade_fixups.sql script. Subsequently, stop the database and mount one instance to disable archivelog mode; this avoids excessive archive generation during the upgrade and expedites the process.
srvctl stop database -d PRDDB
export ORACLE_SID=PRDDB1
sqlplus / as sysdba
SQL> STARTUP MOUNT
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
Set $ORACLE_HOME to the Oracle Database 12c home on the server:
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
cd $ORACLE_HOME
At the SQL prompt, start the database in upgrade mode:
SQL> CONNECT / AS SYSDBA
SQL> STARTUP UPGRADE
Exit from the SQL prompt and subsequently initiate the upgrade process in parallel mode, using the new Parallel Upgrade Utility, as explained here:
$ cd /u01/app/oracle/product/12.1.0/db_1/rdbms/admin
$ /u01/app/oracle/product/12.1.0/db_1/perl/bin/perl catctl.pl -n 2 -l /tmp catupgrd.sql
If you prefer to perform an upgrade in a serial mode, like in previous releases, do the following at the SQL prompt:
SQL>@?/rdbms/admin/catupgrd.sql parallel=no
The database shuts down automatically once the preceding command completes successfully. You then perform the following steps to re-enable CLUSTER_DATABASE, set the database COMPATIBLE parameter to 12c, re-enable archivelog mode, and so on:
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=SPFILE;
SQL> ALTER SYSTEM SET COMPATIBLE='12.1.0' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP
SQL> execute dbms_stats.gather_fixed_objects_stats;
SQL> @/u01/app/oracle/product/12.1.0/rdbms/admin/utlrp.sql
SQL> @/u01/app/oracle/product/12.1.0/rdbms/admin/utlobj.sql
SQL> @/u01/app/oracle/product/12.1.0/rdbms/admin/utlu121s.sql
SQL> SHUTDOWN IMMEDIATE;
SQL> EXIT
utlrp.sql recompiles all invalid objects in the database.
utlobj.sql verifies the validity of all packages/classes in the database.
utlu121s.sql displays the database upgrade summary.
After successfully completing the database upgrade process, you will have to modify the cluster database configuration details in the OCR using the following command:
srvctl upgrade database -d PRDDB -o /u01/app/oracle/product/12.1.0/db_1
At this point in time, your cluster database is successfully upgraded to 12c. Now, open the database using the following cluster command, which in turn will start all RAC database instances:
srvctl start database -d PRDDB
Note Executing utlrp.sql is mandatory to avoid performance degradation when accessing data dictionary objects after the database upgrade. If the script is not run right after the upgrade completes, the first accesses to the data dictionary will suffer a significant performance impact. Therefore, run the script immediately after the upgrade.
Post-Database Upgrade Steps
Once the database is successfully upgraded to Oracle 12c, go through the following recommended post–database upgrade steps:
$ ./srvctl config database -d PRDDB
Database Upgrade Using the DBUA
Having covered the manual database upgrade procedure, it is time to shift focus to performing a database upgrade with the DBUA tool in Oracle 12c. In addition to portraying the DBUA upgrade procedure with a set of screenshots, this section also explains why DBUA is the preferred method for a RAC database upgrade.
The step-by-step procedure that outlines database upgrade with DBUA tool is as follows:
Launch ./dbua from the Oracle 12c home.
Select the Upgrade Oracle database option on the Select Operation page, as shown in Figure 13-14.
Figure 13-14. Select Operation screenshot
Select the database from the given database list on the Select Database page, as shown in Figure 13-15.
Figure 13-15. Select Database screenshot
Prerequisite checks are carried out and the results are displayed. You have the option to Ignore, Fix, or Revalidate any prerequisite failures on this page. You can also rerun the prerequisite checks by clicking the Check Again button at the top right (Figure 13-16).
Figure 13-16. Prerequisite Checks screenshot
On the Upgrade Options screen, set the degree of parallelism for performing the upgrade and for recompiling invalid objects in parallel. Optionally, you can also supply custom pre- and post-upgrade scripts.
Specify the configuration options for EM Cloud Control on the Management Options screen.
The new Move Database Files screen allows you to move the datafiles and FRA location between Filesystem, ASM, etc.
Also, the Network Configuration screen permits you to migrate an existing listener to the new Oracle release.
Optionally, the Recovery Options screen has the flexibility to define a recovery point to recover the database to the same point in case of any failures during the database upgrade process.
Once you input all the required information through the interactive screens, the Database Upgrade Information screen provides all details that you entered for this process.
Click on the Finish button to kick-start the upgrade action.
You will see the Upgrade Screen after the database upgrade is complete.
After going through the preceding steps, you will have successfully upgraded an Oracle 11gR2 (11.2.0.3) database to Oracle 12c.
This section quickly takes you through the following advantages of DBUA over other database upgrade options:
Sometimes, after a successful database upgrade, you might have to downgrade the database to its previous release for various reasons; hence, you should have a back-out action plan in mind whenever you perform an upgrade. The following procedure lets you downgrade a database from 12c to a previous Oracle Database release:
Perform the current database full backup.
Disable the database vault, if it exists.
Ensure that the database COMPATIBLE initialization parameter is set to the Database version that supports a direct upgrade (refer to Table 13-2).
Drop the sysman user if OEM is configured. You will have to reconfigure the OEM after you finish the downgrade process.
Disable the CLUSTER_DATABASE initialization parameter, and stop the database as follows:
export ORACLE_SID=PRDDB1
SQL> sqlplus / as sysdba
SQL> alter system set cluster_database=false scope=spfile;
srvctl stop database -d PRDDB
Start up the database in downgrade mode and then execute the downgrade script, as follows:
SQL> STARTUP DOWNGRADE;
SQL> SPOOL /tmp/dbdowngrade.log
SQL> @?/rdbms/admin/catdwgrd.sql
The catdwgrd.sql script under the Oracle 12c /rdbms/admin home downgrades the 12c database components to the previous release. If you encounter any ORA- errors during the downgrade, fix the issue and rerun the downgrade script.
After completing the script, shut down the database and exit from the SQL prompt.
Set the previous Oracle version to ORACLE_HOME, PATH OS env variables.
Start up the database in UPGRADE mode and reload the previous version's components by running the catreload.sql script from the Oracle 11g /rdbms/admin home. For example:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
sqlplus / as sysdba
SQL> STARTUP UPGRADE
SQL> SPOOL /tmp/dbreload.log
SQL> @?/rdbms/admin/catreload.sql
Enable the CLUSTER_DATABASE parameter, and shut down and start up the database.
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP
Recompile all invalid objects running the utlrp.sql script.
SQL> @?/rdbms/admin/utlrp.sql
Downgrade the database version in the OCR using the srvctl downgrade command from the Oracle 12c database home.
srvctl downgrade database -d PRDDB -o /u01/app/oracle/product/11.2.0/db_1 -to_version 11.2.0.3.0
Summary
This chapter explained the core advantages of upgrading and why an organization should upgrade its current (stable) environment to new releases. Before demonstrating a two-node cluster and RAC database upgrade with several hands-on examples and screenshots, it covered all the important prerequisites and precautions required for a smooth and successful upgrade. You also learned a step-by-step procedure for downgrading a successful, failed, or partially upgraded Clusterware and database to the previous version.