
CHAPTER 33

Managing Storage, Security, Availability, and Performance in a CDB and PDBs

Exam Objectives

• 063.5.1    Manage Permanent and Temporary Tablespaces in CDB and PDBs

• 063.6.1    Manage Common and Local Users

• 063.6.2    Manage Common and Local Privileges

• 063.6.3    Manage Common and Local Roles

• 063.6.4    Enable Common Users to Access Data in Specific PDBs

• 063.7.1    Perform Backups of a CDB and PDBs

• 063.7.2    Recover PDB from PDB Datafiles Loss

• 063.7.3    Use Data Recovery Advisor

• 063.7.4    Duplicate PDBs Using RMAN

• 063.8.1    Monitor Operations and Performance in a CDB and PDBs

• 063.8.3    Manage Allocation of Resources Between PDBs and Within a PDB

• 063.8.4    Perform Database Replay

This chapter covers a daunting list of objectives. Not to worry—many of them are straightforward if you understand the equivalent objective for environments that don’t have container databases (CDBs). All that is necessary to translate that knowledge to a CDB environment is an understanding of the CDB architecture covered in previous chapters. There is, however, potential for confusion. Be prepared for some odd effects (typically when attempting to do something in the root container that should be done in a pluggable container) and always be aware of which container your session is currently attached to.

Space Management in a Multitenant Environment

In a multitenant environment, tablespaces and the datafiles that comprise them are associated with either the root container or one of the pluggable databases (PDBs) within the CDB. Within a container (either the root or a PDB), the database administrator (DBA) can issue absolutely normal commands to create and manage tablespaces and datafiles. The fact that a database is running in a multitenant environment should not have any effect.

Tablespace Architecture

The unique identifier for a tablespace is the tablespace name qualified with the container ID, so there is no problem with several PDBs having tablespaces with the same name. Indeed, every PDB will have a tablespace named SYSTEM (to store the definitions of the objects defined within the PDB) and a tablespace named SYSAUX that contains Oracle-defined objects, such as the Automatic Workload Repository (AWR), that can be configured and populated independently in each PDB. Datafiles must, of course, be uniquely named, but each is still associated with only one PDB and can be administered only from within that PDB.

The following query shows the tablespace and datafile structure of a CDB with two pluggable containers named jw1 and jw5:

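The original listing is a screenshot; a query along the following lines, run from the root container, reproduces it. The PDB names jw1 and jw5 come from the text; the column list is illustrative.

```sql
-- Run from CDB$ROOT: list every container's tablespaces and datafiles.
SELECT c.name AS con_name, t.name AS tablespace_name, d.name AS file_name
FROM   v$containers c
       JOIN v$tablespace t ON t.con_id = c.con_id
       LEFT JOIN v$datafile d ON d.ts# = t.ts# AND d.con_id = t.con_id
ORDER BY c.con_id, t.name;
```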

The only major enhancement to space management in a multitenant environment is that it is possible to set a space budget for each PDB, in other words, a limit for the total size of all datafiles and tempfiles that make up the container. To limit the overall size of a container, connect to the PDB and issue a command such as this:

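A sketch of such a command; the PDB name jw1 comes from the text and the 2G figure is illustrative.

```sql
-- Connect to the PDB, then cap the total size of its datafiles and tempfiles.
ALTER SESSION SET CONTAINER = jw1;
ALTER PLUGGABLE DATABASE STORAGE (MAXSIZE 2G);
```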

Any attempt to add a datafile or to extend an existing datafile that would take the total over the limit should fail.

Using CREATE TABLESPACE

The procedure for creating and modifying a tablespace in the CDB root container with CREATE TABLESPACE is the same as creating a tablespace in any PDB. If you are connected to CDB$ROOT, the tablespace is visible and usable only in the root container; similarly, a tablespace created while connected to a PDB is visible only to that PDB and cannot be used by any other PDB unless accessed through a database link. When connected to the root container, the CDB_TABLESPACES and CDB_DATA_FILES views will show the whole picture, as will V$TABLESPACE and V$DATAFILE. When connected to a PDB, these views will show only the tablespaces and datafiles that are part of the PDB, plus the common undo tablespace.

For ease of management, Oracle recommends using separate directories to store datafiles for each PDB and the CDB. Even better, if you use Automatic Storage Management (ASM) or Oracle Managed Files (OMF), you’ll automatically get your datafiles and other database objects segregated into separate directories by container ID. Here is how the datafiles for the container database CDB01 are stored in an ASM disk group:


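The original listing is an asmcmd screenshot; a session of the following sort shows the layout. The disk group name +DATA and the hexadecimal directory names are illustrative.

```
ASMCMD> cd +DATA/CDB01
ASMCMD> ls
CONTROLFILE/
DATAFILE/
86B637B62FDF7A65E053F706E80A27CA/
86B637B62FE07A65E053F706E80A27CA/
ASMCMD> ls 86B637B62FDF7A65E053F706E80A27CA
DATAFILE/
TEMPFILE/
```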

The container’s datafiles are stored in the DATAFILE subdirectory; each PDB has its own set of datafiles in one of the subdirectories named with a long string of hexadecimal digits (the PDB’s globally unique identifier, or GUID). When you use OMF with ASM in this scenario, you don’t need to know or care what those directory paths and filenames are, since the locations of the datafiles are managed automatically.

Default Permanent and Temporary Tablespaces in a PDB

Changing the default tablespace in a CDB or PDB is identical to changing the default tablespace in a non-CDB. For both CDBs and PDBs, you use the ALTER DATABASE DEFAULT TABLESPACE command. In the following example, you set the container to QATEST1, create a new tablespace within QATEST1, and change the default tablespace to be the tablespace you just created:

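A sketch of those three steps; the datafile path and size are illustrative.

```sql
ALTER SESSION SET CONTAINER = qatest1;
CREATE TABLESPACE qa_dflt
  DATAFILE '/u01/app/oracle/oradata/qatest1/qa_dflt01.dbf' SIZE 100M;
ALTER DATABASE DEFAULT TABLESPACE qa_dflt;
```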

Going forward, any new users (local or common) within QATEST1 that don’t have a specific default permanent tablespace will use the tablespace QA_DFLT.

For any CDB you can have one default temporary tablespace or temporary tablespace group defined at the CDB level that can be used by the root container. Every PDB should also have a temporary tablespace. This will be created automatically if the PDB is created from the seed container and would usually already exist in any non-CDB database that is plugged in. A PDB can have multiple temporary tablespaces to be used by different users, as well as one default temporary tablespace. In the following example, you create a new temporary tablespace called QA_DFLT_TEMP in the PDB QATEST1 and make it the default temporary tablespace for QATEST1:

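A sketch of the commands; the tempfile path and size are illustrative.

```sql
ALTER SESSION SET CONTAINER = qatest1;
CREATE TEMPORARY TABLESPACE qa_dflt_temp
  TEMPFILE '/u01/app/oracle/oradata/qatest1/qa_dflt_temp01.dbf' SIZE 100M;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE qa_dflt_temp;
```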

A temporary tablespace created within a PDB stays with that PDB when it’s unplugged and plugged back into the same or a different CDB. If a user is not assigned a specific temporary tablespace, then that user is assigned the default temporary tablespace for the PDB.

Exercise 33-1: Administering Tablespaces in a Multitenant Database    This exercise assumes you have a CDB named cdba with two pluggable containers called pdba and pdbb, opened in read-write mode, and that there is a tnsnames.ora file with entries for each PDB and the root container.

1.  Enable Oracle Managed Files for pdba, giving it its own location. Create the nominated directory first, and ensure that the Oracle owner has read-write permissions on it. The following example is for Linux:

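Something like the following accomplishes this step; the directory path is illustrative.

```sql
-- First create the directory at the OS level, owned by the Oracle user:
--   mkdir -p /u02/oradata/pdba
-- Then, connected to pdba, point OMF at it:
ALTER SESSION SET CONTAINER = pdba;
ALTER SYSTEM SET db_create_file_dest = '/u02/oradata/pdba';
```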

Note that the OMF parameters are PDB modifiable.

2.  In each PDB, create a new permanent tablespace. Use OMF for pdba and nominate files for pdbb. The following example is for Windows:

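A sketch of the two CREATE TABLESPACE statements; the tablespace names, file path, and sizes are illustrative.

```sql
-- In pdba (OMF enabled), no file clause is needed:
ALTER SESSION SET CONTAINER = pdba;
CREATE TABLESPACE pdba_ts1 DATAFILE SIZE 100M;

-- In pdbb, nominate the file explicitly (Windows path):
ALTER SESSION SET CONTAINER = pdbb;
CREATE TABLESPACE pdbb_ts1 DATAFILE 'c:\oradata\pdbb\pdbb_ts1.dbf' SIZE 100M;
```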

3.  Investigate the structures created.


Manage Common and Local Users, Roles, and Privileges

Users and roles can be defined in the root container, in which case they are common and propagated to every PDB in the CDB, or they can be defined in a PDB, in which case they are local and exist only in that PDB. Common and local roles can be granted to local users or to common users. Direct grants of privileges are always local to the container where the grant was made.

Common and Local Users

In a multitenant environment, there are two types of users: common users and local users. A common user in a CDB (root container) has an account available in the root container and automatically in each PDB within the CDB. Common users’ names must start, by default, with C##, which makes it easy to distinguish a common user from a local user in each PDB.

Creating a local user in a PDB is exactly like creating a user in a non-CDB. You can create a local user while connected either as a common user or as another local user that has the CREATE USER privilege.

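A sketch of creating a local user; the name QAFRED appears later in the chapter, while the password and tablespace are illustrative.

```sql
-- Connected to the PDB qatest1 as a user with the CREATE USER privilege:
CREATE USER qafred IDENTIFIED BY pwd123
  DEFAULT TABLESPACE users
  CONTAINER = CURRENT;        -- the default when connected to a PDB
GRANT CREATE SESSION TO qafred;
```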

The root container (CDB$ROOT) cannot have local users, only common users. Common users have the same identity and password in the root container and every PDB, both current and future. Having a common user account doesn’t automatically mean you have the same privileges across every PDB, including the root container. The accounts SYS and SYSTEM are common users who can set any PDB as their default container. For new common users, the username must begin with C## unless the instance parameter COMMON_USER_PREFIX has been changed from the default. Note that the Oracle-supplied common users do not have the C## prefix in order to maintain compatibility with non-CDB environments.

When you create a common user with the CREATE USER command, you can add CONTAINER=ALL to the command, as in this example:

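A sketch of the statement; the password is illustrative, and the DBA grant in the root follows the description in the next paragraph.

```sql
-- Connected to CDB$ROOT:
CREATE USER c##secadmin IDENTIFIED BY pwd123 CONTAINER = ALL;
GRANT dba TO c##secadmin CONTAINER = CURRENT;  -- DBA privileges in the root only
```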

If you are connected to the root container and have the CREATE USER privilege, the CONTAINER=ALL clause is applied by default. The same applies to a local user and the CONTAINER=CURRENT clause. The C##SECADMIN user now has DBA privileges in the root container. This user has an account set up in each PDB but no privileges in any PDB unless explicitly assigned.


To allow the user C##SECADMIN to at least connect to the QATEST1 database, grant the appropriate privileges as follows:

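A sketch of the grant, issued while connected to the PDB.

```sql
ALTER SESSION SET CONTAINER = qatest1;
GRANT CREATE SESSION TO c##secadmin CONTAINER = CURRENT;
```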

When using CREATE USER, you can optionally specify the default tablespace, the default temporary tablespace, and the profile. If any of these three attributes does not exist in a given PDB, the value is set to that PDB’s default.

What if a common user is created while one of the PDBs is currently not OPEN or is in READ ONLY mode? The new common user’s attributes are synced the next time the PDB is opened in READ WRITE mode.

Grant Privileges Commonly and Locally

Common and local privileges apply to common and local users. If a privilege is granted across all containers to a common user, it’s a common privilege. Similarly, a privilege granted in the context of a single PDB is a local privilege, regardless of whether the user is local or common.

In the previous section, the user C##SECADMIN, a common user, was granted the CREATE SESSION privilege but only on the QATEST1 container. If C##SECADMIN needs access to all PDBs by default, use the CONTAINER=ALL keyword to grant that privilege across all current and new PDBs in the CDB.

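A sketch of the common grant, issued from the root container.

```sql
-- Connected to CDB$ROOT:
GRANT CREATE SESSION TO c##secadmin CONTAINER = ALL;
```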

From a security perspective, you can grant common users privileges in the root container without granting them in any other container. Remember that only common users can connect to the root container; even so, regardless of any other privileges granted, a common user needs the CREATE SESSION privilege granted in the context of the root container in order to connect to it, as you can see in the following example:

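A hypothetical reconstruction of the failed connection: the common user C##ROOTADM (the name comes from the text) has not been granted CREATE SESSION in the root, so the logon is rejected. The password is illustrative.

```sql
SQL> CONNECT c##rootadm/pwd123
ERROR:
ORA-01045: user C##ROOTADM lacks CREATE SESSION privilege; logon denied
```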

To fix this issue for C##ROOTADM, you need to grant the CREATE SESSION privilege in the context of the root container.

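A sketch of the fix, issued from the root container.

```sql
-- Connected to CDB$ROOT as a suitably privileged user:
GRANT CREATE SESSION TO c##rootadm CONTAINER = CURRENT;
```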

You revoke privileges from users and roles using the REVOKE command, as in previous releases and non-CDBs. The key difference using GRANT and REVOKE in a multitenant environment is the addition of the CONTAINER clause where you specify the context of the GRANT or REVOKE. Here are some examples of the CONTAINER clause:

•  CONTAINER=CURRENT (privileges granted or revoked only in the container to which you are connected; for example, issued while connected to QATEST1, the privileges are valid only in that PDB)

•  CONTAINER=ALL (privileges valid across all containers, current and future)

To grant a privilege with CONTAINER=ALL, the grantor must have the SET CONTAINER privilege along with the GRANT ANY PRIVILEGE system privilege.

Manage Common and Local Roles

Roles, just like system and object privileges, work much the same in a multitenant environment as they do in a non-CDB environment. Common roles use the same conventions as common users and start with C##; a common role can have the same privileges across all containers or specific privileges or no privileges in a subset of containers. You use the CONTAINER clause to specify the context of the role.

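A sketch of the example: the role C##MV, the user DW_REPL, and the PDB DW17 come from the text, while the privilege granted to the role is illustrative.

```sql
-- In CDB$ROOT: create a common role and give it a privilege commonly.
CREATE ROLE c##mv CONTAINER = ALL;
GRANT CREATE MATERIALIZED VIEW TO c##mv CONTAINER = ALL;

-- In the PDB dw17: grant the common role to a local user.
-- This is a local grant, effective only in dw17.
ALTER SESSION SET CONTAINER = dw17;
GRANT c##mv TO dw_repl;
```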

Note in the example that a common role (C##MV) was granted to a local user (DW_REPL) in DW17. The user DW_REPL inherits all the privileges in the role C##MV but only in the DW17 PDB. The reverse is also possible: A common user (such as C##RJB) can be granted a local role (such as LOCAL_ADM) in a specific PDB (such as QATEST1), and therefore the privileges granted via LOCAL_ADM are available only in QATEST1 for C##RJB.

Enable Common Users to Access Data in Specific PDBs

Just as in a non-CDB environment, you may want to share objects with users in other PDBs. By default, any tables created by a common or local user are nonshared and are accessible only in the PDB where they were created.

Shared tables, on the other hand, have some restrictions. Only Oracle-supplied common users (such as SYS or SYSTEM) can create shared tables, and these can be created only by running the supplied scripts that create the component in which they are used. Common users that the DBA creates (even with DBA privileges such as CREATE USER, DROP ANY TABLE, and so forth) cannot create shared tables.

The two types of shared objects are “links”: object links and metadata links. Object links connect every PDB to a table in the root container, and each PDB sees the same rows. A good example is the AWR data in tables such as DBA_HIST_ACTIVE_SESS_HISTORY, which has a CON_ID column so you can identify the container to which each row applies.

In contrast, metadata links share only the object definition stored in the root container; each PDB keeps its own private copy of the data. Most of the DBA_xxx views use this method. For example, looking at the DBA_USERS view in the PDB QATEST1, there is no CON_ID column from the PDB perspective.


However, from the root container you can query CDB_USERS and see the local and common users across all containers.

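A query of this sort produces the listing described below; the column list is illustrative.

```sql
-- From CDB$ROOT: users in every container, with CON_ID to tell them apart.
SELECT con_id, username, common
FROM   cdb_users
ORDER BY con_id, username;
```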

The common users such as C##RJB exist for every PDB (other than the seed database). Users such as QAFRED exist only in the PDB with CON_ID=3 (QATEST1). Note also that the common users you create must start with C##; Oracle-supplied common users do not need this prefix.

Exercise 33-2: Manage Common and Local Users and Roles    This exercise demonstrates how to manage users and roles in a multitenant environment. It is assumed the CDB is named cdba, with pluggable containers pdba and pdbb.

1.  Run queries to determine what users exist.


Note which users are common and which are local.

2.  Create users in the root container and grant a common role.


Note that it is not possible to create a common user unless the prefix is used.

3.  Connect as a newly created common user.


Note that the connection to a PDB fails if the user has not been granted the appropriate role or privilege with an appropriate scope.

4.  Create a local user.


Observe that the new users are not “Oracle maintained.”

5.  Use common and local roles.

Only common roles can be created in the root container, and they can be granted commonly or locally. Any attempt to create a local role in the root container, or to grant a role from the root in the context of one particular named PDB, will fail.

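A sketch of step 5; the role names localrole and c##role1 and the grantees c##jon and localuser are hypothetical.

```sql
-- In the root container: only common roles can be created.
CREATE ROLE localrole;                    -- fails with ORA-65096
CREATE ROLE c##role1 CONTAINER = ALL;     -- succeeds

-- A common role can be granted commonly (in every container)...
GRANT c##role1 TO c##jon CONTAINER = ALL;

-- ...or locally, in a PDB, where local roles can also be created:
ALTER SESSION SET CONTAINER = pdba;
CREATE ROLE localrole;
GRANT c##role1 TO localuser;
```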

What syntax is valid for local and common roles? What is the scope of the various role grants?

Audit in the Multitenant Environment

Traditional audit (as enabled with the AUDIT_% instance parameters and the AUDIT command) does not change when moving to a multitenant environment. It is configured in each PDB as in previous releases. Unified Audit is multitenant aware and is the recommended technique.

The syntax for creating and enabling Unified Audit policies was discussed in Chapter 16 and is unchanged in a multitenant environment. What does change is the visibility of the audit policies. If a policy is defined in the root container, it can be enabled in all containers; it cannot be enabled (or disabled) from within a PDB. This means that the senior DBA, who has access to the root container, can configure auditing of any action by any user in the entire environment. A policy defined while connected to a PDB is visible only within that PDB and can be managed by the DBA of that PDB.

To view audit records, query the UNIFIED_AUDIT_TRAIL view. This shows the audit records generated in the container to which you are currently connected, either the root or a PDB. When connected to the root, you can also query the CDB_UNIFIED_AUDIT_TRAIL view, which is a consolidated view of the Unified Audit trail in every container.

The following example creates and enables a CDB-wide audit policy that will capture all actions by users who have connected with DBA privileges assigned by operating system or password file authentication:

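A sketch of such a policy; the policy name dba_actions is illustrative, and the condition tests the ISDBA attribute, which is true for sessions connected with DBA privileges.

```sql
-- In CDB$ROOT: define and enable a common (CDB-wide) audit policy.
CREATE AUDIT POLICY dba_actions
  ACTIONS ALL
  WHEN 'SYS_CONTEXT(''USERENV'', ''ISDBA'') = ''TRUE'''
  EVALUATE PER SESSION
  CONTAINER = ALL;

AUDIT POLICY dba_actions;
```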

A user now connects to the pluggable container JW1 with the SYSDBA privilege but cannot disable (or even see) the audit policy.


Nonetheless, in the root container the audit records can be seen as having been generated in the pluggable container.


Multitenant Backup and Recovery

In a multitenant environment, you use the same tools for backup and recovery as in a non-CDB environment. You can back up and restore the entire CDB, a single PDB, a tablespace, a datafile, or even a single block anywhere in the CDB. Many operations can be performed from within a PDB, and some can be performed only at the CDB level from the root container. You can also duplicate a PDB using Recovery Manager (RMAN). Using RMAN gives you more flexibility than when cloning a PDB with the CREATE PLUGGABLE DATABASE . . . FROM . . . option. For example, you can use the RMAN DUPLICATE command to copy all PDBs or a subset of PDBs within a CDB or to a new CDB, including the root and the seed containers.

The Data Recovery Advisor is usable in a CDB in the same way as in a non-CDB. Follow the routine within an RMAN session.

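The routine referred to is the standard three-step Data Recovery Advisor sequence.

```
RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE;
```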

This will report on detected problems, generate a script to fix them, and run the script.

Back Up a CDB and Individual PDBs

For multitenant databases, the RMAN syntax has been modified, and new clauses have been added. You need to be careful when connecting with RMAN because the ORACLE_SID environment variable will refer to the instance, which is managed from the root. Connecting using operating system authentication will therefore connect you to the root container, as shown in Figure 33-1. Note that the RMAN session can see the tablespaces and datafiles of all the containers and that (with the exception of the root) the tablespaces are prefixed with the container name.


Figure 33-1    An RMAN session against the root container of a CDB

From a session such as that depicted in Figure 33-1, you can back up the entire CDB or individual PDBs with commands such as these:

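A sketch of the two commands; the PDB name jw1 comes from the text.

```
RMAN> BACKUP DATABASE;                  # whole CDB: root plus every PDB
RMAN> BACKUP PLUGGABLE DATABASE jw1;    # the nominated PDB only
```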

The second command identifies the tablespaces that make up the nominated container and backs them up using whatever RMAN defaults have been configured. It is also possible to back up individual tablespaces of pluggable databases from the root by using the container name as a prefix. Here’s an example:

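A sketch using the prefix syntax; the tablespace name USERS is illustrative.

```
RMAN> BACKUP TABLESPACE jw1:users;
```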

To connect to a pluggable container, you must use password file authentication and connect through a TNS service name. Here’s an example:

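A sketch of the connection; the password and TNS alias are illustrative.

```
$ rman target sys/oracle@jw1
```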

The scope of a session launched in this way will be limited to the tablespaces that make up the pluggable database, and there are no changes in syntax from what you would use in a non-CDB.

Restore and Recovery

In the event of any sort of instance failure, instance recovery will occur automatically in the transition from mount mode to open mode, in the same manner as for a non-CDB. Once this is completed, any in-flight transactions will be rolled back when the individual PDBs are opened. Recovering from loss of a controlfile is also identical to a non-CDB: In nomount mode, the controlfile must be restored or re-created.

To restore and recover the entire CDB, start the instance in nomount mode, connect to the root container (which is the only accessible container in this mode), and issue the usual RESTORE CONTROLFILE, RESTORE DATABASE, and RECOVER DATABASE commands that are used for complete or incomplete recovery. The critical datafiles in a CDB are the SYSTEM and UNDO tablespaces of the root container. Damage to these will force the CDB instance to terminate, and they must be restored and recovered in mount mode. Thus far, there are no differences between CDB and non-CDB. The differences are when applying restore and recover operations to individual PDBs.

Within a PDB, the SYSTEM tablespace is critical. If it is damaged, the PDB will be brought down to mounted mode and cannot be opened until the tablespace is restored and recovered; other PDBs will be unaffected. Any other tablespace can be taken offline, restored, and recovered individually while the PDB remains open. These operations can be accomplished from the root container by prefixing the tablespace name with the container name, or from within the PDB using the usual non-CDB commands.

Duplicate PDBs Using RMAN

Chapter 32 showed you how to clone a PDB using the CREATE PLUGGABLE DATABASE . . . FROM command. RMAN gives you more flexibility and scalability when duplicating one or more PDBs within a CDB or the entire CDB. As in any RMAN DUPLICATE operation, you must create an auxiliary instance for the destination CDB. The auxiliary instance must be started with the initialization parameter ENABLE_PLUGGABLE_DATABASE=TRUE.

When duplicating a CDB, the syntax lets you nominate which PDBs should or should not be included in the duplicate. The duplicate will always include the root and seed containers. To duplicate just one PDB called TOOL to a new CDB called NINE, you use the RMAN DUPLICATE command like this:

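A sketch of the command, run from an RMAN session connected to the target CDB and the auxiliary instance; channel allocation and file-name conversion clauses are omitted.

```
RMAN> DUPLICATE TARGET DATABASE TO nine PLUGGABLE DATABASE tool;
```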

If you want to copy two or more pluggable databases, you just add them to the end of the DUPLICATE command.


Exclusions are allowed in the DUPLICATE command. If you want to clone an entire CDB but without the CCREPOS PDB, do this:

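A sketch using the exclusion form; as above, the destination name NINE is from the earlier example and connection setup is omitted.

```
RMAN> DUPLICATE TARGET DATABASE TO nine SKIP PLUGGABLE DATABASE ccrepos;
```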

Finally, you can duplicate not only pluggable databases, but also individual tablespaces, to a new CDB.


Multitenant Performance Monitoring and Tuning

Generally speaking, instance and database tuning may be best done from the root container, from where the instance and the database as a whole can be observed. SQL tuning will usually be done from within the container where the SQL is running. In either case, the same methodology that you would use in a non-CDB environment applies.

1.  Identify the problem and a goal.

2.  Determine a possible cause of the problem.

3.  Apply a solution.

4.  Confirm whether the solution has been effective.

5.  Repeat as necessary until the goal is reached.

The same techniques and tools that you would use in a non-CDB are available, usually at both the CDB and PDB levels. One issue that is unique to a CDB environment is the possibility that activity generated by sessions against one PDB could adversely impact activity in another PDB. To manage this, the Resource Manager has enhancements to control what resources are available to each PDB.

Tuning the Instance

Adjusting parameters at the CDB level is much like tuning a single instance in a non-CDB environment that has several applications with different resource and availability requirements. It’s worth mentioning again that a CDB is a single database instance; however, with the added features of the multitenant environment, you have much more control over resource consumption among the several applications (each in its own PDB), in addition to the strong isolation between the applications from a security perspective.

Tuning the memory in a CDB means you’re changing the same memory areas as in a non-CDB:

•  Buffer cache (SGA)

•  Shared pool (SGA)

•  Program Global Area (PGA)

When you calculate the memory requirements for a CDB, your first estimate can be the sum of all corresponding memory requirements for each non-CDB that will become a PDB. Of course, you will eventually want to reduce the total memory footprint for the CDB based on a number of factors. For example, not all PDBs will be active at the same time; therefore, you will likely not need as much total memory allocated to the CDB.

Using Enterprise Manager Cloud Control 12c is a good way to see resource usage across the CDB. In Figure 33-2, the container CDB01 has three active PDBs and two inactive ones.


Figure 33-2    Viewing PDB resource usage within a CDB using Cloud Control 12c

The total memory allocated for the CDB is approximately 5GB. Three non-CDB databases would likely use 5GB or more each; all five PDBs in CDB01 may perform just fine in a total of 5GB.

There are a few different approaches to resource allocation among PDBs within a CDB:

•  None    Let each PDB use all the resources of the CDB if no other PDB is active; when multiple PDBs need resources, they are divided equally.

•  Minimum    Each PDB gets a minimum guaranteed resource allocation.

•  Minimum/maximum    Each PDB gets both a minimum guaranteed resource allocation and a maximum.

Resource usage allocation in a CDB is measured in shares. By default, all PDBs can consume all resources allocated to the CDB. We cover more details on how shares are allocated and calculated later in the chapter.

There is only one spfile per CDB instance. All database parameters are stored in the CDB’s spfile, but many of those parameters can be changed at the PDB level. The column ISPDB_MODIFIABLE is an easy way to see which parameters you can change at the PDB level.

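A query of this sort shows which parameters can be changed at the PDB level; the parameter names listed are an illustrative subset.

```sql
SELECT name, ispdb_modifiable
FROM   v$parameter
WHERE  name IN ('sessions', 'optimizer_mode', 'sga_target')
ORDER BY name;
```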

When you unplug a PDB, its customized parameters stay with the unplugged PDB and are set when that PDB is plugged back in, regardless of which CDB it is plugged into. When a PDB is cloned, the custom parameters are cloned as well. At the container level, you can also look at the data dictionary view PDB_SPFILE$ to see which parameters are different across PDBs.

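A sketch of such a query. PDB_SPFILE$ is an internal table, so its column names can vary by release; PDB_UID is assumed here to map to the container via V$PDBS.CON_UID.

```sql
-- From CDB$ROOT: per-PDB parameter overrides recorded in the spfile.
SELECT p.name AS pdb_name, s.name AS parameter, s.value$
FROM   pdb_spfile$ s
       JOIN v$pdbs p ON p.con_uid = s.pdb_uid
ORDER BY p.name, s.name;
```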

In the TOOL PDB, the SESSIONS parameter is different from the default (at the CDB level); the RPTQA12C PDB has three nondefault parameters set.

Using Memory Advisors

The buffer cache in a CDB, shared across all PDBs, behaves much like the buffer cache in a non-CDB; the same least recently used (LRU) algorithms are used to determine when and if a block should stay in the buffer cache. Because the buffer cache is shared, the PDB’s container ID (CON_ID) is also stored in each block. The same container ID is stored in the other SGA and PGA memory areas such as the shared pool in the SGA and the global PGA. The memory advisors from previous versions of Oracle Database work in much the same way in a multitenant environment; sizing recommendations are at the CDB (instance) level. Individual memory parameters that can be adjusted at the PDB level are limited to SORT_AREA_SIZE and SORT_AREA_RETAINED_SIZE.

Figure 33-3 shows the output from the SGA Memory Advisor launched from Cloud Control 12c.


Figure 33-3    CDB SGA Memory Advisor in Cloud Control 12c

Even with several PDBs in the CDB01 container, it appears that the total memory for the CDB can be reduced by at least 1GB and retain good performance for all PDBs.

To accommodate a potentially larger number of sessions in a CDB, the parameter PGA_AGGREGATE_LIMIT was added to place a hard limit on the amount of PGA memory used. The existing parameter PGA_AGGREGATE_TARGET was useful in previous releases as a soft limit, but only for tunable memory. Several sessions using untunable memory (such as PL/SQL applications that allocate large memory arrays) could potentially use up all available PGA memory, causing swap activity at the OS level and affecting performance across all instances on the server. When the hard limit is reached, Oracle aborts the PGA memory requests of one or more nonsystem connections to get back under the limit.

Manage Allocation of Resources Between PDBs and Within a PDB

Within a PDB, the DBA can (generally speaking) configure the Resource Manager exactly as in a non-CDB; there are only a few capabilities that are not available. The change in use of the Resource Manager is at the CDB level, where a plan must be created that will allocate resources between PDBs.

Using Shares to Manage Inter-PDB Resources

Each PDB that’s plugged into a CDB competes for the resources of a CDB—primarily central processing units (CPUs), parallel servers, and, in the case of Oracle Exadata, input/output (I/O). How much of each resource a PDB gets depends on how many shares that PDB was assigned when it was created.

By default, each PDB gets one share unless otherwise specified. When a new PDB is added or an existing PDB is unplugged, the number of shares each PDB has remains the same. Table 33-1 shows a CDB with four PDBs: HR, BI, REPOS, and TOOL. The BI PDB has three shares, and the rest have one each, the default.

PDB      Shares    Guaranteed CPU
HR       1         16.67%
BI       3         50.00%
REPOS    1         16.67%
TOOL     1         16.67%

Table 33-1    PDBs and Share Allocation for Four PDBs

The TOOL database, for example, is guaranteed 16.67 percent of the server’s CPU resources if needed. If one or more of the other PDBs are not active, their share is available to active PDBs.

In Table 33-2, you create a PDB called NCAL and don’t specify the number of shares, so it defaults to 1.

PDB      Shares    Guaranteed CPU
HR       1         14.29%
BI       3         42.86%
REPOS    1         14.29%
TOOL     1         14.29%
NCAL     1         14.29%

Table 33-2    PDBs and Share Allocation for Five PDBs After Adding a New One

The minimum CPU guaranteed for each PDB is automatically recalculated based on the new total number of shares. Each PDB with one share now gets 14.29 percent of the CPU resources, and the CPU resources available (at a minimum) for the BI PDB are now 42.86 percent.

Creating and Modifying Resource Manager Plans

To refine the resource consumption further, you can set limits within each PDB using Resource Manager. From the perspective of the PDB, all resources are controlled by directives created using DBMS_RESOURCE_MANAGER. The amount of CPU, Exadata I/O, and concurrent parallel servers used by the PDB default to 100 percent, but can be adjusted down to 0 percent depending on the time of day or other circumstances.

The resource plan itself is created at the CDB level, and you create directives for each PDB within the CDB. You can also specify a set of default directives for those PDBs that do not have an explicit set of directives.

Identifying Parameters to Limit PDB Resource Usage

As part of the utilization plan for each PDB, there are two key plan directive limits you can control: UTILIZATION_LIMIT, which caps the PDB’s use of CPU, Exadata I/O, and parallel servers, and PARALLEL_SERVER_LIMIT, which further restricts parallel server usage.

The resource directive UTILIZATION_LIMIT defines the percentage of CPU, I/O, and parallel servers available to a PDB. If UTILIZATION_LIMIT is set at 30, then the PDB can use no more than 30 percent of the resources available to the CDB, even if no other PDBs are active.

To further refine the resource limits, you can use PARALLEL_SERVER_LIMIT to define the maximum percentage of the CDB’s PARALLEL_SERVERS_TARGET value; this value overrides the UTILIZATION_LIMIT directive, but only for parallel resources. The default is 100 percent.

Creating the CDB Resource Plan

The steps for creating a CDB resource plan are similar to creating a resource plan in a non-CDB, but with additional steps for each PDB. You create and manage the resource plan from the root container only. Table 33-3 lists the steps and corresponding DBMS_RESOURCE_MANAGER calls needed to create and configure the CDB resource plan.

Step                                          DBMS_RESOURCE_MANAGER Procedure
1. Create a pending area                      CREATE_PENDING_AREA
2. Create the CDB resource plan               CREATE_CDB_PLAN
3. Create a plan directive for each PDB       CREATE_CDB_PLAN_DIRECTIVE
4. Update the default directive (optional)    UPDATE_CDB_DEFAULT_DIRECTIVE
5. Validate the pending area                  VALIDATE_PENDING_AREA
6. Submit the pending area                    SUBMIT_PENDING_AREA

Table 33-3    Steps to Create a Resource Plan with DBMS_RESOURCE_MANAGER Calls

Other key procedures in DBMS_RESOURCE_MANAGER include UPDATE_CDB_PLAN to change the characteristics of the CDB resource plan and DELETE_CDB_PLAN to delete the resource plan and all of its directives. To update and delete individual CDB plan directives, use UPDATE_CDB_PLAN_DIRECTIVE and DELETE_CDB_PLAN_DIRECTIVE.

Exercise 33-3: Creating a CDB Resource Plan with PDB Directives    In this exercise, you’ll create a CDB resource plan for the CDB01 container and define plan directives for two of the PDBs in the CDB.

1.  Create a pending area for the CDB plan.

Images

2.  Create a resource plan that manages the TOOL and CCREPOS PDBs to minimize CPU and other resource usage.

Images

3.  Create a plan directive that gives the PDBs different shares. In this plan, the PDBs named TOOL and CCREPOS have one share each. The utilization limit for TOOL is 50 percent, and for CCREPOS it is 75 percent.

Images

4.  Validate and submit the pending area.

Images

5.  Finally, make this Resource Manager plan the current plan.

Images
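Because the exercise's code listings appear as figures, here is one way the five steps could be written; the plan name CDB01_PLAN and the comment text are assumptions, while TOOL, CCREPOS, and the limits come from the exercise:

```sql
-- Step 1: create the pending area
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
END;
/
-- Step 2: create the CDB resource plan
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'cdb01_plan',
    comment => 'Limit TOOL and CCREPOS resource usage');
END;
/
-- Step 3: one directive per PDB, one share each,
-- with different utilization limits
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'cdb01_plan', pluggable_database => 'tool',
    shares => 1, utilization_limit => 50);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'cdb01_plan', pluggable_database => 'ccrepos',
    shares => 1, utilization_limit => 75);
END;
/
-- Step 4: validate and submit the pending area
BEGIN
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/
-- Step 5: make this the current Resource Manager plan
ALTER SYSTEM SET resource_manager_plan = 'cdb01_plan';
```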

Viewing Resource Plan Directives

Oracle Database 12c provides the data dictionary view DBA_CDB_RSRC_PLAN_DIRECTIVES to show the directives of all current CDB resource plans. Querying that view, you can see the plan directives you just created for TOOL and CCREPOS.

Images

In previous releases of Oracle Database and for non-CDBs in Oracle Database 12c, the corresponding data dictionary view is DBA_RSRC_PLAN_DIRECTIVES.
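A query against this view might look like the following (the column list is trimmed to the directives discussed in this section):

```sql
-- Run from the root container
SELECT plan, pluggable_database, shares,
       utilization_limit, parallel_server_limit
FROM   dba_cdb_rsrc_plan_directives
ORDER  BY plan, pluggable_database;
```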

Managing Resources Within a PDB

Resource plans can manage workloads within a PDB as well. These resource plans manage workloads just as they do in a non-CDB and, not surprisingly, are called PDB resource plans. There are a few restrictions and differences with PDB plans. Table 33-4 shows the parameter and feature differences between non-CDB and PDB resource plans.

Images

Table 33-4    Differences Between Non-CDB and PDB Resource Plans

Regardless of the container type, you still view resource plans using the V$RSRC_PLAN dynamic performance view. To find the active CDB resource plan, select the row in V$RSRC_PLAN with CON_ID=1.
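For example, a query along these lines, run from the root, returns the name of the active CDB resource plan:

```sql
-- The CDB-level plan is the row with CON_ID = 1
SELECT name
FROM   v$rsrc_plan
WHERE  con_id = 1;
```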

Migrating Non-CDB Resource Plans

You will likely convert and plug in many non-CDBs as new PDBs. This process is straightforward, and all of your applications should work as expected. If the non-CDB has a resource plan, it is migrated as is, as long as it meets these conditions:

•  There are no more than eight consumer groups.

•  There are no subplans.

•  All resource allocations are on level 1.

In other words, the migrated resource plan must be compatible with a new PDB resource plan that follows the rules in the previous section. If the plan violates any of these conditions, it is converted during the plug-in operation to a plan that is compatible with a PDB. This converted plan may be unsuitable, in which case you can drop or modify it or create a new resource plan. The original plan is saved in DBA_RSRC_PLAN_DIRECTIVES with the STATUS column having a value of LEGACY.
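To check for such converted plans after a plug-in, a query along these lines, run inside the new PDB, would work:

```sql
-- Directives preserved from the original non-CDB plan
SELECT plan, group_or_subplan, status
FROM   dba_rsrc_plan_directives
WHERE  status = 'LEGACY';
```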

Perform Database Replay

The Database Replay functionality from previous Oracle Database releases has also been enhanced in Oracle Database 12c to include simultaneous workload replays as a planning tool for estimating how multiple non-CDBs will perform in a CDB environment. You can take production workloads from multiple servers in a non-CDB environment and play them back in various configurations on a single new server to simulate how well they would coexist in a multitenant environment.

Capture Source Database Workloads

When you capture workloads for potential multitenant deployment, the workloads typically come from different business units and locations, and the peak load for each application is likely at a different time of day, which makes these applications ideal candidates for consolidation. Figure 33-4 shows a typical set of workloads from applications currently on different servers.

Images

Figure 33-4    Candidate workloads for multitenant consolidation

You can also analyze existing PDBs and capture workloads to see how they would perform as a PDB of another CDB on a different server. The general steps you’ll follow as part of this analysis phase are as follows:

1.  Capture the workload of an existing non-CDB or PDB.

2.  Optionally export the AWR snapshots for the database.

3.  Restore the candidate database onto the target system.

4.  Make changes to the imported candidate database as needed, such as upgrading to Oracle Database 12c.

5.  Copy the generated workload files to the target system.

6.  Process the workload as a one-time prerequisite step.

7.  Repeat steps 1–6 for all other candidate databases.

8.  Configure the target system for replay (such as the workload replay client processes).

9.  Replay the workloads for all PDBs within the single CDB on the target system.

On the source database server, you’ll capture the workload for a typical 8-hour or 24-hour period. You’ll want all captured workloads to cover the same time period. To optimize the performance of the replay test, you can optionally export AWR snapshots, SQL profiles, and SQL tuning sets.
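As a sketch of the capture step on one source database, assuming a directory object named CAPTURE_DIR already exists and the capture name is arbitrary:

```sql
-- Start an 8-hour capture (duration is in seconds);
-- CAPTURE_DIR is an assumed, pre-created directory object
BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'peak_capture',
    dir      => 'CAPTURE_DIR',
    duration => 28800);
END;
/
-- Or end the capture manually at the close of the window
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
END;
/
```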

Process Workloads on the Target System

After you import the candidate database into a PDB of the new CDB, you import the workload generated on the source server. You preprocess the workload files in preparation for the replay, which needs to happen only once for each imported workload. It’s recommended that you replay each imported workload individually to ensure that there are no extreme variations in performance compared to that database’s performance on the original server.
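The one-time preprocessing step might look like this, where REPLAY_DIR is an assumed directory object on the target system containing the copied capture files:

```sql
-- Preprocess once per imported workload before any replay
BEGIN
  DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'REPLAY_DIR');
END;
/
```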

Replay Workloads on Target CDB: Consolidated Replay

After all PDBs have been created and all workloads preprocessed, remap any connections that might refer to objects that don't exist on the target system. Create a replay schedule that replays each workload at the same time and rate as it ran on the source system. You can create multiple schedules to see how workloads can be shifted to optimize the CDB's overall performance. This is the consolidated replay capability, which replays all the work from all the source non-CDBs concurrently.
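Building a consolidated replay schedule might look like the following sketch; the schedule and directory names are hypothetical, and ADD_CAPTURE is a function returning an identifier for each workload added to the schedule:

```sql
-- Group two preprocessed captures into one consolidated schedule
DECLARE
  cap_hr  NUMBER;
  cap_crm NUMBER;
BEGIN
  DBMS_WORKLOAD_REPLAY.SET_REPLAY_DIRECTORY(replay_dir => 'REPLAY_ROOT');
  DBMS_WORKLOAD_REPLAY.BEGIN_REPLAY_SCHEDULE(schedule_name => 'consol_sched');
  cap_hr  := DBMS_WORKLOAD_REPLAY.ADD_CAPTURE('CAPTURE_HR');
  cap_crm := DBMS_WORKLOAD_REPLAY.ADD_CAPTURE('CAPTURE_CRM');
  DBMS_WORKLOAD_REPLAY.END_REPLAY_SCHEDULE;
END;
/
```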

After the replay session is complete, you review the reports generated by Consolidated Database Replay to see, for example, if the response time and overall service level agreement (SLA) of the databases on their original servers can be met by this consolidation platform. If there are severe regressions, then you can use the tuning methodologies discussed earlier in this chapter and run the replay again. Even after tuning, you may find that the server needs more CPUs or memory. Ideally, you’ll find out that each database runs just as fast or faster than it did on the original server.

One set of processed capture files can be replayed as often as desired, perhaps with different settings for the CDB instance or with different Resource Manager plans. To compare replay runs, the entire CDB may need to be reverted to a known point before each consolidated replay. Flashback Database or point-in-time recovery is useful here to ensure that each run starts from a known state. During the replay, generate AWR snapshots and reports as you would for a production database.
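One way to reset the CDB between runs is a guaranteed restore point (this sketch assumes the CDB meets the prerequisites for guaranteed restore points; the restore point name is arbitrary):

```sql
-- Before the replay run:
CREATE RESTORE POINT before_replay GUARANTEE FLASHBACK DATABASE;

-- After the run, rewind the whole CDB (from SQL*Plus as SYSDBA):
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_replay;
ALTER DATABASE OPEN RESETLOGS;
```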

Two-Minute Drill

Manage Permanent and Temporary Tablespaces in CDB and PDBs

•  The CREATE TABLESPACE command creates a tablespace whose visibility is limited to the container in which it was created, either the CDB or any of the PDBs.

•  Using ASM and OMF for tablespaces in a multitenant environment automatically keeps datafiles for each PDB in a separate directory for ease of maintenance.

•  Each PDB can have its own default permanent tablespace and temporary tablespace or temporary tablespace group.

Manage Common and Local Users

•  The two types of users in a multitenant environment are common and local.

•  Common users not created during an Oracle installation must begin with a specified prefix, which is the default of C##.

•  The root container cannot have local users, only common users.

•  Non-Oracle-maintained common users have no privileges in any container until explicitly granted.

Manage Common and Local Privileges

•  Non-Oracle-maintained common users need the CREATE SESSION privilege to connect to any container, including CDB$ROOT.

•  The GRANT and REVOKE commands work as in a non-CDB environment with the addition of the CONTAINER clause.

•  The CONTAINER clause can specify a specific container, all containers (ALL), or the currently connected container (CURRENT).

Manage Common and Local Roles

•  Common roles begin with a specified prefix, which is a default of C##.

•  Common roles can have different privileges in each PDB.

•  The CONTAINER clause specifies the scope of the changes to the role with GRANT or REVOKE.

Enable Common Users to Access Data in Specific PDBs

•  The two shared object types are object links and metadata links.

•  The CDB_USERS data dictionary view has the same columns as DBA_USERS with the addition of the CON_ID column.

•  Each container manages its own security, granting roles and privileges locally to local and common users.

Perform Backups of a CDB and PDBs

•  RMAN includes the new clause PLUGGABLE to back up and recover a PDB from the root.

•  You can back up the root container as a PDB with the name CDB$ROOT.

•  The REPORT SCHEMA command shows each tablespace and datafile within a CDB or PDB, depending on how you connected to RMAN.

•  The PLUGGABLE keyword is available in other RMAN commands such as DUPLICATE, SWITCH, and DELETE.

•  You can back up an individual PDB as the container owner with SYSDBA privileges or as a local (noncommon) user with SYSDBA or SYSBACKUP privileges.

Recover PDB from PDB Datafiles Loss

•  Instance recovery occurs only at the CDB level.

•  Missing temporary datafiles (tempfiles) are re-created automatically at container open.

•  Flashback Database is available only at the CDB level.

•  PDB PITR recovery works similarly to a tablespace recovery in a non-CDB.

•  Tablespace PITR is available for any tablespace other than SYSTEM, UNDO, and SYSAUX.

•  Controlfile loss is handled the same way as a non-CDB: Either replace the missing or damaged controlfile with a multiplexed copy or restore from an RMAN autobackup.

•  Recovery of the root SYSAUX datafile can occur with the CDB and all PDBs online.

Use Data Recovery Advisor

•  The Data Recovery Advisor supports single-instance CDBs and non-CDBs, but not clustered (RAC) databases.

•  You use the LIST FAILURE command to see any outstanding data failures within RMAN.

•  ADVISE FAILURE shows one or more options to repair the database error.

•  REPAIR FAILURE PREVIEW shows you the steps RMAN will take to repair the corrupt or lost datafiles.

•  REPAIR FAILURE implements one of the recommendations provided with ADVISE FAILURE.

•  Media failure can be repaired using PITR at either the PDB or PDB tablespace level.

•  Flashback CDB rewinds both the CDB and all PDBs within the CDB.

•  You cannot flash back a CDB to a system change number (SCN) earlier than any PDB’s database point-in-time recovery (DBPITR).

Duplicate PDBs Using RMAN

•  You use the RMAN DUPLICATE command with new clauses that specify one or more PDBs to include in the duplicate.

•  Using the SKIP keyword, you can duplicate all PDBs within a CDB except for the specified PDBs.

•  The TABLESPACE keyword with a qualified tablespace name will duplicate a single tablespace, along with any other complete PDBs, in a single DUPLICATE command.

Monitor Operations and Performance in a CDB and PDBs

•  A CDB and its PDBs are tuned as a single instance since, well, a CDB is a single instance.

•  CDB tuning methodologies are identical to well-established tuning methodologies for non-CDB environments.

•  When consolidating multiple non-CDBs into a single CDB, start by adding together the resource usage of each non-CDB.

•  Resource usage allocations between PDBs are measured in shares.

•  Parameters that can be changed within a PDB are identified by the column ISPDB_MODIFIABLE in V$PARAMETER.

•  The buffer cache in the CDB’s instance is shared by all PDBs, and each block in the buffer cache has a container ID.

•  The new parameter PGA_AGGREGATE_LIMIT is a hard limit on total PGA usage across all PDBs.

Manage Allocation of Resources Between PDBs and Within a PDB

•  You use DBMS_RESOURCE_MANAGER to create resource allocations both at the CDB and PDB levels.

•  A share is the basic unit of minimum resource required for the PDB and defaults to 1.

•  The minimum number of resources for a PDB is a percentage calculated by its share value divided by the total number of shares across all PDBs in the CDB.

•  Adding new PDBs to a CDB does not change the share value of other PDBs; however, the percentage of resources allocated for other PDBs will decrease proportionally.

•  Using Resource Manager, you cannot define consumer groups or shares at the CDB level.

•  The utilization limit for CPU, Exadata I/O, and parallel servers is defined by the UTILIZATION_LIMIT directive in a PDB resource plan.

•  The parallel server limit in a PDB can be set in a resource plan using the PARALLEL_SERVER_LIMIT directive.

•  The data dictionary view DBA_CDB_RSRC_PLAN_DIRECTIVES contains the directives of all CDB resource plans.

•  PDB resource plans are limited to one level and eight consumer groups and cannot have subplans.

•  You can find the active CDB resource plan by querying V$RSRC_PLAN with a CON_ID of 1.

•  When you plug in a non-CDB as a new PDB, all existing resource plans are migrated as is unless the plan has more than eight consumer groups, has subplans, or is multilevel.

•  Imported resource plans from a non-PDB are converted if they violate the resource plan restrictions for a PDB.

•  A noncompliant resource plan for an imported non-CDB is saved in DBA_RSRC_PLAN_DIRECTIVES with STATUS='LEGACY'.

Perform Database Replay

•  Consolidated Database Replay uses Oracle Database Replay to measure the performance of multiple databases in a new server environment.

•  For each source database, capture workload data for replay.

•  Optionally, you can export AWR snapshots and SQL profiles to optimize performance in the target CDB.

•  You need to preprocess each imported workload only once on the target CDB.

•  Replay the consolidated workloads multiple times with tuning steps for major performance regression.

•  The CDB should be flashed back or restored between each consolidated replay run.

Self Test

1.  You are logged on to a pluggable database as SYSDBA and need to create a datafile. Which view will show you all the datafiles that already exist in the container database? (Choose the best answer.)

A.  DBA_DATA_FILES.

B.  CDB_DATA_FILES.

C.  V$DATAFILE.

D.  When connected to a pluggable container, it is not possible to see all the datafiles in the CDB.

2.  In a multitenant database, which tablespace that is part of the root container can be used by sessions connected to pluggable containers? (Choose the best answer.)

A.  The default permanent tablespace.

B.  The default temporary tablespace.

C.  The undo tablespace.

D.  No tablespace that is part of the root container can be used by sessions connected to pluggable containers.

3.  Study this code snippet:

Images

Following the execution of these commands, what can be said regarding the C##JW user?

A.  The user exists only in the root container.

B.  The user exists in the root and all pluggable containers but can connect only to the root.

C.  The user exists in the root and all pluggable containers and can connect to the root and to all pluggable containers.

D.  The user will not be able to log in to a pluggable container until their password has been set in the pluggable container.

4.  Which of the following GRANT commands, executed in the root container, will give the common user C##RWR the ability to access any table across all PDBs and the root container? (Choose the best answer.)

A.  grant select any table to c##rwr container=current;

B.  grant select any table to c##rwr container=all;

C.  grant select any table to c##rwr;

D.  grant select any table to c##rwr container=cdb$root;

5.  Which of the following statements is incorrect regarding local and common roles?

A.  Common roles can be granted to local roles.

B.  Local roles can be granted to common roles.

C.  Common roles can be granted only to common users.

D.  Local roles can be granted to local or common users.

6.  Which of the following are correct regarding multitenant (CDB and PDB) and RMAN? (Choose all that apply.)

A.  You can back up the controlfile only when connected to the root container.

B.  Tablespace backups can include multiple tablespaces from different PDBs.

C.  PDBs can be backed up individually only by connecting to the target PDB in RMAN.

D.  You can enforce division of responsibility by granting the privilege SYSBACKUP to local users within individual PDBs.

E.  The SYSBACKUP privilege can be granted commonly, to a common user.

7.  RMAN is configured for autobackup of the controlfile and spfile. You perform a whole PDB backup with RMAN like this:

Images

Shortly afterward, you lose all copies of the controlfile. Consider these steps:

1  Connect to each container and open it with RESETLOGS.

2  Connect to the pluggable container ccrepos.

3  Connect to the root container.

4  Open the database with RESETLOGS.

5  Open all pluggable containers.

6  Restore the controlfile from autobackup.

7  Recover the database.

8  Start up the CDB instance in nomount mode.

What is a correct course of action to follow?

A.  8, 2, 6, 3, 7, 1

B.  2, 8, 6, 7, 1

C.  8, 2, 6, 7, 3, 4, 5

D.  8, 6, 7, 4, 5

8.  Identify the correct statements about initialization parameters in a multitenant environment. (Choose all that apply.)

A.  A subset of parameters set at the container level can be overridden at the PDB level.

B.  Pluggable databases can have parameters that cannot be set at the container level.

C.  Unplugging a PDB and plugging it back into the same container database preserves the customized parameters set in the PDB.

D.  Unplugging a PDB and plugging it into a different container database preserves the customized parameters set in the PDB.

9.  Which of the following is true about allocating resources between PDBs in a CDB? (Choose the best answer.)

A.  Using a minimal allocation plan, the shares allocated to a PDB prevent other PDBs from using those shares even if the PDB is not busy.

B.  Using no resource plan in a CDB environment is the default and lets PDBs compete equally for all resources.

C.  In a minimum/maximum allocation scheme, a PDB can still go over the maximum if no other PDBs are busy.

D.  The “share” amount is the percentage of overall resources that a PDB can use within the CDB.

E.  The tiered allocation plan gives some users in one PDB a higher priority than users in another PDB.

10.  You are plugging in a non-CDB into a container database, and the non-CDB has several resource plans. Which of the following conditions will invalidate the imported resource plan? (Choose the best answer.)

A.  The original plan has 12 consumer groups.

B.  The resource allocations are on only the first level.

C.  The imported resource plan has the same name as an existing resource plan.

D.  The imported resource plan already has a status of LEGACY.

E.  The resource plan directive PARALLEL_SERVER_LIMIT is higher than the value of the CDB initialization parameter PARALLEL_MAX_SERVERS.

F.  The number of shares in the original resource plan is more than the total shares of all PDBs already in the container.

11.  When performing a consolidated workload replay, which of the following steps are optional when exporting or importing the workload for the databases that are candidates for multitenant consolidation?

A.  Exporting AWR snapshots from every candidate database

B.  Exporting SQL tuning sets from the candidate database

C.  Capturing and exporting the workload for a candidate database

D.  Creating a replay schedule

E.  Creating a PDB for each database

Self Test Answers

1.  Images    D. When connecting to a pluggable container, the scope of all views is limited to that container.
Images    A, B, and C are incorrect. A is incorrect because DBA views show only rows relevant to the current container. B and C are incorrect because the V$ and CDB views will show the full picture only when connected to the root container.

2.  Images    C. The undo tablespace is administered from the root container and used by all sessions in all containers.
Images    A, B, and D are incorrect. None of the root container’s tablespaces, with the exception of the undo tablespace, are visible to any PDB.

3.  Images    B. The user will have been created as a common user in every container, but the grant of the role is effective only for the local container, which is the root.
Images    A, C, and D are incorrect. A is incorrect because all users created in the root are common users. C is incorrect because unless the grant is made with the CONTAINERS=ALL clause, the grant is only local. D is incorrect because a common user’s password is always the same in all containers.

4.  Images    B. The GRANT statement must include CONTAINER=ALL.
Images    A, C, and D are incorrect. A is incorrect because specifying CONTAINER=CURRENT gives privileges only to C##RWR in the current container. C is incorrect because without a CONTAINER keyword, the privilege is granted locally. D is incorrect because the values for CONTAINER must be either CURRENT or ALL; you cannot specify a PDB or root container name.

5.  Images    C. Common roles can be granted to either common or local users.
Images    A, B, and D are incorrect. They are all valid role assignments, either local or common, to either local or common users or roles.

6.  Images    B, D, and E. When connected to the root container, the RMAN syntax lets you specify any tablespaces from any container. The SYSBACKUP privilege can be granted commonly to a common user or locally to a local user.
Images    A and C are incorrect. The controlfile can be backed up from any container, and a PDB can be backed up either from within the PDB or from the root.

7.  Images    D. This sequence will perform complete recovery of all containers.
Images    A, B, and C are incorrect. The controlfile restore can be done only in nomount mode, when only the root container is available. The OPEN RESETLOGS is executed from the root and applies to all containers.

8.  Images    A, C, and D. When you create a PDB, it automatically inherits all of the parameters at the CDB level. A subset of parameters can be set at the PDB level. These parameter values persist even if the PDB is unplugged and plugged back into the same or another container.
Images    B is incorrect because there are no parameters at the PDB level that do not exist at the CDB level.

9.  Images    B. The default resource allocation in a CDB is none. One PDB can use all CDB resources.
Images    A, C, D, and E are incorrect. A is incorrect because a PDB can use all remaining resources in a CDB even if another PDB with a higher number of shares is not active. C is incorrect because a PDB cannot exceed the maximum number of shares even if no other PDBs are busy. D is incorrect because shares are relative resource quantities and not percentages. E is incorrect because there is no such tiered resource allocation plan across PDBs in a multitenant environment, although the Resource Manager can control resource usage between users in a single PDB.

10.  Images    A. When importing a non-CDB and plugging it into a PDB of an existing container (CDB), any existing Resource Manager plans must have no more than eight consumer groups, must all be at level 1, and must have no subplans.
Images    B, C, D, E, and F are incorrect. B is incorrect because existing resource plans will import just fine with only one level. C is incorrect because the resource plans are qualified with the PDB name, and thus there is no name conflict. D is incorrect because a non-CDB’s resource plan will never have a STATUS of LEGACY since only PDBs can have imported resource plans with that status. E is incorrect because an existing non-CDB resource plan will not have a plan directive of PARALLEL_SERVER_LIMIT, which is valid only in a multitenant environment. F is incorrect because a non-CDB will not have a resource plan with a share directive.

11.  Images    A and B. You do not need to export AWR snapshots and SQL tuning sets from the source databases, but they would be helpful when tuning the combined workloads in the target CDB.
Images    C, D, and E are incorrect. C is incorrect because you cannot use Consolidated Database Replay on a target server without a workload capture. D is incorrect because you won’t be able to replay the workloads side-by-side in the new CDB without the configured schedule. E is incorrect because each candidate database must be converted (if necessary) and plugged into the new CDB as a PDB to test the consolidated workload.
