1
Database Architecture and ASM

CERTIFICATION OBJECTIVES

1.01 Understanding Database Architecture and ASM

1.02 Describe ASM

1.03 Set up initialization parameter files for ASM and database instances

1.04 Start up and shut down ASM instances

1.05 Administer ASM disk groups

Two-Minute Drill

Self Test

Automatic Storage Management (ASM) is a key Oracle Database technology that you should be using in your environment, even if it has only one database and one database instance. The integration of the server file system and a volume manager built specifically for Oracle Database files makes your disk management and tuning task a breeze: Every file object is striped and mirrored to optimize performance. In addition, nearly all ASM volume management tasks can occur while the volume is online. For example, you can expand a volume or even move a volume to another disk while users are accessing it, with minimal impact to performance. The multiplexing features of an ASM cluster minimize the possibility of data loss and are generally more effective than a manual scheme that places critical files and backups on different physical drives. And if that isn’t enough, you can use an ASM instance and its disk groups to service more than one database instance, further optimizing your investment in disk hardware.

Before beginning a detailed explanation of how ASM works and how you can leverage it in your environment, this chapter offers a brief overview of the Oracle Database architecture, including instance memory structures, logical database structures, and physical database structures. A thorough understanding of the Oracle Database architecture (if you don’t already have this from previous coursework) is required to fully understand and appreciate how ASM works.

After describing how ASM works, the chapter shows you how to set up an ASM instance along with its associated disk groups and required initialization parameters. Starting up and shutting down an ASM instance is similar to doing so for a database instance, with a few key differences. Finally, you'll be introduced to a few typical ASM administration scenarios, including adding disk groups, dropping disk groups, and rebalancing a disk group.

CERTIFICATION OBJECTIVE 1.01
Understanding Database Architecture and ASM

Before diving into the specifics of ASM, you must have a thorough understanding of Oracle Database and its associated memory and process structures. This section starts with the Oracle physical storage structures, including datafiles, control files, redo log files, and archived redo log files. It also covers the non-database files required to operate Oracle Database, such as initialization files and log files. Next, it reviews the key memory structures in an Oracle instance. Finally, it shows the relationships between the physical storage structures and the memory structures.

Oracle Logical Storage Structures

The datafiles in Oracle Database are grouped together into one or more tablespaces; a tablespace is thus a logical wrapper for a group of datafiles, which are physical structures subdivided into extents and blocks. Within each tablespace are finer-grained logical database structures, such as tables and indexes. The term segment describes the space occupied by a database object such as a table or an index. This compartmentalization allows Oracle Database to control disk space usage more efficiently. Figure 1-1 shows the relationship between the logical storage structures in a database.

FIGURE 1-1     Logical storage structures


Tablespaces

An Oracle tablespace consists of one or more datafiles; a datafile can be a part of one and only one tablespace. For an installation of Oracle 11g, a minimum of two tablespaces are created: the SYSTEM tablespace and the SYSAUX tablespace. A default installation of Oracle 11g creates six tablespaces.

Oracle 11g (and originally Oracle 10g) allows you to create a special kind of tablespace called a bigfile tablespace, which can be as large as 128TB (terabytes).
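A bigfile tablespace is created with the BIGFILE keyword; the names, paths, and sizes below are illustrative only:

```sql
-- Illustrative names and paths; adjust for your environment
CREATE BIGFILE TABLESPACE big_data
  DATAFILE '/u01/oradata/ORCL/big_data01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G MAXSIZE 32G;

-- With a bigfile tablespace, you resize the tablespace itself,
-- not an individual datafile:
ALTER TABLESPACE big_data RESIZE 20G;
```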

Using bigfiles makes tablespace management completely transparent to the database administrator (DBA); in other words, the DBA can manage the tablespace as a unit without worrying about the size and structure of the underlying datafiles.

Using Oracle Managed Files (OMF) can make tablespace datafile management even easier. With OMF, the DBA specifies one or more locations in the file system where datafiles, control files, and redo log files will reside, and Oracle automatically handles the naming and management of these files.
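As a sketch (the destination directory is hypothetical), enabling OMF is as simple as setting a destination and then omitting file specifications:

```sql
-- Hypothetical destination directory
ALTER SYSTEM SET db_create_file_dest = '/u01/oradata' SCOPE=BOTH;

-- Oracle generates and manages the datafile name and location:
CREATE TABLESPACE omf_demo;

-- When the tablespace is dropped, OMF removes its files as well:
DROP TABLESPACE omf_demo;
```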

If a tablespace is temporary, only the segments stored in the tablespace are temporary; the tablespace itself is permanent. A temporary tablespace can be used for sorting operations and for tables that exist only for the duration of the user's session. Dedicating a tablespace to these operations helps reduce I/O contention between temporary segments and permanent segments, such as tables, stored in other tablespaces.
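For example, you might create a dedicated temporary tablespace and assign it to a user (names and sizes are illustrative):

```sql
CREATE TEMPORARY TABLESPACE temp_demo
  TEMPFILE '/u01/oradata/ORCL/temp_demo01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Assign it as the sort work area for a (hypothetical) user:
ALTER USER scott TEMPORARY TABLESPACE temp_demo;
```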

Tablespaces can be either dictionary managed or locally managed. In a dictionary-managed tablespace, extent management is recorded in data dictionary tables. Therefore, even if all application tables are in the USERS tablespace, the SYSTEM tablespace will still be accessed for extent management whenever Data Manipulation Language (DML) statements allocate or free space in application tables. Because all users and applications must use the SYSTEM tablespace for extent management, this creates a potential bottleneck for write-intensive applications. In a locally managed tablespace, Oracle maintains a bitmap in the header of each datafile in the tablespace to track space availability. Only quotas are managed in the data dictionary, dramatically reducing contention for the data dictionary tables.

As of Oracle 9i, if the SYSTEM tablespace is locally managed, then all other tablespaces must be locally managed if both read and write operations are to be performed on them. Dictionary-managed tablespaces must be read-only in databases with a locally managed SYSTEM tablespace.
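Creating a locally managed tablespace, and confirming how existing tablespaces manage their extents, might look like this (names and paths are illustrative):

```sql
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

-- Shows LOCAL or DICTIONARY for each tablespace:
SELECT tablespace_name, extent_management
FROM   dba_tablespaces;
```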

Blocks

A database block is the smallest unit of storage in Oracle. A block's size is a fixed number of bytes: the database as a whole has a default block size, and each tablespace's block size is set when the tablespace is created.

To facilitate efficient disk I/O performance, a block is usually a multiple of the operating system block size. The default block size is specified by the Oracle initialization parameter DB_BLOCK_SIZE. Most operating systems will allow as many as four other block sizes to be defined for other tablespaces in the database. Some high-end operating systems will allow five block sizes. The blocks in the SYSTEM, SYSAUX, and any temporary tablespaces must be of the size DB_BLOCK_SIZE.
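As a sketch, creating a tablespace with a nondefault block size requires a matching buffer cache first (the sizes here are arbitrary):

```sql
-- A 16K cache must exist before a 16K tablespace can be created
ALTER SYSTEM SET db_16k_cache_size = 64M;

CREATE TABLESPACE big_rows
  DATAFILE '/u01/oradata/ORCL/big_rows01.dbf' SIZE 1G
  BLOCKSIZE 16K;
```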

Extents

The extent is the next level of logical grouping in the database. An extent consists of one or more database blocks. When you enlarge a database object, the space added to the object is allocated as an extent. Extents are managed by Oracle at the datafile level.

Segments

The next level of logical grouping is the segment. A segment is a group of extents that form a database object that Oracle treats as a unit, such as a table or index. As a result, this is typically the smallest unit of storage that an end user of the database will deal with. Four types of segments are found in an Oracle database: data segments, index segments, temporary segments, and undo segments.

Every table in a database resides in a single data segment consisting of one or more extents; Oracle allocates more than one segment for a table if it is a partitioned table or a clustered table. Data segments include LOB (large object) segments that store LOB data referenced by a LOB locator column in a table segment (if the LOB is not stored inline in the table).

Each index is stored in its own index segment. As with partitioned tables, each partition of a partitioned index is stored in its own segment. Included in this category are LOB index segments. A table’s non-LOB columns, a table’s LOB columns, and the LOB’s associated indexes can all reside in their own tablespace (different segments) to improve performance.

When a user’s SQL statement needs disk space to complete an operation, such as a sorting operation that cannot fit in memory, Oracle allocates a temporary segment. Temporary segments exist only for the duration of the SQL statement.

As of Oracle 10g, manual rollback segments exist only in the SYSTEM tablespace, and typically the DBA does not need to maintain the SYSTEM rollback segment. In previous Oracle releases, a rollback segment was created to save the previous values of a database DML operation in case the transaction was rolled back, and to maintain the “before” image data to provide read-consistent views of table data for other users accessing the table. Rollback segments were also used during database recovery for rolling back uncommitted transactions that were active when the database instance crashed or terminated unexpectedly.

In Oracle 10g, Automatic Undo Management handles the automatic allocation and management of rollback segments within an undo tablespace. Within an undo tablespace, the undo segments are structured similarly to rollback segments, except that the details of how these segments are managed are under the control of Oracle, instead of being managed (often inefficiently) by the DBA. Automatic undo segments were available starting with Oracle9i, but manually managed rollback segments are still available in Oracle 10g. However, this functionality is deprecated as of Oracle 10g and will no longer be available in future releases. In Oracle 11g, Automatic Undo Management is enabled by default; in addition, a PL/SQL (Procedural Language/Structured Query Language) procedure is provided to help you size the UNDO tablespace.
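A minimal sketch of working with automatic undo (the tablespace and file names are made up):

```sql
-- Confirm automatic undo is in effect
SHOW PARAMETER undo_management

-- Create an additional undo tablespace and switch to it;
-- the switch is dynamic:
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/ORCL/undotbs201.dbf' SIZE 2G;

ALTER SYSTEM SET undo_tablespace = undotbs2;
```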


If you’re starting out with Oracle Database 11g, all you really need to know is that manual rollback is redundant and will be unavailable in a future release. In addition, automatic undo is standard in Oracle Database 11g.

Oracle Physical Storage Structures

Oracle Database uses a number of physical storage structures on the disk to hold and manage the data from user transactions. Some of these storage structures, such as the datafiles, redo log files, and archived redo log files, hold real user data. Other structures, such as control files, maintain the state of the database objects. Text-based alert and trace files contain logging information for both routine events and error conditions in the database. Figure 1-2 shows the relationship between these physical structures and the logical storage structures reviewed in the section “Oracle Logical Storage Structures.”

FIGURE 1-2     Oracle physical storage structures


Datafiles

Oracle Database must contain at least one datafile. One Oracle datafile corresponds to one physical operating system file on the disk. Each datafile in Oracle Database is a member of one and only one tablespace; a tablespace, however, can consist of many datafiles. The exception is a bigfile tablespace, which consists of exactly one datafile.

An Oracle datafile can automatically expand when it runs out of space, if the DBA created the datafile with the AUTOEXTEND parameter. The DBA can also limit the amount of expansion for a given datafile by using the MAXSIZE parameter. In any case, the size of the datafile is ultimately limited by the disk volume on which it resides.
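For example (the path and limits are illustrative), you can enable autoextension on an existing datafile and then verify the settings:

```sql
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 8G;

SELECT file_name, autoextensible, maxbytes
FROM   dba_data_files;
```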

The datafile is the ultimate resting place for all data in the database. Frequently accessed blocks in a datafile are cached in memory. Similarly, new and changed data blocks are not written to the datafile immediately; the database writer process writes them out asynchronously. Before a user's transaction is considered complete, however, the transaction's changes are written to the redo log files.

Redo Log Files

Whenever data is added, removed, or changed in a table, index, or other Oracle object, an entry is written to the current redo log file. Oracle Database must have at least two redo log files, because Oracle reuses redo log files in a circular fashion. When the current redo log file fills with redo entries, a log switch occurs: the filled file is marked as ACTIVE if it is still needed for instance recovery or INACTIVE if it is not, and the next log file in the sequence is reused from the beginning of the file and marked as CURRENT.

Ideally, the information in a redo log file is never used. However, when a power failure occurs or some other server failure causes the Oracle instance to fail, the new or updated data blocks in the database buffer cache might not yet have been written to the datafiles. When the Oracle instance is restarted, the entries in the redo log file are applied to the database datafiles in a roll-forward operation to restore the state of the database up to the point at which the failure occurred.

To be able to recover from the loss of one redo log file within a redo log group, multiple copies of a redo log file can exist on different physical disks. Later in this chapter, you will see how redo log files, archived log files, and control files can be multiplexed to ensure the availability and data integrity of the Oracle database. Multiplexing, in a nutshell, means maintaining two or more identical copies of a structure for performance and availability.
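Adding a multiplexed redo log group, with one member on each of two disks, can be sketched as follows (the group number, paths, and size are examples):

```sql
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/ORCL/redo04a.log',
   '/u02/oradata/ORCL/redo04b.log') SIZE 100M;

-- Confirm the members of each group:
SELECT group#, member FROM v$logfile ORDER BY group#;
```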

Control Files

Oracle Database has at least one control file that maintains the metadata of the database; in this context, metadata is data about the physical structure of the database itself rather than the data stored within it. Among other things, the control file contains the name of the database, when the database was created, and the names and locations of all datafiles and redo log files. In addition, the control file maintains information used by Recovery Manager (RMAN), such as the persistent RMAN settings and the types of backups that have been performed on the database. Whenever any changes are made to the structure of the database, the information about the changes is immediately reflected in the control file.

Because the control file is so critical to the operation of the database, it can also be multiplexed; that is, two or more identical copies of the control file can be maintained, ideally on separate disks. However, no matter how many copies of the control file are associated with an instance, only one of the control files is designated as primary for purposes of retrieving database metadata.

The ALTER DATABASE BACKUP CONTROLFILE TO TRACE command is another way to back up the control file. It produces a SQL script that you can use to re-create the database control file in case all multiplexed binary versions of the control file are lost due to a catastrophic failure.

This trace file can also be used, for example, to re-create a control file if the database needs to be renamed or to change various database limits that could not otherwise be changed without re-creating the entire database.
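A quick sketch of backing up the control file to a trace script and locating the current control file copies (the output path is an example):

```sql
-- Writes a CREATE CONTROLFILE script to the trace directory
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

-- Or, name the output file explicitly:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/ctlfile.sql';

-- List the multiplexed control file copies:
SELECT name FROM v$controlfile;
```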

Archived Log Files

Oracle Database can operate in one of two modes: ARCHIVELOG or NOARCHIVELOG mode. When the database is in NOARCHIVELOG mode, the circular reuse of the redo log files (also known as the online redo log files) means that redo entries (the contents of previous transactions) are no longer available in case of a failure to a disk drive or another media-related failure. Operating in NOARCHIVELOG mode does protect the integrity of the database in the event of an instance failure or system crash, because all committed transactions not yet written to the datafiles are still available in the online redo log files; crash recovery, however, is limited to the entries currently in the online redo logs. If a media failure occurs and your most recent datafile backup predates the earliest entry still in the online redo log files, you cannot recover your database.

In contrast, ARCHIVELOG mode sends a filled redo log file to one or more specified destinations and can be available to reconstruct the database at any given point in time in the event that a database media failure occurs. For example, if the disk drive containing the datafiles crashes, the contents of the database can be recovered to a point in time before the crash, given availability of a recent backup of the datafiles, the redo log files, and archived log files that were generated since the backup occurred.
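Switching a database into ARCHIVELOG mode is done with the database mounted but not open; a typical SQL*Plus session looks roughly like this:

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Verify the current mode and archive destinations:
ARCHIVE LOG LIST
```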

The use of multiple archived log destinations for filled redo log files is critical for one of Oracle’s high-availability features known as Oracle Data Guard, formerly known as Oracle Standby Database.

Initialization Parameter Files

When a database instance starts, the memory for the Oracle instance is allocated, and one of two types of initialization parameter files is opened: either a text-based file called init<SID>.ora (known generically as init.ora or a PFILE), or a server parameter file (SPFILE). The instance first looks for an SPFILE in the default location for the operating system ($ORACLE_HOME/dbs on Unix, for example) as either spfile<SID>.ora or spfile.ora. If neither of these files exists, the instance looks for a PFILE with the name init<SID>.ora. Alternatively, the STARTUP command can explicitly specify a PFILE to use for startup of Oracle.

Initialization parameter files, regardless of their format, specify file locations for trace files, control files, filled redo log files, and so forth. They also set limits on the sizes of the various structures in the System Global Area (SGA), as well as how many users can connect to the database simultaneously.

Until Oracle9i, using the init.ora file was the only way to specify initialization parameters for the instance. Although it is easy to edit with a text editor, the file has some drawbacks. If a dynamic system parameter is changed at the command line with the ALTER SYSTEM command, the DBA must remember to change the init.ora file so that the new parameter value will be in effect the next time the instance is restarted.

An SPFILE makes parameter management easier and more effective for the DBA. If an SPFILE is in use for the running instance, any ALTER SYSTEM command that changes an initialization parameter can change the initialization parameter automatically in the SPFILE, change it only for the running instance, or both. No editing of the SPFILE is necessary or even possible without corrupting the SPFILE itself.

Although you cannot mirror a parameter file or SPFILE per se, you can back up an SPFILE to an init.ora file. Both the init.ora and the SPFILE for the Oracle instance should be backed up using conventional operating system commands, or in the case of an SPFILE, using Recovery Manager.
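A sketch of the round trip between SPFILE and PFILE (the parameter values and output path are illustrative):

```sql
-- Change the running instance and the SPFILE together:
ALTER SYSTEM SET sga_target = 600M SCOPE=BOTH;

-- PROCESSES is static, so it can only be changed in the SPFILE:
ALTER SYSTEM SET processes = 300 SCOPE=SPFILE;

-- Back up the SPFILE to an editable text file:
CREATE PFILE = '/tmp/initORCL.ora' FROM SPFILE;
```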

When the DBCA (Database Configuration Assistant tool) is used to create a database, an SPFILE is created by default.

Alert and Trace Log Files

When things go wrong, Oracle can and often does write messages to the alert log and, in the case of background processes or user sessions, to trace files.

The alert log file, located in the directory specified by the initialization parameter BACKGROUND_DUMP_DEST, contains the most significant routine status messages as well as critical error conditions. When the database is started up or shut down, a message is recorded in the alert log, along with a list of initialization parameters that are different from their default values. In addition, any ALTER DATABASE or ALTER SYSTEM commands issued by the DBA are recorded. Operations involving tablespaces and their datafiles are recorded here, too, such as adding a tablespace, dropping a tablespace, and adding a datafile to a tablespace. Error conditions, such as tablespaces running out of space, corrupted redo logs, and so forth, are also recorded here—all critical conditions.

The trace files for the Oracle instance background processes are also located in BACKGROUND_DUMP_DEST. For example, the trace files for PMON (process monitor) and SMON (system monitor) contain an entry when an error occurs or when SMON needs to perform instance recovery; the trace files for QMON (queue monitor) contain informational messages when it spawns a new process.

Trace files are also created for individual user sessions or connections to the database. These trace files are located in the directory specified by the initialization parameter USER_DUMP_DEST. Trace files for user processes are created in two situations: when some type of error occurs in a user session (because of a privilege problem, running out of space, and so forth), or when tracing is explicitly enabled for the session.
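Session-level tracing is commonly toggled with ALTER SESSION, for example:

```sql
ALTER SESSION SET SQL_TRACE = TRUE;

-- ...run the statements to be traced...

ALTER SESSION SET SQL_TRACE = FALSE;
```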

Trace information is generated for each SQL statement that the user executes, which can be helpful when tuning a user’s SQL statement.

The alert log file can be deleted or renamed at any time; it is re-created the next time an alert log message is generated. The DBA will often set up a daily batch job (either through an operating system mechanism, the Oracle Database internal scheduling mechanism, or using Oracle Enterprise Manager’s scheduler) to rename and archive the alert log on a daily basis.

As of Oracle Database 11g Release 1, the diagnostics for an instance are centralized in a single directory specified by the initialization parameter DIAGNOSTIC_DEST; USER_DUMP_DEST and BACKGROUND_DUMP_DEST are ignored.

Backup Files

Backup files can originate from a number of sources, such as operating system copy commands or Oracle RMAN. If the DBA performs a "cold" backup, the backup files are simply operating system copies of the datafiles, redo log files, control files, archived redo log files, and so forth.

In addition to bit-for-bit image copies of datafiles (the default in RMAN), RMAN can generate full and incremental backups of datafiles, control files, archived redo log files, and SPFILEs that are in a special format, called backupsets, readable only by RMAN. RMAN backupset backups are generally smaller than the original datafiles because RMAN does not back up unused blocks. RMAN is the standard for backup and recovery management, except in situations where RMAN backup processing has a detrimental effect on performance.

Oracle Memory Structures

Oracle uses the server’s physical memory to hold many things for an Oracle instance: the Oracle executable code itself, session information, individual processes associated with the database, and information shared between processes (such as locks on database objects). In addition, the memory structures contain user and data dictionary SQL statements, along with cached information that is eventually permanently stored on disk, such as data blocks from database segments and information about completed transactions in the database. The data area allocated for an Oracle instance is called the System Global Area (SGA). The Oracle executables reside in the software code area. In addition, an area called the Program Global Area (PGA) is private to each server and background process; one PGA is allocated for each user session or server process.

Figure 1-3 shows the relationships between these Oracle memory structures.

FIGURE 1-3     Oracle logical memory structures


System Global Area

The SGA is a group of shared memory structures for an Oracle instance, shared by the users of the database instance. When an Oracle instance is started, memory is allocated for the SGA based on the values specified in the initialization parameter file or hard-coded in the Oracle software. Many of the parameters that control the size of the various parts of the SGA are dynamic; however, if the parameter SGA_MAX_SIZE is specified, the total size of all SGA areas must not exceed the value of SGA_MAX_SIZE. If SGA_MAX_SIZE is not specified, but the parameter SGA_TARGET is specified, Oracle automatically adjusts the sizes of the SGA components so that the total amount of memory allocated is equal to SGA_TARGET. SGA_TARGET is a dynamic parameter; it can be changed while the instance is running. The parameter MEMORY_TARGET, new to Oracle 11g, balances all memory available to Oracle between the SGA and the PGA to optimize performance.
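For example (the size is arbitrary), the DBA can adjust SGA_TARGET on the fly, as long as the new value does not exceed SGA_MAX_SIZE:

```sql
SHOW PARAMETER sga_target

-- Dynamic: takes effect immediately, no restart required
ALTER SYSTEM SET sga_target = 800M SCOPE=BOTH;
```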

Memory in the SGA is allocated in units of granules. A granule can be either 4MB or 16MB, depending on the total size of the SGA. If the SGA is less than or equal to 128MB, a granule is 4MB; otherwise, it is 16MB. The next few subsections cover the highlights of how Oracle uses each section in the SGA.

Buffer Caches The database buffer cache holds blocks of data from disk that have been recently read to satisfy a SELECT statement or that contain modified blocks that have been changed or added from a DML statement. As of Oracle9i, the memory area in the SGA that holds these data blocks is dynamic. This is a good thing, considering that there may be tablespaces in the database with block sizes other than the default block size. Oracle allows for tablespaces with up to five different block sizes (one block size for the default and up to four others). Each block size requires its own buffer cache. As the processing and transactional needs change during the day or during the week, the values of DB_CACHE_SIZE and DB_nK_CACHE_SIZE can be dynamically changed without restarting the instance to enhance performance for a tablespace with a given block size.

Oracle can use two additional caches with the same block size as the default (DB_CACHE_SIZE) block size: the KEEP buffer pool and the RECYCLE buffer pool. As of Oracle9i, both pools allocate memory independently of other caches in the SGA.

When a table is created, you can specify the pool where the table’s data blocks will reside by using the BUFFER_POOL_KEEP or BUFFER_POOL_RECYCLE clause in the STORAGE clause. For tables that you use frequently throughout the day, it would be advantageous to place the tables into the KEEP buffer pool to minimize the I/O needed to retrieve blocks in the tables.
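Assigning a table to the KEEP pool might be sketched as follows (the cache size and table are hypothetical; DB_KEEP_CACHE_SIZE must be nonzero first):

```sql
ALTER SYSTEM SET db_keep_cache_size = 64M;

CREATE TABLE lookup_codes (
  code_id   NUMBER PRIMARY KEY,
  code_desc VARCHAR2(100)
) STORAGE (BUFFER_POOL KEEP);

-- Or move an existing table into the KEEP pool:
ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);
```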

Shared Pool The shared pool contains two major subcaches: the library cache and the data dictionary cache. The shared pool is sized by the SHARED_POOL_SIZE initialization parameter. This is another dynamic parameter that can be resized as long as the total SGA size is less than SGA_MAX_SIZE or SGA_TARGET.

The library cache holds information about SQL and PL/SQL statements that are run against the database. Because the library cache is shared by all users, many different database users can potentially share the same SQL statement.

Along with the SQL statement itself, the execution plan of the SQL statement is stored in the library cache. The second time an identical SQL statement is run, by the same user or a different user, the execution plan is already computed, improving the execution time of the query or DML statement.

If the library cache is sized too small, then frequently used execution plans can be flushed out of the cache, requiring just as frequent reloads of SQL statements into the library cache.

The data dictionary is a collection of database tables, owned by the SYS and SYSTEM schemas, which contain the metadata about the database, its structures, and the privileges and roles of database users. The data dictionary cache holds a subset of the columns from data dictionary tables after first being read into the buffer cache. Data blocks from tables in the data dictionary are used continually to assist in processing user queries and other DML commands.

If the data dictionary cache is too small, requests for information from the data dictionary will cause extra I/O to occur; these I/O-bound data dictionary requests are called recursive calls and should be avoided by sizing the data dictionary cache correctly.

Redo Log Buffer The redo log buffer holds the most recent changes to the data blocks in the datafiles. When the redo log buffer is one-third full, or every 3 seconds, Oracle writes redo log records to the redo log files. Additionally, as of Oracle Database 10g, the Log Writer (LGWR) process will write the redo log records to the redo log files when 1MB of redo is stored in the redo log buffer. The entries in the redo log buffer, once written to the redo log files, are critical to database recovery if the instance crashes before the changed data blocks are written from the buffer cache to the datafiles. A user’s committed transaction is not considered complete until the redo log entries have been successfully written to the redo log files.

Large Pool The large pool is an optional area of the SGA. It is used for transactions that interact with more than one database, message buffers for processes performing parallel queries, and RMAN parallel backup and restore operations. As the name implies, the large pool makes available large blocks of memory for operations that need to allocate large blocks of memory at a time.

The initialization parameter LARGE_POOL_SIZE controls the size of the large pool and is a dynamic parameter as of Oracle9i release 2.

Java Pool The Java pool is used by the Oracle JVM (Java Virtual Machine) for all Java code and data within a user session. Storing Java code and data in the Java pool is analogous to SQL and PL/SQL code cached in the shared pool.

Streams Pool New to Oracle 10g, the streams pool is sized by using the initialization parameter STREAMS_POOL_SIZE. The streams pool holds data and control structures to support the Oracle Streams feature of Oracle Enterprise Edition. Oracle Streams manages the sharing of data and events in a distributed environment. If the initialization parameter STREAMS_POOL_SIZE is uninitialized or set to zero, the memory used for Streams operations is allocated from the shared pool and can use up to 10 percent of the shared pool.

Program Global Area

The PGA is an area of memory allocated privately to a single server or background process; it is not shared among processes. The configuration of the PGA depends on the connection configuration of the Oracle database: either shared server or dedicated server.

In a shared server configuration, multiple users share a connection to the database, minimizing memory usage on the server but potentially affecting response time for user requests. In a shared server environment, the SGA holds the persistent session information for a user instead of the PGA. Shared server environments are ideal for a large number of simultaneous connections to the database with infrequent or short-lived requests.

In a dedicated server environment, each user process gets its own connection to the database; the PGA contains the session memory for this configuration. The PGA also includes a sort area that is used whenever a user request requires a sort, bitmap merge, or hash join operation.

As of Oracle9i, the PGA_AGGREGATE_TARGET parameter, in conjunction with the WORKAREA_SIZE_POLICY initialization parameter, can ease system administration by allowing the DBA to choose a total size for all work areas and let Oracle manage and allocate the memory among all user processes. As mentioned earlier in this chapter, the MEMORY_TARGET parameter manages the PGA and SGA memory as a whole to optimize performance. In general, PGA management was automated in Oracle9i and SGA management in Oracle 10g; with Oracle 11g, the sum of the SGA and PGA is automated as well. Even experienced DBAs find that automatic memory management is more effective at managing memory allocations.
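A minimal sketch of enabling automatic PGA management (the target value is arbitrary):

```sql
ALTER SYSTEM SET workarea_size_policy = AUTO;
ALTER SYSTEM SET pga_aggregate_target = 500M;

-- Monitor how the target is being used:
SELECT name, value FROM v$pgastat
WHERE  name LIKE '%PGA target%';
```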

Software Code Area

Software code areas store the Oracle executable files that are running as part of an Oracle instance. These code areas are static in nature and change only when a new release of the software is installed. Typically, the Oracle software code areas are located in a privileged memory area separate from other user programs.

Oracle software code is strictly read-only and can be installed as either sharable or nonsharable. Installing Oracle software code as sharable saves memory when multiple Oracle instances are running on the same server, at the same software release level.

Background Processes

When an Oracle instance starts, multiple background processes start. A background process is a block of executable code designed to perform a specific task. Figure 1-4 shows the relationship between the background processes, the database, and the Oracle SGA. In contrast to a foreground process, such as a SQL*Plus session or a web browser, a background process works behind the scenes. Together, the SGA and the background processes make up an Oracle instance.

FIGURE 1-4     Oracle background processes

Image

SMON In the case of a system crash or instance failure, due to a power outage or CPU failure, SMON, the system monitor process, performs crash recovery by applying the entries in the online redo log files to the datafiles. In addition, temporary segments in all tablespaces are purged during system restart.

One of SMON’s routine tasks is to coalesce the free space in tablespaces on a regular basis if the tablespace is dictionary managed (which should be rare or nonexistent in an Oracle 11g database).

PMON If a user connection is dropped or a user process otherwise fails, PMON, the process monitor, does the cleanup work. It cleans up the database buffer cache along with any other resources that the user connection was using. For example, suppose a user session is updating some rows in a table, placing a lock on one or more of the rows. A thunderstorm knocks out the power at the user’s desk, and the SQL*Plus session disappears when the workstation is powered off. Within milliseconds, PMON will detect that the connection no longer exists and perform the following tasks:

Image Roll back the transaction that was in progress when the power went out.

Image Mark the transaction’s blocks as available in the buffer cache.

Image Remove the locks on the affected rows in the table.

Image Remove the process ID of the disconnected process from the list of active processes.

PMON will also interact with the listeners by providing information about the status of the instance for incoming connection requests.

DBWn The database writer process, known as DBWR in older versions of Oracle, writes new or changed data blocks (known as dirty blocks) in the buffer cache to the datafiles. Using an LRU (Least Recently Used) algorithm, DBWn writes the oldest, least active blocks first. As a result, the most commonly requested blocks, even if they are dirty blocks, are in memory.

Up to 20 DBWn processes can be started, DBW0 through DBW9 and DBWa through DBWj. The number of DBWn processes is controlled by the DB_WRITER_PROCESSES parameter.

LGWR LGWR, or Log Writer, is in charge of redo log buffer management. LGWR is one of the most active processes in an instance with heavy DML activity. A transaction is not considered complete until LGWR successfully writes the redo information, including the commit record, to the redo log files. In addition, the dirty buffers in the buffer cache cannot be written to the datafiles by DBWn until LGWR has written the redo information.

If the redo log files are grouped, and one of the multiplexed redo log files in a group is damaged, LGWR writes to the remaining members of the group and records an error in the alert log file. If all members of a group are unusable, the LGWR process fails and the entire instance hangs until the problem can be corrected.

ARCn If the database is in ARCHIVELOG mode, then the archiver process, or ARCn, copies redo logs to one or more destination directories, devices, or network locations whenever a redo log fills up and redo information starts to fill the next redo log in sequence. Optimally, the archive process finishes before the filled redo log is needed again; otherwise, serious performance problems occur—users cannot complete their transactions until the entries are written to the redo log files, and the redo log file is not ready to accept new entries because it is still being written to the archive location. At least three potential solutions to this problem exist: make the redo log files larger, increase the number of redo log groups, and increase the number of ARCn processes. Up to 10 ARCn processes can be started for each instance by increasing the value of the LOG_ARCHIVE_MAX_PROCESSES initialization parameter.
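Of the three solutions, increasing the number of archiver processes is a simple dynamic change; a sketch of the command (the value 4 is illustrative) follows:

```sql
-- Allow more ARCn processes so archiving can keep pace with redo generation
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;
```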

CKPT The checkpoint process, or CKPT, helps to reduce the amount of time required for instance recovery. During a checkpoint, CKPT updates the header of the control file and the datafiles to reflect the last successful System Change Number (SCN). A checkpoint occurs automatically every time one redo log file fills and Oracle starts to fill the next one in a round-robin sequence.

The DBWn processes routinely write dirty buffers to advance the checkpoint from where instance recovery can begin, thus reducing the Mean Time to Recovery (MTTR).

RECO The recoverer process, or RECO, handles failures of distributed transactions (that is, transactions that include changes to tables in more than one database). If a table in the CCTR (contact center) database is changed along with a table in the WHSE (data warehouse) database, and the network connection between the databases fails before the table in the WHSE database can be updated, RECO will roll back the failed transaction.

CERTIFICATION OBJECTIVE 1.02
Describe ASM

ASM is a multiplexing solution that automates the layout of datafiles, control files, and redo log files by distributing them across all available disks. When new disks are added to the ASM cluster, the database files are automatically redistributed across all disk volumes for optimal performance. The multiplexing features of an ASM cluster minimize the possibility of data loss and are generally more effective than a manual scheme that places critical files and backups on different physical drives. One of the key components of ASM is the disk group, a collection of disks that ASM manages as a unit.

When creating a new tablespace or other database structure, such as a control file or redo log file, you can specify a disk group as the storage area for the database structure instead of an operating system file. ASM takes the ease of use of OMF and combines it with mirroring and striping features to provide a robust file system and logical volume manager that can even support multiple nodes in an Oracle Real Application Cluster (RAC). ASM eliminates the need to purchase a third-party logical volume manager.

ASM not only enhances performance by automatically spreading out database objects over multiple devices, but it also increases availability by allowing new disk devices to be added to the database without shutting down the database; ASM automatically rebalances the distribution of files with minimal intervention.

The following sections review the ASM architecture, show you how to create a special type of Oracle instance to support ASM, and show how to start up and shut down an ASM instance.

ASM Architecture

ASM divides the datafiles and other database structures into extents, and it divides the extents among all the disks in the disk group to enhance both performance and reliability. Instead of mirroring entire disk volumes, ASM mirrors the database objects to provide the flexibility to mirror or stripe the database objects differently depending on their type. Optionally, the objects may not be striped at all if the underlying disk hardware is already RAID enabled, part of a storage area network (SAN), or part of a network-attached storage (NAS) device.

Automatic rebalancing is another key feature of ASM. When an increase in disk space is needed, additional disk devices can be added to a disk group, and ASM moves a proportional number of files from one or more existing disks to the new disks to maintain the overall I/O balance across all disks. This happens in the background while the database objects contained in the disk files are still online and available to users. If the impact to the I/O subsystem is high during a rebalance operation, the speed at which the rebalance occurs can be reduced using an initialization parameter.

ASM requires a special type of Oracle instance to provide the interface between a traditional Oracle instance and the file system. The ASM software components ship with the Oracle Database software and are always available as a selection when you choose the storage type for the database at creation time.

Using ASM does not, however, prevent you from mixing ASM disk groups with manual Oracle datafile management techniques. For example, you might have all of your tablespaces in ASM storage but have one tablespace created on your server’s file system to make it easier to transport to another database. Still, the ease of use and performance of ASM makes a strong case for eventually using ASM disk groups for all your storage needs.

Two Oracle background processes introduced in Oracle Database 10g support ASM instances: the rebalancer (RBAL) and ARBn. RBAL coordinates the disk activity for disk groups, performing rebalancing when a disk is added or removed. ARBn, where n can be a number from 0 to 9, performs the actual extent movement between disks in the disk groups.

For databases that use ASM disks, two other background processes exist as of Oracle Database 10g: ASMB and RBAL. ASMB handles the communication between the database and the ASM instance, whereas RBAL opens and closes the disks in the disk group on behalf of the database. The RBAL process in a database instance has the same name as its counterpart in an ASM instance but performs a related yet different function; in other words, the process behaves differently depending on the type of instance.

EXERCISE 1-1
Find New ASM-related Processes in ASM and RDBMS Instances

For this exercise, identify the new background processes on a Linux server for both the RDBMS instance and the ASM instance. On Linux, each Oracle background process runs as its own operating system process. You can either join the Oracle dynamic performance views V$BGPROCESS and V$SESSION, or use the Linux ps -ef command and search for command names containing either the ASM or the RDBMS instance name.
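The join described above might look like the following sketch; the column names are those of the 11g dynamic performance views, but the output will vary by system:

```sql
-- List running background processes by joining on the process address
SELECT s.sid, p.name, p.description
FROM   v$session s
       JOIN v$bgprocess p ON p.paddr = s.paddr
ORDER  BY p.name;
```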

1. Query /etc/oratab for the names of the ASM and RDBMS instances:

image

2. Set the ORACLE_SID environment variable for the RDBMS instance; in this case it is DW:

image

3. Connect to the RDBMS instance and query V$SESSION and V$BGPROCESS to get the list of running processes:

image

image

Note the processes RBAL and ASMB near the end of the list.

4. You can use the PID column to identify the Linux process number and query the Linux process directly:

image

5. Next, check for the ASM background processes by setting the ORACLE_SID environment variable for the ASM instance (+ASM):

image

6. Connect to the ASM instance and query V$SESSION and V$BGPROCESS to get the list of running processes:

image

Note the new processes RBAL and ASMB in the list. The ARBn process starts when a rebalance operation is initiated.

Creating an ASM Instance

ASM requires a dedicated Oracle instance to manage the disk groups. An ASM instance generally has a smaller memory footprint than an RDBMS instance, in the range of 60MB to 120MB, and is automatically configured when ASM is specified as the database’s file storage option. When the Oracle software is installed and an ASM instance does not already exist, you see the Oracle Universal Installer screen shown in Figure 1-5.

FIGURE 1-5     Specifying ASM as the database file storage method

Image

As an example of disk devices used to create ASM disk groups, suppose your Linux server has a number of raw disk devices with the capacities listed in Table 1-1.

TABLE 1-1 Raw Disks for ASM Disk Groups

Device Name

Capacity

/dev/raw/raw1

12GB

/dev/raw/raw2

12GB

/dev/raw/raw3

12GB

/dev/raw/raw4

12GB

/dev/raw/raw5

4GB

/dev/raw/raw6

4GB

/dev/raw/raw7

4GB

/dev/raw/raw8

4GB

You configure the first disk group within the Oracle Universal Installer (OUI), as shown in Figure 1-6.

FIGURE 1-6     Configuring the initial ASM disk group with OUI

Image

The name of the first disk group is DATA, and you will be using /dev/raw/raw1 and /dev/raw/raw2 to create the normal redundancy disk group. If an insufficient number of raw disks are selected for the desired redundancy level, OUI generates an error message. After the database is created, both the regular instance and the ASM instance are started.

An ASM instance has a few other unique characteristics. Although it does have an initialization parameter file and a password file, it has no data dictionary, and therefore all connections to an ASM instance use operating system authentication only. You connect to an ASM instance with the CONNECT / AS SYSASM command; any username/password supplied in the CONNECT command is ignored. Disk group commands such as CREATE DISKGROUP, ALTER DISKGROUP, and DROP DISKGROUP are valid only in an ASM instance. Finally, an ASM instance is either in a NOMOUNT or MOUNT state; it is never in an OPEN state.

As of Oracle Database 11g, a new system privilege called SYSASM separates the SYSDBA database administration privilege from the ASM storage administration privilege. Membership in the operating system group OSASM automatically grants the SYSASM privilege to the operating system user; therefore, you can use the following commands to connect to an ASM instance with the SYSASM privilege using operating system authentication:

image

Although you can still use the SYSDBA privilege in Oracle Database 11g Release 1, Oracle will write a message to the alert log indicating that the SYSDBA privilege is deprecated in the ASM instance for administrative commands and will be removed in a future release.

Here is what you will see in the alert log for the ASM instance:

image

Granting the SYSASM privilege to a database user is identical to granting the SYSDBA or SYSOPER privilege; in this example, you create the user marthag and grant her the SYSASM privilege on the ASM instance, and then connect as marthag:

image

Image

In Oracle Database 11g Release 1, the operating system group for SYSASM and SYSDBA is the same: dba. Future releases will require separate operating system groups for SYSASM and SYSDBA users.

Because an ASM instance does not have a data dictionary, only the ASM instance’s password file is updated with the new user and SYSASM privilege. As you might expect, you can use the REVOKE command to revoke the SYSASM privilege from a user. As with an RDBMS instance, you can query the dynamic performance view V$PWFILE_USERS to see which users have SYSDBA, SYSASM, or SYSOPER privileges in the ASM instance:

image
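The query behind a listing such as the one above might resemble the following sketch; the columns are those of V$PWFILE_USERS in 11g:

```sql
-- Show which password-file users hold SYSDBA, SYSOPER, or SYSASM
SELECT username, sysdba, sysoper, sysasm
FROM   v$pwfile_users;
```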

CERTIFICATION OBJECTIVE 1.03
Set Up Initialization Parameter Files for ASM and Database Instances

The following sections review the new initialization parameters related to ASM and the existing initialization parameters that have new values to support an ASM instance. Starting up and shutting down an ASM instance has a few caveats, the most important of which is that you can’t shut down an ASM instance while it is serving disk groups to an active RDBMS instance (you need to shut down the dependent databases first). This section also covers the naming conventions Oracle uses for ASM filenames and when you can use an abbreviated ASM filename. No discussion of ASM would be complete without a thorough review of the related dynamic performance views.

ASM Instance Components

ASM instances cannot be accessed using the variety of methods available with a traditional database. This section discusses the privileges available to users who connect with the SYSDBA, SYSOPER, and SYSASM privileges. It also distinguishes an ASM instance by the new and expanded initialization parameters (introduced in Oracle Database 10g and enhanced in Oracle Database 11g) available only for an ASM instance. At the end of this section, you’ll read about the procedures for starting and stopping an ASM instance, along with the dependencies between ASM instances and the database instances they serve.

Accessing an ASM Instance

As mentioned earlier in the chapter, an ASM instance does not have a data dictionary, so access to the instance is restricted to users who can authenticate with the operating system—in other words, connecting as SYSDBA, SYSASM, or SYSOPER using an operating system user in the dba group.

Users who connect to an ASM instance as SYSDBA or SYSASM (remember that using SYSDBA is deprecated as of Oracle Database 11g) can perform all ASM operations, such as creating and deleting disk groups, as well as adding and removing disks from disk groups.

The SYSOPER users have a much more limited set of commands available in an ASM instance. In general, the commands available to SYSOPER users give only enough privileges to perform routine operations for an already configured and stable ASM instance. The following list contains the operations available as SYSOPER:

Image Starting up and shutting down an ASM instance

Image Mounting or dismounting a disk group

Image Altering a disk group’s disk status from ONLINE to OFFLINE, or vice versa

Image Rebalancing a disk group

Image Performing an integrity check of a disk group

Image Accessing the V$ASM_* dynamic performance views

ASM Initialization Parameters

A number of initialization parameters are either specific to ASM instances or have new values within an ASM instance. An SPFILE is highly recommended instead of an initialization parameter file for an ASM instance. For example, parameters such as ASM_DISKGROUPS will automatically be maintained when a disk group is added or dropped, potentially freeing you from ever having to change this value manually.

The ASM-related initialization parameters are covered in the following sections.

INSTANCE_TYPE For an ASM instance, the INSTANCE_TYPE parameter has a value of ASM. The default, for a traditional Oracle instance, is RDBMS.

DB_UNIQUE_NAME The default value for the DB_UNIQUE_NAME parameter is +ASM and is the unique name for a group of ASM instances within a cluster or on a single node.

ASM_POWER_LIMIT To ensure that rebalancing operations do not interfere with ongoing user I/O, the ASM_POWER_LIMIT parameter controls how fast rebalance operations occur. The values range from 1 to 11, with 11 being the highest possible value; the default value is 1 (low I/O overhead). Because this is a dynamic parameter, you may set it to a low value during the day and raise it overnight (assuming you’re not running a 24-hour shop), whenever a disk-rebalancing operation must occur.
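Because the parameter is dynamic, switching between daytime and overnight settings is a single command each way; a sketch:

```sql
-- Overnight: fast rebalance at the cost of more I/O overhead
ALTER SYSTEM SET ASM_POWER_LIMIT = 11;

-- Daytime: back to the low-impact default
ALTER SYSTEM SET ASM_POWER_LIMIT = 1;
```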

ASM_DISKSTRING The ASM_DISKSTRING parameter specifies one or more operating system dependent strings to limit the disk devices that can be used to create disk groups. If this value is NULL, all disks visible to the ASM instance are potential candidates for creating disk groups. For the examples in this chapter for the test server, the value of the ASM_DISKSTRING parameter is /dev/raw/*:

image

ASM_DISKGROUPS The ASM_DISKGROUPS parameter specifies a list containing the names of the disk groups to be automatically mounted by the ASM instance at startup or by the ALTER DISKGROUP ALL MOUNT command. Even if this list is empty at instance startup, any existing disk group can be manually mounted.
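A sketch of the mount commands referenced above; DATA is an example disk group name:

```sql
-- Mount every disk group named in ASM_DISKGROUPS
ALTER DISKGROUP ALL MOUNT;

-- Or mount a single disk group by hand
ALTER DISKGROUP data MOUNT;
```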

LARGE_POOL_SIZE The LARGE_POOL_SIZE parameter is useful for both regular and ASM instances; however, this pool is used differently for an ASM instance. All internal ASM packages are executed from this pool, so this parameter should be set to at least 12MB for a single instance and 16MB for a RAC instance.

ASM_PREFERRED_READ_FAILURE_GROUPS The ASM_PREFERRED_READ_FAILURE_GROUPS parameter, new to Oracle Database 11g, contains a list of the preferred failure groups for a given database instance when using clustered ASM instances. This parameter is instance specific: Each instance can specify a failure group that is closest to the instance’s node to improve performance—for example, a failure group on the server’s local disk.

ASM Dynamic Performance Views

A few new dynamic performance views are associated with ASM instances. Table 1-2 contains the common ASM-related dynamic performance views. Further explanation is provided later in this chapter.

TABLE 1-2 ASM-Related Dynamic Performance Views

View Name

Used in Standard Database?

Description

V$ASM_DISK

Yes

One row for each disk discovered by an ASM instance, used by a disk group or not. For a database instance, one row for each disk group in use by the instance.

V$ASM_DISKGROUP

Yes

For an ASM instance, one row for each disk group containing general characteristics of the disk group. For a database instance, one row for each disk group in use whether mounted or not.

V$ASM_FILE

No

One row for each file in every mounted disk group.

V$ASM_OPERATION

No

One row for each executing long-running operation in the ASM instance.

V$ASM_TEMPLATE

Yes

One row for each template in each mounted disk group in the ASM instance. For a database instance, one row for each template for each mounted disk group.

V$ASM_CLIENT

Yes

One row for each database using disk groups managed by the ASM instance. For a database instance, one row for the ASM instance if any ASM files are open.

V$ASM_ALIAS

No

One row for every alias in every mounted disk group.

EXERCISE 1-2
Query Disk Groups and Available Raw Devices

In this exercise, you’ll find the disk group names and their underlying devices, and then confirm the raw device list on the Linux server.

1. Connect to your database instance (NOT the ASM instance), and query the dynamic performance view V$ASM_DISK:

image

Note that four of the eight available raw devices are being used for ASM disk groups.

2. From the Linux command line, query the available raw devices with the raw command:

image

ASM Filename Formats

Oracle best practices recommend that ASM files should be created as OMFs, so the details of the actual filename within the disk group are not needed for most administrative functions. When an object in an ASM disk group is dropped, the file is automatically deleted. Certain commands will expose the actual filenames, such as ALTER DATABASE BACKUP CONTROLFILE TO TRACE, as well as some data dictionary and dynamic performance views. For example, the dynamic performance view V$DATAFILE shows the actual filenames within each disk group. Here is an example:

image
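A sketch of the kind of query that exposes the underlying ASM filenames; the sample name shown in the comment follows the chapter’s later example and is illustrative only:

```sql
-- Datafile names reveal the disk group, database, file type, and
-- file number/incarnation pair
SELECT file#, name FROM v$datafile;
-- NAME might look like: +DATA/dw/datafile/users.259.627432977
```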

ASM filenames can be in one of six different formats. In the sections that follow, you’ll learn about the different formats and the context in which they can be used—either as a reference to an existing file, during a single-file creation operation, or during a multiple-file creation operation.

Fully Qualified Names

Fully qualified ASM filenames are used only when referencing an existing file. A fully qualified ASM filename has the format

image

where group is the disk group name, dbname is the database to which the file belongs, file type is the Oracle file type, tag is information specific to the file type, and the file.incarnation pair ensures uniqueness. Here is an example of an ASM file for the USERS tablespace:

image

The disk group name is +DATA, the database name is DW, it’s a datafile for the USERS tablespace, and the file number/incarnation pair 259.627432977 ensures uniqueness if you decide to create another ASM datafile for the USERS tablespace.

Numeric Names

Numeric names are used only when referencing an existing ASM file. This allows you to refer to an existing ASM file by only the disk group name and the file number/incarnation pair. The numeric name for the ASM file in the preceding section is

image

Alias Names

An alias can be used when either referencing an existing object or creating a single ASM file. Using the ALTER DISKGROUP ADD ALIAS command, you can create a more readable name for an existing or a new ASM file. The alias is distinguishable from a regular ASM filename because it does not end in a dotted pair of numbers (the file number/incarnation pair), as shown here:

image
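A hedged sketch of the ADD ALIAS command described above; the disk group, alias, and fully qualified name follow the chapter’s earlier USERS tablespace example:

```sql
-- Create a readable alias for an existing ASM datafile
ALTER DISKGROUP data
  ADD ALIAS '+DATA/dw/users.dbf'
  FOR '+DATA/dw/datafile/users.259.627432977';
```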

Alias with Template Names

An alias with a template can be used only when creating a new ASM file. Templates provide a shorthand for specifying a file type and a tag when creating a new ASM file. Here’s an example of an alias using a template for a new tablespace in the +DATA disk group:

image

The template DATAFILE specifies COARSE striping, MIRROR for a normal-redundancy group, and HIGH for a high-redundancy group; it is the default for a datafile. Because the name was not fully qualified, the ASM name for this disk group is as follows:

image

You can read more about ASM templates in the section “ASM File Types and Templates” later in this chapter.

Incomplete Names

An incomplete filename format can be used either for single-file or multiple-file creation operations. Only the disk group name is specified, and a default template is used depending on the type of file, as shown here:

image

Incomplete Names with Template

As with incomplete ASM filenames, an incomplete filename with a template can be used either for single-file or multiple-file creation operations. Regardless of the actual file type, the template name determines the characteristics of the file.

Even though a tablespace is created in the following example, the striping and mirroring characteristics of an online log file (fine striping) are used for the new tablespace instead of the attributes for the datafile (coarse striping):

image

ASM File Types and Templates

ASM supports all types of files used by the database except for operating system executables. Table 1-3 contains the complete list of ASM file types; the ASM File Type and Tag columns are those presented previously in the section “ASM Filename Formats.”

TABLE 1-3 ASM File Types

Image

The default ASM file templates referenced in the last column of Table 1-3 are presented in Table 1-4.

TABLE 1-4 ASM File Template Defaults

Image

When a new disk group is created, a set of ASM file templates copied from the default templates in Table 1-4 is saved with the disk group; as a result, individual template characteristics can be changed and apply only to the disk group in which they reside. In other words, the DATAFILE system template in disk group +DATA1 may have the default coarse striping, but the DATAFILE template in disk group +DATA2 may have fine striping. You can create your own templates in each disk group as needed.

When an ASM datafile is created with the DATAFILE template, by default the datafile is 100MB and autoextensible. The maximum size is 32767MB (32GB).

CERTIFICATION OBJECTIVE 1.04
Start Up and Shut Down ASM Instances

An ASM instance is started much like a database instance, except that the STARTUP command defaults to STARTUP MOUNT. Because there is no control file, database, or data dictionary to mount, the ASM disk groups are mounted instead of a database. The command STARTUP NOMOUNT starts up the instance but does not mount any ASM disks. In addition, you can specify STARTUP RESTRICT to temporarily prevent database instances from connecting to the ASM instance to mount disk groups.
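The startup variants described above look like this from a SQL*Plus session connected AS SYSASM; a sketch:

```sql
STARTUP;            -- same as STARTUP MOUNT: mounts the disk groups
STARTUP NOMOUNT;    -- starts the instance, mounts no disk groups
STARTUP RESTRICT;   -- blocks database instances from connecting
```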

Performing a SHUTDOWN command on an ASM instance will fail if any connected RDBMS instances exist; you need to shut down the dependent RDBMS instances first. The only exception to this is if you use the SHUTDOWN ABORT command on the ASM instance, which eventually forces all dependent databases to perform a SHUTDOWN ABORT after losing the connection with the ASM instance and generating an ORA-15064 error message.

Image

Use SHUTDOWN ABORT only when you have no other option. A SHUTDOWN ABORT literally kills all instance processes at the operating system level, and any pending database change activity is rolled back on restart. In effect, the restart performs the recovery work that a SHUTDOWN ABORT defers anyway; the difference is that with SHUTDOWN ABORT you could lose your database. Recovering using RMAN is possible but avoidable, and sometimes you could discover that whoever set up your backups might not have done it quite to the standard most DBAs would hope for.

For multiple ASM instances sharing disk groups, such as in a RAC environment, the failure of an ASM instance does not cause the database instances to fail. Instead, another ASM instance performs a recovery operation for the failed instance. Failures of connected RDBMS instances do not affect ASM instances.

EXERCISE 1-3
Stop ASM Instance with Active Connections

In this exercise, find out what happens when you attempt to stop an ASM instance with active database clients.

1. Query /etc/oratab for the name of the ASM instance:

image

2. Set the ORACLE_SID environment variable for the ASM instance:

image

3. Connect to the ASM instance:

image

4. Attempt to shut down the ASM instance:

image

Note that using SHUTDOWN ABORT will force the ASM instance to shut down but will eventually force an implicit SHUTDOWN ABORT on each attached database instance.

CERTIFICATION OBJECTIVE 1.05
Administer ASM Disk Groups

Using ASM disk groups benefits you in a number of ways: I/O performance is improved, availability is increased, and the ease with which you can add a disk to a disk group, or add an entirely new disk group, enables you to manage many more databases in the same amount of time. Understanding the components of a disk group as well as correctly configuring a disk group are important goals for a successful DBA.

This section delves more deeply into the details of the structure of a disk group. It also reviews the different types of administrative tasks related to disk groups and shows how disks are assigned to failure groups; how disk groups are mirrored; and how disk groups are created, dropped, and altered. You’ll see some raw disk devices used on a Linux server to demonstrate how disk groups are created and maintained in a hands-on lab exercise. You’ll also briefly review the EM Database Control interface to ASM; at the command line, you get an introduction to the asmcmd command-line utility that you can use to browse, copy, and manage ASM objects.

Disk Group Architecture

As defined earlier in this chapter, a disk group is a collection of physical disks managed as a unit. Every ASM disk, as part of a disk group, has an ASM disk name that is either assigned by the DBA or generated automatically when the disk is added to the disk group.

Files in a disk group are striped on the disks using either coarse striping or fine striping. Coarse striping spreads files in units of 1MB each across all disks and is appropriate for a system with a high degree of concurrent small I/O requests, such as an Online Transaction Processing (OLTP) environment. Alternatively, fine striping spreads files in units of 128KB and is appropriate for traditional data warehouse environments or OLTP systems with low concurrency; it minimizes response time for individual I/O requests.

Disk Group Mirroring and Failure Groups

Before defining the type of mirroring within a disk group, you must group disks into failure groups. A failure group is one or more disks within a disk group that share a common resource, such as a disk controller, whose failure would cause the entire set of disks to be unavailable to the group. In most cases, an ASM instance does not know the hardware and software dependencies for a given disk. Therefore, unless you specifically assign a disk to a failure group, each disk in a disk group is assigned to its own failure group.

Once the failure groups have been defined, you can define the mirroring for the disk group; the number of failure groups available within a disk group can restrict the type of mirroring available for the disk group. Three types of mirroring are available: external redundancy, normal redundancy, and high redundancy.

External Redundancy

External redundancy requires only one disk location and assumes that the disk is not critical to the ongoing operation of the database or that the disk is managed externally with high-availability hardware such as a RAID controller.

Normal Redundancy

Normal redundancy provides two-way mirroring and requires at least two failure groups within a disk group. The failure of one disk in a failure group causes no downtime for the disk group and no data loss, only a slight performance hit for queries against objects in the disk group; when all disks in the failure group are online, read performance is typically improved because the requested data is available on more than one disk.

High Redundancy

High redundancy provides three-way mirroring and requires at least three failure groups within a disk group. The failure of disks in up to two of the three failure groups is for the most part transparent to the database users, just as a single disk failure is with normal redundancy mirroring.

Mirroring is managed at a very low level: extents, not disks, are mirrored. In addition, each disk contains a mixture of both primary and mirrored (secondary and tertiary) extents. Although a slight amount of overhead is incurred in managing mirroring at the extent level, this approach provides the advantage of spreading the load from a failed disk across all remaining disks instead of concentrating it on a single disk.

Disk Group Dynamic Rebalancing

Whenever you change the configuration of a disk group—whether you are adding or removing a failure group or a disk within a failure group—dynamic rebalancing occurs automatically to proportionally reallocate data from other members of the disk group to the new member of the disk group. This rebalance occurs while the database is online and available to users. Any impact to ongoing database I/O can be controlled by adjusting the value of the initialization parameter ASM_POWER_LIMIT to a lower value.

Not only does dynamic rebalancing free you from the tedious and often error-prone task of identifying hot spots in a disk group, it also provides an automatic way to migrate an entire database from a set of slower disks to a set of faster disks while the entire database remains online. Faster disks are added as a new failure group in the existing disk group with the slower disks, and the automatic rebalance occurs. After the rebalance operations complete, the failure groups containing the slower disks are dropped, leaving a disk group with only fast disks. To make this operation even faster, both the ADD and DROP operations can be initiated within the same ALTER DISKGROUP command.
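As a sketch, such a one-step migration could look like the following; the disk group name, device paths, and disk names are illustrative, not taken from the original example:

```sql
-- Sketch only: disk group name, device paths, and disk names are
-- illustrative. The ADD and DROP occur in one command, so a single
-- rebalance operation migrates the data to the faster disks.
ALTER DISKGROUP data
  ADD DISK '/dev/raw/raw7' NAME data_fast1,
           '/dev/raw/raw8' NAME data_fast2
  DROP DISK data_slow1, data_slow2
  REBALANCE POWER 8;
```

Because both clauses execute in one ALTER DISKGROUP statement, ASM performs only one rebalance instead of two.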

As an example, suppose you want to create a new disk group with high redundancy to hold tablespaces for a new credit card authorization application. Using the view V$ASM_DISK, you can view all disks discovered using the initialization parameter ASM_DISKSTRING, along with the status of each disk (in other words, whether it is assigned to an existing disk group or is unassigned). Here is the command:

image
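As a sketch, the query might resemble the following; the column names come from the documented V$ASM_DISK view, and the output depends on your configuration:

```sql
SELECT group_number, disk_number, name, failgroup, path, state
FROM   v$asm_disk
ORDER  BY group_number, disk_number;
```

Disks with a GROUP_NUMBER of 0 have been discovered through ASM_DISKSTRING but are not yet assigned to any disk group.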

Out of the eight disks available for ASM, only four are assigned to two disk groups, DATA and RECOV, each in its own failure group. The disk group name can be obtained from the view V$ASM_DISKGROUP:

image
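A query along these lines (columns from the documented V$ASM_DISKGROUP view) retrieves the group names and redundancy types:

```sql
SELECT group_number, name, type, total_mb, free_mb
FROM   v$asm_diskgroup;
```

The TYPE column reports EXTERN, NORMAL, or HIGH for the disk group's redundancy level.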

Note that if you had a larger number of ASM disks and disk groups, you could join the two views on the GROUP_NUMBER column and filter the query result by GROUP_NUMBER. Also, you can see from V$ASM_DISKGROUP that both disk groups are normal redundancy groups consisting of two disks each.

Your first step is to create the disk group on the ASM instance:

image
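The creation command follows this general pattern; the failure group names and device paths below are illustrative:

```sql
-- Sketch only: device paths and failure group names are illustrative.
-- High redundancy requires at least three failure groups.
CREATE DISKGROUP data2 HIGH REDUNDANCY
  FAILGROUP fg1 DISK '/dev/raw/raw5'
  FAILGROUP fg2 DISK '/dev/raw/raw6'
  FAILGROUP fg3 DISK '/dev/raw/raw7';
```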

Looking at the dynamic performance views, you see the new disk group available in V$ASM_DISKGROUP and the failure groups in V$ASM_DISK:

image

Now that the configuration of the new disk group has been completed, you can create a tablespace in the new disk group from the database instance:

image

Because ASM files can be OMFs, you don’t need to specify any other characteristics when you create the tablespace.
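A minimal sketch of the tablespace creation, run from the database instance (the tablespace name is illustrative):

```sql
-- Run from the RDBMS instance, not the ASM instance. Because ASM
-- files can be Oracle Managed Files, naming the disk group is enough.
CREATE TABLESPACE ccdata DATAFILE '+DATA2';
```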

Disk Group Fast Mirror Resync

Mirroring the files in your disk groups improves performance and availability; when a failed disk in a disk group is repaired and brought back online, however, remirroring the entire disk can be time-consuming. In some cases a disk needs to be brought offline only temporarily, such as during a disk controller failure; the entire disk does not need remirroring, because only the data changed during the disk’s downtime needs to be resynced. For such cases, you can use the ASM fast mirror resync feature introduced in Oracle Database 11g.

To implement fast mirror resync, you set the time window within which ASM will not automatically drop the disk in the disk group when a transient planned or unplanned failure occurs. During the transient failure, ASM keeps track of all changed data blocks so that when the unavailable disk is brought back online, only the changed blocks need to be remirrored instead of the entire disk.

To set a time window for the DATA disk group, you must first set the compatibility level of the disk group to 11.1 or higher for both the RDBMS instance and the ASM instance (this needs to be done only once for the disk group):

image
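The attribute settings would follow this pattern (a sketch; advance compatible.asm before compatible.rdbms, since the ASM compatibility level must be at least as high as the RDBMS level):

```sql
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'   = '11.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';
```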

The side effect of using a higher compatibility level for the RDBMS and ASM instances is that only instances with a version of 11.1.0.0.0 or higher can access this disk group. Next, set the disk group attribute DISK_REPAIR_TIME, as in this example:

image
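Setting the attribute is a single command; the 2.5-hour window here is an illustrative value:

```sql
-- The value accepts units of h (hours) or m (minutes).
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '2.5h';
```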

The default disk repair time is 3.6 hours, which should be more than adequate for most planned and unplanned (transient) outages. Once the disk is back online, run this command to notify the ASM instance that the disk DATA_0001 is back online:

image
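As a sketch, the online command for the repaired disk looks like this (the disk name data_0001 comes from the running example):

```sql
ALTER DISKGROUP data ONLINE DISK data_0001;
-- Alternatively, bring every offlined disk in the group back at once:
-- ALTER DISKGROUP data ONLINE ALL;
```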

This command starts the background procedure to copy all changed extents on the remaining disks in the disk group to the disk DATA_0001 that is now back online.

Altering Disk Groups

Disks can be added to and dropped from a disk group; also, most characteristics of a disk group can be altered without re-creating the disk group or impacting user transactions on objects in the disk group.

When a disk is added to a disk group, a rebalance operation is performed in the background after the new disk is formatted for use in the disk group. As mentioned earlier in this chapter, the speed of the rebalance is controlled by the initialization parameter ASM_POWER_LIMIT.

Continuing with our example in the preceding section, suppose you decide to improve the I/O characteristics of the disk group DATA by adding the last available raw disk to the disk group, as follows:

image
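A sketch of the ADD DISK command; the device path, failure group name, and disk name are illustrative:

```sql
-- Sketch only: the device path and names are illustrative.
ALTER DISKGROUP data
  ADD FAILGROUP fg3 DISK '/dev/raw/raw8' NAME data_0002;
```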

The command returns immediately, and the formatting and rebalancing continue in the background. You can then check the status of the rebalance operation in the view V$ASM_OPERATION:

image

image
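The monitoring query can be as simple as the following sketch, using documented V$ASM_OPERATION columns:

```sql
SELECT group_number, operation, state, power, sofar,
       est_work, est_rate, est_minutes
FROM   v$asm_operation;
```

EST_MINUTES holds the estimated time remaining; an empty result set means no long-running disk group operation is in progress.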

Because the estimate for completing the rebalance operation is 16 minutes, you decide to allocate more resources to the rebalance operation and change the power limit for this particular rebalance operation:

image
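Raising the power limit for the in-flight operation is one command; the value 8 is illustrative:

```sql
ALTER DISKGROUP data REBALANCE POWER 8;
```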

Checking the status of the rebalance operation confirms that the estimated time to completion has been reduced to 4 minutes instead of 16:

image

About 4 minutes later, you check the status once more:

image

Finally, you can confirm the new disk configuration from the V$ASM_DISK and V$ASM_DISKGROUP views:

image

image

Note that the disk group DATA is still normal redundancy, even though it has three failure groups. However, the I/O performance of SELECT statements against objects in the DATA disk group is improved due to additional copies of extents available in the disk group.

Other disk group ALTER commands are listed in Table 1-5.

TABLE 1-5 ALTER DISKGROUP Commands

ALTER DISKGROUP Command             Description
ALTER DISKGROUP ... DROP DISK       Removes a disk from a failure group within a disk group and performs an automatic rebalance
ALTER DISKGROUP ... DROP ... ADD    Drops a disk from a failure group and adds another disk, all in the same command
ALTER DISKGROUP ... MOUNT           Makes a disk group available to all instances
ALTER DISKGROUP ... DISMOUNT        Makes a disk group unavailable to all instances
ALTER DISKGROUP ... CHECK ALL       Verifies the internal consistency of the disk group

Enterprise Manager Database Control and ASM Disk Groups

The Enterprise Manager (EM) Database Control can also be used to administer disk groups. For a database that uses ASM disk groups, the Disk Groups link under the Administration tab brings you to a login page for the ASM instance shown in Figure 1-7. Remember that authentication for an ASM instance uses operating system authentication only. Figure 1-8 shows the home page for the ASM instance.

FIGURE 1-7     EM Database Control ASM instance login page

Image

FIGURE 1-8     EM Database Control ASM instance home page

Image

After authentication with the ASM instance, you can perform the same operations that you performed previously in this chapter at the command line—mounting and dismounting disk groups, adding disk groups, adding or deleting disk group members, and so forth. Figure 1-9 shows the Disk Groups tab of the ASM administration page, whereas Figure 1-10 shows the statistics and options for the disk group DATA.

FIGURE 1-9     EM Database Control ASM disk group administration page

Image

FIGURE 1-10     EM Database Control ASM disk group statistics

Image

On the page shown in Figure 1-10, you can see that the new disk in the disk group is significantly smaller than the other disks in the group; this could hurt performance and waste disk space within the disk group. To remove a failure group using EM Database Control, select the member disk’s check box and click the Remove button.

Other EM Database Control ASM-related pages show I/O response time for the disk group, the templates defined for the disk group, the initialization parameters in effect for this ASM instance, and more.

Using the asmcmd Command

The asmcmd utility, introduced in Oracle 10g Release 2, is a command-line utility that provides an easy way for you to browse and maintain objects within ASM disk groups, using a command set similar to Linux shell commands such as ls and mkdir.

The hierarchical nature of objects maintained by the ASM instance lends itself to a command set similar to what you would use to browse and maintain files in a Linux file system.

Before you can use asmcmd, you must ensure that the environment variables ORACLE_BASE, ORACLE_HOME, and ORACLE_SID are set to point to the ASM instance; for the ASM instance used in this chapter, these variables are set as follows:

image
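As a sketch, on a Linux system these variables might be set as follows; the ORACLE_BASE path and Oracle home directory are assumptions that vary by installation, while the SID +ASM matches the default ASM instance name used in this chapter:

```shell
# Illustrative paths -- adjust ORACLE_BASE and ORACLE_HOME to match
# your installation. ORACLE_SID must name the ASM instance (+ASM).
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
export ORACLE_SID=+ASM
```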

In addition, you must be logged into the operating system as a user in the dba group, since the asmcmd utility connects to the database with SYSDBA privileges. The operating system user is usually oracle but can be any other user in the dba group.

You can use the asmcmd utility one command at a time by using the format asmcmd command, or you can start asmcmd interactively by typing just asmcmd at the Linux shell prompt. To get a list of available commands, type help at the ASMCMD> prompt. Table 1-6 lists the asmcmd commands and a brief description of their purpose; the asmcmd commands available only in Oracle Database 11g are noted in the middle column with a Y for yes.

When you start the asmcmd utility, you start out at the root node of the ASM instance’s file system; unlike in a Linux file system, the root node is designated by a plus sign (+) instead of a leading forward slash (/), although subsequent directory levels use a forward slash.

TABLE 1-6 asmcmd Command Summary

asmcmd Command   11g Only   Description
cd                          Change the directory to the specified directory.
cp               Y          Copy files between ASM disk groups, both in the same instance and in remote instances.
du                          Recursively display total disk space usage for the current directory and all subdirectories.
exit                        Terminate asmcmd and return to the operating system shell prompt.
find                        Find all occurrences of the name (using wildcards as well) starting with the specified directory.
help                        List the asmcmd commands.
ls                          List the contents of the current directory.
lsct                        List information about current ASM client databases.
lsdg                        List all disk groups and their attributes.
lsdsk            Y          List all disks visible to this ASM instance.
md_backup        Y          Create a metadata backup script for specified disk groups.
md_restore       Y          Restore disk groups from a backup.
mkalias                     Create an alias for system-generated ASM filenames.
mkdir                       Create an ASM directory.
pwd                         Display the current ASM directory.
remap            Y          Repair a range of corrupted or damaged physical blocks on a disk.
rm                          Remove ASM files or directories.
rmalias                     Remove an ASM alias, but not the target of the alias.

In this example, you start asmcmd and query the existing disk groups, along with the total disk space used within all disk groups:

image

As with the Linux shell ls command, you can append -l to get a more detailed listing of the objects retrieved by the command. The ls command shows the three disk groups in the ASM instance used throughout this chapter: +DATA, +DATA2, and +RECOV.

Note also that the du command shows only the used disk space and total disk space used across mirrored disk groups; to get the amount of free space in each disk group, use the lsdg command instead.
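A short interactive session using these commands might look like the following sketch (output omitted, since it depends on your disk groups):

```
ASMCMD> ls -l
ASMCMD> du +DATA
ASMCMD> lsdg
```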

In this example, you want to find all files that have the string user in the filename:

image

Note the line with +DATA/purch/users.dbf: the find command finds all ASM objects; in this case, it finds an alias as well as datafiles that match the pattern.

EXERCISE 1-4
Use the asmcmd Utility to Create a Backup of the SPFILE

In this exercise, you will locate the SPFILE in the ASM file hierarchy and make a backup copy to a file system external to the ASM instance, using the cp command to back up the database’s SPFILE to the /tmp directory on the host’s file system.

1. Start at the root of the file system and navigate to the DATA disk group for the dw instance:

image

2. Use the cp command to copy the SPFILE:

image
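Combining the two steps, the session might resemble this sketch; the SPFILE name spfiledw.ora and its location are assumptions based on typical naming for an instance named dw, so use ls to confirm the actual name first:

```
ASMCMD> cd +DATA/dw
ASMCMD> ls
ASMCMD> cp spfiledw.ora /tmp/spfiledw.ora.bak
```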

This exercise shows that all database files for the database dw are stored within the ASM file hierarchy. They appear to reside on a traditional host file system but are actually managed by ASM, which provides built-in performance and redundancy features (optimized for use with Oracle Database 11g) and makes the DBA’s life a bit easier when it comes to datafile management.

CERTIFICATION SUMMARY

This chapter started with a review of the Oracle Database architecture. Some of this material has been covered in previous coursework, but a refresher course is always helpful for day-to-day database administration. More importantly, understanding the basic Oracle tablespace architecture is a prerequisite for understanding how ASM disk groups will store and manage Oracle tablespaces. In addition, understanding the basic Oracle background process architecture will dovetail nicely into a discussion of the ASM-related background processes available in both an ASM instance and an RDBMS instance.

Next, the chapter covered how to create an ASM instance as well as how to set the various ASM-related initialization parameters, preferably stored in an SPFILE. As with an RDBMS instance, a number of dynamic performance views contain information about ASM disk groups, individual disks within disk groups, and files stored within each disk group. The dynamic performance view V$ASM_FILE details the files within a disk group, such as RMAN backupsets, online redo log files, archived redo log files, and datafiles.

The chapter also showed you how to start up and shut down an ASM instance. As a DBA, you must be cognizant of the dependencies between the ASM instance and any RDBMS instances that are connected to disk groups in the ASM instance when you shut down and start up an ASM instance.

Finally, the chapter provided an in-depth overview of ASM disk group administration. These administrative tasks include how to most effectively use the three types of mirroring available in an ASM disk group: external, normal, and high redundancy. In addition, it covered a number of features available starting in Oracle Database 11g, such as disk group fast mirror resync, which minimizes the amount of time required to resync a disk group when a disk in a disk group is offline or otherwise unavailable for reasons other than media failure. The command-line utility asmcmd provides a view of the structure of the ASM disk groups using familiar Unix commands such as ls, mkdir, and cp.

Image TWO-MINUTE DRILL

Understanding Database Architecture and ASM

Image Oracle database logical structures include tablespaces, segments, extents, and blocks, in order of increasing granularity.

Image At a minimum, a database must have a SYSTEM and a SYSAUX tablespace.

Image Oracle database physical structures include datafiles, redo log files, control files, archived log files, initialization parameter files, alert/trace files, and backup files.

Image Oracle memory structures include the System Global Area (SGA), Program Global Area (PGA), and the software code area.

Image The primary Oracle background processes are SMON, PMON, DBWn, LGWR, ARCn, CKPT, and RECO.

Image The background processes that support ASM instances are RBAL and ARBn; databases that use ASM disks have the ASMB and RBAL background processes.

Describe ASM

Image ASM requires a dedicated instance for managing shared disks, called, not surprisingly, an ASM instance.

Image Automatic rebalancing of disks in an ASM disk group happens in the background when disks are added to or removed from an ASM disk group.

Image The RBAL background process in an ASM instance coordinates disk activity for disk groups; the ARBn processes perform the actual extent movement between the disks in a disk group.

Image The ASMB background process in an RDBMS instance performs the communication between the database and the ASM instance; the RBAL background process performs the opening and closing of the disks in the disk group for the RDBMS instance.

Image An ASM instance has an initialization parameter file and a password file, but because an ASM instance has no datafiles, it has no data dictionary; all connections to an ASM instance use operating system authentication.

Image The new SYSASM privilege in an ASM instance facilitates the separation of database administration and storage administration in an ASM instance.

Set Up Initialization Parameter Files for ASM and Database Instances

Image For an ASM instance, the initialization parameter INSTANCE_TYPE is ASM; for an RDBMS instance, the value is RDBMS.

Image The DB_UNIQUE_NAME is +ASM for an ASM instance.

Image ASM_POWER_LIMIT controls the speed of rebalancing operations and ranges from 1 to 11.

Image ASM_PREFERRED_READ_FAILURE_GROUPS contains a list of preferred failure groups for an RDBMS instance when you use clustered ASM instances.

Image All ASM-related dynamic performance views are available in both the ASM instance and the RDBMS instance except for V$ASM_FILE, V$ASM_OPERATION, and V$ASM_ALIAS.

Image A fully qualified ASM filename has the format +group/dbname/filetype/tag.file.incarnation.

Image Numeric ASM filenames are valid only for existing ASM files.

Image ASM templates are a shorthand for specifying redundancy types and striping in an ASM disk group.

Image Redundancy types for an ASM disk group are external, normal, and high.

Image ASM disk group striping can be fine or coarse.

Start Up and Shut Down ASM Instances

Image An ASM instance is in the MOUNT state when you use the STARTUP command. An ASM instance cannot be in the OPEN state like an RDBMS instance.

Image Using STARTUP RESTRICT in an ASM instance prevents database instances from accessing disk groups controlled by the ASM instance.

Image Performing a SHUTDOWN operation on an ASM instance performs a SHUTDOWN operation on all connected RDBMS instances.

Administer ASM Disk Groups

Image Coarse striping spreads files in units of 1MB each across all disks; fine striping spreads files in units of 128KB.

Image Coarse striping is appropriate for environments with a high degree of concurrent small I/O requests, such as an OLTP environment. Fine striping is appropriate for a data warehouse environment and minimizes response time for individual I/O requests.

Image A failure group is one or more disks in a disk group that share a common resource such as a disk controller.

Image External redundancy is appropriate for noncritical disk groups or disks that are externally managed by a RAID controller.

Image Normal redundancy provides two-way mirroring and requires at least two failure groups within a disk group.

Image High redundancy provides three-way mirroring with a minimum of three failure groups within a disk group.

Image Mirroring is managed at a very low level. Extents are mirrored and not the disks.

Image Each disk in a disk group has a mixture of both primary and mirrored extents.

Image Dynamic rebalancing occurs automatically within a disk group to proportionally reallocate data from other disk group members to the new disk group member.

Image ASM files are typically Oracle Managed Files (OMF) but can be manually managed.

Image Fast mirror resync, available starting in Oracle Database 11g, speeds disk remirroring when a disk fails for reasons such as a controller failure. Only the changed data needs to be resynced when the disk is brought back online.

Image The default value for disk repair time is 3.6 hours and is controlled by the initialization parameter DISK_REPAIR_TIME.

Image You can monitor disk rebalance operations using the dynamic performance view V$ASM_OPERATION.

Image You can use the asmcmd command-line utility to browse and maintain objects within ASM disk groups.

Image The asmcmd commands new to Oracle Database 11g are cp, lsdsk, md_backup, md_restore, and remap.

SELF TEST

The following questions will help you measure your understanding of the material presented in this chapter. There might be more than one correct answer. Read all the choices carefully. Choose all correct answers for each question:

Understanding Database Architecture and ASM

1. Which of the following tablespaces are required in an installation of Oracle Database 11g? (Choose all that apply.)

A. USERS

B. SYSTEM

C. SYSAUX

D. TEMP

E. UNDOTBS1

F. RMAN

2. What is the maximum number of database writer processes (DBWn) in an Oracle database instance?

A. 1

B. 10

C. 20

D. None; database writer processes exist only in an ASM instance

Describe ASM

3. Which of the following background processes exist in both an ASM instance and an RDBMS instance, and also support ASM disk groups? (Choose all that apply.)

A. ASMB

B. RBAL

C. ARBn

D. LGWR

E. ARCn

4. At which level does ASM perform mirroring?

A. At the database object level

B. At the tablespace level

C. At the disk volume level

D. ASM does not perform mirroring; it only supports disk hardware that is already RAID-enabled

Set Up Initialization Parameter Files for ASM and Database Instances

5. What is the value for INSTANCE_TYPE in the init.ora file or SPFILE for an ASM instance?

A. RDBMS

B. ASM

C. +ASM

D. NOMOUNT

E. There is no such initialization parameter INSTANCE_TYPE

6. You connect to an ASM instance with connected RDBMS instances as SYSOPER and run this command:

image

What happens?

A. The ASM instance shuts down immediately, and all connected RDBMS instances shut down with the ABORT option.

B. The ASM instance shuts down immediately, and all connected RDBMS instances shut down with the IMMEDIATE option.

C. The command is ignored, since the SYSOPER privilege does not include starting up or shutting down an ASM instance.

D. The ASM instance is not shut down because there is at least one connected RDBMS instance.

7. The value of the initialization parameter ASM_DISKGROUPS on your ASM instance is the following:

image

What happens when the ASM instance starts? (Choose the best answer.)

A. Nothing happens until you issue ALTER DISKGROUP MOUNT commands.

B. The ASM instance automatically mounts the disk groups, and you can manually mount any disk groups not in the list.

C. ASM_DISKGROUPS is only valid for RDBMS instances.

D. The disk devices DATA, RECOV, and DATA2 are available to create new disk groups.

8. Which of the following parameters are required for an ASM instance? (Choose all that apply.)

A. INSTANCE_NAME

B. INSTANCE_TYPE

C. ASM_DISKGROUPS

D. ASM_POWER_LIMIT

E. ASM_PREFERRED_READ_FAILURE_GROUPS

Start Up and Shut Down ASM Instances

9. What happens to the state of an ASM instance when a connected RDBMS instance fails?

A. There is no effect on the ASM instance.

B. The ASM instance fails.

C. The ASM instance shuts down automatically.

D. All other connected RDBMS instances are paused until the failed RDBMS instance restarts.

E. The ASM instance performs instance recovery on the failed RDBMS instance while continuing to service other RDBMS clients.

10. Which of the following states are valid for an ASM instance? (Choose all that apply.)

A. OPEN

B. MOUNT

C. NOMOUNT

D. RESTRICT

Administer ASM Disk Groups

11. What is the difference between coarse striping and fine striping for files in a disk group?

A. Coarse striping spreads files in units of 1MB each across all disks, and fine striping spreads files in units of 64KB.

B. Coarse striping spreads files in units of 1MB each across all disks, and fine striping spreads files in units of 128KB.

C. Coarse striping spreads files in units of 64MB each across all disks, and fine striping spreads files in units of 1MB.

D. Coarse striping spreads files in units of 4MB each across all disks, and fine striping spreads files in units of 128KB.

12. In which situation would you use fine striping for an object in an ASM disk group? (Choose all that apply.)

A. A traditional data warehouse environment

B. An OLTP system with low concurrency

C. An OLTP system with high concurrency

D. A hybrid data warehouse and OLTP environment

E. An environment with a high degree of concurrent small I/O requests

13. When is it appropriate to use an incomplete ASM filename? (Choose all that apply.)

A. For single-file creation operations

B. For multiple-file creation operations

C. For referencing single database objects

D. For referencing the same object shared by multiple non-RAC RDBMS instances

14. If you want to use two-way mirroring (normal redundancy) for a new disk group, how many failure groups do you need?

A. One failure group with two disks on the same controller.

B. Two failure groups.

C. Three or more failure groups.

D. Two-way mirroring is supported only at the operating system level using a RAID hardware or software mirroring solution.

15. Which of the following statements are true about ASM objects and alias names? (Choose all that apply.)

A. You can use the ALTER DISKGROUP ADD ALIAS command to add a more readable alias for an existing object.

B. You can use an alias to reference an existing object.

C. You can use an alias to create a new object.

D. You can use an alias when creating multiple objects.

E. Alias names typically contain a dotted pair of numbers at the end.

F. You can use a single alias to reference a group of existing objects.

16. What are the default characteristics of an ASM file created with no options?

A. The ASM object is striped but not mirrored.

B. The ASM object is striped and mirrored by default.

C. The ASM object is mirrored but not striped.

D. The ASM object is neither striped nor mirrored.

LAB QUESTION

For the disk group DATA2 created earlier in this chapter in the section “Disk Group Dynamic Rebalancing,” remove one of the failure groups. Disk space is at a premium, and you don’t need four-way redundancy for the disk group.

SELF TEST ANSWERS

Understanding Database Architecture and ASM

1. Image B and C. Both the SYSTEM and SYSAUX tablespaces are required.

Image A, D, E, and F are wrong. While the USERS tablespace is highly desirable for placing application tables in its own tablespace, it is not required; TEMP, USERS, and UNDOTBS1 are created in a default installation of Oracle Database 11g. No RMAN tablespace is created, nor is it required in an installation of Oracle Database 11g.

2. Image C. The database writer processes are DBW0 through DBW9, and if needed, DBWa through DBWj on most operating system platforms.

Image A, B, and D are wrong. Database writers exist only in an RDBMS instance.

Describe ASM

3. Image B. Only the RBAL process exists in both ASM and RDBMS instances for ASM operations. RBAL coordinates the disk activity for disk groups in an ASM instance. RBAL performs the opening and closing of the disks in a disk group in an RDBMS instance, on behalf of the database.

Image A is wrong because ASMB exists only in an RDBMS instance that uses ASM disks. C is wrong because ARBn only exists in an ASM instance and performs the extent movement between disks in disk groups. D is wrong because LGWR only exists in an RDBMS instance and is not ASM related; it writes redo information to the online redo log files. E is wrong because ARCn only exists in an RDBMS instance and is not ASM related; it writes online redo log files to archive redo log files when the database is in ARCHIVELOG mode.

4. Image A. ASM mirrors database objects only.

Image B, C, and D are wrong. ASM mirrors database objects to provide the flexibility to mirror or stripe each database object differently depending on their type. ASM does not need to mirror a given object if an underlying object is already mirrored by RAID hardware or the operating system.

Set Up Initialization Parameter Files for ASM and Database Instances

5. Image B. As you might expect, INSTANCE_TYPE has a value of ASM for an ASM instance.

Image A, C, D, and E are wrong. A is only valid for an RDBMS (database) instance. C is the value for DB_UNIQUE_NAME in an ASM instance. D is an option to the STARTUP command.

6. Image D. All connected RDBMS instances must be shut down before you can shut down an ASM instance with the IMMEDIATE option. If you stop an ASM instance with the ABORT option, all connected RDBMS instances are stopped.

Image A is wrong because RDBMS instances shut down with ABORT only if the ASM instance shuts down with the ABORT option or the ASM instance crashes. B is wrong because you must explicitly shut down connected RDBMS instances first. C is wrong because the SYSOPER privilege, while not as powerful as the SYSDBA or SYSASM privilege, does have the power to start and stop ASM instances.

7. Image B. The ASM instance automatically mounts the specified disk groups, and you can manually mount any disk groups not in the list.

Image A is incorrect, because ASM_DISKGROUPS facilitates automatic mounting of the specified disk groups at startup. C is incorrect because ASM_DISKGROUPS is only valid for ASM instances. D is incorrect because the parameter ASM_DISKGROUPS contains existing disk groups, not raw devices available for disk groups.

8. Image B. Only the INSTANCE_TYPE parameter is required, and its value must be ASM.

Image A, C, D, and E are wrong. ASM_DISKGROUPS can be empty, but then you must mount disk groups manually after starting an ASM instance. ASM_POWER_LIMIT defaults to 1 if it is not set; ASM_PREFERRED_READ_FAILURE_GROUPS, new to Oracle Database 11g, specifies a preferred failure group that is closest to the instance’s node to improve performance in a clustered ASM environment.

Start Up and Shut Down ASM Instances

9. Image A. The ASM instance continues to serve other RDBMS clients. When the failed RDBMS client restarts and recovers, the ASM instance will be ready to service the recovered RDBMS instance.

Image B, C, D, and E are wrong. The failure of one or more connected RDBMS instances does not affect the status of an ASM instance. An ASM instance does not perform instance recovery for any connected RDBMS instances.

10. B, C, and D. You can put the ASM instance into the NOMOUNT mode; the instance is started but no ASM disks are mounted. The MOUNT state mounts all ASM disks in the ASM_DISKGROUPS initialization parameter; you can use STARTUP RESTRICT to temporarily prevent RDBMS instances from accessing mounted disk groups.

A is wrong. ASM instances cannot be in the OPEN state as an RDBMS instance can.
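The three valid startup modes can be sketched as follows (comments summarize the effect of each):

```sql
STARTUP NOMOUNT    -- instance started, no disk groups mounted
ALTER DISKGROUP ALL MOUNT;   -- mount disk groups later by hand

STARTUP MOUNT      -- mounts the disk groups listed in ASM_DISKGROUPS

STARTUP RESTRICT   -- mounts disk groups but blocks RDBMS instance access
```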

Administer ASM Disk Groups

11. B. Coarse striping spreads files in units of 1MB each across all disks, and fine striping spreads files in units of 128KB. Remember that striping occurs at the database object level, not at the file or disk failure group level.

A, C, and D are wrong. None of these striping combinations are available for files in an ASM disk group.
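Because striping is set per object type through disk group templates, you can inspect and change it as in this sketch (the disk group name is hypothetical):

```sql
-- Show the default striping (COARSE or FINE) for each file type
SELECT name, stripe FROM v$asm_template;

-- Switch a template to fine striping for a given disk group
ALTER DISKGROUP data ALTER TEMPLATE onlinelog ATTRIBUTES (FINE);
```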

12. A and B. Fine striping is appropriate for traditional data warehouse environments or OLTP systems with low concurrency; this minimizes response time for individual I/O requests.

C, D, and E are wrong. Coarse striping is more appropriate for systems with a high degree of concurrent small I/O requests. A hybrid data warehouse and OLTP environment would indicate a mixture of fine and coarse striping, depending on the object. Remember that you can individually stripe different object types within the same disk group. For example, you may have a tablespace used for OLTP transactions striped as coarse, and another tablespace used for a data warehouse star schema striped as fine.

13. A and B. You can use an incomplete ASM filename when creating single or multiple files. For example, you can create a tablespace and specify only the disk group name; the ASM instance uses OMF to fully qualify the internal name for the tablespace within the disk group.

C is wrong because you can't resolve a single database object with an incomplete name such as a disk group name. D is wrong because you can share a disk group between RDBMS instances in a RAC environment, but not between standalone RDBMS instances.
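A minimal sketch of an incomplete ASM filename (the tablespace and disk group names are hypothetical):

```sql
-- Only the disk group is named; OMF generates the fully
-- qualified filename inside +DATA
CREATE TABLESPACE users2 DATAFILE '+DATA';
```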

14. B. A disk group with two-way mirroring requires two failure groups.

A is wrong because two disks on a single controller share a single point of failure and will not support two-way mirroring. C is wrong because three or more failure groups is appropriate for high redundancy disk groups. D is wrong because you can use operating system RAID solutions for any kind of ASM disk group mirroring solution, although typically you will use operating system or hardware RAID solutions for disk groups defined with external redundancy.
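A two-way (normal redundancy) disk group with one failure group per controller might be created as in this sketch; the disk group, failure group, and device names are all hypothetical:

```sql
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP ctlr1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
  FAILGROUP ctlr2 DISK '/dev/raw/raw3', '/dev/raw/raw4';
```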

15. A, B, and C. You use the ALTER DISKGROUP ADD ALIAS command to add an alias for an existing object; you can use an alias to reference an existing object or when you create a new object.

D is wrong because you can create an alias only for a single object. E is wrong because aliases do not end in a dotted pair of numbers; you will typically create a more readable alias for a fully qualified object name that already contains a dotted pair of numbers at the end. F is wrong because a single alias can reference only a single database object within a disk group.
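For example, a readable alias for a system-generated filename might be added as follows; the fully qualified name and the alias itself are hypothetical:

```sql
-- The alias has no dotted pair of numbers at the end;
-- the underlying OMF name does
ALTER DISKGROUP data ADD ALIAS '+DATA/purch/users2.dbf'
  FOR '+DATA/purch/datafile/users2.317.622346873';
```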

16. B. By default, all ASM objects are striped and mirrored; you cannot disable striping, but you can disable mirroring by using the EXTERNAL REDUNDANCY clause when you create a disk group.

A, C, and D are wrong. You can disable mirroring, but striping always occurs in a disk group for performance.
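A sketch of disabling ASM mirroring when the storage array already provides RAID protection (the disk group and device names are hypothetical); note that striping still occurs:

```sql
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/raw/raw1', '/dev/raw/raw2';
```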

LAB ANSWER

Disk space is tight, so you don't need four members for the DATA2 disk group. Run a query that joins V$ASM_DISKGROUP and V$ASM_DISK to confirm the list of ASM disks that compose the DATA2 disk group:

[query listing omitted]
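The original listing is not reproduced in this extraction; a query along these lines (the selected columns are illustrative) joins the two views on their group number:

```sql
SELECT g.name AS group_name, d.name AS disk_name, d.path
FROM   v$asm_diskgroup g
JOIN   v$asm_disk d ON g.group_number = d.group_number
WHERE  g.name = 'DATA2';
```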

For a high-redundancy disk group, only three failure groups are necessary, so you drop the disk group and re-create it with only three members:

[command listing omitted]
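The drop step might look like the following sketch (the clause shown is needed only when the group holds database objects beyond its own metadata):

```sql
DROP DISKGROUP data2 INCLUDING CONTENTS;
```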

If the disk group has any database objects other than disk group metadata, you have to specify the INCLUDING CONTENTS clause in the DROP DISKGROUP command. This is an extra safeguard to make sure that disk groups with database objects are not accidentally dropped. Here is the command to re-create the disk group with three disks:

[command listing omitted]
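The re-created group needs three failure groups for high redundancy; a sketch with hypothetical device paths:

```sql
CREATE DISKGROUP data2 HIGH REDUNDANCY
  FAILGROUP fg1 DISK '/dev/raw/raw5'
  FAILGROUP fg2 DISK '/dev/raw/raw6'
  FAILGROUP fg3 DISK '/dev/raw/raw7';
```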
