9.7. Administering ASM Disk Groups

Using ASM disk groups benefits you in a number of ways:

  • I/O performance is improved.

  • Availability is increased.

  • The ease of adding a disk to a disk group, or of adding an entirely new disk group, enables you to manage many more databases in the same amount of time.

Understanding the components of a disk group and configuring disk groups correctly are important goals for a successful DBA.

In the following sections, we will delve more deeply into the details of the structure of a disk group. Also, we will review the different types of administrative tasks related to disk groups and show how disks are assigned to failure groups, how disk groups are mirrored, and how disk groups are created, dropped, and altered. We will also briefly review the Enterprise Manager (EM) Database Control interface to ASM.

9.7.1. Understanding Disk Group Architecture

As defined earlier in this chapter, a disk group is a collection of physical disks managed as a unit. Every ASM disk, as part of a disk group, has an ASM disk name that is either specified by the DBA or generated automatically when the disk is added to the disk group.

Files in a disk group are striped on the disks using either coarse striping or fine striping. Coarse striping spreads files in units of 1MB each across all disks. Coarse striping is appropriate for a system with a high degree of concurrent small I/O requests such as an online transaction processing (OLTP) environment. Alternatively, fine striping spreads files in units of 128KB and is appropriate for traditional data warehouse environments or OLTP systems with low concurrency and improves response time for individual I/O requests.
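The striping granularity applied to each ASM file type is determined by the disk group's templates. One way to confirm which file types use coarse or fine striping is to query the V$ASM_TEMPLATE view from the ASM instance; this query is a sketch, and the exact template names vary by release:

```sql
SQL> select name, stripe, redundancy
  2    from v$asm_template
  3   where group_number = 1;
```

File types such as control files and online redo logs typically default to fine striping, while datafiles default to coarse striping.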

9.7.2. Understanding Failure Groups and Disk Group Mirroring

Before defining the type of mirroring within a disk group, you must group disks into failure groups. A failure group is one or more disks within a disk group that share a common resource, such as a disk controller, whose failure would cause the entire set of disks to be unavailable to the group. In most cases, an ASM instance does not know the hardware and software dependencies for a given disk. Therefore, unless you specifically assign a disk to a failure group, each disk in a disk group is assigned to its own failure group.

Once the failure groups have been defined, you can define the mirroring for the disk group; the number of failure groups available within a disk group can restrict the type of mirroring available for the disk group. The following three types of mirroring are available:


External redundancy

External redundancy requires only one failure group and assumes that the disk is not critical to the ongoing operation of the database or that the disk is managed externally with high-availability hardware such as a RAID controller.


Normal redundancy

Normal redundancy provides two-way mirroring and requires at least two failure groups within a disk group. The failure of one of the disks in a failure group does not cause any downtime for the disk group or any data loss other than a slight performance hit for queries against objects in the disk group.


High redundancy

High redundancy provides three-way mirroring and requires at least three failure groups within a disk group. The failure of disks in two out of the three failure groups is, for the most part, transparent to database users, just as the failure of a single failure group is with normal redundancy mirroring.

Mirroring is managed at a very low level; extents, not disks, are mirrored. In addition, each disk contains a mixture of primary and mirrored (secondary and, for high redundancy, tertiary) extents. While there is a slight overhead incurred for managing mirroring at the extent level, it provides the advantage of spreading the load from a failed disk across all other disks instead of onto a single disk.

9.7.3. Understanding Disk Group Dynamic Rebalancing

Whenever the configuration of a disk group changes, whether it is adding or removing a failure group or a disk within a failure group, dynamic rebalancing occurs automatically to proportionally reallocate data from other members of the disk group to the new member of the disk group. This rebalance occurs while the database is online and available to users; any impact to ongoing database I/O can be controlled by adjusting the value of the initialization parameter ASM_POWER_LIMIT to a lower value.
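For example, to reduce the impact of an in-flight rebalance on database I/O, you can lower the rebalance speed at the ASM instance; in Oracle 10g, the valid range for ASM_POWER_LIMIT is 0 (rebalancing paused) through 11:

```sql
SQL> alter system set asm_power_limit = 2;

System altered.
```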

Not only does dynamic rebalancing free you from the tedious and often error-prone task of identifying hot spots in a disk group, it also provides an automatic way to migrate an entire database from a set of slower disks to a set of faster disks while the entire database remains online during the entire operation. The faster disks are added as two or more new failure groups in an existing disk group and the automatic rebalance occurs. The failure groups containing the slower disks are dropped, leaving a disk group with only fast disks. To make this operation even faster, both the ADD and DROP operations can be initiated within the same ALTER DISKGROUP command.
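As a sketch of this migration technique (the paths /dev/raw/raw10 and /dev/raw/raw11 and the disk names are hypothetical), a single command adds the faster disks as new failure groups and drops the slower disks, triggering only one rebalance:

```sql
SQL> alter diskgroup data1
  2    add failgroup fgfast1 disk '/dev/raw/raw10' name d1fast1
  3        failgroup fgfast2 disk '/dev/raw/raw11' name d1fast2
  4    drop disk data1_0000, data1_0001;
```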

9.7.4. Creating and Deleting Disk Groups

Sometimes you may want to create a new disk group with high redundancy to hold tablespaces for an existing application. Using the view V$ASM_DISK, you can view all disks discovered using the initialization parameter ASM_DISKSTRING along with the status of each disk, in other words, whether it is assigned to an existing disk group or it is unassigned. Here is how you do that:

SQL> select group_number, disk_number, name,
  2    failgroup, create_date, path from v$asm_disk;

GROUP_ DISK_
NUMBER NUMBER NAME       FAILGROUP  CREATE_DA PATH
------ ------ ---------- ---------- --------- ---------
     0     0                                 /dev/raw/
                                              raw6
     0     1                                 /dev/raw/
                                              raw5
     0     2                                 /dev/raw/
                                              raw4
     0     3                                 /dev/raw/
                                              raw3
     1     1 DATA1_0001 DATA1_0001 18-APR-04 /dev/raw/
                                              raw2
     1     0 DATA1_0000 DATA1_0000 18-APR-04 /dev/raw/
                                              raw1

6 rows selected.

SQL>

Out of the six disks available for ASM, only two of them are assigned to a single disk group, each in its own failure group. You can obtain the disk group name from the view V$ASM_DISKGROUP, as seen here:

SQL> select group_number, name, type, total_mb, free_mb
  2    from v$asm_diskgroup;

GROUP_NUMBER NAME         TYPE     TOTAL_MB    FREE_MB
------------ ------------ ------ ---------- ----------
           1 DATA1        NORMAL     16378     14024

SQL>

Note that if you had a number of ASM disks and disk groups, you could have joined the two views on the GROUP_NUMBER column and filtered the query result by GROUP_NUMBER. Also, you see from V$ASM_DISKGROUP that the disk group DATA1 is a NORMAL REDUNDANCY group consisting of two disks.
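For instance, a join of the two views restricted to disk group 1 pairs the disk group with its member disks; this query is a sketch using the same columns shown above:

```sql
SQL> select adg.name dg_name, ad.name disk_name, ad.path
  2    from v$asm_diskgroup adg join v$asm_disk ad
  3      on adg.group_number = ad.group_number
  4   where adg.group_number = 1;
```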

Your first step is to create the disk group:

SQL> create diskgroup data2 high redundancy
   2 failgroup fg1 disk '/dev/raw/raw3' name d2a
   3 failgroup fg2 disk '/dev/raw/raw4' name d2b
   4 failgroup fg3 disk '/dev/raw/raw5' name d2c
   5 failgroup fg4 disk '/dev/raw/raw6' name d2d;

Diskgroup created.

SQL>

Looking at the dynamic performance views, you see the new disk group available in V$ASM_DISKGROUP and the failure groups in V$ASM_DISK:

SQL> select group_number, name, type, total_mb, free_mb
  2   from v$asm_diskgroup;

GROUP_NUMBER NAME        TYPE     TOTAL_MB    FREE_MB
------------ ------------ ------ ---------- ----------
           1 DATA1        NORMAL     16378     14024
           2 DATA2        HIGH       24572     24420

SQL> select group_number, disk_number, name,
  2     failgroup, create_date, path from v$asm_disk;

GROUP_ DISK_
NUMBER NUMBER NAME       FAILGROUP  CREATE_DA PATH
------ ------ ---------- ---------- --------- ---------
     2      3 D2D        FG4        11-MAY-04 /dev/raw/
                                              raw6
     2      2 D2C        FG3        11-MAY-04 /dev/raw/
                                              raw5
     2      1 D2B        FG2        11-MAY-04 /dev/raw/
                                              raw4
     2      0 D2A        FG1        11-MAY-04 /dev/raw/
                                              raw3
     1      1 DATA1_0001 DATA1_0001 18-APR-04 /dev/raw/
                                              raw2
     1      0 DATA1_0000 DATA1_0000 18-APR-04 /dev/raw/
                                              raw1

6 rows selected.

When you create a disk group and add a disk, you must specify FORCE if the disk has been previously used as part of a disk group. In the following example, the disk /dev/raw/raw4 was previously used as part of a disk group, so you must specify FORCE:

SQL> create diskgroup data2 high redundancy
  2 failgroup fg1 disk '/dev/raw/raw3' name d2a
  3 failgroup fg2 disk '/dev/raw/raw4' name d2b force
  4 failgroup fg3 disk '/dev/raw/raw5' name d2c
  5 failgroup fg4 disk '/dev/raw/raw6' name d2d;

Diskgroup created.

SQL>

For completeness, you can specify NOFORCE for any disk that has not previously been part of a disk group, but it is the default and need not be specified.

However, in this example disk space is tight and you do not need four members: a high redundancy disk group requires only three failure groups, so you drop and re-create the disk group with only three members. Here is how you do that:

SQL> drop diskgroup data2;

Diskgroup dropped.

SQL> create diskgroup data2 high redundancy
  2 failgroup fg1 disk '/dev/raw/raw3' name d2a
  3 failgroup fg2 disk '/dev/raw/raw4' name d2b
  4 failgroup fg3 disk '/dev/raw/raw5' name d2c;

Diskgroup created.

SQL> select group_number, disk_number, name,
  2     failgroup, create_date, path from v$asm_disk;

GROUP_ DISK_
NUMBER NUMBER NAME       FAILGROUP  CREATE_DA PATH
------ ------ ---------- ---------- --------- ---------
     0      3                       11-MAY-04 /dev/raw/
                                              raw6
     2      2 D2C        FG3        11-MAY-04 /dev/raw/
                                              raw5
     2      1 D2B        FG2        11-MAY-04 /dev/raw/
                                              raw4
     2      0 D2A        FG1        11-MAY-04 /dev/raw/
                                              raw3
     1      1 DATA1_0001 DATA1_0001 18-APR-04 /dev/raw/
                                              raw2
     1      0 DATA1_0000 DATA1_0000 18-APR-04 /dev/raw/
                                              raw1

6 rows selected.

If the disk group had any database objects other than disk group metadata, you would have to specify INCLUDING CONTENTS in the DROP DISKGROUP command. This is an extra safeguard to make sure that disk groups with database objects are not accidentally dropped.
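For example, if the DATA2 disk group contained datafiles, the drop would look like this:

```sql
SQL> drop diskgroup data2 including contents;

Diskgroup dropped.
```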

Now that the configuration of the new disk group has been completed, you can create a tablespace in the new disk group from the database instance:

SQL> create tablespace users3 datafile '+DATA2';
Tablespace created.

Because ASM files are OMF, no other datafile characteristics need to be specified when creating the tablespace.

9.7.5. Altering Disk Groups

You can add and drop disks from a disk group. In addition, you can alter most characteristics of a disk group without recreating the disk group or impacting user transactions on objects in the disk group. In the following examples, we will show you how to perform many of the common operations that you will perform on disk groups:

  • Adding a disk to a disk group

  • Dropping a disk from a disk group

  • Undropping a disk from a disk group

  • Rebalancing a disk group

  • Dismounting a disk group

  • Checking the internal consistency of a disk group

9.7.5.1. Using ALTER DISKGROUP ... ADD DISK

When a disk is added to a disk group, a rebalance operation is performed in the background after the new disk has been formatted for use in the disk group. As mentioned earlier in this chapter, the initialization parameter ASM_POWER_LIMIT controls the speed of the rebalance.

Continuing with one of the examples earlier in the chapter, suppose you decide to improve the I/O characteristics of the disk group DATA1 by adding the last available raw disk to the disk group, as follows:

SQL> alter diskgroup data1
  2    add failgroup d1fg3 disk '/dev/raw/raw6' name d1c;

Diskgroup altered.

The command returns immediately, and the format and rebalance continue in the background. You then check the status of the rebalance operation by checking V$ASM_OPERATION:

SQL> select group_number, operation, state, power, actual,
  2    sofar, est_work, est_rate, est_minutes
  3    from v$asm_operation;

GROUP_                                                EST_
NUMBER OPERA STAT POWER ACTUA SOFAR EST_WORK EST_RATE MIN
------ ----- ---- ----- ----- ----- -------- -------- ---
     1 REBAL RUN     1     1     3      964      60   16

This output shows that with a POWER setting of 1, the ASM operation is expected to take approximately 16 minutes more to complete. Because the estimate is a bit higher than you expected, you decide to allocate more resources to the rebalance operation and change the power limit for this particular rebalance operation:

SQL> alter diskgroup data1 rebalance power 8;

Diskgroup altered.

Checking the status of the rebalance operation confirms that the estimated time for completion in the column EST_MINUTES has been reduced to 4 minutes instead of 16:

SQL> select group_number, operation, state, power, actual,
  2    sofar, est_work, est_rate, est_minutes
  3    from v$asm_operation;

GROUP_                                                EST_
NUMBER OPERA STAT POWER ACTUA SOFAR EST_WORK EST_RATE MIN
------ ----- ---- ----- ----- ----- -------- -------- ---
     1 REBAL RUN     8     8    16      605      118    4

About four minutes later, you check the status once more:

SQL> /

No rows selected.

Finally, you can confirm the new disk configuration from the V$ASM_DISK and V$ASM_DISKGROUP views:

SQL> select group_number, disk_number, name,
  2   failgroup, create_date, path from v$asm_disk;

GROUP_ DISK_
NUMBER NUMBER NAME       FAILGROUP  CREATE_DA PATH
------ ------ ---------- ---------  --------- ----------
     1      2 D1C        D1FG3      11-MAY-04 /dev/raw/
                                              raw6
     2      2 D2C        FG3        11-MAY-04 /dev/raw/
                                              raw5
     2      1 D2B        FG2        11-MAY-04 /dev/raw/
                                              raw4
     2      0 D2A        FG1        11-MAY-04 /dev/raw/
                                              raw3
     1      1 DATA1_0001 DATA1_0001 18-APR-04 /dev/raw/
                                              raw2
     1      0 DATA1_0000 DATA1_0000 18-APR-04 /dev/raw/
                                              raw1

6 rows selected.

SQL> select group_number, name, type, total_mb, free_mb
  2    from v$asm_diskgroup;

GROUP_NUMBER NAME    TYPE   TOTAL_MB FREE_MB
------------ ------- ------ -------- -------
           1 DATA1   NORMAL   22521   20116
           2 DATA2   HIGH     18429   18279

SQL>

Note that the disk group DATA1 is still normal redundancy, even though it has three failure groups. However, the I/O performance of SELECT statements against objects in the disk group is improved because of additional copies of extents available in the disk group.

If /dev/raw/raw7 and /dev/raw/raw8 are the last remaining available disks in the command's discovery string (using the same format as the initialization parameter ASM_DISKSTRING), you can use a wildcard to add the disks:

SQL> alter diskgroup data1
  2    add disk '/dev/raw/*';

Diskgroup altered.

The ALTER DISKGROUP command will ignore all disks that match the discovery string if they are already a part of this or any other disk group.

Real World Scenario: Mixing Disk Types within Disk Groups

For our shop floor scheduling and trouble ticket system, we wanted to improve the response time for technicians who checked the status of a repair job, because the application issuing the queries against the database was taking up to 10 seconds during the first shift. To help alleviate the problem, we noticed that we had two spare disk drives in the server running the Oracle 10g instance and put the disks to good use by using them in another failure group for the existing disk group.

After only a few minutes of testing, the performance of the queries got worse instead of better in many cases. Upon further investigation, we discovered why the extra disk drives in the server were not used for the database: They were older, slower disks, and as a rule of thumb, a disk group should not mix disk drives of different performance levels. Depending on which disk the database object's extents are mapped to, the I/O response time will vary dramatically and may actually be slower than using only the faster disks.

One situation exists where this configuration is temporarily an acceptable configuration: when converting a disk group from slower disks to faster disks. As the faster disks are added, the disk group rebalances, and once the rebalance operation is complete, the slower disks can be dropped from the disk group.


9.7.5.2. Using ALTER DISKGROUP ... DROP DISK

The DROP DISK clause removes a disk from a failure group within a disk group and performs an automatic rebalance. In a previous example, we dropped and re-created the entire disk group just to remove one member; it's a lot easier and less disruptive to the database to merely drop one disk from the group. Here's an example of dropping a disk from the group and monitoring the progress using the dynamic performance view V$ASM_OPERATION:

SQL> select group_number, operation, state, power, actual,
  2    sofar, est_work, est_rate, est_minutes
  3    from v$asm_operation;

No rows selected.

SQL> alter diskgroup data2 drop disk d2d;

Diskgroup altered.

SQL> select group_number, operation, state, power, actual,
  2    sofar, est_work, est_rate, est_minutes
  3    from v$asm_operation;

GROUP_                                                EST_
NUMBER OPERA STAT POWER ACTUA SOFAR EST_WORK EST_RATE MIN
------ ----- ---- ----- ----- ----- -------- -------- ----
     2 REBAL WAIT     1     0     0        0        0    0


SQL> /

GROUP_                                                EST_
NUMBER OPERA STAT POWER ACTUA SOFAR EST_WORK EST_RATE MIN
------ ----- ---- ----- ----- ----- -------- -------- ----
     2 REBAL  RUN     1     1     2      187      120    1


SQL> /

GROUP_                                                EST_
NUMBER OPERA STAT POWER ACTUA SOFAR EST_WORK EST_RATE MIN
------ ----- ---- ----- ----- ----- -------- -------- ----
     2 REBAL  RUN     1     1    56      196      253    0


SQL> /

No rows selected.
SQL>

As you can see, the DROP DISK operation is initially in a wait state, progresses to a RUN state, and when the operation finishes, no longer appears in V$ASM_OPERATION.

9.7.5.3. Using ALTER DISKGROUP ... UNDROP DISKS

The UNDROP DISKS clause cancels any pending drops of a disk from a disk group. If the drop operation has completed, you must re-add the disk to the disk group manually and incur the rebalancing costs associated with adding the disk back to the disk group.

Using the example from the previous section, you will first add the disk back to the disk group, and then you will drop the disk again. Before the disk rebalance operation completes, however, you will cancel the drop with UNDROP DISKS and verify that the cancel completed successfully by joining V$ASM_DISKGROUP with V$ASM_DISK:

SQL> alter diskgroup data2 add failgroup fg4
  2    disk '/dev/raw/raw6' name d2d;

Diskgroup altered.

SQL> select adg.name DG_NAME,
  2    ad.name FG_NAME, path from v$asm_disk ad
  3    right outer join v$asm_diskgroup adg
  4    on ad.group_number = adg.group_number
  5 where adg.name = 'DATA2';

DG_NAME    FG_NAME    PATH
---------- ---------- ---------------
DATA2      D2A        /dev/raw/raw3
DATA2      D2B        /dev/raw/raw4
DATA2      D2C        /dev/raw/raw5
DATA2      D2D        /dev/raw/raw6

4 rows selected.

SQL> alter diskgroup data2 drop disk d2d;

Diskgroup altered.

SQL> alter diskgroup data2 undrop disks;

Diskgroup altered.

As you can verify with the same query you ran previously that joins V$ASM_DISKGROUP and V$ASM_DISK, the disk group still has all four disks:

SQL> select adg.name DG_NAME,
  2     ad.name FG_NAME, path from v$asm_disk ad
  3     right outer join v$asm_diskgroup adg
  4     on ad.group_number = adg.group_number
  5 where adg.name = 'DATA2';

DG_NAME    FG_NAME    PATH
---------- ---------- ---------------
DATA2      D2A        /dev/raw/raw3
DATA2      D2B        /dev/raw/raw4
DATA2      D2C        /dev/raw/raw5
DATA2      D2D        /dev/raw/raw6

4 rows selected.

If you wait too long and the DROP DISK completes, this example shows you what happens if you attempt to perform an UNDROP DISKS operation:

SQL> alter diskgroup data2 drop disk d2d;

Diskgroup altered.

SQL> select group_number, operation, state, power, actual,
  2    sofar, est_work, est_rate, est_minutes
  3    from v$asm_operation;

No rows selected.

SQL> select adg.name DG_NAME,
  2    ad.name FG_NAME, path from v$asm_disk ad
  3    right outer join v$asm_diskgroup adg
  4    on ad.group_number = adg.group_number
  5 where adg.name = 'DATA2';

DG_NAME  FG_NAME   PATH
-------- --------- ---------------
DATA2    D2A       /dev/raw/raw3
DATA2    D2B       /dev/raw/raw4
DATA2    D2C       /dev/raw/raw5

3 rows selected.

9.7.5.4. Using ALTER DISKGROUP ... REBALANCE POWER n

The REBALANCE POWER n clause of ALTER DISKGROUP forces a rebalance operation to occur. This command is normally not necessary, because rebalance operations occur automatically when a disk is added, dropped, or modified. However, you need to use this command if you want to override the default speed of the rebalance operation as defined by the initialization parameter ASM_POWER_LIMIT. The earlier section titled "Using ALTER DISKGROUP ... ADD DISK" showed you how to perform a rebalance operation on the fly to adjust the speed of the rebalance operation.

9.7.5.5. Using ALTER DISKGROUP ... DROP ... ADD

The DROP and ADD combination removes a disk from a failure group and adds another disk in the same command. Instead of two rebalance operations occurring, only one occurs, saving a significant amount of CPU and I/O. In this example, you will effectively swap the disk /dev/raw/raw5 with /dev/raw/raw6:

SQL> select adg.name DG_NAME,
  2   ad.name FG_NAME, path from v$asm_disk ad
  3   right outer join v$asm_diskgroup adg
  4   on ad.group_number = adg.group_number
  5 where adg.name = 'DATA2';

DG_NAME    FG_NAME    PATH
---------- ---------- -------------
DATA2      D2A        /dev/raw/raw3
DATA2      D2B        /dev/raw/raw4
DATA2      D2C        /dev/raw/raw5

3 rows selected.

SQL> alter diskgroup data2
  2    add failgroup fg4
  3       disk '/dev/raw/raw6' name d2d
  4    drop disk d2c;

Diskgroup altered.

SQL> select adg.name DG_NAME,
  2    ad.name FG_NAME, path from v$asm_disk ad
  3    right outer join v$asm_diskgroup adg
  4    on ad.group_number = adg.group_number
  5 where adg.name = 'DATA2';

DG_NAME    FG_NAME    PATH
---------- ---------- ------------
DATA2      D2A        /dev/raw/raw3
DATA2      D2B        /dev/raw/raw4
DATA2      D2D        /dev/raw/raw6

3 rows selected.

As a result, only one rebalance operation is performed instead of two.

9.7.5.6. Using ALTER DISKGROUP ... DISMOUNT

The DISMOUNT keyword makes a disk group unavailable to all instances, as you can see in this example:

SQL> alter diskgroup data2 dismount;

Diskgroup altered.

Note that you cannot dismount a disk group unless there are no open files on the disk group and all tablespaces in the disk group are offline.
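A minimal sketch of the sequence, assuming the tablespace USERS3 created earlier is the only tablespace with datafiles in DATA2: take the tablespace offline from the database instance first, and then issue the dismount from the ASM instance:

```sql
-- From the database instance:
SQL> alter tablespace users3 offline;

-- From the ASM instance:
SQL> alter diskgroup data2 dismount;
```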

9.7.5.7. Using ALTER DISKGROUP ... MOUNT

The MOUNT keyword makes a disk group available to all instances. In the following example, you remount the disk group dismounted previously:

SQL> alter diskgroup data2 mount;

Diskgroup altered.

9.7.5.8. Using ALTER DISKGROUP ... CHECK ALL

The CHECK ALL option verifies the internal consistency of the disk group. In the following example, you will check the consistency of the DATA2 disk group:

SQL> alter diskgroup data2 check all;

Diskgroup altered.

Checking can also be specified for individual files or disks within the disk group. If any errors are found, they are automatically repaired unless you specify the NOREPAIR option. A summary message is returned from the command, and the details are reported in the alert log.
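For example, to validate the disk group without repairing any errors found, or to check a single disk within the group, you can use variations such as these (sketched from the 10g ALTER DISKGROUP checking options):

```sql
SQL> alter diskgroup data2 check all norepair;

SQL> alter diskgroup data2 check disk d2a;
```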

9.7.6. Using EM Database Control with ASM Disk Groups

You can also use EM Database Control to administer disk groups. For a database that uses ASM disk groups, the Disk Groups link in the Administration tab brings you to a login screen for the ASM instance, as shown in Figure 9.4. Remember that authentication for an ASM instance uses operating system authentication only.

After authentication with the ASM instance, you can perform the same operations that you performed earlier in this chapter at the command line: mounting and dismounting disk groups, adding disk groups, adding or deleting disk group members, and so forth. Figure 9.5 shows the ASM administration screen, and Figure 9.6 shows the statistics and options for the disk group DATA1.

Other EM Database Control ASM-related screens show information such as I/O response time for the disk group, the templates defined for the disk group, and the initialization parameters in effect for this ASM instance.

Figure 9.4. ASM instance authentication

Figure 9.5. The ASM administration screen

Figure 9.6. The disk group maintenance screen
