Geographic Logical Volume Manager configuration assistant
This chapter covers the following topics:
Introduction
Prerequisites
Using the GLVM wizard
7.1 Introduction
The following sections give an introduction to Geographical Logical Volume Manager (GLVM) and the configuration assistant. Additional details, including planning and implementing, can be found in the base documentation available at IBM Knowledge Center.
7.1.1 Geographical Logical Volume Manager
GLVM provides an IP-based data mirroring capability between geographically separated sites. It protects the data against a total site failure by remote mirroring, and it supports unlimited distance between the participating sites.
GLVM for PowerHA SystemMirror Enterprise Edition provides automated disaster recovery capability by using the AIX Logical Volume Manager (LVM) and GLVM subsystems to create volume groups (VGs) and logical volumes that span across two geographically separated sites.
You can use the GLVM technology as a stand-alone method, or use it in combination with PowerHA SystemMirror Enterprise Edition.
The software increases data availability by providing continuing service during hardware or software outages (or both), planned or unplanned, for a two-site cluster. The distance between sites can be unlimited, and both sites can access the mirrored VGs serially over IP-based networks.
Also, it enables your business application to continue running at the takeover system at a remote site while the failed system is recovering from a disaster or a planned outage.
The software takes advantage of the following software components to reduce downtime and recovery time during disaster recovery:
AIX LVM subsystem and GLVM
TCP/IP subsystem
PowerHA SystemMirror for AIX cluster management
Definitions and concepts
This section defines the basic concepts of GLVM:
Remote physical volume (RPV)
A pseudo-device driver that provides access to the remote disks as though they were locally attached. The remote system must be connected by way of the Internet Protocol network. The distance between the sites is limited by the latency and bandwidth of the connecting networks.
The RPV consists of two parts:
 – RPV Client:
This is a pseudo-device driver that runs on the local machine and allows the AIX LVM to access RPVs as though they were local. The RPV clients are seen as hdisk devices, which are logical representations of the RPV.
The RPV client device driver appears as an ordinary disk device. For example, the RPV client device hdisk8 has all its I/O directed to the remote RPV server. It also has no knowledge at all about the nodes, networks, and so on.
When configuring the RPV client, the following details are defined:
 • The IP address of the RPV server.
 • The local IP address (defines the network to use).
 • The timeout. This field is primarily for the stand-alone GLVM option, as PowerHA overwrites this field with the cluster’s config_too_long time. In a PowerHA cluster, this is the worst case scenario, as PowerHA detects problems with the remote node well before then.
The SMIT fast path to configure the RPV clients is smitty rpvclient.
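After an RPV client is configured, its definition can be checked from the command line. The following is a minimal sketch that uses the GLVM listing command that appears later in this chapter together with the standard AIX lsattr command (the hdisk name is only an example):

lsrpvclient -H        # list the RPV client hdisks and the remote site that each points to
lsattr -El hdisk8     # display the RPV client attributes, including the RPV server
                      # address, the local address, and the I/O timeout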
 – RPV server
The RPV server runs on the remote machine, one for each physical volume that is being replicated. The RPV server can listen to a number of remote RPV clients on different hosts to handle fallover.
The RPV server is an instance of the kernel extension of the RPV device driver with names such as rpvserver0, and is not an actual physical device.
When configuring the RPV server, the following items are defined:
 • The PVID of the local physical volume.
 • The IP addresses of the RPV clients (comma separated).
The SMIT fast path to configure the RPV servers is smitty rpvserver.
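As with the RPV clients, the server side can be verified with the GLVM listing command and lsattr (the device name is only an example):

lsrpvserver -H            # list the RPV server instances and the local hdisk that each exports
lsattr -El rpvserver0     # display the RPV server attributes, including the PVID of the
                          # exported disk and the allowed client IP addresses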
 – Geographically mirrored volume group (GMVG)
A VG that consists of local PVs and RPVs. Strict rules apply to GMVGs to ensure that you have a complete copy of the mirror at each site. For this reason, the superstrict allocation policy is required for each logical volume in a GMVG.
PowerHA SystemMirror Enterprise Edition also expects each logical volume in a GMVG to be mirrored and, for asynchronous replication, requires the use of AIX mirror pools. GMVGs are managed by PowerHA and recognized as a separate class of resource (GMVG Replicated Resources), so they have their own events. PowerHA verification issues a warning if there are resource groups (RGs) that contain GMVG resources without the forced varyon flag set or without quorum disabled.
PowerHA enforces the requirement that each physical volume that is part of a VG with RPV clients has the reverse relationship defined. This, at a minimum, means that every GMVG consists of two physical volumes on each site. One disk is locally attached, and the other is a logical representation of the RPV.
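These requirements can be spot-checked on an existing GMVG with standard LVM commands. The following is a brief sketch (the VG and logical volume names are illustrative):

lsvg glvmvg | grep -i quorum            # QUORUM should be reported as disabled
lsvg glvmvg | grep -i "mirror pool"     # MIRROR POOL STRICT should be super
lslv datalv01                           # COPIES should be 2 with superstrict allocation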
 – GLVM utilities
GLVM provides SMIT menus to create the GMVGs and the logical volumes. They are not required, because they perform the same function as the equivalent standard SMIT menus in the background, but they do control the location of the logical volumes to ensure proper placement of the mirror copies. If you use the standard commands to configure your GMVGs, use the GLVM verification utility.
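Whether the GMVGs are built through the GLVM SMIT menus or with the standard commands, their state can be checked afterward with the GLVM status commands. The following is a brief sketch:

gmvgstat     # per-GMVG summary of local PVs, RPVs, and stale partitions
rpvstat      # RPV client statistics for the replication traffic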
 – Network types:
XD_data Network that can be used for data replication only. A maximum of four XD_data networks can be defined. EtherChannel is supported for this network type. This network supports adapter swap, but not fallover to another node. Heartbeat packets are also sent over this network.
XD_ip An IP-based network that is used for participation in heartbeating and client communication.
 – Mirror pools:
Mirror pools make it possible to divide the physical volumes of a VG into separate pools.
A mirror pool is made up of one or more physical volumes. Each physical volume can belong to only one mirror pool at a time. When creating a logical volume, each copy of the logical volume being created can be assigned to a mirror pool. Logical volume copies that are assigned to a mirror pool allocate partitions only from the physical volumes in that mirror pool, which provides the ability to restrict the disks that a logical volume copy can use.
Without mirror pools, the only way to restrict which physical volume is used for allocation when creating or extending a logical volume is to use a map file. Thus, using mirror pools greatly simplifies this process. Think of mirror pools as an operating system level feature similar to the storage consistency groups that are used when replicating data.
Although mirror pools are an AIX and not a GLVM-specific component, it is a preferred practice to use them in all GLVM configurations. However, they are required only when configuring asynchronous mode of GLVM.
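A quick way to see how the physical volumes of a VG are divided into mirror pools (the VG name is illustrative):

lsvg -P datavg     # list each physical volume and the mirror pool that it belongs to
lsmp -A datavg     # list the mirror pools of the VG and their mirroring mode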
 – aio_cache logical volumes
An aio_cache is a special type of logical volume that stores write requests locally while it waits for the data to be written to a remote disk. The size of this logical volume dictates how far behind the data is allowed to be between the two sites. There is one defined at each site and they are not mirrored. Similar to data volumes, these volumes must be protected locally, usually by some form of RAID.
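While asynchronous mirroring is running, the state of the cache can be watched from the active node. The following is a minimal sketch (the VG name is taken from the asynchronous example later in this chapter, and the rpvstat options vary by level, so check its documentation):

lsmp -A asyncglvmvg     # shows ASYNC CACHE VALID, ASYNC CACHE EMPTY, and the cache
                        # high-water mark for each mirror pool
rpvstat                 # RPV client statistics, useful for watching replication traffic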
GLVM example
Figure 7-1 on page 245 shows a relatively basic two-site GLVM implementation. It consists of only one node at each site, although PowerHA does support multiple nodes within a site.
The New York site is considered the primary site because its node primarily hosts RPV clients. The Texas site is the standby site because it primarily hosts RPV servers. However, each site contains both RPV servers and clients based on where the resources are running.
Each site has two data disk volumes that are physically associated with the site node. In this case, the disks are hdisk1 and hdisk2 at both sites. However, the hdisk names do not need to match across sites. These two disks are also configured as RPV servers on each node. In turn, these are logically linked to the RPV clients at the opposite site. This configuration creates two additional pseudo-device disks that are known as hdisk3 and hdisk4. Their associated disk definitions clearly state that they are RPV clients and not real physical disks.
Figure 7-1 GLVM example configuration
7.1.2 GLVM configuration assistant
The GLVM configuration assistant was first introduced in PowerHA SystemMirror for AIX Enterprise Edition 6.1.0 primarily for asynchronous mode. It has been continuously enhanced over its release cycle and also includes support for synchronous mode. It is also often referred to as the GLVM wizard. The idea of the GLVM wizard is to streamline an otherwise cumbersome set of procedures down to minimal inputs:
It takes the name of the nodes from both sites.
It prompts for the selection of PVIDs to be mirrored on each site.
When configuring async GLVM, it also prompts for the size of the aio_cache.
Given this information, the GLVM wizard configures all of the following items and then synchronizes the cluster:
GMVGs.
RPV servers.
RPV clients.
Mirror pools.
Resource group (RG).
The GMVG is created as a scalable VG. It also activates the rpvserver at the remote site and the rpvclient on the local site and leaves the VG active. The node upon which the GLVM wizard is run becomes the primary node, and is considered the local site. The RG is created with the key settings that are shown in Example 7-1.
Example 7-1 GLVM wizard resource group settings
Startup Policy                                        Online On Home Node Only
Fallover Policy                                       Fallover To Next Priority Node In The List
Fallback Policy                                       Never Fallback
Site Relationship                                     Prefer Primary Site
Volume Groups                                         asyncglvm
Use forced varyon for volume groups, if necessary     true
GMVG Replicated Resources                             asyncglvm
The GLVM wizard does not do the following actions:
Create any data-specific logical volumes or file systems within the GMVGs.
Add any other resources into the RG (for example, service IPs and application controllers).
Work for more than one GMVG.
This process can be used for the first GMVG, but additional GMVGs must be manually created and added into an RG.
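After the wizard completes, the result can be confirmed with standard PowerHA and GLVM commands before any further customization. The following is a brief sketch:

/usr/es/sbin/cluster/utilities/clRGinfo     # check the state of the new RG
lsvg -o                                     # the new GMVG should be varied on, on the local node
gmvgstat                                    # the GMVG should report 100% in sync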
7.2 Prerequisites
Before you use the GLVM wizard, complete the following prerequisites (a quick command-line spot check of several of these items follows the list):
The following additional filesets from the PowerHA SystemMirror Enterprise Edition media are installed:
 – cluster.xd.base
 – cluster.xd.glvm
 – cluster.xd.license
 – glvm.rpv.client
 – glvm.rpv.server
A linked cluster is configured with sites.
A repository disk is defined at each site.
The verification and synchronization process completes successfully on the cluster.
XD_data networks with persistent IP labels are defined on the cluster.
The network communication between the local site and remote site is working.
All PowerHA SystemMirror services are active on both nodes in the cluster.
The /etc/hosts file on both sites contains all of the host IP, service IP, and persistent IP labels that you want to use in the GLVM configuration.
The remote site must have enough free disks and enough free space on those disks to support all of the local site VGs that are created for geographical mirroring.
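Several of these prerequisites can be spot-checked quickly from either node. The following is a minimal sketch:

lslpp -l "cluster.xd.*" "glvm.rpv.*"     # confirm that the GLVM filesets are installed
cltopinfo                                # confirm the linked cluster, sites, and repository disks
lssrc -ls clstrmgrES | grep -i state     # confirm that cluster services are active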
7.3 Using the GLVM wizard
This section goes through an example on our test cluster of using the GLVM wizard for both synchronous and asynchronous configurations.
7.3.1 Test environment overview
For the example that is shown in Example 7-2 on page 248, we use a two-node cluster with nodes Jess and Ellie. Our test configuration consists of two sites, New York and Chicago, along with the following hardware and software (Figure 7-2):
Two POWER8 S814 with firmware 850
HMC 850
AIX 7.2.0 SP2
PowerHA SystemMirror for AIX Enterprise Edition 7.2.1
Two Storwize V7000 v7.6.1.1, one at each site for each S814
Two IP networks: one public Ethernet network and one XD_data network for GLVM traffic
Figure 7-2 GLVM test cluster
7.3.2 Synchronous configuration
Before attempting to use the GLVM wizard, all the prerequisites that are listed in 7.2, “Prerequisites” on page 246 must be complete. Our scenario is a basic two-site configuration, with one node at each site, and an XD_data network with persistent alias defined in the configuration, as shown in Example 7-2.
Example 7-2 Base GLVM cluster topology
# cltopinfo
Cluster Name: GLVMdemocluster
Cluster Type: Linked
Heartbeat Type: Unicast
Repository Disks:
Site 1 (NewYork@Jess): repossiteA
Site 2 (Chicago@Ellie): repossiteB
Cluster Nodes:
Site 1 (NewYork):
Jess
Site 2 (Chicago):
Ellie
 
# cllsif
Adapter           Type        Network        Net Type  Attribute  Node
Jess              boot        net_ether_01   ether     public     Jess
Jess_glvm         boot        net_ether_02   XD_data   public     Jess
Jess_glvm_pers    persistent  net_ether_02   XD_data   public     Jess
Ellie             boot        net_ether_01   ether     public     Ellie
Ellie_glvm        boot        net_ether_02   XD_data   public     Ellie
Ellie_glvm_pers   persistent  net_ether_02   XD_data   public     Ellie
Run smitty sysmirror and select Cluster Applications and Resources → Make Applications Highly Available (Use Smart Assists) → GLVM Configuration Assistant → Configure Synchronous GMVG.
The menu that is shown in Figure 7-3 opens. If it does not, then the previously mentioned prerequisites have not been met, and you see a message similar to the one that is shown in Figure 7-4 on page 249.
                  Create GMVG with Synchronous Mirror Pools
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Enter the name of the VG [syncglvm]
* Select disks to be mirrored from the local site (00f92db138ef5aee) +
* Select disks to be mirrored from the remote site (00f92db138df5181)     +
 
 
Figure 7-3 Synchronous GLVM wizard menu
                            COMMAND STATUS
 
Command: OK stdout: yes stderr: no
 
Before command completion, additional instructions may appear below.
 
No nodes are currently defined for the cluster.
 
Define at least one node, and ideally all nodes, prior to defining
the repository disk/disks and cluster IP address. It is important that all
nodes in the cluster have access to the repository disk or respective
repository disks(in case of a linked cluster) and can be reached via the
cluster IP addresses, therefore you should define the nodes in the cluster first
Figure 7-4 Synchronous GLVM prerequisites not met
Enter the field values as follows:
Enter the Name of the VG
Enter the name of the VG that you want to create as a geographically mirrored VG. When the GLVM Configuration Assistant creates the RG, the RG name is the VG name with _RG appended. For example, if the VG name is syncglvmvg, the RG name is syncglvmvg_RG.
Select disks to be mirrored from the local site
Press F4 to display a list of available disks. Press F7 to select the disks that you want to geographically mirror from the local site. After all disks are selected, press Enter.
Select disks to be mirrored from the remote site
Press F4 to display a list of available disks. Press F7 to select the disks that you want to geographically mirror from the remote site. After all disks are selected, press Enter.
Node Jess uses local disk hdisk9, and node Ellie uses local disk hdisk3 for the GMVG. Each one is associated with an rpvserver, which in turn is linked to its respective rpvclient. The rpvclients become hdisk1 on Jess and hdisk0 on Ellie, as shown in Figure 7-2 on page 247. The rpvclients acquire these disk names because they are the first hdisk names that are available on each node. The output from running the synchronous GLVM wizard is shown in Example 7-3.
Example 7-3 Synchronous GLVM wizard output
Extracting the names for sites.
Extracting the name for nodes from both local and remote sites.
Creating RPVServers on all nodes of local site.
 
Creating RPVServers on node rpvserver0 Available
Creating RPVServers on all nodes of remote site.
Creating RPVServers on node rpvserver0 Available
Creating RPVServers on node rpvserver0 Available
Creating RPVClients on all nodes of local site.
Creating RPVClients on node hdisk1 Available
Creating RPVClients on all nodes of remote site.
Creating RPVClients on node hdisk0 Available
Changing RPVServers and RPVClients to defined and available state accordingly
to facilitate the creation of VG.
Changing RPVServer rpvserver0 Defined
 
Changing RPVClient hdisk0 Defined
Generating Unique Names for Mirror pools and Resource Group.
Generating resource group (RG) name.
Unique names generated.
Creating VG syncglvmvg
Creating first mirror pool
Extending the VG to RPVClient disks and creating second mirror pool
 
Creating SYNC Mirror Pools
Varying on volume group:
Setting attributes for 0516-1804 chvg: The quorum change takes effect immediately.
Varying off volume group:
Changing RPVClient hdisk1 Defined
Changing RPVServer rpvserver0 Defined
Changing RPVServer rpvserver0 Available
 
Importing the VG
Changing RPVClient hdisk0 Available
Importing the VG synclvodm: No logical volumes in volume group syncglvmvg.
syncglvmvg
 
Varying on volume group:
Setting attributes for 0516-1804 chvg: The quorum change takes effect immediately.
 
Varying off volume group:
Changing RPVClient hdisk0 Defined
Definition of VG is available on all the nodes of the cluster.
Changing RPVServer rpvserver0 Defined
Creating a resource group.
Adding VG Verifying and synchronising the cluster configuration ...
 
Verification to be performed on the following:
Cluster Topology
Cluster Resources
 
Retrieving data from available cluster nodes. This could take a few minutes.
 
Start data collection on node Jess
Start data collection on node Ellie
Collector on node Jess completed
Collector on node Ellie completed
Data collection complete
WARNING: No backup repository disk is UP and not already part of a VG for nodes:
- Jess
- Ellie
 
Completed 10 percent of the verification checks
 
WARNING: There are IP labels known to PowerHA SystemMirror and not listed in file /usr/es/sbin/cluster/etc/clhosts.client on node: Jess. Clverify can automatically populate this file to be used on a client node, if executed in auto-corrective mode.
WARNING: There are IP labels known to PowerHA SystemMirror and not listed in file /usr/es/sbin/cluster/etc/clhosts.client on node: Ellie. Clverify can automatically populate this file to be used on a client node, if executed in auto-corrective mode.
WARNING: An XD_data network has been defined, but no additional
XD heartbeat network is defined. It is strongly recommended that
an XD_ip network be configured in order to help prevent
cluster partitioning if the XD_data network fails. Cluster partitioning
may lead to data corruption for your replicated resources.
Completed 30 percent of the verification checks
This cluster uses Unicast heartbeat
Completed 40 percent of the verification checks
Completed 50 percent of the verification checks
Completed 60 percent of the verification checks
Completed 70 percent of the verification checks
 
Verifying XD Solutions...
 
Completed 80 percent of the verification checks
Completed 90 percent of the verification checks
Verifying additional prerequisites for Dynamic Reconfiguration...
...completed.
 
Committing any changes, as required, to all available nodes...
Adding any necessary PowerHA SystemMirror for AIX entries to /etc/inittab and
/etc/rc.net for IP address Takeover on node Jess.
Checking for any added or removed nodes
1 tunable updated on cluster GLVMdemocluster.
Adding any necessary PowerHA SystemMirror for AIX entries to /etc/inittab and
/etc/rc.net for IP address Takeover on node Ellie.
Updating Split Merge policies
 
Verification has completed normally.
 
clsnapshot: Creating file /usr/es/sbin/cluster/snapshots/active.0.odm.
 
clsnapshot: Succeeded creating Cluster Snapshot: active.0
Attempting to sync user mirror groups (if any)...
Attempting to refresh user mirror groups (if any)...
 
cldare: Requesting a refresh of the Cluster Manager...
00026|NODE|Jess|VERIFY|PASSED|Fri Nov 18 11:20:38|A cluster configuration verification operation PASSED on node "Jess". Detailed output can be found in "/var/hacmp/clverify/clverify.log" on that node.
 
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_RP_RUNNING
PowerHA SystemMirror Cluster Manager current state is: ST_BARRIER
PowerHA SystemMirror Cluster Manager current state is: ST_RP_RUNNING.
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_BARRIER..
PowerHA SystemMirror Cluster Manager current state is: ST_RP_RUNNING
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_BARRIER..
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_STABLE.....completed.
Synchronous cluster configuration
After the GLVM wizard completes successfully, the cluster RG is configured as shown in Example 7-4.
Example 7-4 Synchronous GLVM resource group
Resource Group Name syncglvmvg_RG
Participating Node Name(s) Jess Ellie
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback
Site Relationship Prefer Primary Site
Node Priority
Service IP Label
Filesystems ALL
Filesystems Consistency Check fsck
Filesystems Recovery Method sequential
Filesystems/Directories to be exported (NFSv3)
Filesystems/Directories to be exported (NFSv4)
Filesystems to be NFS mounted
Network For NFS Mount
Filesystem/Directory for NFSv4 Stable Storage
Volume Groups syncglvmvg
Concurrent Volume Groups
Use forced varyon for volume groups, if necessary true
Disks
Raw Disks
Disk Error Management? no
GMVG Replicated Resources syncglvmvg
GMD Replicated Resources
PPRC Replicated Resources
SVC PPRC Replicated Resources
EMC SRDF? Replicated Resources
Hitachi TrueCopy? Replicated Resources
Generic XD Replicated Resources
AIX Connections Services
AIX Fast Connect Services
Shared Tape Resources
Application Servers
Highly Available Communication Links
Primary Workload Manager Class
Secondary Workload Manager Class
Delayed Fallback Timer
Miscellaneous Data
Automatically Import Volume Groups false
Inactive Takeover
SSA Disk Fencing false
Filesystems mounted before IP configured false
WPAR Name
Primary node and site configuration
The primary node, Jess, has both an rpvserver (rpvserver0) and an rpvclient (hdisk1) created, and the scalable GMVG syncglvmvg is active. There are also two mirror pools, glvmMP01 and glvmMP02. Because no logical volumes or file systems are created yet, the GMVG is also in sync. All of this is shown in Example 7-5.
Example 7-5 Synchronous GLVM primary site configuration
Jess# lspv
hdisk0 00f92db16aa2703a rootvg active
repossiteA 00f92db10031b9e9 caavg_private active
hdisk9 00f92db138ef5aee syncglvmvg active
hdisk10 00f92db17835e777 None
hdisk1 00f92db138df5181 syncglvmvg active
 
Jess# lsvg syncglvmvg
 
VOLUME GROUP: syncglvmvg VG IDENTIFIER: 00f92db100004c00000001587873d400
VG STATE: active PP SIZE: 8 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 2542 (20336 megabytes)
MAX LVs: 256 FREE PPs: 2542 (20336 megabytes)
LVs: 0 USED PPs: 0 (0 megabytes)
OPEN LVs: 0 QUORUM: 1 (Disabled)
TOTAL PVs: 2 VG DESCRIPTORS: 3
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 2 AUTO ON: no
MAX PPs per VG: 32768 MAX PVs: 1024
LTG size (Dynamic): 512 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: non-relocatable
MIRROR POOL STRICT: super
PV RESTRICTION: none INFINITE RETRY: no
DISK BLOCK SIZE: 512 CRITICAL VG: yes
FS SYNC OPTION: no
 
Jess# lsmp -A syncglvmvg
VOLUME GROUP: syncglvmvg Mirror Pool Super Strict: yes
 
MIRROR POOL: glvmMP01 Mirroring Mode: SYNC
MIRROR POOL: glvmMP02 Mirroring Mode: SYNC
 
Jess# lsrpvclient -H
# RPV Client Physical Volume Identifier Remote Site
# -----------------------------------------------------------
hdisk1 00f92db138df5181 Chicago
 
Jess# lsrpvserver -H
# RPV Server Physical Volume Identifier Physical Volume
# -----------------------------------------------------------
rpvserver0 00f92db138ef5aee hdisk9
 
Jess# gmvgstat
GMVG Name PVs RPVs Tot Vols St Vols Total PPs Stale PPs Sync
--------------- ---- ---- -------- -------- ---------- ---------- ----
syncglvmvg 1 1 2 0 2542 0 100%
Secondary node and site configuration
The secondary node, Ellie, has both an rpvserver (rpvserver0) and an rpvclient (hdisk0) created, and the scalable GMVG syncglvmvg is offline. Although the GMVG and mirror pools exist, they are not active on the secondary node, so their status is not known. All of this is shown in Example 7-6.
Example 7-6 Synchronous GLVM secondary site configuration
Ellie# lspv
 
repossiteB 00f92db1002568b2 caavg_private active
hdisk12 00f92db16aa2703a rootvg active
hdisk3 00f92db138df5181 syncglvmvg
hdisk2 00f92db17837528a None
 
Ellie#lsvg syncglvmvg
0516-010 : Volume group must be varied on; use varyonvg command.
#
Ellie# lsmp -A syncglvmvg
0516-010 lsmp: Volume group must be varied on; use varyonvg command.
 
Ellie# lsrpvserver -H
# RPV Server Physical Volume Identifier Physical Volume
# -----------------------------------------------------------
rpvserver0 00f92db138df5181 hdisk3
 
# lsrpvclient -H
# RPV Client Physical Volume Identifier Remote Site
# -----------------------------------------------------------
hdisk0 00f92db138ef5aee Unknown
 
# gmvgstat
GMVG Name PVs RPVs Tot Vols St Vols Total PPs Stale PPs Sync
--------------- ---- ---- -------- -------- ---------- ---------- ----
gmvgstat: Failed to obtain geographically mirrored volume group information using lsglvm -v.
Completing the cluster configuration
To complete the configuration, perform the following steps.
Create any additional resources that are required. The most common ones are as follows:
 – Site-specific service IPs
 – Application controllers
Add the additional resources to the RG.
Create all logical volumes and file systems that are required in the GMVG syncglvmvg.
Synchronize the cluster.
 
Important: This procedure does not configure the GMVG on the remote node; that action must be done manually.
When creating logical volumes, ensure that two copies are created with the superstrict allocation policy and the mirror pools. This task should be completed on the node on which the GMVG is active. In our case, it is node Jess. An example of creating a mirrored logical volume by running smitty mklv is shown in Example 7-7, and a command-line sketch follows the example. Repeat as needed for every logical volume, and add any file systems that use the logical volumes, if applicable.
Example 7-7 Creating a mirrored logical volume
                                Add a Logical Volume
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Logical volume NAME [synclv]
* VOLUME GROUP name syncglvmvg
* Number of LOGICAL PARTITIONS [10] #
PHYSICAL VOLUME names [] +
Logical volume TYPE [jfs2] +
POSITION on physical volume middle +
RANGE of physical volumes minimum +
MAXIMUM NUMBER of PHYSICAL VOLUMES [2] #
to use for allocation
Number of COPIES of each logical 2 +
partition
Mirror Write Consistency? active +
Allocate each logical partition copy superstrict +
on a SEPARATE physical volume?
RELOCATE the logical volume during yes +
reorganization?
Logical volume LABEL []
MAXIMUM NUMBER of LOGICAL PARTITIONS [512] #
Enable BAD BLOCK relocation? yes +
SCHEDULING POLICY for writing/reading parallel write/sequen> +
logical partition copies
Enable WRITE VERIFY? no +
File containing ALLOCATION MAP []
Stripe Size? [Not Striped] +
Serialize IO? no +
Mirror Pool for First Copy glvmMP01 +
Mirror Pool for Second Copy glvmMP02 +
Mirror Pool for Third Copy +
Infinite Retry Option no
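The same logical volume can also be created from the command line. The following is a hedged sketch that mirrors the SMIT entries in Example 7-7; verify the exact flag spellings against the mklv documentation for your level of AIX:

# Assumed flags: -c copies, -s s superstrict allocation, -u upper bound of PVs,
# and -p copyN= mirror pool assignment for each copy (values match Example 7-7)
mklv -y synclv -t jfs2 -c 2 -s s -u 2 \
     -p copy1=glvmMP01 -p copy2=glvmMP02 syncglvmvg 10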
After all logical volumes are created, it is necessary to take the VG offline on the primary node, and then reimport the VG on the standby node by performing the following steps:
On primary node Jess:
a. Deactivate the GMVG by running varyoffvg syncglvmvg.
b. Deactivate the rpvclient, hdisk1 by running rmdev -l hdisk1.
c. Activate the rpvserver, rpvserver0 by running mkdev -l rpvserver0.
On standby node Ellie:
a. Deactivate the rpvserver, rpvserver0 by running rmdev -l rpvserver0.
b. Activate rpvclient, hdisk0 by running mkdev -l hdisk0.
c. Import the new VG information by running importvg -L syncglvmvg hdisk0.
d. Activate the VG by running varyonvg syncglvmvg.
e. Verify the GMVG information by running lsvg -l syncglvmvg.
After you are satisfied that the GMVG information is correct, reverse these procedures to return the GMVG back to the primary node as follows:
On standby node Ellie:
a. Deactivate the VG by running varyoffvg syncglvmvg.
b. Deactivate the rpvclient, hdisk0 by running rmdev -l hdisk0.
c. Activate the rpvserver by running mkdev -l rpvserver0.
On primary node Jess:
a. Deactivate the rpvserver, rpvserver0 by running rmdev -l rpvserver0.
b. Activate the rpvclient, hdisk1 by running mkdev -l hdisk1.
c. Activate the GMVG by running varyonvg syncglvmvg.
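Collected into a plain command sequence (device and VG names are from this example), the hand-over to the standby node and the return to the primary node look like the following sketch:

# On Jess (primary): release the GMVG and expose the local disk to Ellie
varyoffvg syncglvmvg
rmdev -l hdisk1                  # put the rpvclient into the Defined state
mkdev -l rpvserver0              # put the rpvserver into the Available state

# On Ellie (standby): activate the rpvclient, reimport, and verify
rmdev -l rpvserver0
mkdev -l hdisk0
importvg -L syncglvmvg hdisk0
varyonvg syncglvmvg
lsvg -l syncglvmvg

# Reverse the sequence to return the GMVG to Jess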
Run a cluster verification, and if there are no errors, then the cluster can be tested.
7.3.3 Asynchronous configuration
Before attempting to use the GLVM wizard, you must complete all the prerequisites that are described in 7.2, “Prerequisites” on page 246. Our scenario consists of a basic two-site configuration, with one node at each site, and an XD_data network with a persistent alias defined in the configuration, as shown in Example 7-2 on page 248.
To begin, run smitty sysmirror and select Cluster Applications and Resources → Make Applications Highly Available (Use Smart Assists) → GLVM Configuration Assistant → Configure Asynchronous GMVG.
The menu that is shown in Figure 7-5 on page 257 opens. If it does not, then the previously mentioned prerequisites have not been met, and you see a message similar to the one that is shown in Figure 7-6 on page 257.
                  Create GMVG with Asynchronous Mirror Pools
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Enter the name of the VG [asyncglvm]
* Select disks to be mirrored from the local site (00f92db138ef5aee) +
* Select disks to be mirrored from the remote site (00f92db138df5181) +
* Enter the size of the ASYNC cache [2] #
Figure 7-5 Asynchronous GLVM wizard menu
                            COMMAND STATUS
 
Command: OK stdout: yes stderr: no
 
Before command completion, additional instructions may appear below.
 
No nodes are currently defined for the cluster.
 
Define at least one node, and ideally all nodes, prior to defining
the repository disk/disks and cluster IP address. It is important that all
nodes in the cluster have access to the repository disk or respective
repository disks(in case of a linked cluster) and can be reached via the
cluster IP addresses, therefore you should define the nodes in the cluster first
Figure 7-6 Async GLVM prerequisites not met
Enter the field values as follows:
Enter the Name of the VG
Enter the name of the VG that you want to create as a geographically mirrored VG. When the GLVM Configuration Assistant creates the RG, the RG name is the VG name with _RG appended. For example, if the VG name is asyncglvmvg, the RG name is asyncglvmvg_RG.
Select disks to be mirrored from the local site
Press F4 to display a list of available disks. Press F7 to select the disks that you want to geographically mirror from the local site. After all disks are selected, press Enter.
Select disks to be mirrored from the remote site
Press F4 to display a list of available disks. Press F7 to select the disks that you want to geographically mirror from the remote site. After all disks are selected, press Enter.
Enter the size of the ASYNC cache
This is the aio_cache_lv, and one is created at each site. Enter the size as a number of physical partitions (PPs) of the VG. The value that you enter depends on the load of the applications and the bandwidth that is available on the network. You might need to enter different values for peak workload optimization.
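As a simple worked example, the VG in this scenario uses an 8 MB PP size, so the value of 2 that is entered in Figure 7-5 creates a 2 PP x 8 MB = 16 MB aio_cache logical volume at each site (glvm_cache_LV01 and glvm_cache_LV02 in Example 7-10). A production workload typically needs a much larger cache; size it from the peak write rate and how far behind you are willing to let the remote copy fall.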
Node Jess uses local disk hdisk9, and node Ellie uses local disk hdisk3 for the GMVG. Each one is associated with an rpvserver, which in turn is linked to its respective rpvclient. The rpvclients become hdisk1 on Jess and hdisk0 on Ellie, as shown in Figure 7-2 on page 247. The rpvclients acquire these disk names because they are the first hdisk names that are available on each node. The output from running the asynchronous GLVM wizard is shown in Example 7-8.
Example 7-8 Asynchronous GLVM wizard output
Extracting the names for sites.
Extracting the name for nodes from both local and remote sites.
Creating RPVServers on all nodes of local site.
 
Creating RPVServers on node rpvserver0 Available
Creating RPVServers on all nodes of remote site.
Creating RPVServers on node rpvserver0 Available
Creating RPVServers on node rpvserver0 Available
Creating RPVClients on all nodes of local site.
Creating RPVClients on node hdisk1 Available
Creating RPVClients on all nodes of remote site.
Creating RPVClients on node hdisk0 Available
Changing RPVServers and RPVClients to defined and available state accordingly
to facilitate the creation of VG.
Changing RPVServer rpvserver0 Defined
 
Changing RPVClient hdisk0 Defined
Generating Unique Names for Mirror pools and Resource Group.
Generating resource group (RG) name.
Unique names generated.
Creating VG asyncglvmvg
Creating first mirror pool
Extending the VG to RPVClient disks and creating second mirror pool
 
Creating first ASYNC cache LV glvm_cache_LV01
 
Creating second ASYNC cache LV glvm_cache_LV02
 
Varying on volume group:
Setting attributes for 0516-1804 chvg: The quorum change takes effect immediately.
Varying off volume group:
Changing RPVClient hdisk1 Defined
Changing RPVServer rpvserver0 Defined
Changing RPVServer rpvserver0 Available
 
Importing the VG
Changing RPVClient hdisk0 Available
Importing the VG synclvodm: No logical volumes in volume group asyncglvmvg.
asyncglvmvg
 
Varying on volume group:
Setting attributes for 0516-1804 chvg: The quorum change takes effect immediately.
 
Varying off volume group:
Changing RPVClient hdisk0 Defined
Definition of VG is available on all the nodes of the cluster.
Changing RPVServer rpvserver0 Defined
Creating a resource group.
Adding VG Verifying and synchronising the cluster configuration ...
 
Verification to be performed on the following:
Cluster Topology
Cluster Resources
 
Retrieving data from available cluster nodes. This could take a few minutes.
 
Start data collection on node Jess
Start data collection on node Ellie
Collector on node Jess completed
Collector on node Ellie completed
Data collection complete
WARNING: No backup repository disk is UP and not already part of a VG for nodes:
- Jess
- Ellie
 
Completed 10 percent of the verification checks
 
WARNING: There are IP labels known to PowerHA SystemMirror and not listed in file /usr/es/sbin/cluster/etc/clhosts.client on node: Jess. Clverify can automatically populate this file to be used on a client node, if executed in auto-corrective mode.
WARNING: There are IP labels known to PowerHA SystemMirror and not listed in file /usr/es/sbin/cluster/etc/clhosts.client on node: Ellie. Clverify can automatically populate this file to be used on a client node, if executed in auto-corrective mode.
WARNING: An XD_data network has been defined, but no additional
XD heartbeat network is defined. It is strongly recommended that
an XD_ip network be configured in order to help prevent
cluster partitioning if the XD_data network fails. Cluster partitioning
may lead to data corruption for your replicated resources.
Completed 30 percent of the verification checks
This cluster uses Unicast heartbeat
Completed 40 percent of the verification checks
Completed 50 percent of the verification checks
Completed 60 percent of the verification checks
Completed 70 percent of the verification checks
 
Verifying XD Solutions...
 
Completed 80 percent of the verification checks
Completed 90 percent of the verification checks
Verifying additional prerequisites for Dynamic Reconfiguration...
...completed.
 
Committing any changes, as required, to all available nodes...
Adding any necessary PowerHA SystemMirror for AIX entries to /etc/inittab and
/etc/rc.net for IP address Takeover on node Jess.
Checking for any added or removed nodes
1 tunable updated on cluster GLVMdemocluster.
Adding any necessary PowerHA SystemMirror for AIX entries to /etc/inittab and
/etc/rc.net for IP address Takeover on node Ellie.
Updating Split Merge policies
 
Verification has completed normally.
 
clsnapshot: Creating file /usr/es/sbin/cluster/snapshots/active.0.odm.
 
clsnapshot: Succeeded creating Cluster Snapshot: active.0
Attempting to sync user mirror groups (if any)...
Attempting to refresh user mirror groups (if any)...
 
cldare: Requesting a refresh of the Cluster Manager...
00026|NODE|Jess|VERIFY|PASSED|Fri Nov 18 11:20:38|A cluster configuration verification operation PASSED on node "Jess". Detailed output can be found in "/var/hacmp/clverify/clverify.log" on that node.
 
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_RP_RUNNING
PowerHA SystemMirror Cluster Manager current state is: ST_BARRIER
PowerHA SystemMirror Cluster Manager current state is: ST_RP_RUNNING.
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_BARRIER..
PowerHA SystemMirror Cluster Manager current state is: ST_RP_RUNNING
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_BARRIER..
PowerHA SystemMirror Cluster Manager current state is: ST_UNSTABLE.
PowerHA SystemMirror Cluster Manager current state is: ST_STABLE.....completed.
Asynchronous cluster configuration
After the GLVM wizard completes successfully, the cluster RG is configured as shown in Example 7-9.
Example 7-9 Asynchronous GLVM resource group
Resource Group Name asyncglvmvg_RG
Participating Node Name(s) Jess Ellie
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Never Fallback
Site Relationship Prefer Primary Site
Node Priority
Service IP Label
Filesystems ALL
Filesystems Consistency Check fsck
Filesystems Recovery Method sequential
Filesystems/Directories to be exported (NFSv3)
Filesystems/Directories to be exported (NFSv4)
Filesystems to be NFS mounted
Network For NFS Mount
Filesystem/Directory for NFSv4 Stable Storage
Volume Groups asyncglvmvg
Concurrent Volume Groups
Use forced varyon for volume groups, if necessary true
Disks
Raw Disks
Disk Error Management? no
GMVG Replicated Resources asyncglvmvg
GMD Replicated Resources
PPRC Replicated Resources
SVC PPRC Replicated Resources
EMC SRDF? Replicated Resources
Hitachi TrueCopy? Replicated Resources
Generic XD Replicated Resources
AIX Connections Services
AIX Fast Connect Services
Shared Tape Resources
Application Servers
Highly Available Communication Links
Primary Workload Manager Class
Secondary Workload Manager Class
Delayed Fallback Timer
Miscellaneous Data
Automatically Import Volume Groups false
Inactive Takeover
SSA Disk Fencing false
Filesystems mounted before IP configured false
WPAR Name
Primary node and site configuration
The primary node, Jess, has both an rpvserver (rpvserver0) and an rpvclient (hdisk1) created, and the scalable GMVG asyncglvmvg is active. The node also has two mirror pools, glvmMP01 and glvmMP02. Two aio_cache logical volumes, glvm_cache_LV01 and glvm_cache_LV02, are also created. All of this is shown in Example 7-10.
Example 7-10 Asynchronous GLVM primary site configuration
Jess# lspv
hdisk0 00f92db16aa2703a rootvg active
repossiteA 00f92db10031b9e9 caavg_private active
hdisk9 00f92db138ef5aee asyncglvmvg active
hdisk10 00f92db17835e777 None
hdisk1 00f92db138df5181 asyncglvmvg active
 
Jess# lsvg asyncglvmvg
 
VOLUME GROUP: asyncglvmvg VG IDENTIFIER: 00f92db100004c00000001587873d400
VG STATE: active PP SIZE: 8 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 2542 (20336 megabytes)
MAX LVs: 256 FREE PPs: 2538 (20304 megabytes)
LVs: 0 USED PPs: 0 (0 megabytes)
OPEN LVs: 0 QUORUM: 1 (Disabled)
TOTAL PVs: 2 VG DESCRIPTORS: 3
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 2 AUTO ON: no
MAX PPs per VG: 32768 MAX PVs: 1024
LTG size (Dynamic): 512 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: non-relocatable
MIRROR POOL STRICT: super
PV RESTRICTION: none INFINITE RETRY: no
DISK BLOCK SIZE: 512 CRITICAL VG: yes
FS SYNC OPTION: no
 
Jess# lsvg -l asyncglvm
asyncglvm:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
glvm_cache_LV01 aio_cache 2 2 1 open/syncd N/A
glvm_cache_LV02 aio_cache 2 2 1 closed/syncd N/A
 
Jess# lsmp -A asyncglvm
VOLUME GROUP: asyncglvm Mirror Pool Super Strict: yes
 
MIRROR POOL: glvmMP01 Mirroring Mode: ASYNC
ASYNC MIRROR STATE: inactive ASYNC CACHE LV: glvm_cache_LV02
ASYNC CACHE VALID: yes ASYNC CACHE EMPTY: yes
ASYNC CACHE HWM: 80 ASYNC DATA DIVERGED: no
 
MIRROR POOL: glvmMP02 Mirroring Mode: ASYNC
ASYNC MIRROR STATE: active ASYNC CACHE LV: glvm_cache_LV01
ASYNC CACHE VALID: yes ASYNC CACHE EMPTY: no
ASYNC CACHE HWM: 80 ASYNC DATA DIVERGED: no
 
Jess# lsrpvclient -H
# RPV Client Physical Volume Identifier Remote Site
# -----------------------------------------------------------
hdisk1 00f92db138df5181 Chicago
 
Jess# lsrpvserver -H
# RPV Server Physical Volume Identifier Physical Volume
# -----------------------------------------------------------
rpvserver0 00f92db138ef5aee hdisk9
 
Jess# gmvgstat
GMVG Name PVs RPVs Tot Vols St Vols Total PPs Stale PPs Sync
--------------- ---- ---- -------- -------- ---------- ---------- ----
asyncglvmvg 1 1 2 0 2542 0 100%
Secondary node and site configuration
The secondary node, Ellie, has both an rpvserver (rpvserver0) and an rpvclient (hdisk0) created, and the scalable GMVG asyncglvmvg is offline. Although the GMVG and mirror pools exist, they are not active on the secondary node, and their status is not known. All of this is shown in Example 7-11 on page 263.
Example 7-11 Asynchronous GLVM secondary site configuration
Ellie# lspv
 
repossiteB 00f92db1002568b2 caavg_private active
hdisk12 00f92db16aa2703a rootvg active
hdisk3 00f92db138df5181 asyncglvmvg
hdisk2 00f92db17837528a None
 
Ellie#lsvg asyncglvmvg
0516-010 : Volume group must be varied on; use varyonvg command.
#
Ellie# lsmp -A asyncglvmvg
0516-010 lsmp: Volume group must be varied on; use varyonvg command.
 
Ellie# lsrpvserver -H
# RPV Server Physical Volume Identifier Physical Volume
# -----------------------------------------------------------
rpvserver0 00f92db138df5181 hdisk3
 
# lsrpvclient -H
# RPV Client Physical Volume Identifier Remote Site
# -----------------------------------------------------------
hdisk0 00f92db138ef5aee Unknown
 
# gmvgstat
GMVG Name PVs RPVs Tot Vols St Vols Total PPs Stale PPs Sync
--------------- ---- ---- -------- -------- ---------- ---------- ----
gmvgstat: Failed to obtain geographically mirrored volume group information using lsglvm -v.
Completing the cluster configuration
To complete the configuration, perform the following steps.
Create any additional resources that are required. The most common ones are as follows:
 – Site-specific service IPs
 – Application controllers
Add the additional resources to the RG.
Create all the logical volumes and file systems that are required in the GMVG asyncglvmvg.
Synchronize the cluster.
 
Important: This procedure does not configure the GMVG on the remote node. That procedure must be done manually.
When creating logical volumes, ensure that two copies are created with a superstrict allocation policy and the mirror pools. This task should be completed on the node on which the GMVG is active. In our case, it is node Jess. An example of creating a mirrored logical volume by running smitty mklv is shown in Example 7-12 (a command-line equivalent is sketched after Example 7-7). Repeat as needed for every logical volume, and add any file systems that use the logical volumes, if applicable.
Example 7-12 Creating a mirrored logical volume
                                Add a Logical Volume
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Logical volume NAME [asynclv]
* VOLUME GROUP name asyncglvmvg
* Number of LOGICAL PARTITIONS [20] #
PHYSICAL VOLUME names [] +
Logical volume TYPE [jfs2] +
POSITION on physical volume middle +
RANGE of physical volumes minimum +
MAXIMUM NUMBER of PHYSICAL VOLUMES [2] #
to use for allocation
Number of COPIES of each logical 2 +
partition
Mirror Write Consistency? active +
Allocate each logical partition copy superstrict +
on a SEPARATE physical volume?
RELOCATE the logical volume during yes +
reorganization?
Logical volume LABEL []
MAXIMUM NUMBER of LOGICAL PARTITIONS [512] #
Enable BAD BLOCK relocation? yes +
SCHEDULING POLICY for writing/reading parallel write/sequen> +
logical partition copies
Enable WRITE VERIFY? no +
File containing ALLOCATION MAP []
Stripe Size? [Not Striped] +
Serialize IO? no +
Mirror Pool for First Copy glvmMP01 +
Mirror Pool for Second Copy glvmMP02 +
Mirror Pool for Third Copy +
Infinite Retry Option no
After all logical volumes are created, it is necessary to take the VG offline on the primary node and then reimport the VG on the standby node by completing the following steps:
On primary node Jess:
a. Deactivate the GMVG by running varyoffvg asyncglvmvg.
b. Deactivate the rpvclient, hdisk1 by running rmdev -l hdisk1.
c. Activate the rpvserver, rpvserver0 by running mkdev -l rpvserver0.
On standby node Ellie:
a. Deactivate the rpvserver, rpvserver0 by running rmdev -l rpvserver0.
b. Activate rpvclient, hdisk0 by running mkdev -l hdisk0.
c. Import new VG information by running importvg -L asyncglvmvg hdisk0.
d. Activate the VG by running varyonvg asyncglvmvg.
e. Verify the GMVG information by running lsvg -l asyncglvmvg.
After you are satisfied that the GMVG information is correct, reverse these procedures to return the GMVG back to the primary node:
On standby node Ellie:
a. Deactivate the VG by running varyoffvg asyncglvmvg.
b. Deactivate the rpvclient, hdisk0 by running rmdev -l hdisk0.
c. Activate the rpvserver by running mkdev -l rpvserver0.
On primary node Jess:
a. Deactivate the rpvserver, rpvserver0 by running rmdev -l rpvserver0.
b. Activate the rpvclient, hdisk1 by running mkdev -l hdisk1.
c. Activate the GMVG by running varyonvg asyncglvmvg.
Run a cluster verification, and if there are no errors, then the cluster can be tested.