IBM PowerVC for managing IBM PowerVM
This chapter describes the general setup of IBM Power Virtualization Center (IBM PowerVC) for managing PowerVM. IBM PowerVC Version 1.3.1 is available in two installable offerings: IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager. The most notable differences between the two offerings are described where they appear throughout this chapter. The following sections explain the discovery and configuration of the managed objects. They describe the verification of the environment and the operations that can be performed on virtual machines (VMs) and images.
5.1 IBM PowerVC graphical user interface
This section presents the IBM PowerVC graphical user interface (GUI) and explains how to access functions from the IBM PowerVC home window. The management functions of IBM PowerVC are grouped into classes that can be accessed from several locations. In all IBM PowerVC windows, you can find hot links to several areas and components:
Environment configuration, messages, DRO events, and Project selection are found at the top of the IBM PowerVC window.
Management functions that relate to VM images, VMs, hosts, networks, and storage are found in the column of icons at the left of the window (which also includes a link to the home window).
The Requests management function is available only on the IBM Cloud PowerVC Manager home window.
The available functions differ depending on which IBM PowerVC offering is installed. The IBM PowerVC Standard Edition home window is shown in Figure 5-1. The hot links are highlighted in red in the illustration.
Figure 5-1 IBM PowerVC Standard Edition home window access to a group of functions
In all IBM PowerVC windows, most of the icons and text are hot links to groups of functions. Several ways exist to access a group of functions. The blue arrows in Figure 5-2 show, for example, the two hot links that can be used from the home window to access the VM management functions.
Figure 5-2 IBM Cloud PowerVC Manager home window access to a group of functions
5.2 Introduction to the IBM PowerVC setup
Before you can perform tasks in IBM PowerVC, you must discover and register the resources that you want to manage. You can register storage systems and hosts, and you can create networks to use when you deploy images. When you register resources with IBM PowerVC, you make them available to the management functions of IBM PowerVC (such as deploying a VM on a discovered host or storing images of captured VMs).
This discovery and registration mechanism is the key to the smooth deployment of IBM PowerVC in an existing environment. For example, a host might already run several partitions when you deploy IBM PowerVC. You first register the host without registering any of the hosted partitions. All IBM PowerVC functions that relate to host management are available to you, but no objects exist to which you can apply the functions for managing partitions. You can then decide whether you want to manage all of the existing partitions with IBM PowerVC. If you prefer a progressive adoption plan instead, start by managing only a subset of these partitions.
Ensure that the following preliminary steps are complete before you proceed to 5.4, “Connecting to IBM PowerVC” on page 103:
1. Configure the IBM Power Systems environment to be managed through the Hardware Management Console (HMC) or PowerVM Novalink.
2. Set up the users’ accounts with an administrator role on IBM PowerVC, as described in 3.8, “Planning users and groups” on page 61.
3. Set up the host name, IP address, and an operator user ID for the HMC or PowerVM Novalink connection.
5.3 Managing resources outside of IBM PowerVC
When you switch from using IBM PowerVC to manage your resources to accessing the managed resource directly, you might see unexpected or adverse results in IBM PowerVC.
As you manage resources in IBM PowerVC, you might want to perform certain operations directly on the resource. For example, you are managing a VM with IBM PowerVC, but you use another method to stop the VM. In this case, IBM PowerVC might not be notified that the VM was stopped and might not immediately reflect the updated status of the VM. IBM PowerVC typically polls a resource to obtain an update the next time that you use IBM PowerVC to manage the resource. As a result, operations from IBM PowerVC might fail until the state of the VM in IBM PowerVC is the same as the state on the VM itself.
When you perform a management operation outside of IBM PowerVC, such as adding or deleting resources, the action can adversely affect the operation of IBM PowerVC and the data center. For example, you might delete a VM by using a platform manager for the VM. The VM goes into the Error state, and you must take the additional step of deleting the VM in IBM PowerVC. Similar results can occur when you remove a disk or network device from a VM that IBM PowerVC is managing.
For appropriate results, use IBM PowerVC to perform management tasks on the VMs and associated resources in your IBM PowerVC environment.
5.4 Connecting to IBM PowerVC
After IBM PowerVC is installed and started on a Linux partition, you can connect to the
IBM PowerVC management GUI by completing the following steps:
1. Open a web browser on your workstation and point it to the IBM PowerVC address:
https://<ipaddress or hostname>/
2. Log in to IBM PowerVC as an administrative user (Figure 5-3). The first time that you use IBM PowerVC, this administrative user is root. As a preferred practice, after the initial setup of IBM PowerVC, define other user IDs and passwords rather than using the root user. For more information about how to add, modify, or remove users, see 3.8.1, “User management” on page 61.
Figure 5-3 IBM PowerVC Login window
3. Now, you see the IBM PowerVC home window.
 
Important: Ensure that your environment meets all of the hardware and software requirements and that it is configured correctly before you start to work with IBM PowerVC and register your resources.
4. As a preferred practice, your first action is to check the IBM PowerVC installation by clicking Verify Environment, as shown in Figure 5-4.
Figure 5-4 Initial system check
Then, you can click View Results to verify that IBM PowerVC is installed correctly.
5.5 Host setup
Setting up the communication between managed hosts and IBM PowerVC varies, depending on whether HMC or PowerVM Novalink is used to communicate with hosts. This section covers the steps for using HMC to communicate with hosts. Differences that apply when PowerVM Novalink is used are noted where relevant.
 
Note: If the host system has PowerVM Novalink installed, the system can still be added for HMC management by following normal procedures. If the host has PowerVM Novalink installed and HMC connected, the management type from IBM PowerVC for this host must always be PowerVM Novalink. HMC can be used to manage the hardware and firmware on the host system.
First, enable IBM PowerVC to communicate with the HMCs in the environment to manage the host systems that run the IBM PowerVM hypervisor. After the hosts, storage, and networks are configured correctly in the IBM PowerVC domain, you can add a VM.
If PowerVM Novalink is used to manage hosts, the communication is established between PowerVM Novalink and IBM PowerVC.
To add hosts for IBM PowerVC management, complete the following steps:
1. On the home window (Figure 5-4 on page 104), click Add Hosts.
2. If the host to be added is managed by HMC, follow the instructions in step a. If PowerVM Novalink is used for management, follow the instructions in step b on page 106.
a. In the Add Hosts window (Figure 5-5), select the Host Management type as HMC and provide the name and credentials for the HMC. In the Display name field, enter the string that is used by IBM PowerVC to refer to this HMC in all of its windows. Click Add Connection. IBM PowerVC connects to the HMC and reads the host information.
Figure 5-5 HMC connection information
The user ID and password can be the default HMC hscroot administrator credentials, or any other HMC user ID with the hscsuperadmin role that you created to manage the HMC. If you use an HMC user other than hscroot, allow remote access to the web in that user's profile settings. Otherwise, IBM PowerVC reports an authentication failure when you add the HMC connection.
 
Note: As a preferred practice, do not specify hscroot for the user ID. Instead, create a user ID on the HMC with the hscsuperadmin role and use it for managing the HMC from IBM PowerVC. This approach makes it possible to distinguish actions that a user initiated while logged in to the HMC from actions that were initiated from the IBM PowerVC management station. Also, if a security policy requires that the hscroot password is changed regularly, using a different user ID for the IBM PowerVC credentials prevents IBM PowerVC from losing its connection to the HMC after a system administrator changes the hscroot password.
b. In the Add Hosts window (Figure 5-6), select the Host Management type as NovaLink and provide the name and credentials for the PowerVM Novalink partition. In the Display name field, enter the string that is used by IBM PowerVC to refer to this PowerVM Novalink partition in all of its windows. Click Add Connection. IBM PowerVC connects to the PowerVM Novalink partition and reads the host information.
Figure 5-6 PowerVM Novalink connection information
The user ID specified for PowerVM Novalink connection must be a member of the pvm_admin group or have both SSH access and sudoers NOPASSWD capabilities.
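As a sketch, either form of access can be granted on the PowerVM Novalink partition as shown below. The user name pvcadmin is hypothetical, and the pvm_admin group is assumed to exist (it is created by the PowerVM Novalink installation):

```shell
# Option 1 (hypothetical user 'pvcadmin'): add the user to the pvm_admin group.
# Run as root on the PowerVM Novalink partition:
#   usermod -a -G pvm_admin pvcadmin

# Option 2: SSH access plus passwordless sudo. Stage the sudoers drop-in in a
# temporary file first so that it can be validated before it is installed:
echo 'pvcadmin ALL=(ALL) NOPASSWD: ALL' > /tmp/pvcadmin.sudoers
# Then, as root on the Novalink partition, validate and install it with
# restrictive permissions:
#   visudo -cf /tmp/pvcadmin.sudoers && \
#     install -m 0440 /tmp/pvcadmin.sudoers /etc/sudoers.d/pvcadmin
```

Staging and validating the file with visudo -cf before installing it avoids locking yourself out with a malformed sudoers entry.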
 
Note: Although this is not preferable, HMC managed systems allow the same host to be added to more than one IBM PowerVC system. PowerVM Novalink managed hosts can be added to only one IBM PowerVC system. If you try to add a PowerVM Novalink managed host to another IBM PowerVC system, a message is displayed that states that the host is already managed by another IBM PowerVC installation and will be removed from that system.
3. IBM PowerVC prompts you to accept the HMC X.509 certificate, or if PowerVM Novalink is used, the PowerVM Novalink host SSH key. Review the message details to determine whether you are willing to override this warning. If you are willing to trust the certificate or accept the SSH key, click Connect to continue.
4. If the host being added is managed by HMC, follow the instructions in step a. If the host being added is managed by PowerVM Novalink, follow the instructions in step b.
a. Next, you see information about all hosts that are managed by that HMC. Figure 5-7 shows the window for an HMC that manages four IBM POWER servers. To choose the hosts to manage with IBM PowerVC, click their names. By holding down the Ctrl key while you click the host names, you can select several host names simultaneously.
When the HMC manages several hosts, you can use the filter field to show only the host names that contain a specific character string.
Figure 5-7 IBM PowerVC Add Hosts window
b. After selecting Add Host for a new system that is managed by PowerVM Novalink, IBM PowerVC displays three messages in the lower right corner with information about the IBM PowerVC image transfer and installation on the PowerVM Novalink host.
5. After a few seconds, the home window is updated and it shows the number of added objects. Figure 5-8 shows that four hosts were added.
Figure 5-8 Managed hosts
6. Click the Hosts tab to open a Hosts window that is similar to Figure 5-9, which shows the status of the discovered hosts and the management connection to this host.
Figure 5-9 IBM PowerVC shows the managed hosts
Add hosts by clicking Add Host. The windows to add a host are the same as the windows in step 2 on page 105. Select one host from the list to activate functions that can be performed for this host.
7. Click one host name to see the detailed host information, as shown in Figure 5-10. Operations that can be performed for this host are shown here as well.
After hosts, storage, and networks are configured correctly in the IBM PowerVC domain, you can add a VM by expanding the Virtual Machines section. VMs that are hosted by this host can be added to IBM PowerVC by selecting Manage Existing under the Virtual Machines section.
Figure 5-10 Host information
The Host details view contains information about host CPU usage history for the selected time range, as shown in Figure 5-11.
Figure 5-11 Host CPU usage history
5.5.1 Host maintenance mode
Before you perform maintenance activities on a host, such as updating the operating system or firmware or replacing hardware, you should move the host into maintenance mode. Maintenance mode is sometimes referred to as one-click evacuation.
IBM PowerVC can be used to put IBM PowerVC managed hosts into maintenance mode. Hosts can be put into maintenance mode from the IBM PowerVC Hosts view by selecting the host from the list and selecting Enter Maintenance Mode, as shown in Figure 5-12.
Figure 5-12 Place the host into maintenance mode
Before the host can be put into maintenance mode, the following conditions must apply:
The host’s hypervisor must be in the Operating state.
If VMs on the host are to be migrated to other servers, the following conditions apply:
 – The hypervisor must be licensed for Live Mobility.
 – The VMs on the host cannot be in the error, paused, or building states.
 – All running VMs must be in the OK state and the Resource Monitoring and Control (RMC) connection status must be Active.
 – All requirements for live migration must be met.
If the request was made to migrate active VMs when entering maintenance mode, the request fails if both of the following conditions are true:
 – There is a VM on the host that is a member of a collocation rule that specifies affinity and has multiple members.
 – The collocation rule has a member that is already undergoing a migration or is being remote restarted.
The following process describes what happens when the host is put into maintenance mode:
If requested, all the VMs on the host are migrated to a different host. If there are any errors during migration, the maintenance operation fails and enters the error state until an administrator resolves the problem. By default, IBM PowerVC migrates the VMs that are allocated the most memory first.
After maintenance mode is requested, you cannot perform any actions on the host’s VMs except stop, delete, or live migrate, and you cannot deploy or migrate VMs to the host. Therefore, this host is not listed in the IBM PowerVC user interface selection lists for any actions. You can perform maintenance activities on the host.
When a host is ready for use, exit the maintenance mode from the IBM PowerVC user interface. When the host is available in IBM PowerVC again, you must manually move VMs to the host. The VMs that were previously on this host are not automatically moved back to this host.
To put the host into maintenance mode, complete the following steps:
1. Select the host that you want to put into maintenance mode and select Enter Maintenance Mode.
2. If you want to migrate the VMs to other hosts, select Migrate active virtual machines to other hosts and choose a destination for the migration, as shown in Figure 5-13. This option is unavailable if no hosts are available for the migration.
Figure 5-13 Virtual machine migration
3. Click OK.
After maintenance mode is requested, the host’s maintenance state is Entering while the VMs are migrated to another host, if requested. This status changes to On after the migration is complete and the host is fully in the maintenance state.
To remove a host from maintenance mode, select the host and select Exit Maintenance Mode.
After the host is brought out of maintenance mode, you can add VMs to the host. IBM PowerVC does not automatically migrate back the VMs that were running on the host before it was put into maintenance mode.
 
Note: You can edit the period after which the migration operation times out and the maintenance mode enters an error state by running the following commands:
/usr/bin/openstack-config --set /etc/nova/nova.conf DEFAULT prs_ha_timeout_seconds <duration_in_seconds>
service openstack-nova-ibm-ego-ha-service restart
The <duration_in_seconds> is the timeout period in seconds.
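For example, to raise the timeout to two hours, you might run the following commands on the IBM PowerVC management host. The value 7200 is illustrative, and the --get call simply reads the setting back to confirm it:

```shell
# Set the maintenance-mode migration timeout to 7200 seconds (2 hours):
/usr/bin/openstack-config --set /etc/nova/nova.conf DEFAULT prs_ha_timeout_seconds 7200
# Read the value back to confirm the change:
/usr/bin/openstack-config --get /etc/nova/nova.conf DEFAULT prs_ha_timeout_seconds
# Restart the service so that the new timeout takes effect:
service openstack-nova-ibm-ego-ha-service restart
```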
5.5.2 Host groups
You can use host groups to logically group hosts regardless of any features that they might have in common. For example, the hosts do not have to have the same architecture, network configuration, or storage. Host groups have the following features:
Every host must be in a host group.
VMs are kept within the host group.
Placement policies are associated with the host groups.
For example, we added a host group that is named ProdGroup, as shown in Figure 5-14. Open the Host Groups tab and click Create.
Figure 5-14 Host Groups window
A window opens, as shown in Figure 5-15. Enter the host group name, placement policy, and Dynamic Resource Optimizer (DRO) settings for the host group. Click Add to add hosts, and then click Create Host Group. A host group can have a host that is managed by the HMC, PowerVM Novalink, or both. For the placement policies that are supported by IBM PowerVC, see 3.4.2, “Placement policies” on page 33. For more information about DRO, see 2.3.1, “Dynamic Resource Optimizer” on page 12.
Figure 5-15 Create Host Group
5.6 Hardware Management Console management
Beginning with IBM PowerVC Version 1.2.3, users can add redundant HMCs for Power Systems servers. If one HMC fails, the user can change the HMC to one of the redundant HMCs.
5.6.1 Adding an HMC
With IBM PowerVC Version 1.2.3 or later, you can add redundant HMCs for Power System servers. To add an HMC, in the HMC Connections section in the Hosts window, click Add HMC, as shown in Figure 5-16. Enter the HMC host name or IP address, display name, user ID, and password. Click Add HMC Connection. The new HMC is added.
You also can click Remove HMC to remove an HMC.
Figure 5-16 Add HMC Connection
5.6.2 Changing the HMC credentials
If you want to change the credentials that are used by IBM PowerVC to access the HMC, open the Hosts window and click the HMC Connections tab. Select the row for the HMC that you want to work with, and then click Edit. A window opens (Figure 5-17) where you can specify another user ID, which must already be defined on the HMC with the hscsuperadmin role.
Figure 5-17 Change the HMC credentials
5.6.3 Changing the HMC
With IBM PowerVC Version 1.2.3 or later, you can add redundant HMCs for Power Systems servers. IBM PowerVC uses only one HMC for one server. If one HMC fails, you must change the management console to another HMC. As shown in Figure 5-18, on the Hosts window, select all of the servers that you want to change, click Edit Host Connection, select the HMC you want, and click OK.
Figure 5-18 Change the HMC
The management console of the Power System servers changes to the new HMC, as shown in Figure 5-19.
Figure 5-19 Select the new HMC for hosts
5.6.4 Change management connection, HMC, and PowerVM Novalink
You cannot use IBM PowerVC to change your hosts to be PowerVM Novalink managed if you are using shared processor pools or shared storage pools (SSPs). If you are using the DRO with Capacity on Demand (CoD), you can switch to PowerVM Novalink. DRO continues to monitor and adjust workloads, but cannot take advantage of CoD.
To prepare the system for PowerVM Novalink management, complete the following steps:
1. Install the PowerVM Novalink software on a system that is managed by an HMC.
 
Note: When IBM PowerVC recognizes that PowerVM Novalink is installed on the host, a warning is displayed and the host is put into maintenance mode. Any operations that are running are likely to fail.
2. From the IBM PowerVC user interface, open the Host window, select the host that you want to update, and then select Edit Host Connection.
3. For the Host management type, select NovaLink. Enter the appropriate information and click OK.
IBM PowerVC now recognizes the host as PowerVM Novalink managed and the host comes out of maintenance mode.
5.7 Storage and SAN fabric setup
When you use external storage area network (SAN) storage, you must prepare the storage controllers and Fibre Channel (FC) switches before they can be managed by IBM PowerVC.
IBM PowerVC needs management access to the storage controller. When you use user name and password authentication, the administrative user name and password for the storage controller must be set up. For IBM Storwize storage, another option is to use cryptographic key pairs. For instructions to generate and use key pairs, see the documentation for your device.
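A key pair of this kind can be generated on the IBM PowerVC management host as sketched below. The file name is illustrative, and uploading the public key to the Storwize user is done on the storage side (see your device documentation for that procedure):

```shell
# Remove any leftover key files so that ssh-keygen does not prompt:
rm -f powervc_storwize_key powervc_storwize_key.pub
# Generate an RSA key pair with no passphrase for PowerVC-to-Storwize
# authentication. The file name is a placeholder:
ssh-keygen -t rsa -b 2048 -N '' -f ./powervc_storwize_key -q
# Two files result: the private key (kept on the PowerVC host and supplied
# when you register the storage) and the public key (uploaded to the
# Storwize user on the storage side):
ls -l powervc_storwize_key powervc_storwize_key.pub
```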
To configure the storage controller and SAN switch, complete the following steps:
1. Configure the FC SAN fabric for the IBM PowerVC environment.
2. Connect the required FC ports that are owned by the Virtual I/O Server (VIOS) and the storage controllers to the SAN switches.
3. Set up the host names, IP addresses, and administrator user ID and password combination for the SAN switches.
4. Set up the host names, IP addresses, and the administrator user ID and password combination for the storage controllers.
5. Create volumes for the initial VMs that are to be imported (installed) to IBM PowerVC later.
 
Note: IBM PowerVC creates VMs from an image. No image is provided with
IBM PowerVC. Therefore, you must manually configure at least one initial partition from which you create this image. The storage volumes for this initial partition must be created manually. When IBM PowerVC creates more partitions, it also creates the storage volumes for them.
Note: For IBM PowerVC Version 1.2.2 and higher, you can import an image (that you created earlier) from storage into IBM PowerVC.
For more information about supported storage in IBM PowerVC Standard Edition, see 3.1.1, “Hardware and software requirements” on page 22.
5.7.1 Adding a storage controller to IBM PowerVC
To set up storage providers and the SAN fabric, complete the following steps:
1. To add a storage controller, click the Add Storage link on the IBM PowerVC home window, as shown in Figure 5-4 on page 104. If a storage provider is already defined, the icon differs slightly. Click the plus sign (+) to the right of Storage Providers, as shown in Figure 5-20.
Figure 5-20 Add extra storage providers
2. The window that is shown in Figure 5-21 requires this information:
 – Type: Five types are supported: DS8000, EMC VMAX, EMC VNX, Storwize, and IBM XIV Storage System. In this example, we select Storwize for our IBM V7000 storage system.
 – Storage controller name or IP address and display name.
 – User ID and password, or a Secure Shell (SSH) encryption key. (The encryption key option applies only to IBM Storwize storage.)
Figure 5-21 Add Storage
3. Click Add Storage. IBM PowerVC presents a message that indicates that the authenticity of the storage cannot be verified. Confirm that you want to continue. IBM PowerVC connects to the storage controller and retrieves information.
4. IBM PowerVC presents information about storage pools that are configured on the storage controller. You must select the default pool where IBM PowerVC creates logical unit numbers (LUNs) for this storage provider, as shown in Figure 5-22.
Click Add Storage, and IBM PowerVC finishes adding the storage controller.
Figure 5-22 IBM PowerVC window to select a storage pool
 
Tip: For more information about the storage template, see 5.9, “Storage connectivity group setup” on page 122.
5.7.2 Adding a SAN fabric to IBM PowerVC
Add the SAN fabric to IBM PowerVC. After you add the storage, IBM PowerVC automatically prompts you to add fabrics. To do so, complete the following steps:
1. Open the window that is shown in Figure 5-23 and click Add Fabric.
Figure 5-23 Add Fabric window
You must complete the following information about the first SAN switch to add it under
IBM PowerVC control:
 – Fabric type: For IBM PowerVC V1.2.2 or later, Brocade and Cisco SAN switches are supported.
 – Principal switch name or IP address and display name.
 – User ID and password.
2. In the Add Fabric window, click Add Fabric, and then confirm the connection in the window that opens. IBM PowerVC connects to the switch and retrieves the setup information. The window is shown in Figure 5-24.
Figure 5-24 IBM PowerVC Add Fabric
3. Figure 5-25 shows the IBM PowerVC Storage window after you successfully add the SAN storage controllers and SAN switches. The Storage Providers tab is selected. To show managed SAN switches, click the Fabrics tab.
Figure 5-25 IBM PowerVC Storage providers tab
Additional storage controllers can be added by clicking Storage → the Storage Providers tab → Add Storage. The window to add a storage controller is the same window that was used for the first storage controller in steps 1 on page 117 and 2 on page 118 in 5.7.1, “Adding a storage controller to IBM PowerVC” on page 117.
You can add SAN switches by clicking Storage → the Fabrics tab → Add Fabric. The window to add a switch is the same window that was used for the first switch (fabric) in 5.7.2, “Adding a SAN fabric to IBM PowerVC” on page 120.
 
Note: IBM PowerVC Version 1.3.1 supports a maximum of 25 fabrics.
5.8 Storage port tags setup
The next step in customizing IBM PowerVC is the FC port tag setup. This setting is optional. Individual FC ports in VIOSs that are managed by IBM PowerVC can be tagged with named labels. For more information about IBM PowerVC tags and storage connectivity groups, see 3.6.3, “Storage connectivity groups and tags” on page 52.
 
Note: Tagging is optional. It is needed only when you want to partition the I/O traffic and restrict certain traffic to use a subset of the available FC ports.
To set up tagging, start from the IBM PowerVC home window and click Configuration → Fibre Channel Port Configuration to open the window that is shown in Figure 5-26.
For each FC adapter in all VIOSs that are managed by IBM PowerVC, you can enter or select a port tag (an arbitrary name). IBM PowerVC automatically recognizes the fabric that the port is connected to if the fabric is defined in IBM PowerVC. You can either double-click a Port Tag field and enter a new tag or use the drop-down menu to select a tag from a list of predefined tags. You can also set the tag to None or define your own tag.
You can also select N_Port ID Virtualization (NPIV) or virtual SCSI (vSCSI) for the Connectivity field to restrict the port to special SAN access. To disable the FC port in IBM PowerVC, set the connectivity to None. By default, all ports are set to Any connectivity, which allows all connectivity methods.
In this example, two sets of FC ports were defined, with Product and Test tags. Certain ports allow NPIV access only, and other ports allow vSCSI or Any. Do not forget to click Save to validate your port settings, as shown in Figure 5-26.
Figure 5-26 IBM PowerVC Fibre Channel port configuration
Note: Situations exist where you add adapters to a host after IBM PowerVC is installed and configured. Assign the adapters to a VIOS and enable the VIOS to discover them by running the cfgdev command. IBM PowerVC then discovers them automatically, and when you open the Fibre Channel Port Configuration window, the new adapters are shown.
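The rescan described in the note can be sketched as follows. The commands are run from the padmin restricted shell on the VIOS, and the output varies by system:

```shell
# Run as padmin on the VIOS after the new FC adapters are assigned to it.
# Rescan the VIOS for newly added devices:
cfgdev
# List the adapters to confirm that the new fcs devices are present:
lsdev -type adapter
```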
5.9 Storage connectivity group setup
Next, define the storage connectivity groups. A storage connectivity group is a set of VIOSs with access to the same storage controllers. The storage connectivity group also controls the boot volumes and data volumes to use NPIV or vSCSI storage access. For a detailed description, see 3.7, “Network management planning” on page 57.
IBM PowerVC creates a default storage connectivity group, Any Host, All VIO, during the initial setup and adds all hosts and VIO servers to that group by default. Defining additional storage connectivity groups is an optional task that can be performed at any time after the initial installation. All VMs that are deployed in IBM PowerVC require a storage connectivity group to be assigned during the deployment. A VM that is deployed into a storage connectivity group remains in that group during its lifecycle; the group cannot be changed.
To set up a storage connectivity group, complete the following steps:
1. Start from the IBM PowerVC home window and click Configuration → Storage Connectivity Groups to open the window that is shown in Figure 5-27.
Figure 5-27 IBM PowerVC Storage Connectivity Groups window
Default storage connectivity groups are defined for the following components:
 – All ports of all VIOSs that can access the storage providers by using NPIV.
 – A vSCSI boot volume storage connectivity group, which is added if the environment meets the requirements for vSCSI SAN access.
 – All VIOSs that belong to the SSPs that IBM PowerVC discovered, if an SSP is configured.
2. You can then create your own storage connectivity group. Click Create. In the next window, enter information or select predefined options for the new storage connectivity group:
 – Name of the storage connectivity group.
 – Boot and Data volume connectivity types: NPIV or vSCSI.
 – “Automatically add applicable VIOSs from newly registered hosts to this storage connectivity group”: If checked, from now on, newly added VIOSs are added to this group if they can access the same storage (fabrics and tags) as the other members of the group.
 – “Allow deployments using this storage connectivity group (enable)”: If checked, the storage connectivity group is enabled for deployment on VMs; otherwise, it is disabled. You can change this selection later, if necessary.
 – Restrict image deployments to hosts with FC-tagged ports: This setting is optional. If you use tags, you can select a specific tag. VMs that are deployed to this storage connectivity group (with a selected tag) can access storage only through FC ports with the specified tag.
 – VIOS Redundancy For Volume Connectivity:
 • At least: Select this option to define the minimum number of VIOSs per host that can connect to the volume. From the drop-down menu, select 1, 2, 3, or 4 VIOSs.
 • Exactly: Select this option to define the exact number of VIOSs per host that can connect to the volume. From the drop-down menu, select 1, 2, 3, or 4 VIOSs.
 – NPIV Fabric Access Requirement: This setting controls how the FC paths are created when a VM is created. You can choose Every fabric per VIOS, Every fabric per Host, At least one fabric per VIOS, or Exactly one fabric per VIOS.
3. When the information is complete, click Add Member to open the window that is shown in Figure 5-28. You must select which VIOSs become members of the group. If a tag was previously selected, only eligible VIOSs are available to select.
After you select the VIOSs, click Add Member. Selected VIOSs are added to the storage connectivity group.
Then, click Add Group, and the group is created. Now, the group is available for VM deployment.
Figure 5-28 IBM PowerVC Add Member to storage connectivity group window
A storage connectivity group can be disabled to prevent the deployment of VMs in the group. To disable a group, clear the check box for Allow deployments using this storage connectivity group (enable) on the Detailed Properties window of the storage connectivity group, as shown in Figure 5-29. The default storage connectivity group cannot be deleted, but it can be disabled.
Figure 5-29 Disable a storage connectivity group
5.10 Storage template setup
After you configure your storage connectivity group, you can also create storage templates. Storage templates provide a predefined storage configuration to use when you create a disk. You must define different information on the storage templates for different types of storage. For example, as shown in Figure 5-30, this storage template is for the SAN Volume Controller storage device. You do not need any configuration information except the template name and pool name. For a full description, see 3.6.2, “Storage templates” on page 49.
Figure 5-30 Create storage template
A default storage template is automatically created by IBM PowerVC for each storage provider. However, if the storage contains several storage pools, create a storage template for each storage pool that you want to use. For IBM Storwize storage, you must also create a storage template for each I/O group and each volume mirroring pool pair that you want to use.
Figure 5-31 on page 127 shows the window to create a storage template for IBM Storwize storage. To access it, from the IBM PowerVC home window, click Configuration → Storage Templates → Create. Then, complete these steps:
1. Select a storage provider.
2. Select a storage pool within the selected storage provider.
3. Provide the storage template name.
4. Select the type of provisioning:
 – Generic means full space allocation (also known as thick provisioning).
 – Thin-provisioned is self-explanatory.
 – Compressed for storage arrays that support compression.
If you select thin-provisioned, the Advanced Settings option is available. If you click Advanced Settings, an additional window (Figure 5-32 on page 128) offers these options:
 • I/O group.
 • Real capacity % of virtual storage.
 • Automatically expand.
 • Warning threshold.
 • Thin-provisioned grain size.
 • Use all available worldwide port names (WWPNs) for attachment.
 • Enable mirroring. You must select another pool to enable mirroring.
For more information about how these settings affect IBM PowerVC disk allocation, see 3.6.2, “Storage templates” on page 49.
Figure 5-31 IBM PowerVC Create Storage Template window
Figure 5-32 shows the advanced settings that are available for thin-provisioned templates. The advanced settings can be configured only for storage that is backed by SAN-accessed devices. When the storage is backed by an SSP in thin-provisioning mode, IBM PowerVC does not offer the option to specify these advanced settings.
Figure 5-32 IBM PowerVC Storage Template Advanced Settings
5. After you click Create, the storage template is created and it is available for use when you create storage volumes. The window that summarizes the available storage templates is shown in Figure 5-33.
Figure 5-33 IBM PowerVC Storage Templates window
5.11 Storage volume setup
After you add the storage providers and define the storage templates, you can create storage volumes.
 
Note: Only data volumes must be created manually. Boot volumes are handled by IBM PowerVC automatically. When you deploy a partition as described in 5.15.6, “Deploying a new virtual machine” on page 163, IBM PowerVC automatically creates the boot volumes and data volumes that are included in the images.
When you create a volume, you must select a template that determines where the volume is created (which storage controller and pool) and with which parameters (thin or thick provisioning, grain size, and so on).
When you create a volume, you must select these elements:
A storage template.
The new volume name.
A short description of the volume (optional).
The volume size (GB).
Whether to enable sharing. If this option is selected, the volume can be attached to multiple VMs, which is useful for PowerHA or similar solutions.
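Under the covers, IBM PowerVC implements the OpenStack Block Storage (cinder) services, so a volume request carries these same elements. The following sketch shows what such a request body might look like; the volume name, description, size, and template name are hypothetical, and the sharing option corresponds to the multiattach flag:

```shell
# Illustrative sketch only: a cinder-style volume request body built
# from the elements listed above. All values (name, size, template)
# are hypothetical; the storage template maps to a cinder volume type,
# and the sharing option maps to the multiattach flag.
payload=$(cat <<'EOF'
{"volume": {"name": "data01",
            "description": "application data disk",
            "size": 20,
            "volume_type": "V7000 base template",
            "multiattach": true}}
EOF
)
echo "$payload"
```

The GUI assembles an equivalent request for you; the sketch only makes the mapping between the listed elements and the underlying API visible.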
To create a volume, complete the following steps:
1. From the IBM PowerVC home window, click Storage Volumes → the Data Volumes tab → Create to open the window that is shown in Figure 5-34.
Figure 5-34 IBM PowerVC Create Volume window
2. After you click Create Volume, the volume is created. A list of existing volumes is displayed, as shown in Figure 5-35. This figure shows that the provisioned disks are in the available state.
3. From the Storage window, you can manage volumes. Valid operations are creating, deleting, and unmanaging already managed volumes, and discovering volumes that are defined on a storage provider but not yet managed by IBM PowerVC. You can also edit volumes to enable or disable sharing, or to resize them. Currently, only increasing the volume capacity is supported.
Figure 5-35 List of IBM PowerVC storage volumes
5.12 Network setup
When you create a VM, you must select a network. If the network uses static IP assignment, you must also select a new IP address for the VM or let IBM PowerVC select a new IP address from the IP pools. For a full description of network configuration in IBM PowerVC, see 3.7, “Network management planning” on page 57.
Initially, IBM PowerVC contains no network definition, so you must create at least one network definition. To create a network definition in IBM PowerVC, from the home window, click Networks → Add Network to open the window that is shown in Figure 5-36.
Figure 5-36 IBM PowerVC network definition
You must provide the following data when you create a network:
Network name
Virtual LAN (VLAN) ID
Maximum transmission unit (MTU) size in bytes
For IP address type, select Dynamic or Static (Select Dynamic if the IP address is assigned automatically by a Dynamic Host Configuration Protocol (DHCP) server.)
Subnet mask
Gateway
Primary/Secondary DNS (This field is optional if you do not use DNS.)
Starting IP address and ending IP address in the IP pool
 
Note: You cannot modify the IP pool after you create the network, so ensure that you enter the correct IP addresses. To update the IP addresses in an IP pool, you must remove the network and add it again.
Shared Ethernet adapter (SEA) mapping (Select adapters within VIOSs with access to the specific network and that are configured with the correct VLAN ID.)
After you click Add Network, the network is created. From the Networks window, you can also edit the network (change network parameters) and delete networks.
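Because the IP pool range cannot be changed after the network is created, it is worth sizing the pool correctly up front. The following sketch, with hypothetical start and end addresses, shows how the pool size follows from the range boundaries:

```shell
# Sketch: compute how many addresses an IP pool covers from its start
# and end addresses. The addresses below are hypothetical examples.

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

start="192.168.10.50"   # hypothetical pool starting IP address
end="192.168.10.99"     # hypothetical pool ending IP address
size=$(( $(ip_to_int "$end") - $(ip_to_int "$start") + 1 ))
echo "IP pool holds $size addresses"
```

In this example, the pool holds 50 addresses, so at most 50 VMs can draw static addresses from it.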
Consider these factors:
IBM PowerVC detects the SEA to use for each host. Verify that IBM PowerVC made the correct choice.
If IBM PowerVC chooses the wrong SEA to use for a specific host, you can change the SEA later.
You can also check the IP address status in the IP Pool on the IP Pool window, as shown in Figure 5-37.
Figure 5-37 IP Pool tab
Important: In the SEA mapping list, the Primary VLAN column refers to the Port Virtual LAN Identifier (PVID) that is attached to the adapter. The VLAN number that you specify does not need to match the primary VLAN.
5.13 Compute template setup
A compute template provides a predefined compute configuration to use when you deploy a new VM. You select a compute template when you deploy a VM. You can change the values that are set in the compute template that is associated with a VM to resize it. You can also create compute templates on the Configuration window.
Figure 5-38 shows the window that opens when you create a compute template. To access the compute template configuration from the IBM PowerVC home window, click Configuration → Compute Templates → Create Compute Template. You must specify the following settings for images that are deployed with the compute template:
For Template settings, select Advanced.
Provide the compute template name.
Provide the number of virtual processors.
Provide the number of processing units.
Provide the amount of memory.
Select the compatibility mode.
If you selected Advanced Settings, additional information is required:
Provide the minimum, desired, and maximum number of virtual processors.
Provide the minimum, desired, and maximum number of processing units.
Provide the minimum, desired, and maximum amounts of memory (MB).
Enter the processor sharing type and weight (0 - 255).
Enter the availability priority (0 - 255).
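The minimum, desired, and maximum values in an advanced compute template must be consistent (minimum ≤ desired ≤ maximum). A small sketch of that consistency check, with hypothetical values:

```shell
# Sketch: the min <= desired <= max rule that advanced compute template
# values must satisfy. The example numbers are hypothetical.
check_range() {  # usage: check_range <label> <min> <desired> <max>
  if [ "$2" -le "$3" ] && [ "$3" -le "$4" ]; then
    echo "$1: OK ($2 <= $3 <= $4)"
  else
    echo "$1: invalid ($2, $3, $4)"
  fi
}

check_range "virtual processors" 1 2 4
check_range "memory (MB)" 2048 4096 8192
```

The same ordering applies to virtual processors, processing units, and memory alike; the GUI rejects templates that violate it.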
Figure 5-38 IBM PowerVC Create Compute Template
After you click Create Compute Template, the compute template is created and is available for use when you create a VM. The window that summarizes the available compute templates is shown in Figure 5-39.
Figure 5-39 IBM PowerVC Compute Templates
5.14 Environment verification
After you add the hosts, storage providers, networks, and templates, as a preferred practice, verify your IBM PowerVC environment before you try to capture, deploy, or onboard VMs.
Virtualization management function failures might occur when dependencies and prerequisite configurations are not met.
IBM PowerVC reduces the complexity of virtualization and cloud management. It checks for almost all required dependencies and prerequisite configurations, clearly communicates failures, and, when possible, accurately pinpoints validation failures and remediation actions.
Figure 5-40 shows the IBM PowerVC Home interface where you start the verification process by clicking Verify Environment. Access the verification report by clicking View Results.
Figure 5-40 IBM PowerVC interface while environment verification in process
The validation of the IBM PowerVC environment takes from a few seconds to a few minutes to complete depending on the size of the environment.
The environment validation function has an extensible architecture that is used to add and change validators that check solution-specific environment dependencies and prerequisite configurations. This architecture allows the tool to be tuned for performance, reliability, and scalability of validation execution as the number of endpoints, their configurations, and their interconnectivity increase.
5.14.1 Verification report validation categories
After the validation process finishes, you can access a report of the results, as shown in Figure 5-41. This report consists of a table with four columns where you see the following values:
Status
System
Validation Category
Description
Figure 5-41 Verification Results view
The following list shows the validation categories in this report and a description for the types of messages to expect from each of the categories:
Access and Credentials
Validation of reachability and credentials from the management server to the IBM PowerVC domain, including user IDs, passwords, and SSH keys for all resources.
File System, CPU, and Memory on Management Server
Minimum processing and storage requirements for the IBM PowerVC management server.
OS, services, database
This category groups all messages that relate to the availability of the service daemons that are needed for the correct operation and message passing on the IBM PowerVC domain. This category includes operating system services, OpenStack services, platform Enterprise Grid Orchestrator (EGO) services, and MariaDB configuration.
HMC version
HMC software level and K2 services are running.
HMC managed Power Systems server resources
Power Systems hosts when they are managed by an HMC. Validation messages include the operating state, PowerVM Enterprise Edition enablement, PowerVM Live Partition Mobility (LPM) capabilities, ability to run a VIOS, maximum number of supported Power Systems servers, firmware level, and processor compatibility. This category is visible from IBM PowerVC Standard Edition.
Virtual I/O Server count, level, and RMC state
Minimum number of configured VIOSs on each managed host, software level, RMC connection and state to the HMC, license agreement state, and maximum number that is required for virtual adapter slots. This category is viewable from IBM PowerVC Standard Edition.
Virtual Network: Shared Ethernet adapter
The SEA is configured on the IBM PowerVC management server network and is in the Active state. This category also validates the maximum number of required virtual slots.
Virtual I/O Server shared Ethernet adapter count, state
This category relates to the validation of at least one SEA on one VIOS. You can view this category from IBM PowerVC Standard Edition.
Host storage LUN Visibility
LUN visibility test. LUNs are created on storage providers and are visible to VIOSs.
Host storage FC Connectivity
Messages that relate to the enabled access to the SAN fabric by the VIOSs and the correct WWPN to validate that VIOS - Fabric - Storage connectivity is established. This category is viewable from IBM PowerVC Standard Edition.
Storage Model Type and Firmware Level
Messages that relate to the minimum SAN Volume Controller and storage providers’ firmware levels and the allowed machine types and models (MTMs).
Brocade Fabric Validations
Validation for the switch presence, zoning enablement, and firmware level.
Figure 5-42 shows the depth of information that is provided by IBM PowerVC. This example shows error messages and then confirmation of an acceptable configuration. By clicking or hovering the mouse pointer over each row of the verification report, you can see windows with extra information. In addition to the entry description, IBM PowerVC suggests a solution to fix the cause of an error or an informational message.
Figure 5-42 Example of a validation message for an error status
5.15 Management of virtual machines and images
The following sections describe the operations that can be performed on VMs and images by using the IBM PowerVC management host.
Most of these operations can be performed from the Virtual Machines window, as shown on Figure 5-43. However, removing a VM, adding an existing VM, and attaching or detaching a volume from a VM are performed from other windows.
Figure 5-43 Operations icons on the Virtual Machines view
5.15.1 Virtual machine onboarding
IBM PowerVC can manage VMs that were not created by IBM PowerVC, such as VMs that were created before the IBM PowerVC deployment. To add an existing VM, complete the following steps:
1. From the IBM PowerVC home window, click the hosts icon within the main pane (host icon on the left) or click the Hosts link, as shown in Figure 5-44.
Figure 5-44 Select a host window
2. Click the line of the host on which the VMs that you want to manage are deployed. The background color of the line changes to light blue. Click the host name in the Name column, as shown in Figure 5-45.
Figure 5-45 Selected hosts window
3. The detailed host window opens. In Figure 5-46, the Information and Capacity sections are collapsed for improved viewing. To collapse and expand the sections, click the section names, and you see the collapse and expand buttons. The Virtual Machines section is expanded, but it contains no data because IBM PowerVC does not yet manage any VM on this host.
Figure 5-46 Collapse and expand sections
4. Under the Virtual Machines section (or in the home Hosts section), click Manage Existing to open a window with two options:
 – Manage all fully supported VMs that are not currently being managed by IBM PowerVC. VMs that require preparation must be selected individually.
 – Select specific VMs.
5. Check Select specific virtual machines.
6. After you load data from the HMC, IBM PowerVC displays a new window with two tabs. The Supported tab shows all of the VMs that can be added to be managed by IBM PowerVC. Select one or more VMs that you want to add. The background color changes to light blue for the selected VMs, as shown in Figure 5-47.
 
Note: Checking Manage any supported virtual machines that are not currently being managed by IBM PowerVC and then clicking Manage results in adding all candidate VMs without asking for confirmation.
Figure 5-47 Add existing VMs
 
Note: If a VM does not meet all of the requirements, the VM appears on the Not supported tab. The tab also shows the reason why IBM PowerVC cannot manage the VM.
Note: The detailed eligibility requirements to add a VM into an IBM PowerVC managed PowerVM host are available in the IBM Knowledge Center:
After you click Manage, IBM PowerVC starts to manage the processing of the selected VMs.
7. IBM PowerVC displays a message in the lower-right corner during this process, as shown in Figure 5-48. These messages remain visible for a few seconds.
 
Tip: You can display the messages again by clicking Messages on the black bar with the IBM logo at the top of the window.
Figure 5-48 Example of an informational message
8. After you discover a VM, click the Virtual Machines icon to return to the Manage Existing window. Select the recently added VM. The background color changes to light blue.
Double-click the recently added VM to display its detailed information. You can also access the VM’s details window by clicking Home → Hosts → host name → VM name, where host name is the name of the server that contains the VM that you want to view and VM name is the correct VM.
9. For improved viewing, you can collapse sections of the window. Figure 5-49 presents the detailed view of a VM with all sections collapsed. You can collapse and expand each section by clicking the section names: Information, Specifications, Network Interfaces, Collocation Rules, and Details.
Figure 5-49 Virtual machine detailed view with collapsed sections
10. The Information section displays information about the VM status, health, and creation dates. Table 5-1 explains the fields in the Information section.
Table 5-1 Information section fields
Name: The name of the VM.
Description: A user-editable field for providing descriptive information.
State: The actual state of the VM.
Health: The actual health status of the VM. The following health statuses are valid:
 – OK: The target resource, all related resources, and the IBM PowerVC management services for the resources report zero problems.
 – Warning: The target resource or a related resource requires user attention.
 – Important: Nova or cinder host services that manage the resources report problems and require user attention.
 – Critical: The target resource or a related resource is in an error state.
 – Unknown: IBM PowerVC cannot determine the health status of the resource.
ID: This internal ID is used by IBM PowerVC management hosts to uniquely identify the VM.
Host: The host server name where the VM is allocated.
Excluded from DRO: Whether the host is excluded from DRO (Yes or No).
Created: Creation date and time.
Last updated: Last update date and time.
Expiration date: Date and time when the VM expires. None indicates that no date was specified.
 
Note: Each host, network, VM, and any other resource that is created in the IBM PowerVC management host has its own ID number. This ID uniquely identifies each resource to the IBM PowerVC management host.
11. In Figure 5-50, the Information section is expanded to display details about the recently added VM.
Figure 5-50 Virtual machine detailed view of expanded Information section
12. Collapse the Information view and expand the Specifications section. This section contains information that relates to the VM capacity and resources. Table 5-2 provides the fields in the Specifications section.
Table 5-2 Specifications section fields
Remote restart enabled: Remote restart is enabled or disabled.
Remote restart state: Status of the remote restart.
Memory: Amount of memory (expressed in MB).
Processors: Amount of entitled processing capacity.
Minimum memory (MB): Minimum amount of memory.
Maximum memory (MB): Maximum amount of memory.
Minimum processors: Minimum virtual processor capacity.
Maximum processors: Maximum virtual processor capacity.
Availability priority: Priority number for availability when a processor fails.
Processor mode: Shared or dedicated processor mode selected.
Minimum processing units: Minimum entitled processing capacity.
Maximum processing units: Maximum entitled processing capacity.
Sharing mode: Uncapped or capped mode selected.
Shared weight: Weight to request shared resources.
Shared processor pool: The shared processing pool to which the VM belongs.
Processor compatibility mode: The processor compatibility mode that is determined when the instance is powered on.
Desired compatibility mode: The processor compatibility mode that is wanted for the VM.
Operating system: The name and level of the operating system that is installed on the partition.
13. Figure 5-51 provides an example of the Specifications section for the recently added VM.
Figure 5-51 Virtual machine detailed view of the expanded Specifications section
14. Collapse the Specifications section and expand the Network Interfaces section. This section contains information that relates to the virtual network connectivity, as shown in Figure 5-52.
Figure 5-52 Virtual machine detailed view of expanded Network Interfaces section
15. Double-click Network Interfaces. Two tabs are shown. The Overview tab displays the Network detailed information, including the VLAN ID, the VIOSs that are involved, the SEAs, and other useful information. The IP Pool tab displays the range of IP addresses that make up the IP pool (if you previously defined it). Figure 5-53 shows the Network Overview tab.
Figure 5-53 Detailed Network Overview tab
16. The Collocation Rules section displays the collocation rules that are used to allocate the VM (if you configured collocation rules).
17. The last section of the Virtual Machine window is the Details section, which presents the status and the hypervisor names for the VM, as listed in Table 5-3.
Table 5-3 Details section fields
Power state: Power status for the VM.
Task status: Whether a task is running on the VM and the status of the task.
Disk config: How the disk was configured in the VM.
Hypervisor host name: The name of the host in the hypervisor and the HMC.
Hypervisor partition name: The name of the VM in the hypervisor and the HMC.
5.15.2 Refreshing the virtual machine view
The Refresh icon reloads the information for the currently selected VM. Figure 5-54 shows the detailed Information section of the Overview tab for the selected VM.
Figure 5-54 Virtual machine Refresh icon
 
Note: On many IBM PowerVC windows, you can see a Refresh icon, as shown by the red highlighting in Figure 5-54 on page 147. Most windows update asynchronously through long polling in the background. Refresh is available if you think that the window does not show the latest data from those updates. (You suspect something went wrong with a network connection, or you want to ensure that the up-to-date data displays.) By clicking the Refresh icon, a Representational State Transfer (REST) call is made to the IBM PowerVC server to get the latest data that is available from IBM PowerVC.
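As a rough illustration of such a REST call, the following sketch composes (but does not send) a GET request of the kind the Refresh icon triggers. The host name, token value, and API path are hypothetical placeholders; the real endpoints and the token-based authentication flow are described in the IBM PowerVC REST API documentation.

```shell
# Illustrative only: compose a REST GET similar to the call that the
# Refresh icon triggers. Host, token, and path are hypothetical
# placeholders, not documented endpoints.
host="powervc.example.com"
token="PLACEHOLDER_TOKEN"
url="https://${host}/powervc/openstack/compute/v2.1/servers/detail"
echo curl -k -H "X-Auth-Token: ${token}" "$url"
```

The point of the sketch is only that Refresh is a synchronous pull: one authenticated GET returns the latest state that IBM PowerVC holds, independent of the background long polling.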
Out-of-band operations
In the context of IBM PowerVC, the term out-of-band operation refers to any operation on an object that is managed by IBM PowerVC that is not performed from the IBM PowerVC tool. For example, an LPM operation that is initiated directly from an HMC is considered an out-of-band operation.
With the default polling interval settings, it might take several minutes for IBM PowerVC to be aware of the change to the environment as a result of an out-of-band operation.
5.15.3 Starting the virtual machine
From the Virtual Machines window, you can use the Start option to power on the currently selected VM. After the VM finishes the start process, it is available for operations that are performed through the IBM PowerVC management host. This process takes longer than the operating system boot alone because IBM PowerVC waits until the RMC service is available to communicate with the VM. Even though the status field shows Active (because the VM is powered on), the health field displays a warning message that is similar to “Reason: RMC state of virtual machine vmaix01 is Inactive”. Wait a few minutes for the health field to display a status of OK before you manage the VM from IBM PowerVC. Figure 5-55 shows the VM after it starts.
Figure 5-55 Virtual machine fully started
5.15.4 Stopping the virtual machine
From the VM’s detailed window, click Stop to shut down the VM.
 
Important: IBM PowerVC shows a window that asks for confirmation that you want to shut down the machine before IBM PowerVC acts.
When the VM completes the shutdown process, the state changes to Shutoff, as shown in Figure 5-56. This process takes a few minutes to complete.
Figure 5-56 Virtual machine powered off
 
Note: If an active RMC connection exists between IBM PowerVC and the target VM, a shutdown of the operating system is triggered. If no active RMC connection exists, the VM is shut down without shutting down the operating system.
5.15.5 Capturing a virtual machine image
You can capture an operating system image of a VM that you created or deployed. This image is used to install the operating system of the future VMs that are created from IBM PowerVC. Before you capture the VM, you must first prepare and enable it.
 
Note: To prepare the VM and to verify that all of the capture requirements are met, see the “Capture requirements” section in the IBM Knowledge Center, found at:
To enable a VM, you can use either the activation engine (AE) or the cloud-init technologies.
Requirements for capture
To be eligible for image capture, a VM must meet several requirements:
The VM must use any of the operating system versions that are supported by IBM PowerVC.
Your IBM PowerVC environment is configured.
The host on which the VM runs is managed by IBM PowerVC.
The VM uses virtual I/Os and virtual storage; the network and storage devices are provided by the VIOS.
The /var directory on the IBM PowerVC management hosts must have enough space (PowerKVM only).
When you capture VMs that use local storage, the /var directory on the management server is used as the repository for storing the images. The file system that contains the /var directory must have enough space to store the captured images. This amount can be several gigabytes, depending on the VM to capture.
If you plan for a Linux VM with multiple paths to storage, you must configure Linux for multipath I/O (MPIO) on the root device.
 
Tip: Because the default Red Hat Enterprise Linux (RHEL) configuration creates a restricted list for all WWPN entries, you must remove them to enable the deployment of a captured image. The following RHEL link describes how to remove them:
If you want to capture an IBM i VM, multiple boot volumes are supported.
The VM is powered off. When you power off a VM, the status appears as Active until the VM shuts down. You can select the VM for capture even if the status is displayed as Active.
 
Important: When you enable the AE, the VM is powered off automatically. When you use cloud-init, you must shut down the VM manually before the capture.
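For the local-storage capture case above, a quick free-space check on /var before you start the capture can avoid a failed operation. A minimal sketch follows; the 10 GB threshold is an illustrative assumption, so size the real threshold from the images you expect to capture:

```shell
# Sketch: check free space in the file system that holds /var before a
# capture that stores images locally. The 10 GB threshold is an
# illustrative assumption, not a documented requirement.
required_kb=$((10 * 1024 * 1024))   # 10 GB expressed in KB
avail_kb=$(df -Pk /var | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge "$required_kb" ]; then
  echo "/var has enough free space for the capture"
else
  echo "warning: only ${avail_kb} KB available in /var"
fi
```

Running this on the management server before each capture is cheap insurance, because a capture that fills /var can fail partway through.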
Operating systems that use a Linux Loader (LILO) or Yaboot boot loader, such as SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Server 11, RHEL 5, and RHEL 6, require special steps when you use VMs with multiple disks. These operating systems must be configured to use a Universally Unique Identifier (UUID) to reference their boot disk. SUSE Linux Enterprise Server 11 virtual servers mount devices by using the -id option by default, which means that they are represented by symbolic links. To address this issue, you must perform one of the following configurations before you capture a SUSE Linux Enterprise Server VM for the first time:
 – Configure Linux for MPIO on the root device on VMs that will be deployed to multiple VIOSs or multipath environments.
 – Update /etc/fstab and /etc/lilo.conf to use UUIDs instead of symbolic links.
To change the devices so that they are mounted by UUID, complete the following steps:
a. Search the file system table /etc/fstab for the presence of symbolic links. Symbolic links look like this example:
/dev/disk/by-*
b. Store the mapping of /dev/disk/by-* symlinks to their target devices in a scratch file and ensure that you use the device names in it, for example:
ls -l /dev/disk/by-* > /tmp/scratchpad.txt
c. The contents of the scratchpad.txt file are similar to Example 5-1.
Example 5-1 The scratchpad.txt file
/dev/disk/by-id:
total 0
lrwxrwxrwx 1 root root 9 Apr 10 12:07 scsi-360050768028180ee380000000000603c -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-360050768028180ee380000000000603c-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-360050768028180ee380000000000603c-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-360050768028180ee380000000000603c-part3 -> ../../sda3
lrwxrwxrwx 1 root root 9 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 wwn-0x60050768028180ee380000000000603c-part3 -> ../../sda3
total 0
lrwxrwxrwx 1 root root 9 Apr 10 12:07 scsi-0:0:1:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 scsi-0:0:1:0-part3 -> ../../sda3
/dev/disk/by-uuid:
total 0
lrwxrwxrwx 1 root root 10 Apr 10 12:07 3cb4e486-10a4-44a9-8273-9051f607435e -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 10 12:07 c6a9f4e8-4e87-49c9-b211-89086c2d1064 -> ../../sda3
d. Edit the /etc/fstab file. Replace the /dev/disk/by-* entries with the device names to which the symlinks point, as laid out in your scratchpad.txt file. Example 5-2 shows how the lines look before you edit them.
Example 5-2 The /etc/fstab file before editing
/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part2 swap swap defaults 0 0
/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3 / ext3 acl,user_xattr 1 1
In this example, those lines are changed to refer to the specific device names, as shown in Example 5-3.
Example 5-3 Specific device names for the /etc/fstab file
/dev/sda2 swap swap defaults 0 0
/dev/sda3 / ext3 acl,user_xattr 1 1
e. Edit the /etc/lilo.conf file so that the root lines correspond to the device UUID and the boot line corresponds to the device names. Example 5-4 shows how the lines look before you edit them.
Example 5-4 The /etc/lilo.conf file
boot = /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part1
root = /dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3
In Example 5-5, those lines were changed to refer to the specific device names.
Example 5-5 Specific devices names for the /etc/lilo.conf file
boot = /dev/sda1
root = /dev/sda3
f. Run the lilo command.
g. Run the mkinitrd command.
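The fstab edits in steps c and d can also be scripted. The following sketch applies the substitution from Example 5-2 to Example 5-3 on a throwaway copy of the file; the by-id string and target device are the hypothetical values from the examples, so adapt them to the mapping in your own scratchpad file and verify the result before replacing the real /etc/fstab:

```shell
# Sketch: rewrite /dev/disk/by-id references in a COPY of /etc/fstab to
# the plain device names they resolve to. The by-id string and the sda
# device are the hypothetical values from Examples 5-2 and 5-3; run
# against a copy and verify before touching the real file.
fstab="/tmp/fstab.test"
cat > "$fstab" <<'EOF'
/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part2 swap swap defaults 0 0
/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part3 / ext3 acl,user_xattr 1 1
EOF
# Map the by-id prefix to its backing device (sda in this example).
sed -i 's|/dev/disk/by-id/scsi-360050768028180ee380000000000603c-part|/dev/sda|g' "$fstab"
cat "$fstab"
```

The same substitution pattern applies to the boot and root lines in /etc/lilo.conf (step e).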
Preparing a virtual machine with cloud-init
The cloud-init script enables VM activation and initialization, and it is widely used for OpenStack. Before you capture a VM, install the cloud-init initialization package. This package is available in the /opt/ibm/powervc/images/cloud-init path on the IBM PowerVC host.
 
Important: If you are installing the cloud-init package to capture a VM on which the AE is already installed, you must first uninstall the AE. To check whether the AE Red Hat Package Managers (RPMs) are installed, run the following command on the VM:
# rpm -qa | grep activation
Complete the following steps:
1. Before you install cloud-init, you must install the dependencies for cloud-init. These dependencies are not included with the operating systems:
 – For SUSE Linux Enterprise Server, install the dependencies that are provided in the SUSE Linux Enterprise Server repository, found at:
 – For RHEL, add the EPEL yum repository for the latest level of the dependent RPMs.
Use this command for RHEL 6, for example:
 
rpm -Uvh epel-release-6*.rpm
Use this command for RHEL 7, for example:
 
rpm -Uvh epel-release-7*.rpm
 – For AIX, follow the instructions that are found at the following website to download the cloud-init dependencies:
2. Install the appropriate cloud-init RPM for your operating system, which is in /opt/ibm/powervc/images/cloud-init.
However, if the VM already has an installed cloud-init RPM, you must uninstall the existing RPM first.
 – For RHEL, install the appropriate RPM from /opt/ibm/powervc/images/cloud-init/rhel:
 • RHEL 6: cloud-init-0.7.4-5.el6.noarch.rpm
 • RHEL 7: cloud-init-0.7.4-5.el7.noarch.rpm
 – For SUSE Linux Enterprise Server, install the appropriate RPM from /opt/ibm/powervc/images/cloud-init/sles:
 • SUSE Linux Enterprise Server 11: cloud-init-0.7.4-2.4.ppc64.rpm
 • SUSE Linux Enterprise Server 12: cloud-init-0.7.5-8.10.ppc64le.rpm
 – For Ubuntu Linux, install the appropriate package from /opt/ibm/powervc/images/cloud-init/ubuntu:
 • Ubuntu 15: cloud-init_0.7.7~bzr1091-0ubuntu1_all.deb
 – For AIX, download the AIX cloud-init RPM from the following website:
3. After you install cloud-init, modify the cloud.cfg file (in /etc/cloud/cloud.cfg on Linux or /opt/freeware/etc/cloud/cloud.cfg on AIX) by using the following values:
 – For RHEL, set the following values:
disable_root: 0
ssh_pwauth: 1
ssh_deletekeys: 1
 – For SUSE Linux Enterprise Server, perform these tasks:
 • Remove the following field:
users: -root
 • Add the following fields:
ssh_pwauth: true
ssh_deletekeys: true
 – For both RHEL and SUSE Linux Enterprise Server, add the following new values to the cloud.cfg file:
disable_ec2_metadata: True
datasource_list: ['ConfigDrive']
 – For SUSE Linux Enterprise Server only, after you update and save the cloud.cfg file, run the following commands:
chkconfig -s cloud-init-local on
chkconfig -s cloud-init on
chkconfig -s cloud-config on
chkconfig -s cloud-final on
 – For RHEL 7.0 and 7.1, ensure that the following conditions are set on the VM that you are capturing:
 • Set SELinux to permissive or disabled on the VM that you are capturing or deploying.
 • The Network Manager must be installed and enabled.
 • Ensure that the net-tools package is installed.
 
Note: This package is not installed by default when you select the Minimal Install software option during the installation of RHEL 7.0 and 7.1 from an International Organization for Standardization (ISO) image.
 • Edit all of the /etc/sysconfig/network-scripts/ifcfg-eth* files to set NM_CONTROLLED=no.
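The cloud.cfg changes in step 3 can be scripted. The following sketch is illustrative: it applies the RHEL settings to a sample copy of the file in a temporary directory, so it is safe to run anywhere. On the VM that you are preparing, operate on /etc/cloud/cloud.cfg instead of the sample copy.

```shell
# Illustrative sketch: apply the RHEL cloud.cfg settings from step 3 to a
# sample file in a temporary directory. On a real VM, edit
# /etc/cloud/cloud.cfg instead.
tmp=$(mktemp -d)
cat > "$tmp/cloud.cfg" <<'EOF'
disable_root: 1
ssh_pwauth: 0
ssh_deletekeys: 0
EOF
sed -i -e 's/^disable_root:.*/disable_root: 0/' \
       -e 's/^ssh_pwauth:.*/ssh_pwauth: 1/' \
       -e 's/^ssh_deletekeys:.*/ssh_deletekeys: 1/' "$tmp/cloud.cfg"
# Append the values that are new for both RHEL and SUSE Linux Enterprise Server.
cat >> "$tmp/cloud.cfg" <<'EOF'
disable_ec2_metadata: True
datasource_list: ['ConfigDrive']
EOF
cat "$tmp/cloud.cfg"
```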
4. Remove the Media Access Control (MAC) address information. For more information about how to remove the MAC address information, see the OpenStack website:
 
Important: The /etc/sysconfig/network-scripts file path that is mentioned on the OpenStack website for the HWADDR setting applies only to RHEL. For SUSE Linux Enterprise Server, the HWADDR path is /etc/sysconfig/network.
For example, for the ifcfg-eth0 adapter, on RHEL, remove the HWADDR line from /etc/sysconfig/network-scripts/ifcfg-eth0, and on SUSE Linux Enterprise Server, remove the HWADDR line from /etc/sysconfig/network/ifcfg-eth0.
The 70-persistent-net.rules and 75-persistent-net-generator.rules files are required to add or remove network interfaces on the VMs after deployment. Ensure that you save these files so that you can restore them after the deployment is complete. These rules files are not supported by RHEL 7.0 and 7.1. Therefore, after you remove the adapters, you must update the adapter configuration files manually on the VM to match the current set of adapters.
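The HWADDR removal in step 4 can be sketched as follows. The sketch is illustrative and edits a sample copy in a temporary directory, so it is safe to run anywhere; on a real VM, loop over /etc/sysconfig/network-scripts/ifcfg-eth* (RHEL) or /etc/sysconfig/network/ifcfg-eth* (SUSE Linux Enterprise Server) instead, and the MAC address shown is a placeholder.

```shell
# Illustrative sketch: remove the HWADDR line from a sample RHEL interface
# configuration file in a temporary directory. The MAC address is a
# placeholder. On a real VM, point the loop at the real ifcfg-eth* files.
tmp=$(mktemp -d)
cat > "$tmp/ifcfg-eth0" <<'EOF'
DEVICE=eth0
HWADDR=6A:88:82:AB:33:02
BOOTPROTO=static
ONBOOT=yes
EOF
for f in "$tmp"/ifcfg-eth*; do
    sed -i '/^HWADDR=/d' "$f"
done
cat "$tmp/ifcfg-eth0"
```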
5. Enable and configure the modules (Table 5-4) and host name behavior by modifying the cloud.cfg file as follows:
 – Linux: /etc/cloud/cloud.cfg
 – AIX: /opt/freeware/etc/cloud/cloud.cfg
 – As a preferred practice, enable reset-rmc and update-bootlist on Linux.
 – Host name: If you want to change the host name after the deployment, remove update_hostname from the list of cloud_init_modules. If you do not remove it, cloud-init resets the host name to the originally deployed value when the system is restarted.
Table 5-4 Modules and descriptions
Module
Description
restore_volume_group
This module restores non-rootVG volume groups when you deploy a new VM.
For AIX, run the /opt/freeware/lib/cloud-init/create_pvid_to_vg_mappings.sh command to save the information that is used to restore custom volume groups on all VMs that are deployed from the image that is captured. Saving this information is useful if you have a multidisk VM that has a dataVG volume group defined. The module restores the dataVG after the deployment.
set_multipath_hcheck_interval
Use this module to set the health-check (hcheck) interval for multipath. If you deploy a multidisk VM and this module is enabled, you can deploy by specifying a cloud-config data entry that is named multipath_hcheck_interval with an integer value in seconds. Post-deployment, each of the VM’s disks has its hcheck_interval property set to the value that was passed through the cloud-config data. Run the lsattr -El hdisk# -a hcheck_interval command for verification. If you do not specify the value within the cloud-config data, the module sets each disk’s value to 60 seconds.
set_hostname_from_dns
Use this module to set your VM’s host name by using the host name values from your Domain Name Server (DNS). To enable this module, add this line to the cloud_init_modules section:
- set_hostname_from_dns
Then, remove these lines:
- set_hostname
- update_hostname
set_hostname_from_interface
Use this module to choose the network interface and IP address to be used for the reverse lookup. The valid values are interface names, such as eth0 and en1. On Linux, the default value is eth0. On AIX, the default value is en0.
set_dns_shortname
This module specifies whether to use the short name to set the host name. Valid values are True to use the short name or False to use the fully qualified domain name. The default value is False.
You can also deploy with both static and DHCP interfaces on SUSE Linux Enterprise Server 11 and SUSE Linux Enterprise Server 12:
 – If you want cloud-init to set the host name, in the /etc/sysconfig/network/dhcp file, set the DHCLIENT_SET_HOSTNAME option to no.
 – If you want cloud-init to set the default route by using the first static interface, which is standard, set the DHCLIENT_SET_DEFAULT_ROUTE option in the /etc/sysconfig/network/dhcp file to no.
If you do not set these settings to no and then deploy with both static and DHCP interfaces, the DHCP client software might overwrite the value that cloud-init sets for the host name and default route, depending on how long it takes to get DHCP leases for each DHCP interface.
 – reset-rmc: This module automatically resets RMC. This action is enabled by default on AIX. It can be enabled on Linux by adding - reset-rmc to the cloud_init_modules: section.
 – update-bootlist: This module removes the temporary virtual optical device, which is used to send configuration information to the VM, from the VM’s bootlist. This action is enabled by default on AIX. It can be enabled on Linux by adding - update-bootlist to the cloud_init_modules: section.
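Putting the settings from steps 3 and 5 together, a hypothetical cloud.cfg fragment for a Linux VM might look like the following example. The module list in a real cloud.cfg contains additional entries; only the lines that this section tells you to add or change are shown.

```yaml
# Hypothetical cloud.cfg fragment (Linux) reflecting steps 3 and 5:
# set_hostname_from_dns replaces set_hostname and update_hostname, and
# reset-rmc and update-bootlist are enabled as a preferred practice.
disable_ec2_metadata: True
datasource_list: ['ConfigDrive']

cloud_init_modules:
 - set_hostname_from_dns
 - reset-rmc
 - update-bootlist
```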
6. For AIX, run the /opt/freeware/lib/cloud-init/create_pvid_to_vg_mappings.sh command to save the information that is used to restore custom volume groups on all VMs that are deployed from the image that will be captured.
7. Manually shut down the VM.
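Before you shut down the VM (step 7), it can be useful to double-check the prerequisites from this section. The following sketch is illustrative only; each check is guarded so that the script degrades gracefully on systems that lack the rpm or SELinux tools.

```shell
# Illustrative pre-capture sanity check for a Linux VM. Each check is
# guarded so that the script runs harmlessly on systems without rpm or
# SELinux tools; it only reports warnings, it changes nothing.
if command -v rpm >/dev/null 2>&1; then
    rpm -q cloud-init >/dev/null 2>&1 || echo "WARNING: cloud-init RPM not installed"
    rpm -qa | grep -qi activation && echo "WARNING: activation engine RPMs still present"
fi
if command -v getenforce >/dev/null 2>&1; then
    getenforce | grep -qiE 'permissive|disabled' || echo "WARNING: SELinux is enforcing"
fi
result="pre-capture check finished"
echo "$result"
```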
Preparing a virtual machine with activation-engine
To install and enable the AE, complete the following steps:
1. Look for the vmc.vsae.tar AE package on the IBM PowerVC management host in the /opt/ibm/powervc/activation-engine directory.
2. Copy the vmc.vsae.tar file to the VM that you will capture. This file can be stored in any directory that matches your environment’s guidelines.
3. On the VM that you will capture, extract the contents of the vmc.vsae.tar file.
4. For AIX, perform these tasks:
 – Ensure that the JAVA_HOME environment variable is set and points at a Java runtime environment (JRE), for example:
# export JAVA_HOME=/usr/java5/jre
 – Run the AE installation command:
./aix-install.sh
5. For Linux, run the following command, which was included in the vmc.vsae.tar file:
./linux-install.sh
When you run this command on Linux, you are asked whether the operating system is running on a kernel-based VM (KVM) hypervisor. Answer no to this question.
6. You can remove the .tar file and the extracted files now. However, if you plan to uninstall the AE later, keep the extracted files because they contain the uninstallation scripts.
Before you capture a VM, you must enable the AE that is installed on it. To enable the AE, complete the following steps:
1. If you previously captured the VM and want to capture it again, run the commands that are shown in Example 5-6.
Example 5-6 Commands to enable the activation engine
rm /opt/ibm/ae/AP/*
cp /opt/ibm/ae/AS/vmc-network-restore/resetenv /opt/ibm/ae/AP/ovf-env.xml
 
Important: The following step shuts down the VM. Ensure that no users or programs are active and that the machine can be stopped before you run this step.
2. Prepare the VM to be captured by running the following command:
/opt/ibm/ae/AE.sh -R
3. Wait until the VM is powered off. See Example 5-7 for an example of the output of the command.
 
 
Note: When this command finishes, the VM is powered off and ready to be captured.
Example 5-7 Output from the /opt/ibm/ae/AE.sh -R command
# /opt/ibm/ae/AE.sh -R
JAVA_HOME=/usr/java5/jre
[2013-11-01 16:44:55,831] INFO: Looking for platform initialization commands
[2013-11-01 16:44:55,841] INFO: OS: AIX Version: 7.1
[2013-11-01 16:44:56,315] INFO: No initialization commands found....continuing
[2013-11-01 16:44:56,319] INFO: Base PA: /opt/ibm/ae/ovf-env-base.xml
[2013-11-01 16:44:56,322] INFO: VSAE Encryption Level: Disabled
[2013-11-01 16:44:56,323] INFO: CLI parameters are '['AE/ae.py', '-R']'
[2013-11-01 16:44:56,325] INFO: AE base directory is /opt/ibm/ae/
[2013-11-01 16:44:56,345] INFO: Resetting system. AP file: None. Interactive: False
[2013-11-01 16:44:56,513] INFO: In reset
[2013-11-01 16:44:56,513] INFO: Resetting products
[2013-11-01 16:44:56,515] INFO: Start to reset com.ibm.ovf.vmcontrol.system
0821-515 ifconfig: error loading /usr/lib/drivers/if_eth: A file or directory in the path name does not exist.
[2013-11-01 16:44:56,846] INFO: Start to reset com.ibm.ovf.vmcontrol.restore.network
0821-515 ifconfig: error loading /usr/lib/drivers/if_eth: A file or directory in the path name does not exist.
[2013-11-01 16:44:59,917] INFO: Resetting the operating system
[2013-11-01 16:44:59,947] INFO: Cleaning AR and AP directories
[2013-11-01 16:44:59,957] INFO: Shutting down the system
 
SHUTDOWN PROGRAM
Fri Nov 1 16:45:01 CDT 2013
 
Broadcast message from root@vmaix01 (tty) at 16:45:01 ...
 
shutdown: PLEASE LOG OFF NOW !!!
System maintenance is in progress.
All processes will be killed now.
 
Broadcast message from root@vmaix01 (tty) at 16:45:01 ...
 
shutdown: THE SYSTEM IS BEING SHUT DOWN NOW
 
JAVA_HOME=/usr/java5/jre
[2013-11-01 16:45:10,040] INFO: Looking for platform initialization commands
[2013-11-01 16:45:10,049] INFO: OS: AIX Version: 7.1
 
[2013-11-01 16:45:10,424] INFO: No initialization commands found....continuing
[2013-11-01 16:45:10,428] INFO: Base PA: /opt/ibm/ae/ovf-env-base.xml
[2013-11-01 16:45:10,430] INFO: VSAE Encryption Level: Disabled
[2013-11-01 16:45:10,433] INFO: CLI parameters are '['AE/ae.py', '-d', 'stop']'
[2013-11-01 16:45:10,434] INFO: AE base directory is /opt/ibm/ae/
[2013-11-01 16:45:10,453] INFO: Stopping AE daemon.
[2013-11-01 16:45:10,460] INFO: AE daemon was not running.
0513-044 The sshd Subsystem was requested to stop.
 
Wait for '....Halt completed....' before stopping.
Error reporting has stopped.
If you must uninstall the AE from a VM, log on to the VM command-line interface (CLI). Change your working directory to the directory where you unpacked (run tar -x) the vmc.vsae.tar AE package. Run the following commands:
For AIX, run this command:
./aix-install.sh -u
For Linux, run this command:
./linux-install.sh -u
Capturing the virtual machine image
To capture a VM image, complete the following steps:
1. After you complete the previous steps to install and prepare the VM for capture, log on to the IBM PowerVC GUI. Go to the Virtual Machines view. Select the VM that you want to capture, as shown in Figure 5-57. Click Continue.
Figure 5-57 Capture window
2. Use IBM PowerVC to choose the name for your future image and select the volumes (either boot volumes or data volumes) that you want to capture.
3. When you capture a VM, all volumes that belong to its boot set are included in the image that is generated by the capture. If the VM is brought into IBM PowerVC management, the boot set consists of all volumes that are marked as the boot set when IBM PowerVC manages the VM.
If the VM is deployed from an image that is created within IBM PowerVC, the boot set consists of all volumes that the user chooses as the boot set when the user creates the image. Unlike the volumes that belong to the VM’s boot set, the user can choose which data volumes to include in the image that is generated by the capture. Figure 5-58 shows an example of choosing to capture both boot volumes and data volumes. Click Capture.
Figure 5-58 Capture boot and data volumes
4. IBM PowerVC shows a confirmation window that lists all of the VM volumes that were chosen for capture (Figure 5-59). Click Capture again to start the capture process.
Figure 5-59 Capture window confirmation
5. During the capture, the Task column displays a “Pre-capture processing started” message. In addition, a message that states that IBM PowerVC is taking a snapshot of the VM image appears for a few seconds in the lower-right corner of the window, as shown in Figure 5-60.
Figure 5-60 Image snapshot in progress
6. If you open the Images window while an image capture is ongoing, you see the image state displayed as Queued, as shown in Figure 5-61.
Figure 5-61 Image creation in progress
7. When the image capture is complete, the state in the Images view changes to Active, as shown in Figure 5-61.
8. Look at the Storage volumes window. You can see the storage volumes that were created to hold the VM images. For example, Figure 5-62 shows two volumes that contain the images that were captured on the same VM.
Figure 5-62 Storage volumes view
9. The IBM PowerVC management host captures the image in the same way that it manages adding a volume to the system, but it adds information so that the volume can be used as an image. This information enables the image to appear in the Images view for deploying new VMs.
10. Click the Images icon on the left bar to return to the Images view. Select the image to display its information in detail. Double-click the image to open a window that is similar to the window that is shown in Figure 5-63.
Figure 5-63 Expanded information for a captured image
11. Table 5-5 explains each field in the Information section.
Table 5-5 Description of the fields in the Information section
Field
Description
Name
Name of the image capture
State
Current state of the image capture
ID
Unique identifier number for the resource
Description
Quick description of the image
Checksum
Verification sum for the resource
Captured VM
Name of the VM that was used to create the image
Created
Created date and time
Last updated
Last updated date and time
12. Table 5-6 explains each field of the Specifications section.
Table 5-6 Description of the fields in the Specifications section
Field
Description
Image type
Description of the image type
Container format
Type of container for the data
Disk format
The specific format for the disk
Operating system
The operating system on the image
Hypervisor type
The name of the hypervisor that is managing the image
Architecture
Architecture of the image
Endianness
Big Endian or Little Endian
13. The Volumes section displays all of the storage information about the image.
14. The Virtual Machines section displays the list of VMs that were deployed by using this image. The Virtual Machines section is shown in Figure 5-64.
Figure 5-64 Volumes section and Virtual Machines section
5.15.6 Deploying a new virtual machine
You can deploy a new VM by reusing one of the images that was captured, as described in 5.15.5, “Capturing a virtual machine image” on page 149. You can deploy to a specific host, or the placement policy can choose the best location for the new VM. For more information about the placement policy function, see 3.4, “Placement policies and templates” on page 32.
 
Important: Before you deploy an image, you can set a default domain name that IBM PowerVC uses when it creates VMs by using the powervc-domainname command. This domain name is used to create the fully qualified name of the new VM. If you set the domain name to ibm.com and you create a partition with the name new_VM, its fully qualified host name will be new_VM.ibm.com.
If you do not set a default domain name in the nova.conf file, IBM PowerVC uses the domain that is set for the VIOS on the host to which you are deploying. If IBM PowerVC cannot retrieve that value, it uses the domain name of the IBM PowerVC management host. If it cannot retrieve that value, no domain name is set and you must set the domain name manually after you deploy the image.
For details about the IBM PowerVC CLI and the powervc-domainname command, see 4.7, “IBM PowerVC command-line interface” on page 93.
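The domain-name fallback order that is described above can be sketched as follows. All of the variable values are hypothetical placeholders, not the output of any IBM PowerVC command.

```shell
# Illustrative sketch of the domain-name fallback order: the configured
# default domain, else the VIOS domain on the target host, else the domain
# of the IBM PowerVC management host. All values are hypothetical.
default_domain=""                  # configured default domain, if any
vios_domain=""                     # domain that is set for the VIOS
mgmt_domain="mgmt.example.com"     # domain of the management host
domain=${default_domain:-${vios_domain:-$mgmt_domain}}
echo "new_VM.${domain}"
```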
IBM PowerVC Version 1.2.3 has the following limits regarding deployments:
IBM PowerVC supports a maximum of 50 concurrent deployments. As a preferred practice, do not exceed eight concurrent deployments for each host.
Running more than 10 concurrent deployment operations might require additional memory and processor capacity on the IBM PowerVC management host.
If you use only SAN storage and you plan to batch-deploy over 100 VMs that are based on one image, you must make multiple copies of that image and deploy the VMs in batches of 10.
The following settings might increase the throughput and decrease the duration of deployments:
Use the striping policy instead of the packing policy.
Limit the number of concurrent deployments to match the number of hosts.
The host group and storage connectivity group that you select determine the hosts that are available as target hosts in the deployment operation. For more information, see 3.6.3, “Storage connectivity groups and tags” on page 52.
You can initiate a new deployment from the Images window to list the available images. Complete the following steps:
1. Select the image that you want to install on the VM that you create. The selected image background changes to light blue. Then, click Deploy, as shown in Figure 5-65.
Figure 5-65 Image capture that is selected for deployment
2. IBM PowerVC opens a new window where you must define information about the new VM. Figure 5-66 presents an example of this window.
Figure 5-66 Information to deploy an image
During the planning phase of the partition creation, you defined the following information:
 – VM name
 – Instances
If you have a DHCP server or an IP pool that is configured, you can deploy several VMs simultaneously.
 – Host or host group
Manually select the target host where the new VM will be deployed, or select the host group so that IBM PowerVC selects the host based on the configured policy. For details about the automatic placement of partitions, see 3.4, “Placement policies and templates” on page 32.
 – Storage connectivity group
Select one storage connectivity group for the new VM to access its storage. IBM PowerVC can use a storage connectivity group to determine the use of vSCSI or NPIV to access SAN storage. For details about the selection of the storage path and FC ports to use, see 3.6.3, “Storage connectivity groups and tags” on page 52.
 – Compute template
Select the compute template that you want to use to deploy the new VM with standard resource definitions. For detailed information about planning for CPU and memory resources by using templates, see 3.4.4, “Information that is required for compute template planning” on page 36.
You can see in Figure 5-66 on page 165 that IBM PowerVC displays the values pre-set in the template in fields that can be overwritten. You can change the amount of resources that you need for this new VM.
 – Image volumes
Since IBM PowerVC Version 1.2.3, you can capture a multiple-volume image. In this case, two volumes are included in the image. You must select the storage template that you want for each volume to deploy the new VM with predefined storage capacity. You can select different storage templates for those volumes to meet your business needs. IBM PowerVC presents a drop-down menu that lists the storage templates that are available in the storage provider in which the image volumes are stored.
 – New and existing volumes
You can add new or existing volumes in addition to the volumes that are included in the image. To add volumes, click Add volume. The Add Volume window, where you attach a volume to the VM, opens.
 – Network:
 • Primary network
Select the network. If the selected network does not have a configured DHCP server, you must manually provide an IP address or let IBM PowerVC select an IP address from the IP pool.
 • Additional networks
If two or more networks were defined in IBM PowerVC, you can click the plus sign (+) icon to add more networks. Select the network. Get the IP address from the DHCP server, provide the IP address manually, or select one from the IP pool automatically.
 
Note: IBM PowerVC verifies that the IP address that you provide is not already used for another VM, even if the IP address is used in a VM that is powered off.
 – Activation input:
You can upload configuration scripts or add configuration data at the time of deploying a VM by using the activation input option. This script or data automatically configures your VM according to your requirements after it is deployed. For more information about the accepted data formats in cloud-init and examples of commonly used cloud configuration data formats, see the cloud-init documentation.
 
Note: The file or scripts that you upload and add here are used by the cloud-init initialization package and the AE for AIX VMs only. The AE for AIX VMs supports shell scripts that start with #! only, and it does not support the other cloud-init data formats. For any other operating system, the AE does not use the data that you upload for activation.
Note: On the right side of the window, IBM PowerVC displays the amount of available resources on the target host and the amount of additional resources that are requested for the new partition, so you can see the amount of resources that will be used and free on this host after the installation of the new partitions.
For more information about activation input, see the IBM Knowledge Center:
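As an illustration, a hypothetical activation-input file in cloud-config format (usable by cloud-init-based VMs; the AE on AIX accepts only #! shell scripts) might look like the following example. The file path and message are placeholders.

```yaml
#cloud-config
# Hypothetical activation input: run a command and write a file at first
# boot. The path and text are placeholders, not values from this chapter.
runcmd:
 - echo "deployed by IBM PowerVC" >> /etc/motd
write_files:
 - path: /root/deploy-note.txt
   permissions: '0600'
   content: |
     Configured at first boot through activation input.
```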
3. Click Deploy in the lower part of the window to start the deployment of the new VM. This process might take a few minutes to finish.
 
Important: For other vendors’ storage devices, no technique is available that is like the IBM FlashCopy® service in IBM Storwize storage; they use LUN migration instead. A deployment might take an hour to complete. The amount of time depends on the volumes’ sizes and the storage device performance. Contact your storage administrator before you design your IBM PowerVC infrastructure.
4. When the deployment finishes, you can see a new VM in the Virtual Machines window. This new VM is a clone of the captured image. The new VM is already configured and powered on, as shown in Figure 5-67.
Figure 5-67 Newly deployed virtual machine
Tip: The new VM is a clone of the image, so you can log on to this VM with the same user ID and password combination that is defined in the VM from which the image was captured.
Adding virtual Ethernet adapters for virtual machines
After the VM is deployed successfully, you can add more virtual Ethernet adapters to the VM if you defined more networks in IBM PowerVC. In a VM, IBM PowerVC allows only one virtual Ethernet adapter for each network. Complete the following steps:
1. To add a virtual Ethernet adapter for a VM, select the VM name on the Virtual Machines window.
2. Go to the VM’s details window. As shown in Figure 5-68, in the Network Interfaces section, click Add.
3. Select the network that you want to connect to. Assign an IP address, or let IBM PowerVC select an IP address from the IP pool.
4. Click Add Interface. A new virtual Ethernet adapter is added for the VM.
Figure 5-68 Add an Ethernet adapter for a virtual machine
 
Note: After you add the virtual Ethernet adapter, you must refresh the hardware list in the partition. For example, run the cfgmgr command in AIX to assign the IP address to the newly discovered Ethernet adapter manually.
Adding collocation rules
Use collocation rules to specify that selected VMs must always be kept on the same host (affinity) or that they can never be placed on the same host (anti-affinity). These rules are enforced when a VM is relocated. For example, in PowerHA scenarios, you must force the pair of high availability (HA) VMs to exist on different physical machines. Otherwise, a single point of failure (SPOF) risk exists. Use the anti-affinity collocation rule to create this scenario.
To create a collocation rule, click Configuration → Collocation Rules → Create Collocation Rule, as shown in Figure 5-69. Enter the collocation rule name, select the policy type (either Affinity or Anti-Affinity), select the VMs, and click Create. The collocation rule creation is complete.
Figure 5-69 Create Collocation Rule
 
Important: When VMs are migrated or restarted remotely, one VM is moved at a time, which has the following implications for VMs in collocation rules that specify affinity:
The VMs cannot be migrated or restarted remotely on another host.
When you put a host into maintenance mode, if that host has multiple VMs in the same collocation rule, you cannot migrate active VMs to another host.
To migrate a VM or restart a VM remotely in these situations, the VM must first be removed from the collocation rule. After the VM is migrated or restarted remotely, the VM can be added to the correct collocation rule.
5.15.7 Resizing the virtual machine
The IBM PowerVC management host can resize the managed VMs dynamically. Complete the following steps:
1. From the Virtual Machines window, click Resize in the upper bar of the window, as shown in Figure 5-70.
Figure 5-70 Virtual Machines: Resize
2. In the next window (Figure 5-71), enter the new values for the resources or choose an existing compute template. Select the option that fits your business needs.
Figure 5-71 VM Resize window to select a compute template
When you enter a new value, it is verified against the minimum and maximum values that are defined in the partition profile. If the requested new values exceed these limits for the VM, IBM PowerVC rejects the request, highlights the field with a red outline, and issues an error notice, as shown in Figure 5-72.
Figure 5-72 Exceeded value for resizing
3. After you complete the information that is required in this window, click Resize to start the resizing process. You see a message window in the lower-right part of the window and a “Complete” message in the message view.
 
Tip: The IBM PowerVC management server compares the entered values with the values in the profile of the selected VM. If you modify the VM profile, you must shut down and restart the VM for the changes to take effect.
Important: To refresh the profile, shut down and restart the VM rather than just restart it. Restarting the VM keeps the current values rather than reading the new values that you set in the profile.
4. The resize process can take a few minutes. When it finishes, you can see the new sizes in the Specifications section of the VM.
 
Note: With the IBM PowerVC resize function, you can change the current settings of the machine only. You cannot use the resize function to change the minimum and maximum values that are set in the partition profile or to change a partition from shared to dedicated.
5.15.8 Migration of virtual machines
IBM PowerVC can manage the LPM feature. Use the LPM feature to migrate VMs from one host to another host.
Migration requirements
To migrate VMs by using the IBM PowerVC management server, ensure that the source and destination hosts and the VMs are configured correctly.
To migrate a VM, the following requirements must be met:
The VM is in Active status in the IBM PowerVC management host.
The PowerVM Enterprise Edition or PowerVM for IBM Linux on Power hardware feature is activated on your hosts. This feature enables the use of the LPM feature.
The networks for both source and target hosts must be mapped to SEAs by using the same virtual Ethernet switch.
As a preferred practice, set the maximum number of virtual resources (virtual adapters) to at least 200 on all of the hosts in your environment. This value ensures that you can create enough VMs on your hosts.
The logical memory block size on the source host and the destination host must be the same.
Both the source and destination hosts must have an equivalent configuration of VIOSs that belong to the same storage connectivity group.
 
Note: If the source host has two VIOSs and the target host has only one VIOS, it is not possible to migrate a partition by accessing its storage through both VIOSs on the source. However, if a partition on the source host is using only one VIOS to access its storage, it can be migrated (assuming that other requirements, such as port tagging, are met).
The processor compatibility mode on the VM that you want to migrate must be supported by the destination host.
The VM must have an enabled RMC connection.
To migrate a VM with a vSCSI attachment, the destination VIOS must be zoned to the backing storage.
At least one pair of VIOS VMs must be storage-ready and members of the storage connectivity group. Each of these VIOS VMs must have at least two physical FC ports ready.
Each of the two physical FC ports must be connected to a distinct fabric, and the fabric must be set correctly on the FC ports’ Configuration pages.
The following restrictions apply when you migrate a VM:
You cannot migrate a VM to a host that is a member of a different host group.
If the VM is running a Little Endian guest, the target host must support Little Endian guests.
If the VM was created as remote restart-capable, the target host must support remote restart.
Certain IBM Power Systems servers can run only Linux workloads. When you migrate an AIX or IBM i VM, these hosts are not considered for placement.
You cannot exceed the maximum number of simultaneous migrations that are designated for the source and destination hosts. The maximum number of simultaneous migrations depends on the number of migrations that are supported by the VIOSs that are associated with each host.
A source host in a migration operation cannot serve concurrently as a destination host in a separate migration operation.
If you deployed a VM with a processor compatibility mode of POWER7 and later changed the mode to POWER6, you cannot migrate the VM to a POWER6 host. The MAC address for a POWER7 VM is generated by IBM PowerVC during the deployment.
To migrate to a POWER6 host, the MAC address of the VM must be generated by the HMC. To migrate from a POWER7 to a POWER6 host, you must initially deploy to a POWER7 system with the processor compatibility mode set to a POWER6 derivative, or you must initially deploy to a POWER6 host.
PowerVM does not support the migration of a VM whose attachment type changes its multipathing solution between the source and destination VIOSs. For example, a VM on a path-control module (PCM)-attached VIOS can be successfully migrated only to a PCM-attached VIOS. However, PowerVM does not enforce this requirement. To avoid unsupported migrations, create separate storage connectivity groups for PCM and PowerPath multipathing solutions.
Collocation rules are enforced during migration:
 – If the VM is a member of a collocation rule that specifies affinity and multiple VMs are in that collocation rule, you cannot migrate it. Otherwise, the affinity rule is broken. To migrate a VM in this case, remove it from the collocation rule and then add it to the correct group after the migration.
 – If the VM is a member of a collocation rule that specifies anti-affinity, you cannot migrate it to a host that has a VM that is a member of the same collocation rule. For example, assume the following scenario:
 • VM A is on Host A.
 • VM B is on Host B.
 • VM A and VM B are in a collocation rule that specifies anti-affinity.
Therefore, VM A cannot be migrated to Host B.
 – Only one migration or remote restart at a time is allowed for VMs in the same collocation rule. Therefore, if you try to migrate a VM or restart a VM remotely and any other VMs in the same collocation rule are being migrated or restarted remotely, that request fails.
Migrating the virtual machine
To migrate a VM, complete the following steps:
1. Open the Virtual Machines window and select the VM that you want to migrate. The background changes to light blue.
2. Click Migrate, as shown in Figure 5-73.
Figure 5-73 Migrate a selected virtual machine
3. You can select the target host, or you can let the placement policy determine the target, as shown in Figure 5-74.
Figure 5-74 Select the target server before the migration
4. Figure 5-75 shows that during the migration, the Virtual Machines window shows the partition with the state and task both set to Migrating.
Figure 5-75 Virtual machine migration in progress
5. After the migration completes, you can check the Virtual Machines window to verify that the partition is now hosted on the target host, as shown in Figure 5-76.
 
Note: A warning message in the Health column is normal. It takes a few minutes to change to OK.
Figure 5-76 Virtual machine migration finished
5.15.9 Host maintenance mode
You move a host to maintenance mode to perform maintenance activities, such as updating firmware or replacing hardware.
Maintenance mode requirements
Before you move the host into maintenance mode, check whether the following requirements are met:
If the request was made to migrate active VMs when the host entered maintenance mode, the following conditions must also be true:
 – The hypervisor must be licensed for LPM.
 – The VMs on the host cannot be in the Error, Paused, or Building states.
 – On all active VMs, the health must be OK and the RMC connections must be Active.
 – All requirements for live migration must be met, as described in “Migration requirements” on page 173.
The host’s hypervisor state must be Operating. If it is not, VM migrations might fail.
If the request was made to migrate active VMs when the host entered maintenance mode, the request fails if either of the following conditions is true:
 – A VM on the host is a member of a collocation rule that specifies affinity and has multiple members.
 – The collocation rule has a member that is already undergoing a migration or is being restarted remotely.
Putting the host in maintenance mode
If all of the requirements are met, you can put a host in maintenance mode by following these steps:
1. On the Hosts window, select the host that you want to put into maintenance mode, and click Enter Maintenance Mode, as shown in Figure 5-77.
Figure 5-77 Enter Maintenance Mode
2. If you want to migrate the VMs to other hosts, select Migrate active virtual machines to another host, as shown in Figure 5-78. This option is unavailable if no hosts are available for the migration.
Figure 5-78 Migrate virtual machines to other hosts
3. Click OK.
After maintenance mode is requested, the host’s maintenance state is Entering Maintenance while the VMs are migrated to another host, if requested. This status changes to Maintenance On after the migration is complete and the host is fully in the maintenance state.
To remove a host from maintenance mode, select the host and select Exit Maintenance Mode. Click OK on the confirmation window, as shown in Figure 5-79.
Figure 5-79 Exit Maintenance Mode
You can add VMs again to the host after it is brought out of maintenance mode.
 
Tip: You can edit the period after which the migration operation times out and the maintenance mode enters an error state by running the following commands:
/usr/bin/openstack-config --set /etc/nova/nova.conf DEFAULT prs_ha_timeout_seconds <duration_in_seconds>
For example, to set the timeout for two hours, run this command:
/usr/bin/openstack-config --set /etc/nova/nova.conf DEFAULT prs_ha_timeout_seconds 7200
Then, restart openstack-nova-ibm-ego-ha-service:
service openstack-nova-ibm-ego-ha-service restart
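To confirm that the change took effect, you can read the value back; the following is a minimal sketch, assuming the openstack-config utility and /etc/nova/nova.conf are present on the IBM PowerVC management server:

```shell
# Sketch: read back the timeout that was set above. Assumption: the
# openstack-config utility and /etc/nova/nova.conf exist on the
# IBM PowerVC management server; the sketch degrades gracefully if not.
if command -v openstack-config >/dev/null 2>&1 && [ -r /etc/nova/nova.conf ]; then
    timeout=$(openstack-config --get /etc/nova/nova.conf DEFAULT prs_ha_timeout_seconds)
    echo "prs_ha_timeout_seconds is $timeout"
else
    timeout="unknown"
    echo "openstack-config or nova.conf is not available on this system"
fi
```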
5.15.10 Restarting virtual machines remotely from a failed host
IBM PowerVC can restart VMs remotely from a failed host to another host. To successfully restart VMs remotely by using IBM PowerVC, you must ensure that the source host and destination host are configured correctly.
Remote restart requirements
To restart a VM remotely, the following requirements must be met:
The source and destination hosts must have access to the storage that is used by the VMs.
The source and destination hosts must have all of the appropriate virtual switches that are required by networks on the VM.
The hosts must be running firmware 820 or later.
The HMC must be running with HMC 820 Service Pack (SP) 1 or later, with the latest program temporary fix (PTF).
The hosts must support the simplified remote restart capability.
Both hosts must be managed by the same HMC.
The service processors must be running and connected to the HMC.
The source host must be in the Error, Power Off, or Error - dump in progress state on the HMC.
The VM must be created with the simplified remote restart capability enabled.
The remote restart state of the VM must be Remote restartable.
Shared storage pools (SSPs) are not officially supported with PowerVM simplified remote restart.
Restarting a virtual machine remotely
Before you can restart a VM on PowerVM remotely, you must deploy or configure the VM with remote restart capability. You can deploy or configure the VM with remote restart capability in two ways:
Create a compute template with the enabled remote restart capability and deploy a VM with that compute template, as shown in Figure 5-80.
Figure 5-80 Create a compute template with enabled remote restart capability
Modify the remote restart property after the VM is deployed. In Figure 5-81, you can see a VM with the correct remote restart state, which is Remote restartable.
Figure 5-81 Correct remote restart state under the Specifications section
 
Note: You can change the remote restart capability of a VM only if the VM is shut off.
Important: A VM can be restarted remotely in PowerVM only if the Remote Restart state is Remote restartable. When a VM is deployed initially, the HMC must collect partition and resource configuration information. The remote restart state changes from Invalid to different states. When it changes to Remote restartable, IBM PowerVC can initiate the remote restart operation for that VM.
The Remote Restart task is available under the Hosts view, as shown in Figure 5-82.
Figure 5-82 Remotely Restart Virtual Machines option
To restart a VM remotely, select the failed host and then select Remotely Restart Virtual Machines. Then, you can select to either restart a specific VM remotely or restart all of the VMs on the failed host remotely, as shown in Figure 5-83.
Figure 5-83 Remotely Restart Virtual Machines
The scheduler can choose a destination host automatically by placement policy, or you can choose a destination host (Figure 5-84).
Figure 5-84 Destination host
A notification on the user interface indicates that a VM was successfully restarted remotely.
When you select to restart all VMs on a failed host remotely, the host experiences several transitions. Table 5-7 shows the host states during the transition.
Table 5-7 Host states during the transition
Remote Restart Started: IBM PowerVC is preparing to rebuild the VMs. This process can take up to one minute.
Remote Restart Rebuilding: IBM PowerVC is rebuilding the VMs. After the VMs are restarted remotely on the destination host, the source host goes back to displaying its state.
Remote Restart Error: An error occurred while one or more VMs were moved to the destination host. You can check the reasons for the failure in the corresponding compute log file in the /var/log/nova directory.
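When a host reaches the Remote Restart Error state, a quick way to find the relevant entries is to search the compute logs; this is a hedged sketch that relies only on the /var/log/nova location given above:

```shell
# Sketch: list compute log files that record errors, using the
# /var/log/nova location given in the table above. The sketch degrades
# gracefully on systems where that directory does not exist.
if [ -d /var/log/nova ]; then
    matches=$(grep -lis "error" /var/log/nova/*.log 2>/dev/null)
    matches=${matches:-none}
    echo "compute log files with errors: $matches"
else
    matches="none"
    echo "/var/log/nova does not exist on this system"
fi
```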
5.15.11 Attaching a volume to the virtual machine
The IBM PowerVC management server can handle storage volumes. By using the management server, you can attach a new or existing volume to a VM. Complete the following steps:
1. Click the Virtual Machines icon on the left, and then select the VM to which you want to attach a volume and select Attach Volume. Attach Volume offers a choice to attach an existing volume to a VM or create a volume, as shown in Figure 5-85.
Figure 5-85 Attaching a new volume to a virtual machine
 
Note: You can select the Enable sharing check box so that other VMs can use the volume also, if needed.
2. Select the storage template to select the backing device, choose the volume name, and choose the volume size in GB. You can add a short description for the new volume. The Storage bar on the right side of the window changes dynamically when you change the size. Click Attach. IBM PowerVC creates a volume, attaches it to the VM, and then displays a message at the bottom of the window to confirm the creation of the disk.
3. To see the new volume, open the VM’s detailed information window and select the Attached Volumes tab. This tab displays the current volumes that are attached to the VM, as shown in Figure 5-86.
Figure 5-86 Attached Volumes tab view
4. To complete the process, you must run the correct command on the VM command line:
 – For IBM AIX operating systems, run this command as root:
cfgmgr
 – For Linux operating systems, run this command as root, where host_N is the controller that manages the disks on the VM:
echo "- - -" > /sys/class/scsi_host/host_N/scan
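Instead of working out which host_N owns the new disk, the Linux rescan can be issued to every SCSI host controller; this is a minimal sketch, assuming the standard sysfs layout on the VM:

```shell
# Sketch, assuming the standard Linux sysfs layout on the VM: request a
# rescan on every SCSI host controller so that you do not need to work
# out which host_N owns the newly attached volume. Run as root on the VM.
rescanned=0
for scan in /sys/class/scsi_host/host*/scan; do
    if [ -w "$scan" ]; then
        echo "- - -" > "$scan"
        rescanned=$((rescanned + 1))
    fi
done
echo "rescan requested on $rescanned SCSI host controller(s)"
```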
5.15.12 Detaching a volume from the virtual machine
To detach a volume from the VM, you must first remove it from the operating system.
 
Note: As preferred practice, cleanly unmount all file systems from the disk, remove the logical volume, and remove the disk from AIX before you detach the disk from IBM PowerVC.
Removing the volume from the operating system
For the IBM AIX operating system, run this command as root, where hdisk_N is the disk that you want to remove:
rmdev -dl hdisk_N
For the Linux operating system, restart after you detach the volume.
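On Linux, the disk can often be removed from the running system through sysfs instead of rebooting; this is a hedged sketch, where sdX is a placeholder device name (an assumption, not a value from IBM PowerVC):

```shell
# Hedged sketch: remove a disk from a running Linux VM before detaching
# the volume in IBM PowerVC, which can avoid the reboot. "sdX" is a
# placeholder (an assumption); replace it with the real device name.
# Unmount all file systems on the disk first, then run this as root.
DEV=sdX
removed=no
if [ -w "/sys/block/$DEV/device/delete" ]; then
    echo 1 > "/sys/block/$DEV/device/delete"   # the kernel drops the device
    removed=yes
fi
echo "device $DEV removed: $removed"
```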
Detaching the volume from a virtual machine
The IBM PowerVC management server can handle storage volumes. By using the IBM PowerVC management server, you can detach an existing volume from a VM. Complete the following steps:
1. Click the Virtual Machines icon, and then double-click the VM from which you want to detach a volume.
2. Click the Attached Volumes tab to display the list of volumes that are attached to this VM. Select the volume that you want to detach.
3. Click Detach, as shown in Figure 5-87.
Figure 5-87 Detach a volume from a virtual machine
4. IBM PowerVC shows a confirmation window (Figure 5-88).
Figure 5-88 Confirmation window
5. You see a Detaching status in the State column. When the process finishes, the volume is detached from the VM.
The detached volume is still managed by the IBM PowerVC management host. You can see the volume from the Storage window.
5.15.13 Resetting the state of a virtual machine
In certain situations, a VM becomes unavailable or is in a state that the IBM PowerVC management server does not recognize. When these situations occur, you can run a Reset State procedure. This process sets the machine back to an Active state.
Figure 5-89 shows a VM’s detailed information window with a Reset State hot link that appears on the State line of the Information section. Click Reset State to start the reset process.
 
Note: No changes are made to the connection or database.
Figure 5-89 Reset the virtual machine’s state
The IBM PowerVC management server displays a confirmation window. Click OK to continue or Cancel to abort.
 
Note: This process can take a few minutes to complete. If the state does not change, try to restore the VM or deploy the VM again from an image.
5.15.14 Deleting images
To delete an image that is not in use, open the Images window, select the image that you want to delete, and click Delete, as shown in Figure 5-90.
Figure 5-90 Image that is selected for deletion
The IBM PowerVC management server displays a confirmation window to delete this image, as shown in Figure 5-91.
Figure 5-91 Delete an image confirmation window
IBM PowerVC shows a message that the image is being deleted and confirms the successful completion of the operation. The image is deleted from IBM PowerVC, and the original volume that is used by this image is left untouched.
5.15.15 Unmanaging a virtual machine
The Unmanage function is used to discontinue the management of a VM from IBM PowerVC. After a VM becomes unmanaged, the VM is no longer listed in the Virtual Machines window, but the VM still exists. The VM and its resources remain configured on the host. The VM can still be managed from the HMC. The VM remains running.
To unmanage a VM, open the Virtual Machines window, and select the VM that you want to remove from IBM PowerVC. The Unmanage option is enabled. Click Unmanage to remove this VM from the IBM PowerVC environment.
Figure 5-92 shows the Unmanage option to unmanage a VM.
Figure 5-92 Unmanage an existing virtual machine
5.15.16 Deleting a virtual machine
IBM PowerVC can delete VMs completely from your systems.
 
Important: By deleting a VM, you completely remove the VM from the host system and from the HMC or PowerVM NovaLink configuration, and IBM PowerVC no longer manages it. Volumes that are used by the VM can be deleted selectively.
To remove a VM, open the Virtual Machines window and select the VM that you want to remove. Click Delete, as shown in Figure 5-93.
Figure 5-93 Delete a virtual machine
The IBM PowerVC management server displays a confirmation window in which you select the volumes to delete with the VM. Volumes can be selected with a granularity of Boot volumes, All volumes, or Selected volumes, as shown in Figure 5-93. To choose volumes manually, select Selected volumes, click Select Volumes, and choose the volumes to delete in the Select Volumes window, as shown in Figure 5-94. To permanently delete the VM, click Delete. IBM PowerVC asks you to confirm the deletion of the VM and its selected resources.
Figure 5-94 Select Volumes window to delete selected volumes
 
Important: You can delete a VM while it is running. The process stops the running VM and then deletes it.
When IBM PowerVC deletes storage, it behaves differently, depending on how volumes were created:
Volumes that were created by IBM PowerVC (the boot volumes) are deleted and removed from the VIOS and storage back ends.
Volumes that were attached to the partition are detached only during the partition deletion.
The zoning to storage is removed by the deletion operation.
 