IBM PowerVC installation planning
This chapter describes the key aspects of IBM® Power Virtualization Center (IBM PowerVC) installation planning:
Section 3.1, “IBM PowerVC requirements” on page 22 presents the hardware and software requirements for the various components of an IBM PowerVC environment: management station, managed hosts, network, storage area network (SAN), and storage devices.
Sections 3.3, “Host and partition management planning” on page 29 through 3.10, “Product information” on page 72 provide detailed planning information for various aspects of the environment’s setup:
 – Hosts
 – Partitions
 – Placement policies
 – Templates
 – Storage and SAN
 – Storage connectivity groups and tags
 – Networks
 – User and group management
 – Security
3.1 IBM PowerVC requirements
This section describes the necessary hardware and software to implement IBM PowerVC to manage AIX, Linux, and IBM i platforms.
Beginning with IBM PowerVC Version 1.3.1, a new IBM PowerVC offering, IBM Cloud PowerVC Manager, is included along with IBM PowerVC Standard in the IBM PowerVC installation media. For information about available releases, see this website:
IBM Cloud PowerVC Manager provides a self-service portal that allows for the provisioning of new virtual machines (VMs) in a PowerVM-based private cloud without direct system administrator intervention.
3.1.1 Hardware and software requirements
The following sections describe the hardware, software, and resource minimum requirements at the time of writing for Version 1.3.1 of IBM Cloud PowerVC Manager and IBM PowerVC Standard Edition. For the complete requirements, see the IBM Knowledge Center:
IBM Cloud PowerVC Manager:
Click IBM Cloud PowerVC Manager 1.3.1 → Planning.
IBM PowerVC managing PowerVM:
Click IBM PowerVC Standard Edition 1.3.1 → Managing PowerVM → Planning for IBM PowerVC Standard Managing PowerVM.
IBM PowerVC managing PowerKVM:
Click IBM PowerVC Standard Edition 1.3.1 → Managing PowerKVM → Planning for IBM Virtualization Center.
3.1.2 Hardware and software requirements for IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager
The following information provides a consolidated view of the hardware and software requirements for both IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager.
 
Note: For Version 1.3.1, both the hardware and software requirements are the same for IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager.
IBM PowerVC management and managed hosts
The IBM PowerVC architecture supports a single management host for each managed domain. It is not possible to configure redundant IBM PowerVC management hosts that control the same objects.
The VM that hosts the IBM PowerVC management software must be dedicated to this function. No other software or application can be installed on this VM. However, you can install software for the management of this VM, such as monitoring agents and data collection tools for audit or security. Table 3-1 lists the IBM PowerVC Standard Edition hardware and software requirements.
Table 3-1 Hardware and OS requirements
IBM PowerVC management host:
Supported hardware: IBM POWER7, POWER7+, or POWER8 processor-based server models, or any x86_64 server that meets the recommended CPU, memory, and storage requirements.
Supported operating systems:
 – Red Hat Enterprise Linux (RHEL) Version 7.1 or later for IBM Power (ppc64 and ppc64le)
 – RHEL Server Version 7.1 or later for x86_64
 – RHEL Server Version 7.2 or later (BE/LE)
Managed hosts:
Supported hardware:
 – PowerVM: IBM Power processor-based servers (POWER6, POWER7, POWER7+, and POWER8)
 – PowerKVM: POWER8 servers with IBM PowerKVM V2.1.1.2 or later
Guest operating systems that are supported for deployment:
 – PowerVM and PowerKVM:
   RHEL 5.9, 5.10, 6.4, 6.5, 6.6, 7.0, and 7.1 (Little Endian)
   SUSE Linux Enterprise Server Version 11 SP3 and SP4, and Version 12 (Little Endian)
   Ubuntu 14.04.1 or later
   Ubuntu 15.10.1 or later (LE)
   Ubuntu 16.04 or later
 – PowerVM only:
   IBM AIX 6.1 and 7.1
   IBM i 7.1 and 7.2
Table 3-2 describes the minimum and recommended resources that are required for IBM PowerVC VMs. In the table, the meaning of the processor capacity row depends on the type of host that is used as the IBM PowerVC management host:
If the IBM PowerVC management host is PowerVM, processor capacity refers to either the number of processor units of entitled capacity or the number of dedicated processors.
If the IBM PowerVC management host is PowerKVM or x86, processor capacity refers to the number of physical cores.
Table 3-2 Minimum resource requirements for the IBM PowerVC VM
In the following layout, the “Up to 100” column is the minimum configuration; the remaining columns are the recommended configurations for the stated number of VMs.
Number of VMs      | Up to 100 | Up to 400 | 401 - 1000 | 1001 - 2000 | 2001 - 3000 | 3001 - 5000
Processor capacity | 1         | 2         | 4          | 8           | 8           | 12
Virtual CPUs       | 2         | 2         | 4          | 8           | 8           | 12
Memory (GB)        | 10        | 10        | 12         | 20          | 28          | 44
Swap space (GB)    | 10        | 10        | 12         | 20          | 28          | 44
Disk space (GB)    | 40        | 40        | 60         | 80          | 100         | 140
The installer has the following space requirements:
/tmp: 250 MB.
/usr: 250 MB.
/opt: 2.5 GB.
/home: 3 GB (minimum). As a preferred practice, assign 20% of the disk space to /home. For example, 8 GB is preferable for 400 VMs, 20 GB for 1,000 VMs, and 30 GB for 2,000 VMs.
The remaining space is used for /var and swap space.
Supported activation methods
Table 3-3 lists the supported activation methods for VMs on managed hosts.
Virtual Solutions Activation Engine (VSAE) is deprecated, and it might be withdrawn from support in subsequent releases. As a preferred practice, construct new images with cloud-init, which is the strategic image activation technology of IBM. It offers a rich set of system initialization features and a high degree of interoperability.
Table 3-3 Supported activation methods for managed hosts
Operating system | LE or BE | Version | Initialization
AIX | BE | 6.1 TL0 SP0 or later, 7.1 TL0 SP0 or later | VSAE
AIX | BE | 6.1 TL9 SP5 or later, 7.1 TL3 SP5 or later | cloud-init
IBM i | BE | 7.1 TR10 or later, 7.2 TR2 or later | IBM i AE (deprecated)
RHEL | BE | 6.4 or later | cloud-init (see note 1)
RHEL | BE | 7.0 or later | cloud-init
RHEL | LE | 7.1 or later | cloud-init
RHEL | BE | 7.2 or later | cloud-init
RHEL | LE | 7.2 or later | cloud-init
SUSE Linux Enterprise Server | BE | 11 SP3 or later | cloud-init (see note 1)
SUSE Linux Enterprise Server | LE | 12 SP0 or later | cloud-init
Ubuntu | LE | 14.04.1 or later | cloud-init
Ubuntu | LE | 15.04.1 or later | cloud-init
Ubuntu | LE | 16.04.1 or later | cloud-init
Note 1: VSAE is no longer available for these releases.
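Before you capture an image, you can confirm which activation technology is installed by querying the VM's package manager. These are standard operating system commands, not IBM PowerVC commands, and they are shown here only as a quick check:
For RHEL or SUSE Linux Enterprise Server images: rpm -q cloud-init
For Ubuntu images: dpkg -l cloud-init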
Hardware Management Console
Table 3-4 shows the Hardware Management Console (HMC) version and release requirements to support IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager managing PowerVM. This section does not apply for managing systems that are controlled by PowerKVM.
Table 3-4 HMC requirements
Software level: 8.4.0 or 8.5.0
Hardware-level requirements:
 – Required: Up to 300 VMs, CR5 with 4 GB of memory; more than 300 VMs, CR6, CR7, or CR8 with 8 GB of memory
 – Recommended: Up to 300 VMs, CR6, CR7, or CR8 with 8 GB of memory; more than 300 VMs, CR6, CR7, or CR8 with 16 GB of memory
As a preferred practice, update to the latest HMC fix pack for the specific HMC release. You can check the fixes for the HMC by using the IBM Fix Level Recommendation Tool at the following website:
You can get the latest fix packages from IBM Fix Central at the following website:
Virtualization platform
Table 3-5 includes the Virtual I/O Server (VIOS) version requirements for IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager managing PowerVM.
Table 3-5 Supported virtualization platforms
VIOS for POWER7 hosts and earlier: Version 2.2.4.10, 2.2.4.20, and 2.2.5.x
VIOS for POWER8 hosts: Version 2.2.4.10, 2.2.4.20, and 2.2.5.x
 
Tip: Set the Maximum Virtual Adapters value to at least 200 on the VIOSs because IBM PowerVC gives you warning messages below 200. However, VIOSs that are managed by IBM PowerVC can serve more than 100 VMs, and each VM can require four or more virtual I/O devices from the VIOS. When you plan the VIOS configuration, base the size of the Maximum Virtual Adapters value on real workload requirements.
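To check the current setting before you register a host, you can query the VIOS profile from the HMC command line. This is a sketch only; the managed system name, VIOS name, and profile name are placeholders that you must replace with your own values:
lssyscfg -r prof -m managed_server -F name,lpar_name,max_virtual_slots --filter "lpar_names=vios1"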
Network resources
Table 3-6 lists the network infrastructure that is supported by IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager.
Table 3-6 Supported network hardware and software
Network switches: IBM PowerVC does not manage network switches, but it supports network configurations that use virtual LAN (VLAN)-capable switches.
Virtual networks:
 – PowerVM: Shared Ethernet adapters (SEAs) for VM networking.
 – PowerKVM: Open vSwitch 2.0 is supported. The backing adapters for the virtual switch can be physical Ethernet adapters, bonded adapters (Open vSwitch also supports bonding), or Linux bridges (not preferred).
Storage providers
Table 3-7 lists the hardware that is supported by IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager managing PowerVM.
Table 3-7 Supported storage hardware for PowerVM
Storage systems:
 – IBM Storwize family of controllers
 – IBM XIV Storage System
 – IBM DS8000 7.5.0 and 8.0.0
 – EMC VNX (the EMC VNX Series is supported on RHEL Server for x86_64 management hosts only due to EMC limitations)
 – EMC VMAX
SAN switches:
 – Brocade Fibre Channel (FC) switches are supported by the Brocade OpenStack Cinder zone manager driver.
 – Cisco SAN FC switches are supported by the Cisco Cinder zone manager driver.
Storage connectivity:
 – FC attachment through at least one N_Port ID Virtualization (NPIV)-capable host bus adapter (HBA) on each host
 – Virtual SCSI (vSCSI)
 – Shared storage pools
Table 3-8 lists the hardware that is supported by IBM PowerVC Standard Edition managing PowerKVM.
Table 3-8 Supported storage hardware for PowerKVM
Storage systems: File-level storage. Network File System (NFS) V3 or V4 is required for migration. It must be manually configured on the kernel-based VM (KVM) host before the host is registered on IBM PowerVC.
Storage connectivity: Internet Small Computer System Interface (iSCSI). Data volumes are supported on the IBM Storwize family of controllers only.
Note:
IBM i hosts on IBM XIV Storage Systems must be attached by vSCSI due to IBM i and IBM XIV storage limitations.
IBM i hosts on EMC VNX and VMAX storage systems must be attached by vSCSI due to IBM i and EMC storage limitations.
Supported storage connectivity options
Table 3-9 shows the supported storage connectivity options.
 
Note: Both NPIV and vSCSI data volumes are supported for a vSCSI boot volume, but because storage connectivity groups support only one type of connectivity, if you have a vSCSI boot volume, you can have either NPIV or vSCSI data volumes, but not both.
Table 3-9 Supported storage connectivity options
Boot volume               | SSP data volume | NPIV data volume | vSCSI data volume
Shared storage pool (SSP) | X               | X                |
NPIV                      |                 | X                |
vSCSI                     |                 | X                | X
Security
Table 3-10 includes the supported security features.
Table 3-10 Supported security software
Lightweight Directory Access Protocol (LDAP) server (optional):
 – OpenLDAP Version 2.0 or later
 – Microsoft Active Directory 2003 or later
3.1.3 Other hardware compatibility
IBM PowerVC is based on OpenStack, so rather than being compatible with specific hardware devices, IBM PowerVC is compatible with drivers that conform to OpenStack standards. These drivers are called pluggable devices in IBM PowerVC. Therefore, IBM PowerVC can take advantage of hardware devices from vendors that provide OpenStack-compatible drivers for their products. IBM cannot state whether other hardware vendors support their specific devices and drivers with IBM PowerVC, so check with the vendors to learn about their drivers. For more information about pluggable devices, see the IBM Knowledge Center:
3.2 IBM PowerVM Novalink requirements
PowerVM Novalink is a software interface that is used for virtualization management. You can install PowerVM Novalink on a PowerVM server. PowerVM Novalink enables highly scalable modern cloud management and deployment of critical enterprise workloads. You can use PowerVM Novalink to provision large numbers of VMs on PowerVM servers quickly and at a reduced cost.
3.2.1 PowerVM Novalink system requirements
For successful operation, PowerVM Novalink requires hardware and software to meet specific criteria.
POWER8 server requirements
PowerVM Novalink can be installed only on POWER8 processor-based servers with firmware level FW840 or later. If the server does not have firmware level FW840 or later, you must update the server firmware before installing PowerVM Novalink.
PowerVM Novalink partition requirements
PowerVM Novalink requires its own partition on the managed system. The PowerVM Novalink partition requires the following system resources:
0.5 shared processors that are uncapped with a non-zero weight and two virtual processors.
4.5 GB of memory, which you can adjust to 2.5 GB after installation. See Table 3-11 for the memory requirements for scaling VMs.
At least 30 GB of storage.
A virtualized network that is bridged through the SEA.
Maximum virtual slots that are set to 200 or higher.
Table 3-11 PowerVM Novalink memory requirement for scaling
Number of VMs    | Up to 250 | 251 - 500 | More than 500
Memory need (GB) | 2.5       | 5         | 10
If you install the PowerVM Novalink environment on a new managed system, the PowerVM Novalink installer creates the PowerVM Novalink partition automatically and always uses storage that is virtualized from the VIOS for that partition. You can set the installer to use either physical volume vSCSI or logical volumes for the PowerVM Novalink partition. If you set the installer to use I/O redundancy, the storage for the PowerVM Novalink partition is automatically mirrored for redundancy by using RAID 1.
If you install the PowerVM Novalink software on a system that is managed by an HMC, use the HMC to create a Linux logical partition (LPAR) with the required resources. When you use the HMC to create the Linux LPAR, set the powervm_mgmt_capable flag to true.
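As a sketch, the flag can also be set from the HMC command line with chsyscfg. The managed system and partition names below are placeholders:
chsyscfg -r lpar -m managed_server -i "name=novalink,powervm_mgmt_capable=1"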
Ubuntu Linux Version 15.10 is supported on the PowerVM Novalink partition and is installed together with the PowerVM Novalink software on that partition.
Supported operating systems for hosted logical partitions
PowerVM Novalink supports all operating systems that are supported on the machine type and model of the managed system.
Virtual I/O Server partition requirements
PowerVM Novalink requires VIOS Version 2.2.4 or later.
If you install the PowerVM Novalink environment on a new managed system, configure one disk with at least 60 GB of storage for each VIOS instance that you plan to create on the server. You can configure the disks in your local serial-attached SCSI (SAS) storage or on your SAN. If you create two instances of VIOS, create each disk on a separate SAS controller or FC card for redundancy. Otherwise, the resource requirements for VIOSs that are installed by the PowerVM Novalink installer are the same as the resource requirements for VIOSs that are not installed by the PowerVM Novalink installer.
Reliable Scalable Cluster Technology for Resource Monitoring and Control connections
To enable IPv6 link-local address support for Resource Monitoring and Control (RMC) connections, update the Reliable Scalable Cluster Technology (RSCT) packages on AIX and Linux LPARs to be at Version 3.2.1.0 or later.
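To verify the installed RSCT level on an LPAR, you can query the operating system's packaging tools. These are standard AIX and Linux commands, not IBM PowerVC commands:
On AIX: lslpp -L rsct.core.rmc
On Linux: rpm -qa | grep rsct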
IBM PowerVC requirement
IBM PowerVC Version 1.3 or later is required to manage a PowerVM Novalink host with IBM PowerVC.
Hardware Management Console requirement
HMC Version 8.4.0 or later is required to co-manage a system with PowerVM Novalink.
3.3 Host and partition management planning
When you plan for the hosts in your IBM PowerVC Standard Edition or IBM Cloud PowerVC Manager environment managing PowerVM, you must consider the limits on the number of hosts and VMs that IBM PowerVC can manage, and the benefits of using multiple VIOSs.
3.3.1 Physical server configuration
If you plan to use Live Partition Mobility (LPM), you must ensure that all servers are configured with the same logical-memory block size. This logical-memory block size can be changed from the Advanced System Management Interface (ASMI).
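You can also compare the logical-memory block (memory region) size of each server from the HMC command line before you attempt LPM. This is a sketch; the managed system name is a placeholder:
lshwres -r mem -m managed_server --level sys -F mem_region_size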
3.3.2 HMC or PowerKVM planning
Data centers can contain hundreds of hosts and thousands of VMs. For IBM PowerVC Version 1.3.1, the following maximums are suggested:
IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager V1.3.1 managing PowerVM:
 – A maximum of 30 managed hosts is supported.
 – Each host can have a maximum of 500 VMs on it.
 – A maximum of 3,000 VMs can be on all of the combined hosts.
 – Each HMC can have a maximum of 500 VMs on it.
IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager V1.3.1 managing PowerVM by using PowerVM Novalink:
 – A maximum of 200 PowerVM Novalink-managed hosts is supported.
 – A maximum of 1000 VMs (PowerVM Novalink, VIOSs, or client workloads) per PowerVM host is supported. This limit is determined by the PowerVM platform firmware versions that were available when IBM PowerVC Version 1.3.1 was released. For information about configuring your system for scale, see IBM PowerVM Best Practices, SG24-8062.
 – A maximum of 5000 VMs can be on all of the PowerVM Novalink-managed hosts combined.
IBM PowerVC Standard Edition V1.3.1 managing PowerKVM:
 – A maximum of 30 managed hosts is supported.
 – Each host can have a maximum of 225 VMs on it.
 – A maximum of 3,000 VMs can be on all of the combined hosts.
 
Note: No hard limitations exist in IBM PowerVC. These maximums are suggested from a performance perspective only.
Therefore, you must consider how to partition your HMCs and PowerKVM hosts into subsets, where each subset is managed by an IBM PowerVC management host.
Advanced installations typically use redundant HMCs to manage the hosts. Starting with Version 1.2.3, redundant HMCs are supported. If the HMC that you selected for IBM PowerVC becomes unavailable, change to the working HMC through the IBM PowerVC graphical user interface (GUI).
 
Note: With redundant HMCs, IBM PowerVC uses only one HMC to manage each specific host. If the original HMC is unavailable, manually switch to the other HMC on the IBM PowerVC GUI.
3.3.3 Virtual I/O Server planning
IBM PowerVC supports hosts with more than one VIOS.
Consider a second VIOS to provide redundancy and I/O connectivity resilience to the hosts. Use two VIOSs to avoid outages to the hosts when you must perform maintenance, updates, or changes in the VIOS configuration.
If you plan to make partitions mobile, define the VIOS that provides the mover service on all hosts, and ensure that the Mover service partition option is enabled in the profile of these VIOSs.
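As a sketch, you can confirm the mover service partition setting for a VIOS from the HMC command line. The managed system and VIOS names are placeholders, and a value of 1 for the msp attribute indicates that the option is enabled; verify the attribute name against your HMC level:
lssyscfg -r lpar -m managed_server -F name,msp --filter "lpar_names=vios1"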
The VIOS must be configured with “Sync current configuration Capability” turned ON. On the HMC, verify the settings of the VIOSs, as shown in Figure 3-1.
Figure 3-1 VIOS settings that must be managed by IBM PowerVC
Important: Configure the maximum number of virtual resources (virtual adapters) for the VIOS to at least 200. This setting provides sufficient resources on your hosts while you create and migrate VMs throughout your environment. Otherwise, IBM PowerVC indicates a warning during the verification process.
Changing the maximum virtual adapters in a Virtual I/O Server
From the HMC, in the left pane, click Server Management → Servers → managed_server, select the VIOS, and then click Configuration → Manage Profiles from the drop-down menu.
Select the profile that you want to use, and click Actions → Edit. Then, select the Virtual Adapters tab.
Replace the value in the Maximum virtual adapters field with a new value. See Figure 3-2.
Figure 3-2 Modify the maximum virtual adapters
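The same change can also be made from the HMC command line by editing the VIOS profile with chsyscfg. This is a sketch with placeholder names; the profile must be reactivated (or the running configuration updated and saved) before the new value takes effect:
chsyscfg -r prof -m managed_server -i "name=default_profile,lpar_name=vios1,max_virtual_slots=200"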
3.4 Placement policies and templates
One goal of IBM PowerVC is to simplify the management of VMs and storage by providing the automated creation of partitions and virtual storage disks and the automated placement of partitions on physical hosts. This automation replaces the manual steps that are needed when you use PowerVM directly. In the manual steps, you must create disks, select all parameters that define each partition to deploy, and configure the mapping between the storage units and the partitions in the VIOSs.
This automation is performed by using deploy templates and placement policies.
3.4.1 Host groups
Use host groups to group hosts logically regardless of any features that they might share. For example, the hosts do not need the same architecture, network configuration, or storage. Host groups have these important features:
Every host must be in a host group.
Any hosts that do not belong to a user-defined host group are members of the default host group. The default host group cannot be deleted.
VMs are kept within the host group.
A VM can be deployed to a specific host or to a host group. After deployment, if that VM is migrated, it must always be migrated within the host group.
Placement policies are associated with host groups.
Every host within a host group is subject to the host group’s placement policy. The default placement policy is striping.
An enterprise client can group its hosts to meet different business needs, for example, for test, development, and production, as shown in Figure 3-3. With different placement policies, and even with different hardware, the client can achieve different service levels.
Figure 3-3 Host group sample
3.4.2 Placement policies
When you want to deploy a new partition, you can indicate to IBM PowerVC the host on which you want to create this partition. You can also ask IBM PowerVC to identify the hosts on which the partitions will best fit in a host group, based on a policy that matches your business needs. If you ask IBM PowerVC to identify the hosts on which the partitions will best fit in a host group, IBM PowerVC compares the requirements of the partitions with the availability of resources on the possible set of target hosts. IBM PowerVC considers the selected placement policy to make a choice.
IBM PowerVC Version 1.3.1 offers five policies to deploy VMs:
Striping placement policy
The striping placement policy distributes your VMs evenly across all of your hosts. For each deployment, IBM PowerVC determines the hosts with enough processing units and memory to meet the requirements of the VM. Other factors for determining eligible hosts include the storage and network connectivity that are required by the VM. From the group of eligible hosts, IBM PowerVC chooses the host that contains the fewest number of VMs and places the VM on that host.
Packing placement policy
The packing placement policy places VMs on a single host until its resources are fully used, and then it moves on to the next host. For each deployment, IBM PowerVC determines the hosts with enough processing units and memory to meet the requirements of the VM. Other factors for determining eligible hosts include the storage and network connectivity that are required by the VM. From the group of eligible hosts, IBM PowerVC chooses the host that contains the most VMs and places the VM on that host. After the resources on this host are fully used, IBM PowerVC moves on to the next eligible host that contains the most VMs.
This policy can be useful when you deploy large partitions on small servers. For example, assume that you must deploy four partitions that require eight, eight, nine, and seven cores on two servers, each with 16 cores. If you use the striping policy, the first two 8-core partitions are deployed one on each server, which leaves only eight free cores on each. IBM PowerVC cannot deploy the 9-core partition because an LPM operation must be performed before the 9-core partition can be deployed.
By using the packing policy, the first two 8-core partitions are deployed on the first host, and IBM PowerVC can then deploy the 9-core and 7-core partitions on the second host. This example is simplistic, but it illustrates the difference between the two policies: The striping policy optimizes performance, and the packing policy optimizes human operations.
CPU utilization balance placement policy
This placement policy places VMs on the host with the lowest CPU utilization in the host group. The CPU utilization is computed as a running average over the last 15 minutes.
CPU allocation balance placement policy
This placement policy places VMs on the host with the lowest percentage of its CPU that is allocated post-deployment or after relocation.
For example, consider an environment with two hosts:
 – Host 1 has 16 total processors, four of which are assigned to VMs.
 – Host 2 has four total processors, two of which are assigned to VMs.
Assume that the user deploys a VM that requires one processor. Host 1 has (4+1)/16, or 5/16 of its processors that are allocated. Host 2 has (2+1)/4, or 3/4 of its processors that are allocated. Therefore, the VM is scheduled to Host 1.
Memory allocation balance placement policy
This placement policy places VMs on the host with the lowest percentage of its memory that is allocated post-deployment or after relocation.
For example, consider an environment with two hosts:
 – Host 1 has 16 GB total memory, 4 GB of which is assigned to VMs.
 – Host 2 has 4 GB total memory, 2 GB of which is assigned to VMs.
Assume that the user deploys a VM that requires 1 GB of total memory. Host 1 has (4+1)/16, or 5/16 of its memory that is allocated. Host 2 has (2+1)/4, or 3/4 of its memory that is allocated. Therefore, the VM is scheduled to Host 1.
 
Note: A default placement policy change does not affect existing VMs. It affects only new VMs that are deployed after the policy setting is changed. Therefore, changing the placement policy for an existing environment does not result in moving existing partitions.
Tip: The following settings might increase the throughput and decrease the duration of deployments:
Use the striping policy rather than the packing policy.
Limit the number of concurrent deployments to match the number of hosts.
When a new host is added to the host group that is managed by IBM PowerVC, if the placement policy is set to the striping mode, new VMs are deployed on the new host until it catches up with the existing hosts. IBM PowerVC allocates partitions only on this new host until the resource use of this host is about the same as on the previously installed hosts.
When a new partition is deployed, the placement algorithm uses several criteria to select the target server for the deployment, such as availability of resources and access to the storage that is needed by the new partitions. By design, the IBM PowerVC placement policy is deterministic. Therefore, the considered resources are the amounts of processing power and memory that are needed by the partition, as defined in the partition profile (virtual processors, entitlement, and memory). Dynamic resources, such as I/O bandwidth, are not considered, because they result in a non-deterministic placement algorithm.
 
Note: The placement policies are predefined. You cannot create your own policies.
The placement policy can also be used when you migrate a VM. Figure 3-4 shows the IBM PowerVC user interface for migrating a partition. Use this interface to select between specifying a specific target or letting IBM PowerVC select a target according to the current placement policy.
Figure 3-4 Migration of a partition by using a placement policy
3.4.3 Template types
Rather than define all characteristics for each partition or each storage unit that must be created, the usual way to create them in IBM PowerVC is to instantiate these objects from a template that was previously defined. The amount of effort that is needed to define a template is similar to the effort that is needed to define a partition or storage unit. Therefore, reusing templates saves significant effort for the system administrator, who must deploy many objects.
IBM PowerVC provides a GUI to help you create or customize templates. Templates can be easily defined to accommodate your business needs and your IT environment.
Three types of templates are available:
Compute templates These templates are used to define processing units, memory, and disk space that are needed by a partition. They are described in 3.4.4, “Information that is required for compute template planning” on page 36.
Deploy templates These templates are used to allow authorized self-service users to quickly, easily, and reliably deploy an image. They are described in 3.4.5, “Information that is required for deploy template planning” on page 41.
Storage templates These templates are used to define storage settings, such as a specific volume type, storage pool, and storage provider. They are described in 3.6.2, “Storage templates” on page 49.
Use the templates to deploy new VMs. This approach propagates the values for all of the resources into the VMs. The templates accelerate the deployment process and create a baseline for standardization.
Templates can be defined by using the Standard view or, for a more detailed and specific configuration, you can use the Advanced view, which is shown in “Advanced compute templates” on page 38.
3.4.4 Information that is required for compute template planning
The IBM PowerVC management host provides 11 predefined compute templates. The predefined templates can be edited and removed, and you can create your own templates.
Before you create templates, plan for the amount of resources that you need for the classes of partitions that you need. For example, different templates can be used for partitions that are used for development, test, and production, or you can have different templates for database servers, application servers, and web servers.
IBM PowerVC offers two template options:
Basic Create micropartitions (shared partitions) by specifying the minimum amount of information.
Advanced Create dedicated partitions or micropartitions, with the level of detail that is available on the HMC.
Basic compute templates
The following information helps your planning efforts regarding basic compute templates:
Template name The name to use for the template.
Virtual processors Number of virtual processors. A VM usually performs best if the number of virtual processors is close to the number of processing units that is available to the VM.
Memory (MB) Amount of memory, in MB. The value for memory must be a multiple of the memory region size that is configured on your host. You can also specify Active Memory Expansion (AME) Factor. To see the region size for your host, open the Properties window for the selected host in the HMC, and then open the Memory tab and record the “memory region size” value. Figure 3-5 on page 39 shows an example.
Processing units Number of entitled processing units. A processing unit is the minimum amount of processing resource that the VM can use. For example, a value of 1 (one) processing unit corresponds to 100% use of a single physical processor. Processing units are split between virtual processors, so a VM with two virtual processors and one processing unit appears to the VM user as a system with two processors, each running at 50% speed.
Disk (GB) Disk space that is needed, in GB.
Compatibility mode Select the processor compatibility that you need for your VM. Table 3-12 describes each compatibility mode and the servers on which the VMs that use each mode can operate.
Table 3-12 Processor compatibility modes
POWER6
 – Description: Use the POWER6 processor compatibility mode to run operating system versions that use all of the standard features of the POWER6 processor.
 – Supported servers: VMs that use the POWER6 processor compatibility mode can run on servers that are based on POWER6, IBM POWER6+™, POWER7, or POWER8 processors.
POWER6+
 – Description: Use the POWER6+ processor compatibility mode to run operating system versions that use all of the standard features of the POWER6+ processor.
 – Supported servers: VMs that use the POWER6+ processor compatibility mode can run on servers that are based on POWER6+, POWER7, or POWER8 processors.
POWER7 (including POWER7+)
 – Description: Use the POWER7 processor compatibility mode to run operating system versions that use all of the standard features of the POWER7 processor.
 – Supported servers: VMs that use the POWER7 processor compatibility mode can run on servers that are based on POWER7 or POWER8 processors.
POWER8
 – Description: Use the POWER8 processor compatibility mode to run operating system versions that use all of the standard features of the POWER8 processor.
 – Supported servers: VMs that use the POWER8 processor compatibility mode can run on servers that are based on POWER8 processors.
Default
 – Description: The default processor compatibility mode is a preferred processor compatibility mode that enables the hypervisor to determine the current mode for the VM. When the preferred mode is set to Default, the hypervisor sets the current mode to the most fully featured mode that is supported by the operating environment. In most cases, this mode is the processor type of the server on which the VM is activated. For example, assume that the preferred mode is set to Default and the VM is running on a POWER8 processor-based server. The operating environment supports the POWER8 processor capabilities, so the hypervisor sets the current processor compatibility mode to POWER8.
 – Supported servers: The servers on which VMs with the preferred processor compatibility mode of Default can run depend on the current processor compatibility mode of the VM. For example, if the hypervisor determines that the current mode is POWER8, the VM can run on servers that are based on POWER8 processors.
 
 
Note: For a detailed explanation of processor compatibility modes, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
Advanced compute templates
The following information about advanced compute templates helps you plan for their use:
Template name The name for the template.
Virtual processors The number of virtual processors. A VM usually performs best if the number of virtual processors is close to the number of processing units that is available to the VM. You can specify the following values:
Minimum The smallest number of virtual processors that you accept for deploying a VM.
Desired The number of virtual processors that you want for deploying a VM.
Maximum The largest number of virtual processors that you allow when you resize a VM. This value is the upper limit to resize a VM dynamically. When it is reached, you need to power off the VM, edit the profile, change the maximum to a new value, and restart the VM.
Memory (MB) Amount of memory, expressed in MB. The value for memory must be a multiple of the memory region size that is configured on your host. The minimum value is 16 MB. To see the region size for your host, open the Properties window for the selected host on the HMC, and then open the Memory tab to view the memory region size. Figure 3-5 on page 39 shows an example. You can specify the following values:
Minimum The smallest amount of memory that you want for deploying a VM. If the value is not available, the deployment does not occur.
Desired The total memory that you want in the VM. The deployment occurs with an amount of memory less than or equal to the wanted amount and greater than or equal to the minimum amount that is specified.
Maximum The largest amount of memory that you allow when you resize a VM. This value is the upper limit to resize a VM dynamically. When it is reached, you must power off the VM, edit the profile, change the maximum to a new value, and restart the VM.
Figure 3-5 Memory region size view on the HMC
Processing units Number of entitled processing units. A processing unit is the minimum amount of processing resource that the VM can use. For example, a value of 1 (one) processing unit corresponds to 100% use of a single physical processor. The setting of processing units is available only for shared partitions, not for dedicated partitions. You can specify the following values:
Minimum The smallest number of processing units that you accept for deploying a VM. If this value is not available, the deployment does not occur.
Desired The number of processing units that you want for deploying a VM. The deployment occurs with a number of processing units that is less than or equal to the wanted value and greater than or equal to the minimum value.
Maximum The largest number of processing units that you allow when you resize a VM. This value is the upper limit to which you can resize dynamically. When it is reached, you must power off the VM, edit the profile, change the maximum value to a new value, and restart the VM.
 
Important: Processing units and virtual processor are values that work closely and must be calculated carefully. For more information about virtual processor and processing units, see IBM PowerVM Virtualization Managing and Monitoring, SG24-7590.
Disk (GB) Disk space that is needed in GB.
 
Note: Use the advanced template to define only the amount of storage that you need. You cannot use the advanced template to specify a number of volumes to create.
Compatibility mode Select the compatibility that is needed for your VM. Table 3-12 on page 37 lists each processor compatibility mode and the servers on which the VMs that use each processor compatibility mode can successfully operate.
Enable virtual machine remote restart
Users can easily restart a VM remotely on another host if the current host fails. This feature enhances the availability of applications, in addition to the solutions that are based on IBM PowerHA® and LPM.
 
Note: This function is based on the PowerVM simplified remote restart function and is supported only by POWER8 servers at the time of writing. For the requirements of remote restart, see the IBM Knowledge Center:
Shared processors or dedicated processors
Decide whether the VM uses processing resources from a shared processor pool or dedicated processor resources.
Option A: Shared processors settings
The following values are available for option A:
Uncapped Uncapped VMs can use processing units that are not being used by other VMs, up to the number of virtual processors that is assigned to the uncapped VM. You can also specify the shared processor pool.
Capped Capped VMs can use only the number of processing units that are assigned to them.
Weight (0 - 255) If multiple uncapped VMs require unused processing units, the uncapped weights of the uncapped VMs determine the ratio of unused processing units that are assigned to each VM. For example, an uncapped VM with an uncapped weight of 200 receives two processing units for every processing unit that is received by an uncapped VM with an uncapped weight of 100.
Option B: Dedicated processor settings
The following values are available for option B:
Idle sharing This setting enables this VM to share its dedicated processors with other VMs when this VM is powered on and idle (also known as a dedicated donating partition).
Availability priority To avoid shutting down mission-critical workloads when your server firmware unconfigures a failing processor, set availability priorities for the VMs (0 - 255). A VM with a failing processor can acquire a replacement processor from a VM with a lower availability priority. The acquisition of a replacement processor allows the VM with the higher availability priority to continue running after a processor failure.
3.4.5 Information that is required for deploy template planning
Administrators can configure image deployment properties and save them as a deploy template. A deploy template includes everything that is necessary to quickly and easily create a VM, including the deployment target, storage connectivity group, compute template, and so on.
To create a deploy template, complete the following steps:
1. From the Images window, select the image that you want to use to create a deploy template and click Create Template from Image.
2. Fill out the information in the window that opens, then click Create Deploy Template.
3. The deploy template is now listed on the Deploy Templates tab of the Images window.
After creation, you can edit a deploy template by selecting the template and clicking Edit.
3.5 IBM PowerVC storage access SAN planning
In IBM PowerVC Standard and Cloud editions, VMs can access their storage by using one of three protocols:
Classical vSCSI, as described in “vSCSI storage access” on page 42
NPIV, as described in “NPIV storage access” on page 44
vSCSI to SSP, as described in “Shared storage pool: vSCSI” on page 45
A minimum configuration of the SAN and storage is necessary before IBM PowerVC can use them. For example, IBM PowerVC creates virtual disks on storage devices, but these devices must be set up first. You must perform the following actions before you use IBM PowerVC:
Configuration of the FC fabric for the IBM PowerVC environment must be planned first: cable attachments, SAN fabrics, and redundancy. It is common to create at least two independent fabrics to provide SAN redundancy. IBM PowerVC supports adding up to 25 fabrics.
 
Note: IBM PowerVC assumes that all hosts can access all registered storage controllers. The cabling must be performed in a way so that all hosts can access the same set of storage devices.
IBM PowerVC provides storage for VMs through the VIOS.
With IBM PowerVC Standard Edition, the storage is accessed by using NPIV, vSCSI, or an SSP that uses vSCSI.
The VIOS and SSP must be configured before IBM PowerVC can manage them.
The SAN switch administrator user ID and password must be set up. They are used by IBM PowerVC.
The storage controller administrator user ID and passwords must be set up so that SAN logical unit numbers (LUNs) can be created.
For vSCSI, turn off SCSI reserves for volumes that are being discovered on all the VIOSs that are used for vSCSI connections. This action is required for LPM operations and for dual VIOSs.
For vSCSI and SSP, initial zoning of the SSP LUNs must be established to provide access from VIOSs to storage controllers. The SSP must also be created and running for IBM PowerVC to discover it.
In IBM PowerVC Standard Edition, you must create a VM manually to capture your first image. Prepare by performing these tasks:
 – VIOS must be set up for NPIV or vSCSI to provide access from the VM to the SAN.
 – For NPIV, SAN zoning must be configured to provide access from virtual FC ports in VM to storage controllers.
 
Important: If you connect a VM to several FC adapters (and several worldwide port names (WWPNs)) to storage devices with several WWPNs, you must create one zone for each pair of source and target WWPNs. You must not create a single zone with all source and target WWPNs.
 – The OS must be installed in the first VM, and cloud-init must be installed and used.
After IBM PowerVC Standard Edition can access storage controllers and switches, it can perform these tasks:
Collect inventory on the FC fabric
Collect inventory on storage devices (pools and volumes)
Monitor health
Detect misconfigurations
Manage zoning
Manage LUNs on storage devices
3.5.1 vSCSI storage access
With IBM PowerVC Standard Edition Version 1.2.2 or later and IBM Cloud PowerVC Manager, you can use vSCSI to access SAN storage in the IBM PowerVC environment.
Before you use vSCSI-attached storage in IBM PowerVC, you must complete the following steps:
1. Turn off SCSI reserves for volumes that are being discovered on all the VIOSs that are used for vSCSI connections. This step is required for LPM operations and for dual VIOSs.
For the IBM Storwize family, XIV, and EMC storage that use the AIX Path Control Module (PCM) model, you must run the following command on every VIOS where vSCSI operations are run:
chdef -a reserve_policy=no_reserve -c disk -s fcp -t mpioosdisk
Support for IBM System Storage DS8000 was introduced in IBM PowerVC Version 1.3.0. For IBM System Storage DS8000 systems, you must run the following command on every VIOS where vSCSI operations are run:
chdef -a reserve_policy=no_reserve -c disk -s fcp -t aixmpiods8k
 
Note: On the DS8000, if the host entry type for a VIOS is AIX, then that VIOS supports more than 256 vSCSI-attached volumes. If the host entry type is Linux or another host type that supports LUNPolling instead of reportLUN, then at most 255 vSCSI-attached volumes are supported on that VIOS. Therefore, as a preferred practice, set the host entry type for a VIOS to AIX.
Note: You must use the chdef command, not the chdev command.
Important: This step is mandatory. Different commands exist for other multipath I/O (MPIO) drivers. See the documentation of the drivers to learn how to turn off SCSI reserves.
2. You must configure all zoning between the VIOS and the storage device ports so that you can import vSCSI environments easily and use any number of fabrics with vSCSI.
3. You might need to increase the pre_live_migration_timeout setting in nova.conf if many vSCSI-attached volumes are on the VM or a heavy load is on the destination host’s VIOSs. Increasing this setting provides the additional time that is required to process many vSCSI-attached volumes.
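The pre_live_migration_timeout option is set in the nova configuration on the IBM PowerVC management host. The file location, section, and value in this sketch are assumptions for illustration only; adjust them to your installation and restart the affected services afterward:
# /etc/nova/nova.conf (location can differ on your installation)
[DEFAULT]
pre_live_migration_timeout = 600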
Figure 3-6 shows how VMs in IBM PowerVC access storage by using vSCSI.
Figure 3-6 IBM PowerVC storage access by using vSCSI
The flow of storage management from physical storage LUNs to VMs in IBM PowerVC with vSCSI is as follows:
LUNs are provisioned on a supported storage controller.
LUNs are masked to VIOS FC ports and are discovered as hdisk logical devices in VIOS.
LUNs are mapped (by using mkvdev) from the VIOS to VMs over a vSCSI virtual adapter pair.
These steps are completed automatically by IBM PowerVC. No zoning is involved because individual VMs do not access physical LUNs directly over the SAN.
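Although IBM PowerVC performs these mappings for you, the equivalent manual VIOS command has the following general form. This is a sketch only; the hdisk, vhost, and device names are placeholders:
mkvdev -vdev hdisk4 -vadapter vhost0 -dev vm1_boot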
3.5.2 NPIV storage access
Figure 3-7 shows how VMs access storage through NPIV with IBM PowerVC Standard Edition and IBM Cloud PowerVC Manager.
The following list describes the actions that are performed by IBM PowerVC to manage the flow of storage from physical storage LUNs to VMs:
Access to the SAN from VMs is configured on VIOSs by using a virtual FC adapter pair and NPIV (by running the vfcmap command).
LUNs are provisioned on a supported storage controller.
LUNs are masked to VM virtual FC ports.
SAN zoning is adjusted so that VMs have access from their virtual FC ports to storage controller host ports. Changes in zoning are performed automatically by IBM PowerVC.
LUNs are viewed as logical devices in VMs.
These actions are completed automatically by IBM PowerVC.
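For reference, the VIOS commands behind this flow have the following general form; the adapter names are placeholders, and lsmap can confirm the resulting mappings:
vfcmap -vadapter vfchost0 -fcp fcs0
lsmap -all -npiv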
Figure 3-7 IBM PowerVC storage access by using NPIV
3.5.3 Shared storage pool: vSCSI
Figure 3-8 shows how VMs access storage in an SSP with IBM PowerVC.
Here is the flow of storage management from physical storage LUNs to VMs in IBM PowerVC:
Access to storage from VIOSs by using physical FC adapters is set manually.
The SSP is configured manually: creation of a cluster, inclusion of VIOSs in the cluster, and addition of disks to the pool.
IBM PowerVC discovers the SSP when it discovers the VIOSs.
IBM PowerVC can create logical units (LUs) in the SSP when it creates a VM.
IBM PowerVC instructs the VIOS to map the SSP LUs as vSCSI devices to the client partitions that access them.
Figure 3-8 IBM PowerVC storage access by using an SSP
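The SSP must exist before IBM PowerVC can discover it. As a sketch of the manual VIOS setup, with placeholder cluster, pool, disk, and host names:
cluster -create -clustername sspcluster1 -repopvs hdisk2 -spname ssppool1 -sppvs hdisk3 hdisk4 -hostname vios1
cluster -addnode -clustername sspcluster1 -hostname vios2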
3.5.4 Storage access in IBM PowerVC Standard Edition managing PowerKVM
Figure 3-9 shows how VMs access storage with IBM PowerVC Standard Edition managing PowerKVM.
The following list is a description of the flow of storage management from host internal storage to VMs in IBM PowerVC Standard Edition managing PowerKVM:
PowerKVM accesses the internal storage on the host.
IBM PowerVC manages the internal storage when a PowerKVM host is added for management.
LUN requests are created automatically by IBM PowerVC and mapped to the VMs.
Here is the flow of storage management from SAN storage to VMs in IBM PowerVC managing PowerKVM by using iSCSI:
SAN storage is available through the Ethernet network by configuring access over the iSCSI protocol.
IBM PowerVC manages the SAN storage when the storage provider is added.
LUN requests are created automatically by IBM PowerVC and mapped to VMs.
Figure 3-9 IBM PowerVC managing PowerKVM storage access
3.6 Storage management planning
IBM PowerVC manages storage volumes, which can be attached to VMs. These storage volumes can be backed by IBM Storwize storage devices, SAN Volume Controller devices, IBM XIV storage devices, EMC VMAX storage devices, EMC VNX storage devices, or SSP files.
IBM PowerVC requires IP connectivity to the storage providers to manage the storage volumes.
3.6.1 IBM PowerVC terminology
IBM PowerVC uses a few terms and concepts that differ from terms that are used in PowerVM:
Storage provider Any system that provides storage volumes. In IBM PowerVC Version 1.3.1, storage providers can be IBM Storwize devices, SAN Volume Controller devices that hide the real storage unit that holds the data, IBM XIV devices, EMC VMAX storage devices, EMC VNX storage devices, or an SSP. Figure 3-10 shows an IBM PowerVC environment that manages one IBM SAN Volume Controller. IBM PowerVC also refers to storage providers as storage controllers.
Figure 3-10 IBM PowerVC storage providers
Fabric Another name for a SAN switch. Figure 3-11 shows an IBM PowerVC Fabrics window that displays information for a switch that is named FAB0, with IP address 9.114.62.197. Click this address on the Fabrics window to open the graphical view of the switch.
Figure 3-11 Fabrics window that shows an embedded switch GUI
Storage pool A storage resource that is defined on the storage provider in which IBM PowerVC can create volumes. IBM PowerVC cannot create or modify storage pools; it can only discover them. The storage pools must be managed directly from the storage providers. Figure 3-12 shows the detail of an IBM Storwize V7000 storage provider that is configured with two storage pools for different purposes.
Figure 3-12 Storage pools
Shared storage pool In IBM PowerVC, this term refers to the PowerVM shared storage pool (SSP) feature. The SSP cannot be created or modified by IBM PowerVC. You must create the SSP on the VIOS before IBM PowerVC can create volumes on the SSP.
Volume Volumes are also referred to as a disk or a LUN. They are carved from the storage pools and presented as virtual disks to the partitions that are managed by IBM PowerVC.
Storage template This template defines the properties of a storage volume, such as location, thin provisioning, and compression. For example, by using the templates that are shown in Figure 3-13, you can create volumes that are either a normal thin-provisioned volume or a mirrored volume. For more information, see 3.6.2, “Storage templates” on page 49.
Figure 3-13 Storage templates
Storage connectivity group
A set of VIOSs with access to the same storage controllers. For more information, see 3.6.3, “Storage connectivity groups and tags” on page 52.
Tags Tags are a way to partition the FC ports of a host in sets that can be associated with sets of VIOSs. For more information, see 3.6.3, “Storage connectivity groups and tags” on page 52.
3.6.2 Storage templates
Storage templates are used to speed up the creation of a disk. A storage template defines several properties of the disk unit. Disk size is not part of the template. For different types of storage devices, the information that is defined in a template differs. This section introduces the IBM Storwize storage template only, which is a common type of storage that is used in the IBM PowerVC environment.
IBM Storwize storage template definition
The following information is defined in a template:
Name of the storage template.
Storage provider. The template is associated with a single storage provider. It cannot be used to instantiate disks from multiple storage providers.
Storage pool within a storage provider. The template is associated with a single storage pool. You can add another pool to support volume mirroring in the Advanced settings area.
Thin or thick (full) provisioning. To choose thick provisioning, select the Generic type of volume.
Advanced Settings area:
The following information is defined in the Advanced Settings area:
 – I/O group: The I/O group to which to add the volume. For the SAN Volume Controller, the maximum number of I/O groups that is supported is four.
 – % of virtual capacity: Determines how much real storage capacity is allocated to the volume at creation time, as a percentage of the maximum size that the volume can reach.
 – Automatically expand: Select Yes or No. This setting prevents the volume from using all of its capacity and going offline. As a thin-provisioned volume uses more of its capacity, this feature maintains a fixed amount of unused real capacity, which is called the contingency capacity.
 – Warning threshold: When real capacity reaches a specific percentage of virtual capacity, a warning alert is sent.
 – Grain size: A thin-provisioned grain size can be selected in the range 32 KB - 256 KB. A grain is a chunk that is used for allocating space. The grain size affects the maximum virtual capacity for the volume. Generally, smaller grain sizes save space but require more metadata access, which can affect performance adversely. The default grain size is 256 KB, which is the preferred option. The grain size cannot be changed after the thin-provisioned volume is created.
 – Use all available WWPNs for attachment: Specifies whether to enable multipath zoning. When this setting is enabled, IBM PowerVC uses all available WWPNs from all of the I/O groups in the storage controller to attach the volume to the VM. Enabling multipath causes each WWPN that is visible on the fabric to be zoned to the VM.
 – Enable mirroring: When checked, you must select another pool for volume mirroring. The volume that is created has one more copy in the mirroring pool. IBM Storwize clients can use two pools based on two different back-end storage devices to provide high availability.
A storage template can then be selected during volume creation operations.
Figure 3-14 shows a window that is presented to an IBM PowerVC administrator when the administrator defines the advanced settings for a thin-provisioned storage template definition.
Figure 3-14 Storage template definition: Advanced settings, thin-provisioned
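For context, these template options correspond to the parameters that the IBM Storwize family exposes when a volume is created. The following Storwize CLI sketch is for comparison only; IBM PowerVC drives the storage provider through its API, and the pool, I/O group, and volume names are placeholders:
mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 100 -unit gb -rsize 5% -autoexpand -grainsize 256 -warning 80% -name vdisk_example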
Storage template planning
When you register a storage provider with IBM PowerVC, a default storage template is created for that provider. Edit this default template to suit your needs immediately after IBM PowerVC discovers the storage provider.
 
Note: After a disk is created and uses a template, you cannot modify the template settings.
You can define several storage templates for one storage provider. If the storage provider contains several storage pools, at least one storage template is needed for each pool before those pools can be used to create volumes.
When you create a storage volume, you must select a storage template. All of the properties that are specified in the storage template are applied to the new volume, which is created on the storage provider that is specified in the storage template. To create a disk, you enter only the template to use, the volume name, and the size. Decide whether to select the Enable sharing check box. See Figure 3-15.
Figure 3-15 Volume creation
A storage template must also be specified when you deploy a new VM to control the properties of the virtual server’s boot volumes and data volumes. IBM PowerVC can manage pre-existing storage volumes. You can select them when you register the storage device or at any later time. Preexisting storage volumes do not have an associated storage template.
3.6.3 Storage connectivity groups and tags
IBM PowerVC uses storage connectivity groups and tags.
Storage connectivity groups
When you create a VM, IBM PowerVC needs a way to identify the host on which it must deploy this machine. One of the requirements is that the VM can connect to its storage from this host. Also, when you request IBM PowerVC to migrate a VM, IBM PowerVC must ensure that the target host also provides the VM with connectivity to its volumes.
The purpose of a storage connectivity group is to define sets of hosts with access to the same storage devices where a VM can be deployed. A storage connectivity group is a set of VIOSs with access to the same storage controllers. It can span several IBM Power Systems hosts in the landscape that is managed by IBM PowerVC.
When you deploy a new VM with IBM PowerVC, a storage connectivity group must be specified. The VM is associated with that storage connectivity group during the VM’s existence. A VM can be deployed only on Power Systems hosts that contain at least one VIOS that is part of the storage connectivity group. Specifying the storage connectivity group that a VM belongs to defines the set of hosts on which this VM can be deployed.
The VM can be migrated only within its associated storage connectivity group and host group. IBM PowerVC ensures that the source and destination servers can access the required storage controllers and LUNs.
Default storage connectivity groups are automatically created when IBM PowerVC discovers the environment. These default connectivity groups contain all VIOSs that access the same devices. Figure 3-16 shows the result of the discovery by IBM PowerVC of an environment with the following conditions:
Three POWER servers exist.
Each server hosts two VIOSs.
Each VIOS has two FC ports.
Four VIOSs connect to an IBM SAN Volume Controller.
IBM PowerVC automatically created the storage connectivity groups. Only one storage connectivity group for NPIV storage access was created because no other connectivity types are defined by default. This storage connectivity group corresponds to the way that partitions can access storage from these hosts.
Figure 3-16 List of storage connectivity groups
The default storage connectivity groups can be disabled but not deleted. For more information, see 5.9, “Storage connectivity group setup” on page 122.
The system administrator can define additional storage connectivity groups to further constrain the selection of host systems. You can use storage connectivity groups to group host systems together, for example, into production and development groups. On large servers that are hosting several VIOSs, you can use storage connectivity groups to direct partitions to use a specific pair of VIOSs on each host.
 
Tip: A storage connectivity group can be modified after its creation to, for example, add or remove VIOSs. Therefore, when your environment changes, you can add new hosts and include their VIOSs in existing storage connectivity groups.
Figure 3-17 shows a diagram of storage connectivity group technology. It includes two Power Systems servers, each with three VIOSs. Two VIOSs from each server are part of the production storage connectivity group (called Production SCG in the figure) and one VIOS from each server is part of the development storage connectivity group (Development SCG). The VMs that are named VM1, VM2, VM4, and VM5 are associated with the production storage connectivity group, and their I/O traffic passes through the FC ports of A1, A2, B1, and B2 VIOSs. The development partitions VM3 and VM6 are associated with the development storage connectivity group, and their traffic is limited to using the FC ports that are attached to VIOSs A3 and B3.
Figure 3-17 Storage connectivity groups
Figure 3-18 shows how IBM PowerVC presents the detail of a storage connectivity group. This storage connectivity group contains three hosts, each with two VIO servers and two FC ports. Only four VIO servers have connectivity to the defined fabric.
Figure 3-18 Content of a storage connectivity group
Storage connectivity group redundancy
When you create a new storage connectivity group, you assign the VIOSs that belong to it and define the redundancy requirements regarding the number of VIOSs per host and the VIOSs' connectivity to fabrics. IBM PowerVC V1.3.1 supports the following requirements for the number of VIOSs per host in a storage connectivity group:
One VIOS
Two VIOSs
Three VIOSs
Four VIOSs
In addition to the number of VIO servers per host in the storage connectivity group, the fabric access requirement for the storage connectivity group must be defined at the VIOS level or at the host level. IBM PowerVC V1.3.1 supports the following fabric access requirements for a storage connectivity group:
Every fabric per VIOS
Every fabric per host
At least one fabric per VIOS
Exactly one fabric per VIOS
Storage connectivity group definitions for redundancy and fabric connectivity can be modified after the storage connectivity group is created. To modify the properties of the storage connectivity group, click Configuration → Storage Connectivity Groups → Group_Name → Edit.
Figure 3-19 shows the storage connectivity group redundancy requirements that can be defined for the group.
Figure 3-19 Storage connectivity group redundancy
The Health tab of the storage connectivity group definition provides information about the health of the storage connectivity group and possible problems. An IBM PowerVC administrator can use the Health tab to check the detailed status of the storage connectivity group at any time.
Storage port tags
IBM PowerVC introduces a concept that does not exist within PowerVM: storage port tags. IBM PowerVC allows arbitrary tags to be placed on FC ports.
 
Note: An FC port can have no tag or one tag. This tag can change over time, but a port cannot have two or more tags simultaneously.
A storage connectivity group can be configured to connect only through FC ports with a specific tag. Storage connectivity groups that share a VIOS can use different physical FC ports on the VIOS. The IBM PowerVC administrator handles this function by assigning different port tags to the physical FC ports of the VIOS. These tags are labels that can be assigned to specific FC ports across your hosts. A storage connectivity group can be configured to connect only through FC ports that have the same tags when you deploy with NPIV direct connectivity. Port tagging is not effective when you use SSP.
Combining a storage connectivity group and tags
By using both the storage connectivity group and tag functions, you can easily manage different configurations of SAN topology that fit your business needs for partitioning the SAN and restricting disk I/O traffic to part of the SAN.
Figure 3-20 shows an example of possible tag usage. The example consists of two IBM Power Systems servers, each with two VIOSs. Each VIOS has three FC ports. The first two FC ports are tagged ProductionSCG and connect to a redundant production SAN. The third port is tagged DevelopmentSCG and connects to a development SAN. Client VMs that belong to either storage connectivity group (ProductionSCG or DevelopmentSCG) share VIOSs but do not share FC ports.
Figure 3-20 Storage connectivity groups and tags
The VIOSs in a storage connectivity group provide storage connectivity to a set of VMs with common requirements. An administrator can use several approaches to configure storage connectivity groups. Figure 3-21 shows these possible scenarios:
Uniform All VMs use all VIOSs and all FC ports.
Virtual I/O Server segregation
Different groups of VMs use different sets of VIOSs, but all FC ports on each VIOS.
Port segregation Different groups of VMs use all VIOSs, but different FC ports according to the tags on those ports.
Combination In a combination of VIOS and port segregation, different groups of VMs use different sets of VIOSs and different FC ports according to the tags on those ports.
Figure 3-21 Examples of storage connectivity group deployments
3.7 Network management planning
A network represents a set of Layer 2 and Layer 3 network specifications, such as how your network is subdivided by using VLANs, and information about the subnet mask, gateway, and other characteristics. When you deploy an image, you choose one or more existing networks to apply to the new VM.
Setting up networks in advance reduces the amount of information that you must enter during each deployment and helps to ensure a successful deployment.
The first selected network is the management network that provides the primary system default gateway address. You can add additional networks to divide the traffic and provide more functions.
IBM PowerVC supports IP addresses by using hardcoded (/etc/hosts) or Domain Name Server (DNS)-based host name resolution. IBM PowerVC also supports Dynamic Host Configuration Protocol (DHCP) or static IP address assignment. For DHCP, an external DHCP server is required to provide the address on the VLANs of the objects that are managed by IBM PowerVC.
 
Note: When you use DHCP, IBM PowerVC is not aware of the IP addresses of the VMs that it manages.
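Because IBM PowerVC networks are backed by OpenStack Neutron, a VLAN-based network and its subnet can also be expressed in OpenStack terms. The following sketch is illustrative only (networks are normally defined in the IBM PowerVC GUI), and the VLAN ID, addresses, and names are assumptions:
openstack network create --provider-network-type vlan --provider-segment 100 net-vlan100
openstack subnet create --network net-vlan100 --subnet-range 192.168.100.0/24 --gateway 192.168.100.1 --dns-nameserver 192.168.100.10 subnet-vlan100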
3.7.1 Multiple network planning
Each VM that you deploy must be connected to one or more networks. By using multiple networks, you can split traffic. The IBM PowerVC management host uses three common types of networks when it deploys VMs:
Data network This network provides the route over which workload traffic is sent. At least one data network is required for each VM, and more than one data network is allowed.
Management network
This type of network is optional but highly suggested to provide a higher level of function and security to the VMs. A management network provides the RMC connection between the management console and the client LPAR. VMs are not required to have a dedicated management network, but a dedicated management network simplifies the management of advanced features, such as LPM and dynamic reconfiguration. IBM PowerVC can connect to a management network. First, you must set up networking on the switches and the SEA to support it.
Live Partition Migration (LPM) network
This optional network provides the route over which migration data is sent from one host to another host. By separating this data onto its own network, you can shape that network traffic to specify a higher or lower priority over data or management traffic. If you do not want to use a separate network for LPM, you can reuse an existing data or management network connection for LPM.
Since Version 1.2.2, IBM PowerVC can dynamically add a network interface controller (NIC) to a VM or remove a NIC from a VM. IBM PowerVC does not set the IP address for new network interfaces that are created after the machine deployment. Any removal of a NIC results in freeing the IP address that was set on it.
 
Tip: Create all of the networks that are needed for future VM creation. Contact your network administrator to add all of the needed VLANs on the switch ports that are used by the SEA (PowerVM) or network bridges (PowerKVM). This action drastically reduces the amount of time that is needed for network management (no more actions for IBM PowerVC administrators and network teams).
3.7.2 Shared Ethernet adapter planning
Set up the SEAs for a registered host before you use the host within IBM PowerVC. The configuration for each SEA determines how each host treats networks. IBM PowerVC requires that the SEAs are created before you start to manage the systems.
If you are using SEA in sharing/auto mode with VLAN tagging, create it without any VLANs that are assigned on the Virtual Ethernet Adapters.
IBM PowerVC adds or removes the VLANs on the SEAs when necessary (at VM deletion and creation):
If you deploy a VM on a new network, IBM PowerVC adds the VLAN on the SEA.
If you delete the last VM of a specific network (for a host), the VLAN is automatically deleted.
If the VLAN is the last VLAN that was defined on the Virtual Ethernet Adapter, this VLAN is removed from the SEA.
If you are using SEA, the following behavior applies, depending on the high availability mode setting:
 – High availability mode set to sharing: IBM PowerVC ensures that at least two Virtual Ethernet Adapters are kept in the SEA.
 – High availability mode set to auto: IBM PowerVC ensures that at least one Virtual Ethernet Adapter is kept in the SEA.
IBM PowerVC then connects VMs to that SEA, deploys client-level VLANs to it, and allows dynamic reconfiguration of the network to SEA mapping. When you create a network in IBM PowerVC, a SEA is automatically chosen from each registered host, based on the VLAN that you specified when you defined the network. If the VLAN does not exist yet on the SEA, IBM PowerVC deploys that VLAN to the SEA that is specified.
VLANs are deployed only as VMs need them to reduce the broadcast domains.
 
Important: When multiple Ethernet adapters exist on either or both the migration source host or destination host, IBM PowerVC cannot control which adapter is used during the migration. To ensure the use of a specific adapter for your migrations, configure an IP address on the adapter that you want to use.
Note: To manage PowerVM, IBM PowerVC requires that at least one SEA is defined on the host.
You can dynamically change the SEA to which a network is mapped, or you can remove the mapping. The initial assignment is made automatically when you set up your networks, so it might not match your organization's naming policies.
The SEA that is chosen as the default adapter has the same network VLAN as the new network. If a SEA with the same VLAN does not exist, IBM PowerVC chooses as the default the SEA with the lowest primary VLAN ID Port Virtual LAN Identifier (PVID) that is in an available state.
Certain configurations might ensure the assignment of a particular SEA to a network. For example, if the VLAN that you choose when you create a network in IBM PowerVC is the PVID of the SEA or one of the additional VLANs of the primary Virtual Ethernet Adapter, that SEA must back the network. No other options are available. Plan more than one VIOS if you want a failover VIOS or expanded VIOS functions.
In the experience of the authors, certain clients want to keep the slot-numbering convention. By default, IBM PowerVC adds and removes the Virtual Ethernet Adapter from the SEA by choosing the next available slot ID. If you want to avoid this behavior, you can modify the /etc/nova/nova*.conf files and change the automated_powervm_vlan_cleanup attribute to False by running the following command:
openstack-config --set /etc/nova/nova.conf DEFAULT automated_powervm_vlan_cleanup False
If host-specific configuration files are already defined, set this attribute in each nova-*.conf file (one for each host), for example:
openstack-config --set /etc/nova/nova-828642A_10D6D5T.conf DEFAULT automated_powervm_vlan_cleanup False
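Before you restart the services, you can optionally read the value back to confirm that it was set, by using the same openstack-config utility:
openstack-config --get /etc/nova/nova.conf DEFAULT automated_powervm_vlan_cleanup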
Then, restart the IBM PowerVC Nova service by running the following command:
/opt/ibm/powervc/bin/powervc-services nova restart
 
Tip: Systems that use multiple virtual switches are supported. If a network is modified to use a different SEA and that existing VLAN is deployed by other networks, those other networks move to the new adapter as well. To split a single VLAN across multiple SEAs, break those SEAs into separate virtual switches. Use multiple virtual switches when you want to separate a single VLAN across multiple distinct physical networks.
If you create a network, deploy VMs that use it, and then change the SEA to which that network is mapped, your workloads are affected. The network experiences a short outage while the reconfiguration takes place.
In environments with dual VIOSs, the secondary SEA is not shown except as an attribute on the primary SEA.
Note: If VLANs are added to SEA adapters and to VIOS profiles after the host is brought into management of IBM PowerVC, the new VLAN is not automatically discovered by IBM PowerVC. To discover a newly added VLAN, run the Verify Environment function in the IBM PowerVC system.
Table 3-13 lists suggestions for when you create and use SEAs. Following these suggestions is a preferred practice.
Table 3-13 Preferred practices for shared Ethernet adapter
Type of deployment | High availability mode auto | High availability mode sharing
New host | SEA creation with one VEA. Do not put any VLANs on the VEA. | SEA creation with two VEAs. Do not put any VLANs on the VEAs.
Existing host (Keep the numbering convention.) | Set automated_powervm_vlan_cleanup in nova-*.conf to False. | Set automated_powervm_vlan_cleanup in nova-*.conf to False.
Existing host (Let IBM PowerVC manage the numbering of the adapters.) | Do nothing. | Do nothing.
3.8 Planning users and groups
The following sections describe the planning that is required for users and groups.
3.8.1 User management
When you install IBM PowerVC, it is configured to use the security features of the operating system on the management host by default. This configuration sets the root operating system user account as the only initially available account with access to the IBM PowerVC server.
As a preferred practice, create at least one system administrator user account to replace the root user account as the IBM PowerVC management administrator. For more information, see “Adding user accounts” on page 61. After a new administrator ID is defined, remove the IBM PowerVC administrator rights from the root user ID, as explained in “Disabling the root user account from IBM PowerVC” on page 64.
 
Important: IBM PowerVC also requires several user IDs that are defined in /etc/passwd, such as nova, neutron, keystone, and cinder. These user IDs are used by the OpenStack services and must not be modified or deleted.
For security, you cannot connect remotely to these user IDs. These users are configured for no login.
User account planning is important for defining standard accounts and the process and requirements for managing these accounts. An IBM PowerVC management host can take advantage of user accounts that are managed by the Linux operating system security tools or can be configured to use the services that are provided by LDAP.
Operating system user account management
Each user is added, modified, or removed by the system administrator by using Linux operating system commands. After the user ID is defined on the operating system, the user ID becomes available in IBM PowerVC if it is granted an IBM PowerVC role (see 3.8.2, “Projects and role management planning” on page 65), such as admin, deployer, or viewer.
Operating system-based user management requires command-line experience, but it is easy to maintain. No dependency exists on other servers or services. To see user accounts on the IBM PowerVC management host, click Configuration → Users in the top navigation bar of the IBM PowerVC GUI. Use the underlying Linux commands to manage user accounts (useradd, usermod, or userdel, for example).
Adding user accounts
To add a user account to the operating systems on the IBM PowerVC management host, run the following command as root from the Linux command-line interface (CLI):
# useradd [options] login_name
Assume that you want to create a user ID for a system administrator who is new to IBM PowerVC. You want to allow this administrator to view the IBM PowerVC environment only, not to act on any of the managed objects. Therefore, you want to give this administrator only a viewer privilege.
By using the command that is shown in Example 3-1, create the user viewer1, with /home/viewer1 as the home and base directory, the viewer group as the main group, and a comment with additional information, such as IBM PowerVC.
Example 3-1 Add a viewer user account with the useradd command
useradd -d /home/viewer1 -g viewer -m -c "IBM PowerVC" viewer1
In Version 1.3.1, there is no longer a viewer group by default. Groups are no longer created by default during installation, but existing groups are not deleted either if you upgraded from a prior version.
If you have a clean installation, run the following command to assign a viewer role to a user:
# openstack role add --user viewer1 --project <project_name_or_id> viewer
If the group exists after an upgrade, give that group the viewer role assignment by running the following command:
# openstack role add --group viewer --project <project_name_or_id> viewer
The new user is created with the viewer role in the IBM PowerVC management host because it is part of the viewer user group. Double-click the viewer1 user account to see detailed information, as shown in Figure 3-22. After the administrator is skilled enough with IBM PowerVC to start managing the environment, you can change the administrator’s role or group to give the administrator more management privileges, as described in “Updating the user accounts” on page 63.
In Version 1.3.1, roles can be assigned directly to users. But, creating groups with roles might help you administer large sets of users.
For example, both the admin and deployer roles can be assigned to the same user. The commands that follow show one way to create a user and assign it both roles.
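This sequence is only a sketch; the user name padmin1 and the project name ibm-default are illustrative assumptions:
useradd -d /home/padmin1 -m -c "IBM PowerVC" padmin1
openstack role add --user padmin1 --project ibm-default admin
openstack role add --user padmin1 --project ibm-default deployer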
Note: Do not forget to set a password for the new user if you want to log in with these accounts on the IBM PowerVC GUI.
In the example in Figure 3-22, three user IDs (admin1, deployer1, and viewer1) were added to the initial root user ID.
Figure 3-22 Users information
Figure 3-22 on page 62 shows the new accounts.
Figure 3-23 shows the new user admin1 that was added to the admin group.
Figure 3-23 Detailed user account information
Updating the user accounts
If you are using the group role assignment approach (and pre-created those groups and their role assignments or upgraded a system with them), then Example 3-2 works.
To update a user account in the operating systems on the IBM PowerVC management host, run the following command as root:
# usermod [options] login_name
Use the command that is shown in Example 3-2 to update the admin user account with the comment IBM PowerVC admin user account and move it to the admin user group.
Example 3-2 Update the admin user account with the usermod command
usermod -g admin -c "IBM PowerVC admin user account" admin
But, if you are using the user role assignment approach, then you must use the following command:
# openstack role remove --user <user> --project <project> <previous_role>
Then, run the following command:
# openstack role add --user <user> --project <project> <new_role>
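For example, to move a hypothetical user viewer1 from the viewer role to the admin role on the ibm-default project, the commands would look like the following sequence (the user and project names are illustrative):
# openstack role remove --user viewer1 --project ibm-default viewer
# openstack role add --user viewer1 --project ibm-default admin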
After this modification, the admin user account is part of the admin user group and can manage the IBM PowerVC management host, as shown in Figure 3-23.
Disabling the root user account from IBM PowerVC
If you upgraded to Version 1.3.1, you probably already removed the root user account from the admin user group, in which case the steps in this section are not needed. (For clean installations, groups are not created during the installation.) If the root user still has the admin role, run the following command:
# openstack role remove --user root --project ibm-default admin
 
Important: As a preferred practice, do not use the root user account on IBM PowerVC. It is a security preferred practice to remove it from the admin group.
Lightweight Directory Access Protocol
LDAP is an open standard for accessing global or local directory services over a network or the internet. A directory can handle as much information as you need, but it is commonly used to associate names with phone numbers and addresses. LDAP is a client/server solution. The client requests information and the server answers the request. LDAP can be used as an authentication server.
If an LDAP server is configured in your enterprise, you can use that LDAP server for IBM PowerVC user authentication. IBM PowerVC can be configured to query an LDAP server for authentication rather than using operating system user accounts authentication.
In Version 1.3.1, LDAP configuration is not a required installation step, but a general configuration step. Therefore, the powervc-ldap-config command is no longer required. To configure LDAP, run the powervc-config identity repository subcommand. For more information, see “Configuring LDAP” in the IBM PowerVC section of the IBM Knowledge Center:
Also, for more information, see “Configuring LDAP” in the IBM Cloud PowerVC Manager section of the IBM Knowledge Center:
Selecting the authentication method
Plan the authentication method and the necessary accounts before the IBM PowerVC installation. For simplicity, use the operating system authentication method to manage user accounts for your first installation; it is suitable for most IBM PowerVC installations.
In an enterprise environment, use the LDAP authentication method.
3.8.2 Projects and role management planning
This section describes the settings that are required for each user and group to operate and perform actions and work with projects.
Managing projects
A project, sometimes referred to as a tenant, is a unit of ownership. VMs, volumes, images, and networks belong to a specific project. Only users with a role assignment for a given project can work with the resources belonging to that project. Before IBM PowerVC Version 1.3.1, IBM PowerVC supported only a single project named ibm-default. This project was created by the IBM PowerVC installation, and all resources and role assignments were applied to that project because there were no others. The ibm-default project is still created during installation, but IBM PowerVC now also supports the creation of more projects for resource segregation.
You can use the openstack project command to manage projects as needed. As an OpenStack administrator, you can create, delete, list, set, and show projects:
Create a project by running the following command:
openstack project create project-name
Delete an existing project by running the following command:
openstack project delete project-name
List projects by running the following command:
openstack project list
Set project properties (name or description) by running the following commands:
openstack project set --name <name> project-name
openstack project set --description <description> project-name
Display project details by running the following command:
openstack project show project-name
After you create a project, you must grant at least one user a role on that project.
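For example, the following sketch creates a project and grants an existing user the admin role on it; the project name dev-project and the user name admin1 are illustrative assumptions:
openstack project create --description "Development workloads" dev-project
openstack role add --user admin1 --project dev-project admin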
Managing roles
Roles are assigned to a user or to a group; roles that are assigned to a group are inherited by all users in that group. A user or group can have more than one role, in which case they can perform any action that at least one of their roles allows.
Roles are used to specify what actions users can perform. Table 3-14 shows the available roles and actions each role is allowed to perform.
Table 3-14 IBM PowerVC security roles
Role
Action
admin
Users with this role can perform all tasks and have access to all resources.
deployer
Users with this role can perform all tasks except the following ones:
Adding, updating, or deleting storage systems or fabrics
Adding, updating, or deleting hosts or host groups
Adding, updating, and deleting networks
Adding, updating, and deleting storage templates
Adding and removing existing VMs to be managed by IBM PowerVC
Adding and removing existing volumes to be managed by IBM PowerVC
Updating FC port configuration
Viewing users and groups
This role is deprecated and is planned for removal in a future release.
viewer
Users with this role can view resources and the properties of resources, but can perform no tasks. They cannot view users and groups.
vm_manager
Users with this role can perform the following tasks:
Capturing, importing, or deleting an image
Editing a description of an image
Viewing all resources except users and groups
storage_manager
Users with this role can perform the following tasks:
Creating, deleting, or resizing a volume
Viewing all resources except users and groups
image_manager
Users with this role can perform the following tasks:
Capturing, importing, or deleting an image
Editing description of an image
Viewing all resources except users and groups
deployer_restricted
Users with this role can perform the following tasks:
Deploying a VM from an image
Viewing all resources except users and groups
vm_user
Users with this role can perform the following tasks:
Starting, stopping, or restarting a VM
Viewing all resources except users and groups
self_service
The actions that a user with the self_service role can perform depend on the project policies that are set by the project administrator. Project policies specify what users are allowed to do and whether administrator approval is required for each action. In general, self_service authority allows the following actions:
Managing VMs that are owned by that user, including capturing them and performing lifecycle operations on them
Deploying VMs by using a deploy template
Reviewing and withdrawing action requests
Viewing the user’s metering data
This role is specific to IBM Cloud PowerVC Manager, and does not otherwise appear.
For example, a user can have the vm_manager and storage_manager roles on one project and the viewer role on another project. Users can log in to only one project at a time in the IBM PowerVC user interface. If they have a role on multiple projects, they can switch to one of those other projects without having to log out and log back in. When users log in to a project, they see only the resources, messages, and so on, that belong to that project. They cannot see or manage resources that belong to a project where they have no role assignment. There is one exception to this rule: the admin role can operate across projects in many cases. Be mindful of this when you grant admin role assignments.
 
Important: OpenStack does not support moving resources from one project to another project. You can move volumes by unmanaging them and then remanaging them in the new project, but it is not possible to perform the same action for VMs because the network on which that VM depends is tied to the original project.
Note: OpenStack’s image and networking services support sharing between multiple projects, but IBM PowerVC Version 1.3.1 does not support this situation. If you use REST APIs, be aware of this and do not mark images or networks as shared.
Working with roles
To assign a role to a user or group, run the openstack role add command or use the equivalent REST APIs. The role-related subcommands are listed in Table 3-15.
Table 3-15 Role assignments
Action | Description
role add | Adds a role to a user or group on a project.
role assignment list | Lists role assignments.
role list | Lists roles.
role remove | Removes a role from a user or group on a project.
role set | Sets role properties.
role show | Displays role details.
For more information, see the CLI and API documentation. Example 3-3 shows how to add a role.
Example 3-3 OpenStack command line to assign roles on projects
openstack role add --project <project_name> --user <user> <role>
openstack role add --project <project_name> --group <group> <role>
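To confirm the result, you can list the role assignments for the project. The --names option, if your OpenStack client version supports it, shows user, role, and project names instead of IDs:
openstack role assignment list --project <project_name> --names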
Setting project policies
IBM Cloud PowerVC Manager administrators can set several project-specific policies. These policies apply only to users with self_service authority. You can set properties such as whether users require administrator approval to perform certain tasks.
To set policies for a project, run the powervc-cloud-config command and specify one of the policy types that are listed in the following sections, as shown in Example 3-4. If the specified policy type is already set for the project, it is changed to the new value.
Example 3-4 Set a policy by using IBM PowerVC Cloud CLI
powervc-cloud-config -p <project> set-policy <policy_type> <value>
For more information about using powervc-cloud-config, run powervc-cloud-config --help.
The policies that are described in the following sections can be set.
default_expiration_days
This setting specifies the number of days before VMs that are created by self-service users in the project expire. When a VM is deployed in this project, this value is used to set the VM’s expiration date. When editing a VM’s expiration date, users cannot select a date further out than the value that is set in this policy. For example, if default_expiration_days is set to 90, users cannot select a date more than 90 days away from the current expiration date when editing a VM’s expiration date.
The default is 30 days.
default_request_wait_time
This setting specifies the number of days that VM expiration extension requests can be in a pending state. If the request is not approved after the specified number of days, it is automatically approved.
If this value is not set, requests are never automatically approved, which is the default.
deploy_approval_limit
This setting specifies the number of VMs a self-service user can own without requiring approval for further deployments. For example, assume that this value is three. If a user deployed three VMs and then deleted one, the user can deploy one more VM without approval. After this limit is reached, further VM deployments require approval.
If the value is not set, or 0, then all deployments require approvals.
Here are the values that you can set:
-1: Deployments never require approval.
Not set or 0: All deployments require approval.
n > 0: Users can own n VMs without requiring approval for deployments, but any further deployments require approval.
expired_resources_lifetime
This setting specifies how many days expired VMs exist before being deleted. After this period, the VM is deleted. This value applies to all VMs created by self-service users in this project. If this value is not set, expired resources are never automatically deleted.
There is no default value, which means expired VMs are never deleted.
extension_approval_limit
This setting specifies the number of VM expiration date extensions that a self-service user can request without administrator approval. After this limit is reached, further expiration date extension requests require approval.
There is no default value, but if it is not set or set to 0, all extensions require approvals.
Here are the values that you can set:
-1: Extension requests never require approval.
Not set or 0: All extension requests require approval.
n > 0: Users can request expiration extension of a VM n times without approval, but any future extension requests require approval.
snapshot_approval_limit
This setting specifies the number of VM captures or snapshots that can be performed without administrator approval. After this limit is reached, further capture requests require approval.
Here are the values that you can set:
-1: Captures never require approval.
Not set or 0: All captures require approval.
n > 0: Users can capture n VMs without approval, but any future captures require approval.
There is no default value, but if it is not set or set to 0, then all capture requests require approvals.
Example 3-5 shows how to create or update the snapshot_approval_limit policy. By running this command, you can create a policy if it does not exist or update the existing policy. If you do not specify a value for -p, the policy is created for the ibm-default project.
Example 3-5 Set policies for a specific project for self-service users
powervc-cloud-config -p test set-policy snapshot_approval_limit 10
The IBM PowerVC management host can display the user accounts that belong to each group. Log in to the IBM PowerVC management host and click Configuration on the top navigation bar of the IBM PowerVC GUI, and then click the Users and Groups tab, as shown in Figure 3-24.
Figure 3-24 Groups tab view under Users on the IBM PowerVC management host
This view displays the default groups. To access detailed information for each group, double-click the group name. Figure 3-25 shows an example of a group that includes the pwrvcviewer ID.
Figure 3-25 Detailed view of viewer user group on the management host
3.9 Security management planning
IBM PowerVC provides security services that support a secure environment and, in particular, the following security features:
LDAP support for authentication and authorization information (users and groups).
The IBM PowerVC Apache web server is configured to use secured HTTPS protocol. Only Transport Layer Security (TLS) 1.2 is supported.
Host key and certificate verification of hosts, storage, and switches.
For a list of configuration rules for Internet Explorer, see this website:
 
Audit logs, which are recorded and available if you have enabled auditing.
3.9.1 Ports that are used by IBM Power Virtualization Center
The set of ports differs depending on whether IBM PowerVC manages PowerVM or PowerKVM.
Information about the ports that are used by IBM PowerVC management hosts for inbound and outbound traffic is on the following IBM Knowledge Center websites:
IBM Cloud PowerVC Manager:
IBM PowerVC Standard Edition, for managing PowerVM:
IBM PowerVC Standard Edition, for managing PowerKVM:
 
Important: If a firewall is configured on the management host, ensure that all ports that are listed on the associated IBM Knowledge Center website are open.
Important: The installation process no longer automatically disables the firewall on the management host. You must now specify that option during the installation. As a preferred practice, configure the firewall yourself.
3.9.2 Providing a certificate
An IBM PowerVC management host is installed with a default self-signed certificate and a key. IBM PowerVC can also use certificate authority (CA)-signed certificates.
Self-signed certificates are certificates that you create for private use. After you create a self-signed certificate, you can use it immediately. Because anyone can create self-signed certificates, they are not considered publicly trusted certificates. You can replace default, expired, or corrupted certificates with a new certificate. You can also replace the default certificate with certificates that are requested from a CA.
The certificates are installed in the following locations:
/etc/pki/tls/certs/powervc.crt
/etc/pki/tls/private/powervc.key
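As a rough sketch only (the supported replacement procedure is described in the IBM Knowledge Center pages that are referenced below), a replacement self-signed certificate and key could be generated with OpenSSL directly into the default locations; the host name in the certificate subject is an assumption:
openssl req -x509 -nodes -newkey rsa:2048 -days 365 -subj "/CN=powervc.example.com" -keyout /etc/pki/tls/private/powervc.key -out /etc/pki/tls/certs/powervc.crt
Back up the existing files first, and restart the IBM PowerVC services afterward (for example, with the powervc-services command) so that the web server picks up the new certificate.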
Clients can replace libvirt certificates for PowerKVM installations.
The process to replace the certificates is described in the following IBM Knowledge Center websites:
IBM PowerVC Standard Managing PowerVM:
IBM PowerVC Standard Managing PowerKVM:
3.10 Product information
For more planning information, see the following resources.
Direct customer support
For technical support or assistance, contact your IBM representative or go to the IBM Support website:
Packaging
The IBM PowerVC Editions contain a DVD that includes product installation documentation and files. Your Proof of Entitlement (PoE) for this program is a copy of a paid sales receipt, purchase order, invoice, or other sales record from IBM or its authorized reseller from whom you acquired the program, provided that it states the license charge unit (the characteristics of intended use of the program, number of processors, and number of users) and quantity that was acquired.
Software maintenance
This software license offers Software Maintenance, which was previously referred to as Software Subscription and Technical Support.
Processor core (or processor)
Processor core (or processor) is a unit of measure by which the program can be licensed. Processor core (or processor) is a functional unit within a computing device that interprets and runs instructions. A processor core consists of at least an instruction control unit and one or more arithmetic or logic units. With multi-core technology, each core is considered a processor core. Entitlements must be acquired for all activated processor cores that are available for use on the server.
In addition to the entitlements that are required for the program directly, the licensee must obtain entitlements for this program that are sufficient to cover the processor cores that are managed by the program.
A PoE must be acquired for all activated processor cores that are available for use on the server. Authorization for IBM PowerVC is based on the total number of activated processors on the machines that are running the program and the activated processors on the machines that are managed by the program.
Licensing
The IBM International Program License Agreement, including the License Information document and PoE, governs your use of the program. PoEs are required for all authorized use.
This software license includes Software Subscription and Support (also referred to as Software Maintenance).
 