Managing the IBM PowerVM environment
This chapter provides guidelines to manage your PowerVM environment.
The IBM PowerVM environment is composed of multiple components, such as an IBM Power server, the PowerVM hypervisor (PHYP), the Virtual I/O Server (VIOS), and client logical partitions (LPARs), together with the many other capabilities that PowerVM offers. Managing a PowerVM environment therefore means managing all of these components. This chapter describes best practices and guidelines for that task.
The chapter also covers important management activities, such as monitoring LPARs, and provides an overview of solutions that are available on PowerVM.
The tools that are available to manage Power servers are also described.
This chapter covers the following topics:
5.1, "Hardware Management Console management best practices"
5.2, "Firmware management best practices"
5.3, "VIOS management best practices"
5.4, "LPAR management best practices"
5.5, "Management solutions on PowerVM"
5.6, "Management tools on Power servers"
5.7, "PowerVC"
5.1 Hardware Management Console management best practices
The Hardware Management Console (HMC) provides a simplified interface to manage Power servers.
For a description of the menu options and tasks that are available in the HMC, see Overview of menu options, found at:
Figure 5-1 highlights the HMC Management options.
Figure 5-1 HMC Management
As a best practice, configure dual HMCs or redundant HMCs for managing your Power servers. When two HMCs manage one system, they are peers, and each HMC can be used to control the managed system.
For more information about configuration guidelines, see Hardware Management Console virtual appliance, found at:
5.1.1 HMC upgrades
Updates and upgrades are released periodically for the HMC. As part of these updates, new functions and improvements are added to the HMC.
For more information about maintaining your HMC, see Updating, upgrading, and migrating your HMC machine code, found at:
5.1.2 HMC backup and restore
As a best practice, back up the HMC data after any changes are made to the HMC or to the information that is associated with LPARs.
The HMC data that is stored on the HMC hard disk drive can be saved to DVD on the local system, to a remote system that is mounted to the HMC file system (such as over NFS), or sent to a remote site by using File Transfer Protocol (FTP).
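From the HMC CLI, the bkconsdata command provides the equivalent backup function. The following is a hedged sketch only: the host, user, and directory values are placeholders, and the exact flags should be verified against the bkconsdata documentation for your HMC level.
bkconsdata -r ftp -h ftpserver.example.com -u hmcbackup -d /hmc/backups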
Figure 5-2 highlights the options that are available on the HMC GUI to back up and restore the HMC.
Figure 5-2 HMC backup and restore
To learn about the tasks that are available on the HMC under HMC Management, see Console Management tasks, found at:
5.1.3 HMC monitoring capabilities
The HMC plays a crucial role in the management of servers. Several tasks are run from the HMC to manage VIOS partitions and client LPARs, servers, and resources that are available on the servers.
The HMC captures logs and service events, which enable the system administrator to view the tasks that are performed and analyze the events that are generated. The HMC also plays an important role in the Call Home function. If Call Home is enabled for a server that is managed by the HMC, the console automatically contacts a service center when a serviceable event occurs. When the Call Home function on a managed system is disabled, your service representative is not informed of serviceable events.
The HMC also provides connection monitoring capabilities. Connection monitoring generates serviceable events when communication problems are detected between the HMC and the managed systems. If you disable connection monitoring, no serviceable events are generated for networking problems between the selected machine and this HMC.
For a description of the tasks that are available on the HMC for the Serviceability tasks, see Serviceability tasks, found at:
5.2 Firmware management best practices
Server firmware is the code that is in system flash memory. It includes several subcomponents, such as PHYP, power control, the service processor, and the LPAR firmware that is loaded into AIX, IBM i, and Linux LPARs.
Firmware management plays a crucial role in the overall Power server management strategy.
Firmware updates are used for:
Adding functions to an existing system or supporting a new system. These updates are added as part of a major release.
Providing fixes or groups of fixes to the existing release.
5.2.1 Firmware terminology
This section introduces terms that are used in the context of firmware management.
Release level
A release level delivers major new function, such as the introduction of new hardware models and of functions and features that are enabled through firmware. This type of firmware upgrade is disruptive.
Service Pack (SP)
Primarily firmware fixes and minor function changes that are applicable to a specific release level. These firmware updates are usually concurrent.
Types of firmware updates:
 – Concurrent
A code update that allows the operating systems that run on the Power server to continue running while the update is installed and activated.
 – Deferred
A code fix that is concurrent but not activated on the system until the Power server is restarted.
 – Partition deferred
A code fix that is concurrent but not activated until the partition is reactivated.
 – Disruptive
A code fix that requires a Power server restart during the code update process.
 
Note: Deferred, partition-deferred, and disruptive content is identified in the firmware readme file.
5.2.2 Determining the type of firmware installation
The information in this section helps you to determine whether your installation will be concurrent or disruptive.
For systems that are not managed by an HMC, the installation of system firmware is always disruptive.
 
Note: The file names and SP levels that are used in the following examples are for clarification only. They are not necessarily levels that were released in the past or will be released in the future.
The naming convention for system firmware files is as follows:
01VHxxx_yyy_zzz, where:
 – xxx is the release level.
 – yyy is the SP level for the xxx release level.
 – zzz is the last disruptive SP level for the xxx release level.
 
Note: Values of SP level and last disruptive SP level (yyy and zzz) are only unique within a release level (xxx). For example, 01VH900_040_040 and 01VH910_040_045 are different SPs.
An installation is disruptive in the following situations:
The release levels (xxx) are different. For example, the currently installed release is 01VH900_040_040. The new release is 01VH910_040_040.
The SP level (yyy) and the last disruptive SP level (zzz) of the SP to be installed are the same. For example, VH910_040_040 is disruptive, no matter what level of VH910 is installed on the system.
The SP level (yyy) that is installed on the system is earlier than the last disruptive SP level (zzz) of the SP to be installed. For example, the currently installed SP is VH910_040_040 and the new SP is VH910_050_045.
An installation is concurrent if the release level (xxx) is the same and the SP level (yyy) that is installed on the system is the same or later than the last disruptive SP level (zzz) of the SP to be installed. For example, the currently installed SP is VH910_040_040, and the new SP is VH910_041_040.
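These rules can be summarized in a short decision check. The following ksh sketch is illustrative only and reuses the level values from the preceding examples; it is not an IBM-provided tool.
# Installed level 01VH910_040_040 and candidate level 01VH910_041_040 (illustrative values)
cur_rel=910; cur_sp=40
new_rel=910; new_sp=41; new_last_disr=40
type=concurrent
[ "$cur_rel" != "$new_rel" ] && type=disruptive        # release levels (xxx) differ
[ "$new_sp" -eq "$new_last_disr" ] && type=disruptive  # yyy equals zzz in the SP to be installed
[ "$cur_sp" -lt "$new_last_disr" ] && type=disruptive  # installed yyy is older than the new zzz
echo "This installation is $type."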
The definitions of impact area and SP severity that are provided with the firmware details are listed in Preventive Service Planning, found at:
For more information about how to view the current firmware level that is installed on an AIX, Linux, and IBM i partition by using Advanced System Management Interface (ASMI) or HMC, see Viewing existing firmware levels, found at:
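As a quick check from the operating system side, the following commands display the currently active firmware level. This is a hedged example; ASMI and the HMC remain the authoritative views.
# On an AIX LPAR
lsmcode -c
# On the VIOS (padmin shell)
lsfware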
5.2.3 Planning firmware updates and upgrades
As a best practice, plan for firmware maintenance twice a year, although the frequency can be tailored to suit the customer's environment. If firmware maintenance is planned twice a year, one of the maintenance windows can support an upgrade to move off a release that is no longer supported. The other maintenance window can be for an update, that is, move to a newer SP at the current release, which can be done concurrently.
5.2.4 System firmware maintenance best practices
Here are best practices to follow for system firmware updates and upgrades:
When a new SP is released, review the readme file for the SP. Check the SP Fix List to see whether any critical fixes are applicable to your environment and configuration.
If an SP includes a High Impact PERvasive (HIPER) fix that is applicable to your environment, install the SP as soon as a maintenance window can be scheduled.
A “deferred fix” may remain pending until the next scheduled restart.
Use a Release Level that is still supported with SPs.
If you do not require the features and functions that are introduced by a new Release Level, you might stay on the older Release Level if it is still supported. SPs continue to be delivered for supported Release Levels.
Server Firmware Update and Upgrade Instructions, found at https://www.ibm.com/support/pages/server-firmware-update-and-upgrade-instructions, includes information about the following firmware maintenance tasks:
For HMC-managed systems, how to concurrently update server firmware.
For HMC-managed systems, how to disruptively update server firmware.
For IBM i stand-alone systems (non-HMC-managed systems), how to update server firmware.
For more information about updating the firmware on a system that is managed by only PowerVM NovaLink, see Updating the firmware on a system that is managed by PowerVM NovaLink, found at:
As described in Chapter 1, “IBM PowerVM overview” on page 1, Power10 processor-based scale-out and midrange servers use the enterprise Baseboard Management Controller (eBMC) service processor instead of the Flexible Service Processor (FSP).
For eBMC-based servers, you can perform a firmware update directly from the eBMC ASMI menu. For more information, see Firmware update via eBMC ASMI menu, found at:
5.2.5 I/O adapter firmware management
For more information about I/O firmware and I/O firmware updates, see I/O Firmware, found at:
For more information about single-root I/O virtualization (SR-IOV) firmware or to update the driver firmware for shared SR-IOV adapters, see SR-IOV Firmware, found at:
5.3 VIOS management best practices
This section describes VIOS management aspects and best practices in the implementation design.
VIOS facilitates the sharing of physical I/O resources between client LPARs within the server. Because the VIOS serves the client LPARs, it must be maintained and monitored continuously. VIOS has several areas that must be planned and maintained in a production environment, such as the following items:
VIOS version updates and upgrades
Design of single, dual, or more VIOS per server
Live Partition Mobility (LPM) prerequisites for VIOS
Shared Ethernet Adapters (SEA) high availability (HA) modes
Virtual adapter tunables
I/O reporting
The implementation of VIOS in a Power server can take the form of a single VIOS managing the virtual I/O, or more than one VIOS for redundancy.
The key is to understand your availability requirements and decide what you need. If redundancy is achieved at the hardware level, then redundancy at the application level across multiple LPARs or physical frames might not be needed, and vice versa, depending on your planning and environment readiness.
5.3.1 Single VIOS
Power servers are resilient by design, so depending on the Power server model and the availability requirements, a single VIOS can be used to support some small and non-production workloads. However, eliminating single points of failure (SPOFs) by implementing dual VIOS is a best practice.
Also, some environments have heavy network and storage I/O traffic. In such cases, consider using multiple VIOSs to isolate the network traffic from the storage area network (SAN) I/O by using different sets of VIOS pairs within the same server.
Overall, using dual or multiple VIOSs per machine is preferred for most situations.
5.3.2 Dual or multiple VIOSs
The benefits of using multiple VIOSs are improved availability and performance.
Mission-critical workloads have different requirements than development and test partitions. Therefore, it is a best practice to plan for the right amount of resources that are needed by one or multiple VIOSs, depending on the workloads that you plan to run.
Installing dual VIOSs in the same Power server is a best practice, and it is the mandatory setup for redundancy. With dual VIOS, you can restart one VIOS for a planned maintenance operation and keep the client LPARs running.
A dual-VIOS setup provides redundancy, accessibility, and serviceability. It offers load-balancing capabilities for multipath input/output (MPIO), multiple SEA, and virtual Network Interface Controllers (vNICs) failover configurations.
Compared to a single-VIOS setup, a dual-VIOS setup has the following extra components:
A control channel between the VIOS pair, which uses virtual local area network (VLAN) ID 4095 on the virtual switch when a simplified SEA configuration is used. The VIOSs use this channel to communicate with each other.
A trunk priority setting on the virtual trunk Ethernet adapters that are used in the SEA configuration. The trunk priority determines which VIOS is the primary one in a SEA failover setup.
A dual VIOS configuration allows the client LPARs to have multiple paths (two or more) to their resources. In this configuration, if one of the paths is not available, the client LPAR can still access its resources through another path.
These multiple paths can be used to set up HA I/O virtualization configurations, and they provide multiple ways to build high-performance configurations. These goals are achieved with the help of advanced capabilities that are provided by PowerVM (VIOS and PHYP) and the operating systems on the client LPARs.
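For example, on an AIX client LPAR that receives its disks through vSCSI from a dual-VIOS pair, you can verify that both paths are present. This is a hedged illustration; the disk and adapter names vary by environment.
# Each vSCSI disk should show one Enabled path per VIOS (for example, vscsi0 and vscsi1)
lspath -l hdisk0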
Both HMC and NovaLink allow configuration of dual VIOSs on managed systems.
For storage, PowerVM offers three types of virtualization that enhance storage availability for client LPARs:
Virtual SCSI (vSCSI)
N_Port ID Virtualization (NPIV)
Shared storage pool (SSP)
For more information about multi-pathing configurations, see the following resources:
Multipathing and disk resiliency with vSCSI in a dual VIOS configuration, found at:
Path control module attributes, found at:
For more information about dual-VIOS configurations, see the following resources:
Creating partition profiles for dual VIOS, found at:
Configuring VIOS partitions for a dual setup, found at:
Virtual Storage Redundancy with dual VIOS Configuration, found at:
5.3.3 VIOS backup and restore
This section describes the PowerVM backup and restore methodology and its importance.
It is important to keep VIOS up to date and backed up because it is a critical part of your infrastructure. When you plan for VIOSs, it is important to plan your VIOS backups. You can back up to a Network Installation Manager (NIM) server, tape, DVD, NFS server, or to
IBM Spectrum Storage solutions. Using HMC V9R2M950 or later, you can back up your VIOS I/O configuration and your VIOS image to the HMC and restore them later from the HMC.
Before any updates, you can use the alt_root_vg command to clone rootvg so that you have a fast failback if VIOS runs into issues. For more information, see How to clone a PowerVM VIOS rootvg?, found at:
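The following hedged example assumes that hdisk1 is an unused disk on the VIOS; verify that the target disk is not in use before cloning.
$ alt_root_vg -target hdisk1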
VIOS backup by using VIOS tools
You can choose either to back up the VIOS operating system as a bootable backup, or back up only the virtual mappings and configuration backup.
VIOS system backup
You can create an installable image of the root volume group by using the backupios command. The backupios command creates a backup of the VIOS and places it in a file system, bootable tape, or DVD. You can use this backup to reinstall a system to its original state after it was corrupted. If you create the backup on tape, the tape is bootable and includes the installation programs that are needed to install from the backup.
The backupios command can use the -cd flag to create a system backup image on DVD-RAM media. If you must create multi-volume discs because the image does not fit on one disc, the backupios command gives instructions for disc replacement and removal until all the volumes are created.
The backupios command can use the -file flag to create a system backup image in the path that is specified. The file system must be mounted and writable by the VIOS root user before the backupios command is run.
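The following hedged examples illustrate these forms; the paths and device names are placeholders. The first command writes a nim_resources.tar package to the mounted directory, the second writes only an mksysb image, and the third writes a bootable tape.
$ backupios -file /mnt
$ backupios -file /mnt/vios_backup.mksysb -mksysb
$ backupios -tape /dev/rmt0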
For more information, see the backupios command, found at:
VIOS configuration backup
Back up the virtual definition configurations regularly. Use the viosbr command to back up the virtual and logical configuration, list the configuration, and restore the configuration of the VIOS.
The viosbr command backs up all the relevant data to recover VIOS after a new installation. The viosbr command has the -backup parameter that backs up all the device properties and the virtual devices configuration on the VIOS. This backup includes information about logical devices, such as storage pools, file-backed storage pools, and the virtual media repository.
The backup also includes the virtual devices, such as Etherchannel, SEAs, virtual server adapters, the virtual log repository, and server virtual Fibre Channel (SVFC) adapters. Additionally, it includes the device attributes, such as the attributes for disks, optical devices, tape devices, Fibre Channel (FC) SCSI controllers, Ethernet adapters, Ethernet interfaces, and logical Host Ethernet adapters (HEAs).
The viosbr command can run once, or it can run on a schedule by using the -frequency parameter with the daily, weekly, or monthly option. Daily backups occur at 00:00, weekly backups on Sundays at 00:00, and monthly backups on the first day of the month at 00:01. The -numfiles parameter specifies the number of successive backup files that are saved, with a maximum value of 10. After that number of files is reached, the oldest backup file is deleted during the next backup cycle.
Example 5-1 shows a backup of all the device attributes and virtual device mappings daily on the VIOS, keeping the last seven backup files.
Example 5-1 A viosbr backup with frequency options
$ viosbr -backup -file vios1_backup -frequency daily -numfiles 7
The backup files that result from running this command are under /home/padmin/cfgbackups with the following names for the seven most recent files:
vios1_backup.01.tar.gz
vios1_backup.02.tar.gz
vios1_backup.03.tar.gz
vios1_backup.04.tar.gz
vios1_backup.05.tar.gz
vios1_backup.06.tar.gz
vios1_backup.07.tar.gz
All the configuration information is saved in a compressed XML file. If a location is not specified with the -file option, the file is placed in the default location /home/padmin/cfgbackups.
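To verify the contents of a configuration backup or to restore it, the viosbr command provides the -view and -restore options, as in the following hedged sketch (the file name is a placeholder).
$ viosbr -view -file vios1_backup.01.tar.gz
$ viosbr -restore -file vios1_backup.01.tar.gz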
For more information, see the viosbr command, found at:
VIOS backup by using HMC
Starting with HMC V9R2M950, a new GUI feature was added to back up a full VIOS (backupios) and the VIOS configuration (viosbr).
Starting with V10R1M1010, the following commands were added to support VIOS backup and restore:
mkviosbk
lsviosbk
rstviosbk
rmviosbk
cpviosbk
chviosbk
These commands provide the same function through the HMC command-line interface (CLI).
When the backup operation is performed from the HMC, it calls the VIOS to initiate the backup. Then, the backup operation attempts to do a secure copy (scp) to transfer the backup from the VIOS to the HMC.
The following prerequisites must be met for this backup operation to complete successfully:
Resource Monitoring and Control (RMC) must be active for the VIOS that is going to be backed up.
There must be enough space in the VIOS /home/padmin directory and in the HMC /data directory.
The Secure Shell (SSH) must be working from HMC to VIOS and from VIOS to HMC (port number 22 must be allowed in any firewall between HMC and VIOS).
For more information about managing VIOS backups from HMC, see Manage Virtual I/O Server Backups, found at:
If you decide to use the HMC CLI for VIOS backups, you must be at HMC V10R1M1010 or later.
To perform VIOS backups from the HMC CLI, you can use one of the following three backup types:
The vios type is for a full VIOS backup. Example 5-2 shows a VIOS full backup from the HMC.
Example 5-2 VIOS full backup from the HMC
$ mkviosbk -t vios -m sys1 -p VIOS_NAME -f vios1_full_backup -a "nimol_resource=1,media_repository=1"
The viosioconfig type is for a VIOS I/O configuration backup. Example 5-3 shows a VIOS I/O configuration backup from the HMC.
Example 5-3 VIOS I/O configuration backup
$ mkviosbk -t viosioconfig -m sys1 -p VIOS_NAME -f vios1_io_backup
The ssp type is for an SSP configuration backup. Example 5-4 shows a VIOS SSP configuration backup from the HMC.
Example 5-4 VIOS SSP configuration backup
$ mkviosbk -t ssp -m sys1 -p vios1 -f vios1_ssp_backup
For the syntax of the mkviosbk command, see HMC Manual Reference Pages - MKVIOSBK, found at:
If you decided to restore those VIOS configurations, use the rstviosbk command, as shown in Example 5-5.
Example 5-5 Restoring the VIOS configuration by using the rstviosbk command
$ rstviosbk -t viosioconfig -m sys1 -p VIOS_NAME -f vios1_io_backup
$ rstviosbk -t ssp -m sys1 -p VIOS_NAME -f vios1_ssp1_backup
For more information, see HMC Manual Reference Pages - RSTVIOSBK, found at:
5.3.4 VIOS upgrade
This section covers the VIOS upgrade process and release notes.
To ensure the reliability, availability, and serviceability (RAS) of a computing environment that uses the VIOS, update the VIOS software to the most recent fix level for that release. The most recent level contains the latest fixes for the specified VIOS release. You can download the most recent updates for VIOS from the IBM Fix Central website, found at:
For more information about VIOS releases, see Virtual I/O Server release notes, found at:
It is important to keep your VIOS up to date and backed up because it is a critical part of your infrastructure. For more information about current support, see PowerVM Virtual I/O Server on FLRT Lite, found at:
The upgrade steps depend on the level that is installed. For example, to upgrade to VIOS 3, Version 2.2.6.32 or later must be installed.
The base code is downloaded from the IBM Entitled Systems Support (IBM ESS) website, found at:
When you download the code for your entitled software, you see PowerVM Enterprise ED V3. Technology levels and SPs are downloaded from Fix Central. You can also download the flash image, which is a fully updated PowerVM 3.1 image.
Upgrade methods
Various ways to upgrade the VIOS to Version 3 are available:
Use the viosupgrade command on the VIOS itself.
For more information, see the viosupgrade command, found at:
Use the viosupgrade command with AIX NIM.
Use the manual upgrade. The manual upgrade includes three steps:
 – Manually back up the VIOS metadata by using the viosbr -backup command.
 – Install VIOS 3 through the AIX NIM Server, Flash Storage, or HMC.
 – Restore the VIOS metadata by using the viosbr -restore command.
For more information, see the following resources:
Migrating the Virtual I/O Server by using the viosupgrade command or by using the manual method, found at:
Supported Virtual I/O Server upgrade levels, found at:
Viosupgrade tool considerations
To use the viosupgrade command upgrade approach, complete the following steps:
1. Download the latest VIOS flash media.
2. Ensure that the target VIOS is at a supported level (Version 2.2.6.32 or later).
3. Add an empty disk to the VIOS for the alternative disk clone.
4. Mount the mksysb.image from the ISO image that you downloaded and uploaded to VIOS, as shown in Example 5-6.
Example 5-6 Mounting mksysb.image from the VIOS ISO image
# loopmount -i /tmp/VIOS.iso -o "-V udfs -o ro" -m /mnt
5. Copy the mksysb file from the mounted image to a different location as shown in Example 5-7.
Example 5-7 Copying the mksysb image to another location
# cp /mnt/usr/sys/inst.images/mksysb_image /home/VIO31_mksysb_image
6. Start the upgrade process from the copied mksysb file as shown in Example 5-8.
Example 5-8 Using the viosupgrade command to start the process
# viosupgrade -l -i /home/VIO31_mksysb_image -a hdisk3
7. Check the upgrade status by using the viosupgrade -l -q command.
For more information, see Upgrading to VIOS 3.1, found at:
If you are unsure about which upgrade path to follow, see Table 5-1 and Table 5-2 on page 194.
Table 5-1 Upgrade path for VIOSs that are not configured with SSP clusters
VIOS level: 1.5.x
Procedure to upgrade:
1. Migrate to 2.1.
2. Update to SP 2.2.6.10.
3. Update to SP 2.2.6.61.
4. Update to SP 2.2.6.65.
5. Upgrade to 3.1.
Alternative procedure:
1. Reinstall at 3.1.
2. Restore the configuration from a backup.
VIOS level: 2.1 - 2.2.6.0
Procedure to upgrade:
1. Update to SP 2.2.6.10.
2. Update to SP 2.2.6.61.
3. Update to SP 2.2.6.65.
4. Upgrade to 3.1.
Alternative procedure:
1. Reinstall at 3.1.
2. Restore the configuration from a backup.
Table 5-2 Upgrade path for VIOSs that are configured with SSP clusters
VIOS level: 2.2.0.11 - 2.2.1.3
Procedure to upgrade:
1. Update to 2.2.1.4 or 2.2.1.5.
2. Update to SP 2.2.6.10.
3. Update to SP 2.2.6.61.
4. Update to SP 2.2.6.65.
5. Upgrade to 3.1.
Alternative procedure:
1. Reinstall at 3.1.
2. Restore the configuration from a backup.
VIOS level: 2.2.1.4 - 2.2.6.0
Procedure to upgrade:
1. Update to SP 2.2.6.10.
2. Update to SP 2.2.6.61.
3. Update to SP 2.2.6.65.
4. Upgrade to 3.1.
Alternative procedure:
1. Reinstall at 3.1.
2. Restore the configuration from a backup.
5.3.5 VIOS monitoring
As a best practice, monitor the VIOSs in your environment. A basic level of monitoring is to use the error log to monitor different types of errors.
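For example, from the padmin shell, errlog lists a summary of the logged errors, and the -ls flag produces the detailed listing. This is a hedged illustration.
$ errlog
$ errlog -ls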
For more information, see the errlog command, found at:
While administrators are reviewing the report that is generated by errlog, they can use a tool called summ, which decodes FC and SCSI disk AIX error report entries. It is an invaluable tool that can aid in diagnosing storage array or SAN fabric-related problems, and it provides the source of the error.
For more information, see the summ tool, found at:
You can configure AIX syslog, which provides an extra option for defining different event types and the level of criticality.
For more information, see VIOS syslog, found at:
An excellent option for VIOS performance monitoring is the nmon command. nmon can run in either interactive or recording mode: it can display system statistics interactively or record them in the file system for later analysis.
If you specify any of the -F, -f, -X, -x, or -Z flags, the nmon command runs in recording mode. Otherwise, the nmon command runs in interactive mode.
The nmon command provides the following views in interactive mode:
Adapter I/O statistics (pressing the a key)
I/O processes view (pressing the A key)
Detailed Page Statistics (pressing the M key)
Disk busy map (pressing the o key)
Disk groups (pressing the g key)
Disk statistics (pressing the D key)
Disk statistics with graph (pressing the d key)
IBM ESS vpath statistics view (pressing the e key)
FC adapter statistics (pressing the ^ key)
JFS view (pressing the j key)
Kernel statistics (pressing the k key)
Long-term processor averages view (pressing the l key)
Large page analysis (pressing the L key)
Memory and paging statistics (pressing the m key)
Network interface view (pressing the n key)
NFS panel (pressing the N key)
Paging space (pressing the P key)
Process view (pressing the t and u keys)
Processor usage small view (pressing the c key)
Processor usage large view (pressing the C key)
SEA statistics (pressing the O key)
Shared-processor LPAR view (pressing the p key)
System resource view (pressing the r key)
Thread level statistics (pressing the i key)
Verbose checks OK/Warn/Danger view (pressing the v key)
Volume group statistics (pressing the V key)
WLM view (pressing the W key)
For more information, see the nmon command, found at:
In the recording mode, the command generates the nmon files. You can view these files directly by opening them or with post-processing tools such as nmon analyzer. The nmon tool disconnects from the shell during the recording so that the command continues running even if you log out.
If you use the same set of keys every time that you start the nmon command, you can place the keys in the NMON shell variable.
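For example, the following hedged commands record one day of statistics at 60-second intervals, and then preselect a set of interactive views through the NMON variable (the key set cmdn is illustrative only).
$ nmon -f -s 60 -c 1440
$ export NMON=cmdn
$ nmon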
For more information, see the nmon recording tool commands, found at:
Another good tool is the VIOS Performance Advisor, which provides advisory reports that are related to the performance of various subsystems in the VIOS environment. To run this tool, run the part command.
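For example, the following hedged command collects and analyzes data for 30 minutes and writes the resulting .tar file to the current working directory.
$ part -i 30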
For more information, see the part command, found at:
The output that is generated by the part command is saved in a .tar file that is created in the current working directory. The vios_advisor.xml report is present in the output.tar file with the other supporting files.
To view the generated report, complete the following steps:
1. Transfer the generated .tar file to a system that has a browser and a .tar file extractor installed.
2. Extract the .tar file.
3. Open the vios_advisor.xml file that is in the extracted directory.
The system configuration advisory report consists of information that is related to the VIOS configuration, such as processor family, server model, number of cores, frequency at which the cores are running, and the VIOS version.
Here are the types of advisory reports that are generated by the VIOS Performance Advisor tool:
System configuration advisory report
CPU advisory report
Memory advisory report
Disk advisory report
Disk adapter advisory report
I/O activities (disk and network) advisory report
For more information, see the following resources:
Virtual I/O Server Performance Advisor reports, found at:
VIOS Performance Advisor called part, found at:
5.4 LPAR management best practices
LPAR management is a continuous process throughout the lifecycle of an LPAR. Activities like performance monitoring, dynamic logical partitioning (DLPAR), and LPAR profile changes are some of the most common tasks that are performed regularly. This section covers LPAR management aspects.
5.4.1 LPAR configuration management
You can manage the configuration of your LPARs by using the HMC. With the HMC, you can adjust the resources that are used by each LPAR.
For more information, see Managing logical partitions, found at:
5.4.2 LPAR performance management
LPAR performance management plays a critical role in running your workloads optimally.
For optimal performance of your LPAR, assign adequate resources as required by the workload while monitoring and analyzing the performance of the LPAR periodically.
For more information, see Performance considerations for logical partitions, found at:
5.4.3 Operating systems monitoring
Operating systems monitoring is a continuous process that facilitates the tracking of resource utilization. With it, you can improve service availability and productivity by proactively and reactively identifying, diagnosing, and repairing slow, underperforming, or unstable components.
It is important to differentiate monitoring tools from management tools:
Monitoring tools watch overall system, communication, and application performance, and they track various metrics of resources in the system.
Management tools facilitate administering and tuning components in a system to improve system availability and performance, and ensuring that the system configuration is kept at a wanted state.
Various monitoring tools are available for Power servers and client LPARs. Some of the most popular tools are described in this section.
nmon
nmon is a monitoring tool that is available for AIX and Linux LPARs.
nmon for AIX
The nmon tool runs in either interactive or recording mode: it displays system statistics interactively or records them for later analysis.
For more information, see the nmon command, found at:
nmon for Linux
nmon for Linux is open source and available under the GNU General Public License.
For more information, see nmon for Linux, found at:
topas
The topas command is included with AIX. It reports selected statistics about the activity on the local and remote system.
For more information, see the topas command, found at:
topas cross-partition view
The topas cross-partition view (or CEC view) allows displaying performance metrics for multiple VIOS and AIX LPARs that are running in the same Power server in real time. To view cross-partition statistics in topas, use the -C flag with the topas command in the CLI of any LPAR in the server.
The command displays real-time information from all LPARs on the same Power server, such as the processor and memory configuration of each LPAR and real CPU and memory utilization.
To use the topas cross-partition view, the following conditions must be met:
All the LPARs must be running on the same server and must be on the same subnet.
Make sure that the xmtopas service is active on all LPARs.
Ensure that the following line is uncommented in the /etc/inetd.conf file:
xmquery dgram udp6 wait root /usr/bin/xmtopas xmtopas -p9
Check the subsystems that run under the inetd service with the lssrc -ls inetd command.
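After these prerequisites are met, you can refresh inetd and start the view, as in the following hedged example.
# Reread /etc/inetd.conf after uncommenting the xmquery entry
refresh -s inetd
# Start the cross-partition (CEC) view from any AIX LPAR in the server
topas -C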
For more information, see Viewing the Cross-Partition panel, found at:
Monitoring tools for IBM i
Many tools are available on IBM i to monitor, audit, and troubleshoot problems in different areas of the system:
Monitoring security
More than one technique is available for monitoring and auditing security on your system.
In a security audit, you must review and examine the activities of a data processing system to test the adequacy and effectiveness of procedures for data security and data accuracy. The security audit journal is the primary source of auditing information on the system. A security auditor inside or outside your organization can use the auditing function that is provided by the system to gather information about security-related events that occur on the system.
An intrusion detection system is software that detects attempts or successful attacks on monitored resources that are part of a network or host system.
For more information about planning and implementing processes to monitor security on IBM i, see Monitoring security, found at:
Monitoring performance
Several options are available on IBM i to help you identify and resolve performance problems.
IBM iDoctor is a suite of dynamic tools that identify performance issues quickly on IBM i systems. You can monitor your overall system health at a high level and use advanced drill-down capabilities for specific issues. Some iDoctor tools are complimentary, and others require a license key.
For more information, see IBM iDoctor for IBM i, found at:
Most of the tools that collect or analyze performance use either trace or sample data. Collection Services regularly collects sample data on various system resources. Several tools analyze or report on this sample data, and you can use these reports to get a broader view of system resource utilization and answer many common performance questions. IBM i Job Watcher and IBM i Disk Watcher also collect sample data. For more detailed performance information, several tools generate trace-level data. Often, trace-level data can provide detailed information about the behavior and resource consumption of jobs and applications on your system. Performance Explorer and the Start Performance Trace (STRPFRTRC) command are two common tools for generating trace data.
For more information, see Researching a performance problem, found at:
Managing and administering IBM i
IBM Navigator for i is a web console interface where you can perform the key tasks to administer your IBM i. The web application is part of the base IBM i operating system.
For more information and a description of the management tasks that you can perform from the IBM Navigator for i console, see IBM Navigator for i, found at:
5.5 Management solutions on PowerVM
This section describes solutions on PowerVM, which facilitate HA, migration, security, and workload optimization in PowerVM environments.
5.5.1 Migration solutions
The VIOS plays a vital role in the migration process by providing I/O virtualization during Live Partition Mobility (LPM) and Simplified Remote Restart (SRR) operations:
Partition mobility migrates AIX, IBM i, and Linux LPARs from one system to another system. The mobility process transfers the LPAR environment, which includes the processor state, memory, attached virtual devices, and connected users, without shutting down or disrupting the operation of that LPAR.
For more information, see Live Partition Mobility, found at:
When a server crashes, the partitions on that server also crash. To mitigate the impact of a server outage, you can use the SRR capability on the HMC, which is a PowerVM HA option for LPARs. SRR allows you to re-create an LPAR with the same attributes and virtual adapters on another managed system when its current managed system becomes unavailable.
For more information, see Simplified Remote Restart via HMC or PowerVC, found at:
5.5.2 Availability solutions
Several availability solutions work on PowerVM:
An SSP is a pool of SAN storage devices that can be shared among VIOSs. SSP virtualizes FC SAN disks on the VIOS and presents them to virtual I/O clients over vSCSI. It enables many modern storage capabilities and functions without support from the underlying disk subsystems. For more information, see the following resources:
Creating Simple SSP among two VIO servers, found at:
Shared Storage Pool (SSP) Best Practice, found at:
IBM VM Recovery Manager (VMRM) for Power Systems is a high availability and disaster recovery (HADR) solution that enables VMs to be moved between systems by using LPM for planned operations or restarted on another system for unplanned outage events. VMs can be replicated and restarted at remote locations for disaster recovery (DR) operations.
For more information, see the following resources:
IBM VM Recovery Manager for IBM Power Systems, found at:
Overview for IBM VM Recovery Manager DR for Power Systems, found at:
VM Recovery Manager HA overview, found at:
IBM Power Virtualization Center (PowerVC) automated remote restart (ARR)
The PowerVM SRR can be implemented in combination with PowerVC to provide ARR. ARR monitors hosts for a failure. If a host fails, PowerVC automatically restarts the virtual machines (VMs) from the failed host on another host within the host group.
For more information, see Automated remote restart, found at:
5.5.3 Security solutions
IBM PowerSC is a security and compliance solution that is optimized for virtualized environments on IBM Power servers that run AIX, IBM i, or Linux. PowerSC sits on top of the IBM Power server stack. It integrates security features that are built at different layers. You can centrally manage security and compliance on Power servers to get better support for compliance audits, including General Data Protection Regulation (GDPR).
For more information, see IBM PowerSC, found at:
5.5.4 Workload optimization solutions
IBM PowerVM is designed to serve high workloads. It includes several tools and enhancements to optimize the use of system resources.
Dynamic Platform Optimizer
Dynamic Platform Optimizer (DPO) is a hypervisor function that is initiated from the HMC CLI. DPO rearranges LPAR processor and memory placement on the system to improve the affinity between the processor and memory resources of LPARs. While DPO is running, many virtualization features, such as mobility operations, are blocked. During a DPO operation, if you want to dynamically add, remove, or move physical memory to or from running LPARs, you must either wait for the DPO operation to complete or manually stop it.
You can use the HMC to determine affinity scores for the system and LPARs by using the lsmemopt command. An affinity score is a measure of the processor-memory affinity on the system or for a partition. The score is a number 0 - 100, where 0 represents the worst affinity and 100 represents perfect affinity. Based on the system configuration, a score of 100 might not be attainable. A partition that has no processor and memory resources does not have an affinity score. When you run the lsmemopt command for such partitions, the score is displayed as none on the HMC CLI.
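The following hedged HMC CLI sketch shows how the scores can be queried and a manual optimization started. The managed system name is a placeholder, and the options should be verified for your HMC level.
lsmemopt -m Server1 -o currscore
lsmemopt -m Server1 -o calcscore
optmem -m Server1 -o start -t affinity
lsmemopt -m Server1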
In addition to manually running DPO by using the optmem command, you can schedule DPO operations on the HMC.
The following conditions apply to a scheduled DPO operation:
The current server affinity score of the managed system is less than or equal to the server affinity threshold that you provided.
The affinity delta (which is the potential score minus the current score) of the managed system is greater than or equal to the affinity delta threshold of the server that you provided.
For more information, see Dynamic Platform Optimizer, found at:
Capacity on Demand
Servers can consist of a number of active and inactive resources. Active processor cores and active memory units are resources that are available for use on your server. Inactive processor cores and inactive memory units are resources that are installed in your server, but are not available for use until you activate them.
Capacity on Demand (CoD) is described in 2.9, “Capacity on Demand” on page 66.
For more information, see Capacity on Demand, found at:
IBM Power Enterprise Pools
IBM Power Enterprise Pools (PEP) is an offering that is supported on certain modern Power servers. It delivers enhanced multisystem resource sharing and by-the-minute consumption of on-premises Power server compute resources to clients that deploy and manage a private cloud infrastructure on Power servers.
Two types of Power Enterprise Pool are available:
PEP 1.0 by using Mobile Capacity.
PEP 2.0 by using Shared Utility Capacity.
For more information, see Power Enterprise Pool 1.0 and 2.0, found at:
AIX Workload Partitions
The Workload Partition (WPAR) environment is different from the standard AIX operating system environment. Various aspects of the system, such as networking and resource controls, function differently in the WPAR environment.
The WPAR documentation describes how to install various applications, such as Apache, IBM Db2®, and IBM WebSphere® Application Server, in a WPAR environment. These examples are not intended to imply that these are the only supported versions or configurations of those applications.
You can create and configure application WPARs by using the wparexec command and the chwpar command.
When you create an application WPAR, a configuration profile is stored in the WPAR database. You can export this profile to create a specification file that contains the exact same configuration information for that WPAR. All WPARs must be created by an authorized administrator in the global environment.
Application WPARs provide an environment that isolates applications and their resources to enable checkpoint, restart, and relocation at the application level. They use fewer system resources than system WPARs, and they do not require their own instances of system services.
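For example, the following hedged commands create and start an application WPAR that runs a single command and then list the WPARs on the system; the WPAR name and command are placeholders.
wparexec -n myappwpar /usr/bin/ps -ef
lswpar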
For more information, see IBM Workload Partitions for AIX, found at:
5.6 Management tools on Power servers
This section introduces some of the tools that are available to manage your Power server environment.
5.6.1 Performance and Capacity Monitor
The Performance and Capacity Monitor (PCM) is an HMC GUI that displays performance and capacity data for managed servers and LPARs.
The PCM displays data for a single physical server in a new browser window. The PCM allows the HMC to gather performance data so that a system administrator can monitor current performance and capacity changes in their Power server environment over time.
For more information, see Performance and Capacity Monitoring, found at:
5.6.2 The nmon analyzer
The nmon_analyser tool is helpful in analyzing performance data that is captured by using the nmon performance tool.
For more information, see nmon_analyser: A no-charge tool for producing AIX performance reports, found at:
5.6.3 Microcode Discovery Service
Microcode Discovery Service (MDS) is used to determine whether the microcode on the Power server is at the latest level. MDS relies on an AIX utility that is called Inventory Scout. Inventory Scout is installed by default on all AIX LPARs.
For more information, see Microcode Discovery Service - Overview, found at:
The best way to obtain an MDS microcode report is to upload an Inventory Scout survey file for analysis.
For more information, see Microcode Discovery Service - MDS and Inventory Scout, found at:
5.6.4 Fix Level Recommendation Tool
The Fix Level Recommendation Tool (FLRT) provides cross-product compatibility information and fix recommendations for IBM products. Use FLRT to plan upgrades of key components or verify the current health of a system. Enter your current levels of firmware and software to receive a recommendation.
For more information, see FLRT, found at:
FLRT Live Partition Mobility report
For LPM environments, the FLRT LPM report provides recommendations for LPM operations based on entered values for source and target systems. These recommendations might include fixes for known LPM issues.
For more information, see Live Partition Mobility Recommendations, found at:
Fix Level Recommendation Tool Vulnerability Checker
The Fix Level Recommendation Tool Vulnerability Checker (FLRTVC) script provides security and HIPER reports based on the inventory of your system. The FLRTVC script is a ksh script that uses FLRT security and HIPER data (a CSV file) to compare the installed file sets and interim fixes against known vulnerabilities and HIPER issues.
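The script compares standard AIX inventory data against the FLRT security and HIPER data. The following hedged example gathers that inventory on the target system; the output file names are placeholders.
# Installed file sets in colon-separated format
lslpp -Lqc > lslpp.txt
# Installed interim fixes
emgr -lv3 > emgr.txt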
For more information, see FLRTVC Script and Documentation, found at:
5.7 PowerVC
IBM PowerVC is built on OpenStack to provide simplified virtualization management and cloud deployments for IBM AIX, IBM i, and Linux VMs. It can build private cloud capabilities on Power servers and improve administrator productivity. It can further integrate with cloud environments through higher-level cloud orchestrators.
PowerVC provides several benefits:
Fast deployment, which saves time and IT costs through simple installation and configuration.
Lower training costs and no need for specialized skills because of an intuitive user interface.
Cost savings and less demand on IT through resource pooling and placement policies.
Host grouping, which provides separate policy-based control for a subset of the total managed resources.
Policy-based automation for active workload balancing within a host group.
PowerVC offers different capabilities that depend on the edition that is installed and the hypervisor that you are using to manage your systems. For a comparison of major features and capabilities of PowerVC editions, see Feature support for PowerVC, found at:
PowerVC enables you to capture and import images that you can deploy as VMs. An image consists of metadata and one or more binary images, one of which must be a bootable disk. To create a VM in PowerVC, you must deploy an image. For more information, see Working with images, found at:
For the most recent enhancements to PowerVC, see What’s new, found at:
For more information about PowerVC, see PowerVC 2.1.0, found at: