IBM Platform Computing Message Passing Interface
This appendix introduces IBM Platform Computing Message Passing Interface (MPI) and how it is implemented.
The following topics are covered:
- IBM Platform MPI
- IBM Platform MPI implementation
IBM Platform MPI
IBM Platform MPI is a high-performance, production-quality implementation of the Message Passing Interface standard. It fully complies with the MPI-2.2 standard and provides enhancements over other implementations, such as low-latency and high-bandwidth point-to-point and collective communication routines. IBM Platform MPI 8.3 for Linux is supported on Intel/AMD x86 32-bit, AMD Opteron, and EM64T servers that run the CentOS 5, Red Hat Enterprise Linux AS 4, 5, and 6, and SUSE Linux Enterprise Server 9, 10, and 11 operating systems.
For more information about IBM Platform MPI, see the IBM Platform MPI User’s Guide, SC27-4758-00.
IBM Platform MPI implementation
To install IBM Platform MPI, you need to download the installation package. The package contains a single self-extracting script that, when run, decompresses itself and installs the MPI files in the designated location. There is no separate installation manual, but none is needed: installation is as simple as running the script in the installation package.
 
Help: For details about how to use the installation script, run:
sh platform_mpi-08.3.0.0-0320r.x64.sh -help
When you install IBM Platform MPI, even if you pass an installation directory to the script, all files are installed under an opt/ibm/platform_mpi subdirectory of that location. Example A-1 shows the log of a successful installation. This example provides the shared directory /gpfs/fs1 as the install root, so after the installation, the files are available at /gpfs/fs1/opt/ibm/platform_mpi.
Example A-1 IBM Platform MPI - Install log
[root@i05n45 PlatformMPI]# sh platform_mpi-08.3.0.0-0320r.x64.sh -installdir=/gpfs/fs1 -norpm
 
Verifying archive integrity... All good.
Uncompressing platform_mpi-08.3.0.0-0316r.x64.sh......
Logging to /tmp/ibm_platform_mpi_install.JS36
International Program License Agreement
 
Part 1 - General Terms
 
BY DOWNLOADING, INSTALLING, COPYING, ACCESSING, CLICKING ON
AN "ACCEPT" BUTTON, OR OTHERWISE USING THE PROGRAM,
LICENSEE AGREES TO THE TERMS OF THIS AGREEMENT. IF YOU ARE
ACCEPTING THESE TERMS ON BEHALF OF LICENSEE, YOU REPRESENT
AND WARRANT THAT YOU HAVE FULL AUTHORITY TO BIND LICENSEE
TO THESE TERMS. IF YOU DO NOT AGREE TO THESE TERMS,
 
* DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, CLICK ON AN
"ACCEPT" BUTTON, OR USE THE PROGRAM; AND
 
* PROMPTLY RETURN THE UNUSED MEDIA, DOCUMENTATION, AND
PROOF OF ENTITLEMENT TO THE PARTY FROM WHOM IT WAS OBTAINED
 
Press Enter to continue viewing the license agreement, or
enter "1" to accept the agreement, "2" to decline it, "3"
to print it, or "99" to go back to the previous screen.
1
Installing IBM Platform MPI to /gpfs/fs1/
Installation completed.
When you install IBM Platform MPI on a shared directory of a cluster, avoid using the local RPM database of the server where you run the installation. The -norpm option extracts all of the files to the installation directory and disables interaction with the local RPM database.
If you are not installing IBM Platform MPI on a shared directory, you need to install it on every host of the cluster that will run applications that use MPI. The installation must be done in the same directory on all hosts.
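One way to script such a per-host installation is a loop over the host list. The sketch below is only an illustration, not part of the product: the host names and package path are examples borrowed from this appendix, and the loop prints the commands instead of executing them (a dry run). Remove the echo prefixes to actually copy and run the installer.

```shell
# Hypothetical host list; replace with the hosts of your cluster.
HOSTS="i05n47 i05n48"
# Installer package as named in this appendix.
PKG=platform_mpi-08.3.0.0-0320r.x64.sh

for host in $HOSTS; do
    # Copy the installer to each host, then run it there so that
    # every host installs into the same directory. Dry run: the
    # commands are printed, not executed.
    echo scp "$PKG" "$host:/tmp/$PKG"
    echo ssh -x "$host" sh "/tmp/$PKG" -norpm
done
```

Because the script installs under opt/ibm/platform_mpi relative to the install root, running it identically on every host satisfies the same-directory requirement.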
Before you can start using IBM Platform MPI, you need to configure your environment. By default, MPI uses Secure Shell (ssh) to connect to other hosts, so if you want to use a different command, you need to set the environment variable MPI_REMSH. Example A-2 shows how to set up your environment and run hello_world.c (an example program that ships with IBM Platform MPI) on the cluster with four-way parallelism. The application runs on hosts i05n47 and i05n48 of our cluster.
Example A-2 IBM Platform MPI - Running a parallel application
[root@i05n49 PlatformMPI]# export MPI_REMSH="ssh -x"
[root@i05n49 PlatformMPI]# export MPI_ROOT=/gpfs/fs1/opt/ibm/platform_mpi
[root@i05n49 PlatformMPI]# /gpfs/fs1/opt/ibm/platform_mpi/bin/mpicc -o /gpfs/fs1/helloworld /gpfs/fs1/opt/ibm/platform_mpi/help/hello_world.c
[root@i05n49 PlatformMPI]# cat appfile
-h i05n47 -np 2 /gpfs/fs1/helloworld
-h i05n48 -np 2 /gpfs/fs1/helloworld
[root@i05n49 PlatformMPI]# /gpfs/fs1/opt/ibm/platform_mpi/bin/mpirun -f appfile
Hello world! I'm 1 of 4 on i05n47
Hello world! I'm 0 of 4 on i05n47
Hello world! I'm 2 of 4 on i05n48
Hello world! I'm 3 of 4 on i05n48