Introduction to IBM Platform Computing
In this chapter, we introduce IBM Platform Computing and how it became part of IBM. We also introduce the overall benefits of the products for client clusters and grids and for High Performance Computing (HPC) clouds.
This chapter describes these topics:
1.1 IBM Platform Computing solutions value proposition
We can identify two user segments within the technical computing market. One segment consists of the business and application users who try to make their applications meet business demands. The second segment is the IT organization, at either the departmental or the corporate level, that tries to provide the IT support to run these applications more efficiently.
On the business and application user side, applications are becoming more complex. One example is risk management simulations, where improving results means using more complex algorithms or adding more data.
All this complexity drives the need for more IT resources. Clients cannot obtain these resources because they cannot fund them, so their business opportunities are constrained. This is the demand side.
On the supply side, IT organizations set up siloed data centers for different application groups to guarantee service levels and availability when they are needed. This infrastructure is typically suboptimal: peak workload requirements drive its overall size, so it is over-provisioned for normal operation. Unfortunately, the IT organization is constrained by capital expenditure, so it cannot add more hardware.
Organizations are trying to address this gap in different ways. You can take advantage of new technologies, such as graphics processing units (GPUs), or you can move to a shared computing environment to simplify operating complexity. A shared computing environment normalizes demand across multiple groups. It effectively gives each group access to a much larger IT infrastructure than it could fund on its own, providing a portfolio effect across all the demands.
Also, the overall IT resources are fairly static, and clients want to be able to burst out to cloud service providers. If clients have short-term needs, they can temporarily increase the resource pool without keeping the resources on a long-term basis.
There are many demands on the business and user side, and a finite set of resources on the IT side. How do you make these two sides fit together better without increasing costs?
IBM Platform Computing solutions deliver the power of sharing for technical computing and analytics in distributed computing environments.
This shared services model breaks through the concept of a siloed application environment and creates a shared grid that can be used by multiple groups. This shared services model offers many benefits but it is also a complex process to manage. At a high level, we provide four key capabilities across all our solutions:
The creation of shared resource pools, for both compute-intensive and data-intensive applications. These pools are heterogeneous in nature, spanning physical, virtual, and cloud components, and are transparent to the users. The users do not know that they are using a shared grid; they know only that they can access all the resources that they need, when they need them, and in the correct mix.
Shared services are delivered across multiple user groups and sites and, in many cases, are global. We work with many types of applications. This flexibility is important to break down the silos that exist within an organization. We provide much of the governance to ensure that you have the correct security and prioritization and all the reporting and analytics to help you administer and manage these environments.
Workload management applies policies on the demand side to ensure that the right workloads get the right priorities, and then places them on the right resources. Because we understand both the demand side and the supply side, we can apply the right scheduling algorithm to optimize the overall environment and deliver service level agreements (SLAs), with all the automation and workflow. If you have workloads that depend on each other, you can coordinate these workflows to achieve high utilization of the overall resource pool.
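The idea of matching prioritized workload demand to a shared pool of resources can be sketched as a toy priority scheduler. This is a minimal illustration of the concept only, not the actual Platform LSF scheduling algorithm; all names and numbers are hypothetical:

```python
import heapq

class Scheduler:
    """Toy priority scheduler: match queued jobs to free compute slots."""

    def __init__(self, total_slots):
        self.free_slots = total_slots
        self.queue = []  # min-heap of (priority, slots, name); lower number = higher priority

    def submit(self, priority, slots, name):
        heapq.heappush(self.queue, (priority, slots, name))

    def dispatch(self):
        """Start jobs in priority order while resources last; requeue the rest."""
        started = []
        deferred = []
        while self.queue:
            priority, slots, name = heapq.heappop(self.queue)
            if slots <= self.free_slots:
                self.free_slots -= slots
                started.append(name)
            else:
                deferred.append((priority, slots, name))
        for job in deferred:  # jobs that did not fit stay queued
            heapq.heappush(self.queue, job)
        return started

sched = Scheduler(total_slots=8)
sched.submit(priority=1, slots=4, name="risk-sim")
sched.submit(priority=2, slots=6, name="cad-render")
sched.submit(priority=3, slots=2, name="report")
started = sched.dispatch()
print(started)  # ['risk-sim', 'report']: cad-render needs 6 slots but only 4 remain
```

A real workload manager layers fair-share policies, preemption, and resource matching on top of this basic priority ordering, but the demand/supply matching loop is the same in spirit.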
The transformation of static infrastructure into dynamic infrastructure. If you have undedicated hardware, such as a server or a desktop, you can bring it into the overall resource pool in a smart manner. We can burst workloads either internally or externally to third-party clouds. We work across multiple hypervisors to take advantage of virtualization where it makes sense. You can change the nature of the resources, depending on the workload queue, to optimize the overall throughput of the shared system.
1.1.1 Benefits
IBM Platform Computing is software that manages complex calculations, either compute-intensive or data-intensive in nature, on a large network of computers by optimizing the workload across all resources and greatly reducing the time to results.
IBM Platform Computing software offers several key benefits:
Increased utilization because we reduce the number of IT silos throughout the organization
Improved throughput in the amount of work that can be done by these networks of computers
Increased service levels by reducing the number of errors
Better IT agility of the organization
One of the key concepts is shared, distributed computing. Distributed computing is the network of computers. Sharing brings multiple groups together to use one large network of computers without increasing cost. This concept is a key message for many CFOs and CEOs: it is all about being able to do more, effectively increasing computing output, without increasing cost.
We see this concept in two main areas. One area is scientific or engineering applications that are used for product design or breakthrough science. The other area is large complex commercial tasks that are increasingly seen in industries, such as financial services or risk management. These tasks are necessary for banks and insurers that need complex analytics on larger data sets.
Within financial services, we help people make better decisions in real time with pre-trade analysis and risk management applications.
In the semiconductor and electronics space within electronic design automation (EDA), we help clients get to market faster by providing the simulation analysis that they need on their designs.
Within the industrial manufacturing space, we help people create better product designs by powering the environments behind computer-aided design (CAD), computer-aided engineering (CAE), and OMG Model-Driven Architecture (MDA) applications.
In life sciences, it is all about faster drug development and time to results even with genomic sequencing.
The oil and gas shared applications are seismic and reservoir simulation applications where we can provide faster time to results for discovering reserves and identifying how to exploit other producing reservoirs.
1.1.2 Cluster, grids, and clouds
A cluster typically serves a single application or a single group. As clusters grew to span multiple applications, multiple groups, and multiple locations, they became more of a grid, which required more advanced policy-based scheduling to manage.
Now that we are in the era of cloud, it is all about using a much more dynamic infrastructure, with concepts such as on-demand self-service. When we start thinking about cloud, it is also interesting that many of our grid clients already considered their grids to be clouds. The evolution continues with the ability of the platform to manage the heterogeneous complexities of distributed computing, a management capability with many applications in the cloud. Figure 1-1 shows the cluster, grid, and HPC cloud evolution.
Figure 1-1 Evolution of distributed computing
Figure 1-1 illustrates the transition from cluster to grid to clouds and how the expertise of IBM in each of these categories gives the IBM Platform Computing solutions a natural position as the market moves into the next phase.
It is interesting to see the evolution of the types of workloads that moved from the world of HPC into financial services, with concepts such as risk analytics, risk management, and business intelligence (BI). Within our installation base, data-intensive and analytical applications are increasingly adopted.
The application workload types become more complex as people move from clusters to grids and into the much more dynamic infrastructure of cloud. We also see the evolution of cloud computing for HPC and private cloud management across a Fortune 2000 installation base.
This evolution occurs in many different industries, from life sciences to computing and engineering, defense, and digital content. There is good applicability for anyone that needs more compute capacity, and for addressing more complex data tasks where you do not want to move the data but might want to move the compute to it for data affinity. How do you bring it all together and manage this complexity? The ability of IBM Platform Computing solutions to span all of these areas differentiates them in the marketplace.
IBM Platform Computing solutions are widely viewed as the industry standard for computational-intensive design, manufacturing, and research applications.
IBM Platform Computing is the vendor of choice for mission-critical applications: large-scale, complex applications and workloads in heterogeneous environments. IBM Platform Computing is enterprise-proven, with an almost 20-year history of working with the largest companies in the most complex situations and a robust record of managing large-scale distributed computing environments for proven results.
1.2 History
Platform Computing was founded in 1992 in Toronto, Canada. Its flagship product was Platform Load Sharing Facility (LSF®), an advanced batch workload scheduler. Between 1992 and 2000, Platform LSF emerged as the premier batch scheduling system for Microsoft Windows and UNIX environments, with major installations in aerospace, automotive, pharmaceutical, energy, and advanced research facilities. Platform Computing was an early adopter and leader in Grid technologies. In 2001, it introduced Platform Symphony® for managing online workloads.
In 2004, Platform recognized the emergence of Linux with the introduction of the Platform Open Cluster Stack in partnership with the San Diego Supercomputing Center at the University of California. In 2006, Platform increased its focus on HPC with Platform LSF 7 and targeted HPC offerings. In 2007 and 2008, Platform acquired the Scali Manage and Scali Message Passing Interface (MPI) products. In 2009, it acquired HP MPI, combining two of the strongest commercial MPI offerings.
In 2011, Platform brought its high performance, high availability approach to the rapidly expanding field of Map-Reduce applications with the release of Platform MapReduce and commercial support for the Apache Hadoop Distributed File System (HDFS).
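The Map-Reduce model behind these products splits work into a map phase that emits key-value pairs and a reduce phase that aggregates them, with a shuffle step in between. A minimal single-process word-count illustration of the model follows; this is generic Python showing the paradigm, not the Platform MapReduce or Hadoop API:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values into a final count."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["grid cloud grid", "cloud cluster"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'grid': 2, 'cloud': 2, 'cluster': 1}
```

In a real Map-Reduce deployment, the map and reduce functions run in parallel across the cluster and the shuffle moves data over the network, which is where a workload manager earns its keep.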
Platform Computing is positioned as a market leader in middleware and infrastructure management software for mission-critical technical computing and analytics environments. It has over 2,000 global clients, including 23 of the 30 largest enterprises, with financial services, where it serves 60% of the top firms, as a key vertical.
The key benefit of IBM Platform Computing solutions is the ability to simultaneously increase infrastructure utilization, service levels, and throughput, and reduce costs on that heterogeneous, shared infrastructure.
IBM Platform Computing applications are diverse. They are everywhere from HPC to technical computing and analytics. There are offerings that scale from single sites to the largest global network grids, and multiple grids as well.
1.2.1 IBM acquisition
On 11 October 2011, IBM and Platform Computing announced an agreement for Platform to become part of the IBM Corporation. The completion of this acquisition was announced on 9 January 2012, and Platform Computing became part of the IBM Systems and Technology Group, System Software brand. Integration went quickly and the first round of IBM branded Platform Software products were announced on 4 June 2012. This IBM Redbooks publication is based on these IBM branded releases of the Platform software products.