Chapter 15. Virtualizing the Computing Environment

In This Chapter

  • Seeing how virtualization evolved

  • Knowing how virtualization works

  • Dealing with management issues

  • Moving virtualization to the cloud

Why are we putting virtualization and cloud computing together in a discussion of service management?

Virtualization (using computer resources to imitate other computer resources or even whole computers) is one of the technical foundations of cloud computing (providing computing services via the Internet). We think that these two concepts are important to the data center and its destiny.

In this chapter, we present an overview of virtualization: what it means and how it is structured. We follow that discussion by explaining cloud computing. We also look at how the combination of virtualization and cloud computing is transforming the way services are managed.

Understanding Virtualization

Many companies have adopted virtualization as a way to gain efficiency and manageability in their data center environments. Virtualization has become a pragmatic way for organizations to shrink their server farms.

Essentially, virtualization decouples the software from the hardware: the software (an operating system and the applications that run on it) is put in a separate container so that it's isolated from the physical machine it runs on.

Virtualization comes in many forms, because one resource is emulating (imitating) another resource. Here are some examples:

  • Virtual memory: PCs have virtual memory, which is an area of the disk that's used as though it were memory. The computer uses virtual memory to manage real memory more efficiently: it simply puts information that won't be used for a while on disk, freeing memory space. Although disks are very slow in comparison with memory, the user may never notice the difference, especially if the system does a good job of managing virtual memory. The substitution works surprisingly well.

  • Software: Companies have built software that can emulate a whole computer. That way, one computer can work as though it were actually 20 computers. If you have 1,000 computers and can reduce the number to 50, the gain is very significant. This reduction results in less money spent not only on computers, but also on power, air conditioning, maintenance, and floor space.

Note

In a world in which almost everything is a service, virtualization is a fundamental mechanism for delivering services. Indeed, virtualization provides a platform for optimizing complex IT resources in a scalable manner (in a way that can grow efficiently), which is ideal for delivering services.

We can summarize the nature of virtualization with three terms:

  • Partitioning: In virtualization, many applications and operating systems (OSes) are supported within a single physical system by partitioning (separating) the available resources.

  • Isolation: Each virtual machine is isolated from its host physical system and from other virtualized machines. If one virtual instance crashes, the other virtual machines aren't affected, and data isn't shared between one virtual container and another.

  • Encapsulation: A virtual machine can be represented (and even stored) as a single file, so you can identify it easily based on the service it provides. In essence, an encapsulated virtual machine can be presented to an application as a complete entity representing a business service. Therefore, encapsulation can protect each application so that it doesn't interfere with another application. (The sketch that follows makes this idea concrete.)
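
To make encapsulation concrete, here's a minimal sketch in Python. The file format and field names are our own invention for illustration, not any vendor's; the point is simply that a single file can carry everything needed to re-create a virtual machine:

  import json
  from dataclasses import dataclass, asdict

  @dataclass
  class VirtualMachine:
      """A hypothetical, simplified virtual machine definition."""
      name: str         # identifies the business service the VM provides
      vcpus: int        # virtual CPUs allocated
      memory_mb: int    # memory allocated, in megabytes
      disk_image: str   # path to the encapsulated disk file

  def save_vm(vm: VirtualMachine, path: str) -> None:
      """Encapsulation: the whole machine definition lives in one file."""
      with open(path, "w") as f:
          json.dump(asdict(vm), f, indent=2)

  def load_vm(path: str) -> VirtualMachine:
      """That single file is enough to re-create the machine object."""
      with open(path) as f:
          return VirtualMachine(**json.load(f))

  vm = VirtualMachine("payroll-service", vcpus=2, memory_mb=4096,
                      disk_image="/vmstore/payroll.img")
  save_vm(vm, "payroll-service.vm.json")
  print(load_vm("payroll-service.vm.json"))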


Using a hypervisor in virtualization

If you've read about virtualization, you've bumped into the term hypervisor. You may have found this word to be a little scary. (We did when we first read it.) The concept isn't technically complicated, however.

Note

A hypervisor is an operating system, but more like the kind that runs on mainframes than like Windows, for example. You need one if you're going to create a virtual machine. One twist: The hypervisor can load an OS as though that OS were simply an application. In fact, the hypervisor can load many operating systems that way.

Note

You should understand the nature of the hypervisor. It's designed like a mainframe OS because it schedules the amount of access that these guest OSes have to the CPU; to memory; to disk I/O; and, in fact, to any other I/O mechanisms. You can set up the hypervisor to split the physical computer's resources. Resources can be split 50–50 or 80–20 between two guest OSes, for example.

The beauty of this arrangement is that the hypervisor does all the heavy lifting. The guest OS doesn't have any idea that it's running in a virtual partition; it thinks that it has a computer all to itself.
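
Here's a minimal sketch in Python of the proportional-share idea behind such a split. The function and its weights are hypothetical rather than any vendor's scheduler, but they show how an 80-20 split falls out of relative weights:

  def cpu_shares(guest_weights: dict[str, int],
                 interval_ms: int = 100) -> dict[str, float]:
      """Split a scheduling interval among guest OSes in proportion
      to their weights (a hypothetical proportional-share scheduler)."""
      total_weight = sum(guest_weights.values())
      return {guest: interval_ms * weight / total_weight
              for guest, weight in guest_weights.items()}

  # An 80-20 split between two guest OSes over a 100 ms interval:
  print(cpu_shares({"guest-a": 80, "guest-b": 20}))
  # {'guest-a': 80.0, 'guest-b': 20.0}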

Note

Hypervisors come in several types:

  • Native hypervisors, which sit directly on the hardware platform

  • Embedded hypervisors, which are integrated into a processor on a separate chip

  • Hosted hypervisors, which run as a distinct software layer above both the hardware and the OS

Abstracting hardware assets

One of the benefits of virtualization is the way that it abstracts hardware assets, in essence allowing a single piece of hardware to be used for multiple tasks.

The following list summarizes hardware abstraction and its management:

  • File system virtualization: Virtual machines can access different file systems and storage resources via a common interface.

  • Virtual symmetric multiprocessing: A single virtual machine can use multiple physical processors simultaneously, so one virtual machine can take on work that would otherwise call for several physical servers.

  • Virtual high availability support: If a virtual machine fails, that virtual machine needs to restart on another server automatically.

  • Distributed resource scheduler: You could think of the scheduler as being the super-hypervisor that manages all the other hypervisors. This mechanism assigns and balances computing capability dynamically across a collection of hardware resources that support the virtual machines, so a workload can be moved to a different resource when one becomes available. (See the sketch after this list.)

  • Virtual infrastructure client console: This console provides an interface that allows administrators to connect remotely to virtual center management servers or to an individual hypervisor so that the server and the hypervisor can be managed manually.
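
The following Python sketch shows the kind of placement decision a distributed resource scheduler makes: put the next virtual machine on the least-loaded host that can still hold it. The host names, loads, and capacity figure are made up for illustration; real schedulers weigh many more factors, such as memory, I/O, and affinity rules:

  def place_vm(hosts: dict[str, int], vm_load: int,
               capacity: int = 100) -> str:
      """Assign a VM to the least-loaded host that can still fit it.
      hosts maps host name -> current load (here, percent of capacity)."""
      candidates = {h: load for h, load in hosts.items()
                    if load + vm_load <= capacity}
      if not candidates:
          raise RuntimeError("no host has spare capacity for this VM")
      target = min(candidates, key=candidates.get)
      hosts[target] += vm_load   # record the new load on the chosen host
      return target

  hosts = {"host-a": 70, "host-b": 35, "host-c": 50}
  print(place_vm(hosts, vm_load=20))   # -> 'host-b', the least-loaded host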

Managing Virtualization

To manage virtualization, you must keep track of where everything is, what everything has to accomplish, and for what purpose. You must also do the following things:

  • Know and understand the relationships among all elements of the network.

  • Be able to change things dynamically when elements within this universe change.

  • Keep the placement of virtual resources in step with all the other information held in the configuration management database (CMDB). Given that few organizations have anything approaching a comprehensive CMDB, that's asking for a lot. In fact, the CMDB needs to know how all service management capabilities are integrated. (For more information on the CMDB, see Chapter 9.)

Foundational issues

Managing a virtual environment involves some foundational issues that determine how well the components function as a system. These issues include how licenses are managed, how workloads are controlled, and how the network itself is managed. The reality is that most IT organizations sit somewhere between static virtualization and the dream of fully automated, dynamic virtualization. We discuss some foundational issues in the following sections.

License management

Warning

Many license agreements tie license fees to physical servers rather than to virtual servers. Resolve these license issues before using the associated software in a virtual environment; otherwise, the constraints of such licenses may become an obstacle to efficiency.

Service levels

Measuring, managing, and maintaining service levels can become more complicated simply because the environment itself is more complex.

Network management

The real target of network management becomes the virtual network, which may be harder to manage than the physical network.

Workload administration

Set policies to determine how new resources can be provisioned, and under what circumstances. Before a new resource can be introduced, it needs to be approved by management. Also, the administrator has to be sure that the right security policies are included.
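
Here's a minimal sketch in Python of such a policy gate. The field names and the required security policies are hypothetical; the point is that provisioning is refused unless approval has been given and the right policies are attached:

  REQUIRED_POLICIES = {"patching", "firewall", "logging"}  # hypothetical set

  def may_provision(request: dict) -> bool:
      """Allow provisioning only with management approval and with
      every required security policy attached (a hypothetical check)."""
      approved = request.get("approved_by") is not None
      policies_ok = REQUIRED_POLICIES <= set(request.get("policies", []))
      return approved and policies_ok

  request = {"vm": "web-42", "approved_by": "ops-manager",
             "policies": ["patching", "firewall", "logging"]}
  print(may_provision(request))   # True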

Capacity planning

Although it's convenient to think that all servers deliver roughly the same capacity, they don't. With virtualization, you have more control of hardware purchases and can plan network resources accordingly.

IT process workflow

In virtualization, the workflow among different support groups in the data center changes; adjust procedures gradually.

Abstraction layer

Managing virtualization requires an abstraction layer that sits between the virtualized environment and the physical storage subsystems, hiding and managing the physical details. The virtualization software needs to be able to present the whole storage resource to the virtualized environment as a unified, sharable resource. That process can be more difficult than it sounds. All the administrative functions that you'd need in a physical data center have to be deployed in a virtualized environment, for example. Following are some of the most important considerations:

  • You have to factor in backup, recovery, and disaster recovery. Virtualized storage can be used to reinforce or replace existing backup and recovery capabilities. It can also create mirrored systems (duplicates of all system components) and, thus, might participate in disaster-recovery plans.

  • Warning

    You can back up whole virtual machines or collections of virtual machines in any given state as disk files. This technique is particularly useful in a virtualized environment after you change applications or complete configurations. You must test (and, therefore, simulate) this configuration before putting it in a production environment. (A minimal snapshot sketch appears after this list.)

  • You must manage the service levels of the applications running in a virtualized environment. The actual information delay from disk varies for data held locally, data held on a storage area network (SAN), and data held on network-attached storage (NAS), and the delay differences may matter. Test different storage options against service levels.

    Tip

    For more information on SANs, see Storage Area Networks For Dummies, 2nd Edition, by Christopher Poelker and Alex Nikitin (Wiley Publishing, Inc.).

  • In the long run, establish capacity planning to support the likely growth of the resource requirement for any application (or virtual machine).
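
Here's a minimal Python sketch of the snapshot idea mentioned in this list: copying a virtual machine's disk file into a timestamped backup. It assumes the machine is already in a quiet, consistent state; a real backup tool would handle that step itself:

  import shutil
  from datetime import datetime, timezone
  from pathlib import Path

  def snapshot_vm(disk_file: str, backup_dir: str) -> Path:
      """Copy a VM's disk file to a timestamped backup location."""
      src = Path(disk_file)
      stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
      dest = Path(backup_dir) / f"{src.stem}-{stamp}{src.suffix}"
      dest.parent.mkdir(parents=True, exist_ok=True)
      shutil.copy2(src, dest)   # copy2 preserves file metadata
      return dest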

Provisioning software

Provisioning software enables the manual adjustment of the virtualized environment. Using provisioning software, you can create new virtual machines and modify existing ones to add or reduce resources. This type of provisioning is essential to managing workloads and to moving applications and services from one physical environment to another.

Provisioning software enables management to prioritize actions based on a company's key performance indicators. It enables the following (sketched in code after this list):

  • Migration of running virtual machines from one physical server to another

  • Automatic restart of a failed virtual machine on a separate physical server

  • Clustering of virtual machines across different physical servers
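
The following Python sketch models those three operations on a hypothetical Provisioner class that tracks which physical server each virtual machine runs on. Real provisioning software does far more (moving memory state, booting images, and so on), but the shape of the operations is the same:

  class Provisioner:
      """Hypothetical sketch of the three provisioning operations above."""

      def __init__(self, placement: dict[str, str]):
          self.placement = placement   # VM name -> physical server

      def migrate(self, vm: str, target_server: str) -> None:
          """Move a running VM to another physical server."""
          self.placement[vm] = target_server

      def restart_failed(self, vm: str, standby_server: str) -> None:
          """Restart a failed VM on a separate physical server."""
          self.placement[vm] = standby_server

      def cluster(self, vms: list[str], servers: list[str]) -> None:
          """Spread a set of VMs across different physical servers."""
          for vm, server in zip(vms, servers):
              self.placement[vm] = server

  p = Provisioner({"erp": "host-a", "mail": "host-a"})
  p.migrate("erp", "host-b")
  p.restart_failed("mail", "host-c")
  print(p.placement)   # {'erp': 'host-b', 'mail': 'host-c'}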

Note

Managing data center resources is hard under any circumstance — and even harder when those resources are running in virtual partitions. These managed resources need to provide the right level of performance, accountability, and predictability to users, suppliers, and customers. Virtualization must be managed carefully.

Virtualizing storage

Increasingly, organizations also need to virtualize storage. This trend currently works in favor of NASes rather than SANs, because a NAS is less expensive and more flexible than a SAN.

Note

Because the virtualized environment has at least the same requirements as the traditional data center in terms of the actual amount of data stored, managing virtualized storage becomes very important.

In addition to application data, virtual machine images need to be stored. When virtual machines aren't in use, they're stored as disk files that can be instantiated at a moment's notice. Consequently, you need a way to store virtual-machine images centrally.
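
Here's a minimal sketch in Python of such a central image library. The class and its on-disk layout are our own invention; an enterprise image library would add versioning, access control, and deduplication:

  import shutil
  from pathlib import Path

  class ImageLibrary:
      """Hypothetical central store for virtual machine images."""

      def __init__(self, root: str):
          self.root = Path(root)
          self.root.mkdir(parents=True, exist_ok=True)

      def check_in(self, image_file: str, name: str) -> Path:
          """Store a VM image centrally under a well-known name."""
          dest = self.root / f"{name}.img"
          shutil.copy2(image_file, dest)
          return dest

      def check_out(self, name: str, target_dir: str) -> Path:
          """Copy an image out so a hypervisor can instantiate it."""
          dest = Path(target_dir) / f"{name}.img"
          shutil.copy2(self.root / f"{name}.img", dest)
          return dest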

Hardware provisioning

Before virtualization, hardware provisioning was simply a matter of commissioning new hardware and configuring it to run new applications or possibly repurposing hardware to run some new application.

Virtualization makes this process a little simpler in one way: You don't have to link the setup of new hardware to the instantiation of a new application. Now you can add a server to the pool and enable it to run virtual machines. Thereafter, those virtual machines are ready as they're needed. When you add a new application, you simply configure it to run on a virtual machine.

Warning

Provisioning is now the act of allocating a virtual machine to a specific server from a central console. Be aware of a catch, however: You can run into trouble if you go too far. You may decide to virtualize entire sets of applications and virtualize the servers that those applications are running on, for example. Although you may get some optimization, you also create too many silos that are too hard to manage. (For more information on silos, see the nearby sidebar "Static versus dynamic virtualization.") You may have optimized your environment so much that you have no room to accommodate peak loads.

Note

The hypervisor (refer to "Using a hypervisor in virtualization," earlier in this chapter) lets a physical server run many virtual machines at the same time. In a sense, one server does the work of maybe ten. That arrangement is a neat one, but you may not be able to shift those kinds of workloads without consequences. A server running 20 virtual machines, for example, may still have the same network connection with the same traffic limitation, which could act as a bottleneck. Alternatively, if all those applications use local disks, many of them may need to use a SAN or NAS — and that requirement may have performance implications.
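
The arithmetic behind that network bottleneck is simple, as this small Python sketch shows (the link speed and VM count are just example figures):

  def per_vm_bandwidth_mbps(link_gbps: float, vm_count: int) -> float:
      """Bandwidth each VM gets if all share one link equally, in Mbps."""
      return link_gbps * 1000 / vm_count

  # 20 VMs behind a single 1 Gbps connection: at best about 50 Mbps each.
  print(per_vm_bandwidth_mbps(1.0, 20))   # 50.0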

Security issues

Warning

Using virtual machines complicates IT security in a big way. Virtualization changes the definition of what a server is, so security is no longer trying to protect a physical server or collection of servers that an application runs on. Instead, it's protecting virtual machines or collections of virtual machines. Because most data centers support only static virtualization, it isn't yet well understood what will happen during dynamic virtualization. Definite issues have been identified, however, and we address several of them in the following sections.

Network monitoring

Current network defenses are based on physical networks. In the virtualized environment, the network is no longer physical; its configuration can actually change dynamically, which makes network monitoring difficult. To fix this problem, you must have software products that can monitor virtual networks and, ultimately, dynamic virtual networks.

Hypervisors

Warning

Just as an OS attack is possible, a hacker can take control of a hypervisor. If the hacker gains control of the hypervisor, he gains control of everything that it controls; therefore, he could do a lot of damage. (For more details, see "Using a hypervisor in virtualization," earlier in this chapter.)

Configuration and change management

The simple act of changing configurations or patching the software on virtual machines becomes much more complex if the software involved is locked away in virtual images, because in the virtual world, the software no longer sits at a fixed, static address where you can simply apply the update.

Perimeter security

Providing perimeter security such as firewalls in a virtual environment is a little more complicated than in a normal network, because some virtual servers are outside a firewall.

Tip

This problem may not be too hard to solve, because you can isolate the virtual resource spaces. This approach places a constraint on how provisioning is carried out, however.

Taking Virtualization into the Cloud

Virtualization, as a technique for achieving efficiency in the data center and on the desktop, is here to stay. As we indicate earlier in this chapter, virtualization is rapidly becoming a requirement for managing a data center from a service-delivery perspective. Despite the economies that virtualization provides, however, companies are seeking even better economies when they're available.

In particular, companies have increasing interest in cloud computing (see the following section), prompted by the assumption that cloud-computing providers may achieve more effective economies of scale than can be achieved in the data center. In some contexts, this assumption is correct.

If you like, you can think of cloud computing as being the next stage of development for virtualization. The problem for the data center is that workloads are very mixed; the data center needs to execute internal transactional systems, Web transactional systems, messaging systems such as e-mail and chat, business intelligence systems, document management systems, workflow systems, and so on. With cloud computing, you can pick your spot and focus on getting efficiency from a predictable workload.

From this somewhat manual approach, you can move to industrial virtualization by making it a repeatable platform. This move requires forethought, however. What would such a platform need?

Warning

For this use of resources to be effective, you must implement a full-service management platform so that resources are safe from all forms of risk. As in traditional systems, the virtualized environment must be protected:

  • The virtualized services offered must be secure.

  • The virtualized services must be backed up and recovered as though they're physical systems.

  • These resources need to have workload management, workflow, provisioning, and load balancing at the foundation to support the required type of customer experience.

Without this level of oversight, virtualization won't deliver the cost savings that it promises.

Defining cloud computing

Based on this background, we define cloud computing as a computing model that makes IT resources such as servers, storage, middleware, and business applications available as a service to business organizations in a self-service manner. Although all these characteristics are important, the key one is self-service.

Note

In a self-service model, organizations look at their IT infrastructure not as a collection of technologies needed for a specific project, but as a single resource space. The difference between the cloud and the traditional data center is that the cloud is inherently flexible. To work in the real world, the cloud needs three things:

  • Virtualization: Virtualization frees the resources offered in a self-service model from the kinds of physical constraints that they face in the corporate environment.

  • Automation: Automation means that the service is supported by an underlying platform that allows resources to be changed, moved, and managed without human intervention.

  • Standardization: Standardization is also key. Standardized processes and interfaces are required behind the scenes. Interoperability is an essential ingredient of the cloud environment.

When you bring these elements together, you have something very powerful.
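
Here's a minimal Python sketch of the three elements working together: a standardized request schema (standardization), fulfilled without human intervention (automation), yielding a virtualized resource (virtualization). The schema and field names are hypothetical:

  REQUEST_FIELDS = {"service", "vcpus", "memory_mb", "storage_gb"}

  def fulfill(request: dict) -> dict:
      """Validate a self-service request, then provision it automatically."""
      missing = REQUEST_FIELDS - request.keys()
      if missing:
          raise ValueError(f"non-standard request, missing: {missing}")
      # In a real platform, this step would call the provisioning layer.
      return {"status": "running", **request}

  print(fulfill({"service": "reporting", "vcpus": 4,
                 "memory_mb": 8192, "storage_gb": 200}))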

What type of cloud services will customers subscribe to? All the services that we describe as the foundation of virtualization (refer to "Understanding Virtualization," earlier in this chapter) are the same ones that you'd make available as part of the cloud. You want to be able to access CPU cycles, storage, networks, and applications, for example, or you may want to augment the physical environment with additional CPU cycles during a peak load. Alternatively, you may want to replace an entire data center with a virtualized data center that's based on a virtualized environment managed by a third-party company.

Cloud computing is in its very early stages. In fact, in many situations customers aren't even aware that they're using a cloud. Anyone who uses Google's Gmail service, for example, is leveraging a cloud, because Google's own search environment runs within its own cloud. In other situations, large corporations are experimenting with cloud computing as a potential way to transfer data center operations to a more flexible model.

Another example is Amazon.com, which sells access to CPU cycles and storage as a service of its cloud infrastructure. A customer may decide to use Amazon's cloud to test a brand-new application before buying hardware to run it, because renting is easier than owning.

In cloud environments, customers add CPU cycles or storage as their needs grow. They're protected from the details, but this protection doesn't happen by magic. The provider has to do a lot of work behind the scenes to manage this highly dynamic environment.

Using the cloud as utility computing

For decades, thinkers have talked about the day when we would move to utility computing as a normal model of managing computing resources. Computing power would be no different from electricity. When you need some extra light in a room, for example, you turn on the light switch, and the electric utility allocates more power to your house. You don't have an electrical grid in your home, and you don't have to acquire tools to tune the way that power is allocated to different rooms of your home. Like electrical power, computing power would be a highly managed utility.

Obviously, we're far from that scenario right now. The typical enterprise is filled with truly heterogeneous data centers, assorted servers, desktops, mobile devices, storage, networks, applications, and vast arrays of management infrastructures and tools. In fact, you may have been told that about 85 percent of these computing resources are underused.

In addition, at least 70 percent of the budget spent on IT keeps the current systems operational rather than focusing on customer service. The advent of cloud computing is changing all that. Organizations need to reduce risk; reduce costs; and improve overall service to their customers, suppliers, and partners. Most of all, they need to focus on the service levels of the primary transactions that define the business.

Warning

IT organizations that decide to proceed with business as usual are putting their companies at risk. Also, because most IT budgets aren't growing, meeting customer expectations and performance goals without violating the budget is imperative. In truth, the biggest problem that IT organizations have isn't just running data centers and the associated software, but managing the environment so that it meets the required level of service.

Veiling virtualization technology from the end user

Any vendor that wants to provide cloud services to its customers has a lot to live up to. All the virtualization technology that supports these requirements is hidden from the customer. Although the customer may expect to run a wide variety of software services on the cloud, she may have little, if any, input into the underlying services.

Cloud customers see only the interface to the resources. In this self-service mode, they have the freedom to expand and contract their services at will. Vendors providing cloud services have to provide a sophisticated service-level agreement (SLA) layer between the business and the IT organization. The vendors have a responsibility to provide management services, including a service desk to handle problems and real-time visibility of usage metrics and expenditures. For two reasons, it's the vendor's responsibility to provide a completely reliable level of customer service:

  • The customer has the freedom to move to another vendor's cloud if he isn't satisfied with the level of service.

  • Customers are using the cloud as a substitute for a data center; therefore, the cloud provider has a responsibility to both internal and external customers.

Overseeing and managing a cloud environment are complicated jobs. The provider of the cloud service must have all the management capabilities that are used in any virtualized environment. In addition, the provider must be able to monitor current use and anticipate how it may change. Therefore, the cloud environment needs to be able to provide new resources to a customer in real time. Also, a high level of management must be built into the platform. Much of that management needs to be autonomic — self-managing and self-correcting.

Note

Any sophisticated customer leveraging a cloud will want an SLA with the cloud provider. That customer also needs a mechanism to ensure that service levels are being met (via a full set of service management capabilities).
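
Here's a minimal Python sketch of one such check: testing measured response times against an agreed threshold. The numbers are hypothetical; a real SLA layer tracks many metrics (availability, throughput, incident response) over contractually defined windows:

  def sla_met(response_times_ms: list[float], threshold_ms: float = 500.0,
              target_fraction: float = 0.95) -> bool:
      """Hypothetical SLA test: 95% of requests must finish within 500 ms."""
      within = sum(1 for t in response_times_ms if t <= threshold_ms)
      return within / len(response_times_ms) >= target_fraction

  samples = [120, 340, 480, 90, 770, 210, 450, 300, 260, 410]
  print(sla_met(samples))   # False: only 9 of 10 samples met the threshold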
