CHAPTER 2

The Merger

Enterprise Business and IT Management

SUMMARY

Business and IT are two distinct disciplines. In most enterprises, they are also interdependent but not necessarily cooperative. Cloud service management decisions need both business and technical input to be made wisely. Cloud utility computing is a significant change to the way IT works. A thorough understanding of what utilities are and how they work is required by both business and IT because they influence many decisions. The development of electrical and other utilities is frequently compared to the development of utility computing. Their similarities and differences help explain the value and difficulty of cloud computing.

From the outside, enterprise and IT management appear to be in harmony. On the inside, this is probably the exception rather than the rule. Often, the IT department and the business units appear to be armed camps, not harmonious partners.

It is easy to argue that such antagonism is wrong, but the roots of the antagonism are also easily discernible. Most IT professionals have degrees like a bachelor of science or higher in computer science (BSCS) or electrical engineering (BSEE). The signal degree on the business side is an MBA (Master of Business Administration). These disciplines have different subject matter and methods. They speak different languages; their scales of value are different. Software engineers seldom admire well-constructed business cases, and business managers rarely appreciate elegant and tightly written code.

Technologists and business managers have a hard time communicating. Both use words and phrases that the other does not understand. REST (representational state transfer) to a software engineer and rest to a business manager are not even slightly similar.

In addition, their goals are different. Technologists aim to get the most out of technology. They plot the most efficient way to accomplish a task and exercise new features of the technology with the same acumen with which a business manager coaxes an improved bottom line. Both technologists and businesspeople must make an effort to understand each other for IT to be an effective contributor to the enterprise. Lack of mutual understanding leads business to regard IT as sunk overhead and to lose opportunities offered by new technologies, and it leaves technologists whose efforts repeatedly miss the requirements of the business.

Business and IT Architecture

ARCHITECTURE DEFINED

Architecture is the underlying organization of a system and the principles that guide its design and construction. An architecture identifies the components of the system, the relationships between the system’s components, and the relationship between the system and its environment. IT architecture not only describes the physical computing infrastructure and software running on the infrastructure but also details the relationship between IT and other business units and departments and the structure and content of the data stored and processed by the system.

The architecture of an enterprise is a nested hierarchy. At the top of the hierarchy is the overall enterprise architecture. Within the enterprise architecture are the architectures of its components. They may be business units, organizations, or other entities. For example, within an enterprise architecture, the human relations department, the engineering department, and the manufacturing business unit may all be components with their own architecture.
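The nested hierarchy can be sketched as a simple recursive data structure. This is only an illustration: the unit names follow the example in the text, and “Plant IT” is invented here to show a second level of nesting.

```python
from dataclasses import dataclass, field

@dataclass
class Architecture:
    """One node in the nested hierarchy: a unit and its component architectures."""
    name: str
    components: list["Architecture"] = field(default_factory=list)

    def flatten(self) -> list[str]:
        """List this unit and every nested component, top-down."""
        names = [self.name]
        for component in self.components:
            names.extend(component.flatten())
        return names

# Units from the example in the text; "Plant IT" is invented to show nesting.
enterprise = Architecture("Enterprise", [
    Architecture("Human Relations"),
    Architecture("Engineering"),
    Architecture("Manufacturing", [Architecture("Plant IT")]),
])

print(enterprise.flatten())
# ['Enterprise', 'Human Relations', 'Engineering', 'Manufacturing', 'Plant IT']
```

The point of the recursion is that every component is itself an architecture, so the same questions can be asked at any level of the hierarchy.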

The hierarchy depends on the organization of the enterprise. In Figure 2-1, a single IT architecture serves several departments and business units. This arrangement is common but by no means the only way to organize an enterprise architecture. For example, each business unit may have its own IT architecture while the functional departments such as human resources share an IT architecture.

The reasons for differing enterprise architectural patterns vary. A business unit may have been acquired with its own IT department, and enterprise management may decide to leave the business unit structure undisturbed. In other cases, a business unit or department may be geographically remote from the enterprise premises, making centralized IT awkward. When regulations dictate that two departments must be strictly separated, distinct IT architectures may be required. A division or unit with special technology, such as computer-aided design (CAD) or computer-aided manufacturing (CAM), might work better with a separate IT architecture. Other reasons may relate to either business or engineering requirements.


Figure 2-1. An enterprise architecture contains the architectures of subunits. IT is both a subunit and a link among other subunits.

The boxes on an enterprise architecture diagram do not exist in isolation. Just as application architecture and data architecture must coordinate to be effective within the IT architecture, the IT architecture must coordinate with the entire enterprise architecture as well as each subunit. IT architecture has to be planned and implemented in accordance with these considerations, and it must also evolve along with the other components in the enterprise architecture.

When IT architecture is designed and evolves in isolation, the enterprise as a whole suffers. The damage is both technical and managerial. IT cannot support the business units and departments in achieving their goals and the enterprise does not get the full benefit of innovation in IT. This becomes a classic “impedance mismatch” in which the business asks for things IT can’t supply and IT provides things that business can’t use. The consequences can be ugly. Unsatisfactory outsourcing is one result. Inefficient departmental IT projects that duplicate and work around the IT organization are another.

Cloud and Enterprise Architecture Fragmentation

Cloud facilitates IT projects both inside and outside the IT department, because any employee with a credit card can have entire datacenters, advanced application development platforms, and sophisticated enterprise applications at their fingertips. The situation is reminiscent of the days when anyone could buy a PC, put it on their desk, and automate their department. IT can anticipate something similar to the course of distributed computing, in which desktop IT projects slipped out of IT’s view and basics such as disaster recovery and security suffered until management was forced to reassert IT control.

The relationships between enterprise architecture components are always full-duplex (such that communication between the two connected components flows in both directions simultaneously). Departmental architectures that ignore the architectures of other components are as counterproductive as IT that ignores enterprise and departmental architecture. Both cause problems. Departmental systems that run contrary to the enterprise architecture may not communicate or integrate with similar systems in other departments. When a department has an activity that relies on IT processes, IT may not be able to help if it does not understand the role of those processes in the departmental activity. IT departments have sophisticated performance monitoring, fault detection, and incident systems that manage the reliability of the IT system, but these may be of no help to a department that has its own private system running in isolation.

Fragmentation is a problem for both technical and business management. Technically, systems must be designed for integration and sharing. Too often, applications are designed to meet a specific set of requirements that disregard the possibility that the application will eventually have to communicate with and serve many departments. Even when integration is addressed in the requirements, it can be deferred or ignored in the rush to put the system into operation. When this happens without a clear plan for remediation, fragmentation can become permanent. Cooperation between business and IT management in the form of strategic planning is a solution to fragmentation.

Closing the Architectural Divide

The relationship between IT and business determines many outcomes. Technologists who are deep into the hardware and software heart of IT sometimes forget that IT exists to support business and may even be the business itself for businesses that rely heavily on software, such as insurance companies and online businesses. These technologists are often frustrated by businesspeople who do not understand the mechanisms that make their business work. Businesspeople sometimes forget that IT is vital to their present and future business and listen more to outside business advisors than to their own dedicated technologists.

The edges of these architectures are critical. IT technologists do not have to understand the details of the core of enterprise and departmental business architectures. For instance, IT does not need to understand the mesh of relationships between sales and production, but they do need to thoroughly understand where IT touches the enterprise and each of its business units and departments. IT can legitimately expect that the business units and departments understand in business terms the resources available to IT in meeting their requests. In addition, IT can expect that the business understands the implications of ignoring IT requirements.

Fortunately, the divide between IT and business has become less severe. In the heady days of the first Internet boom, business sometimes gave IT a blank check to pursue the next big thing. That may have appeared to be good for IT, but it left a hangover. Freewheeling attitudes led to ill-planned projects that delivered less than was promised. The pendulum swung to the opposite extreme and business became excessively skeptical of IT claims. In the sober light after the boom, the mutual dependence of business and IT became evident, and conscious efforts were made to set up frameworks that would avoid antagonism and promote constructive cooperation.

A fundamental step toward closing this divide is a comprehensive architecture that integrates the enterprise, enterprise subunits, and IT department. Many enterprise architecture frameworks are available from academia, industry analysts, enterprise integrators, standards organizations, and other sources. Each of these has its own strengths and weaknesses, and each enterprise must select the best fit. A comprehensive architectural framework has value because it clarifies the relationships and dependencies between entities in the enterprise.

This need for an IT architecture that coordinates with the overall enterprise architecture is only accentuated by cloud offerings that make it easy for non-IT groups to float their own IT projects. An enterprise IT architecture is a tool for avoiding that form of chaos, because it makes it easier for business units and departments to see the services that the IT department provides and so disinclines them to strike out on their own.

The key is to pay attention to enterprise goals and plans in developing an IT architecture that aligns with the enterprise. IT itself seldom represents a revenue stream. Its role must be to enhance other revenue streams and provide other units with the capacity to expand and transform their business. This puts an extra burden on IT to know their customers: the business units that depend on them.

The challenges discussed here are not limited to IT. Any subunit of an enterprise faces a similar problem in working with the other parts of the enterprise, and all can benefit from taking a similar approach. Nevertheless, given the general challenge, there is much that is specific to IT.

Computing as Utility

Computing as a utility is an important concept that has been discussed for a long time. Both technology and business are being affected by the transformation of some aspects of IT into utilities.

UTILITY DEFINED

In its most general sense, utility is what a service does, and it is the basis for deciding if a service will meet the needs of the consumer. When discussing computing as a utility, the meaning is more specific and derives from the concept of a public utility, a service that maintains an infrastructure for delivering a specific service to a wide range of consumers. The service provided by a utility is a commodity. Some utilities are implemented by government agencies; others are private or a combination of public and private. Utilities may be regulated or they may follow industry or de facto standards, but in any case the services they provide are interchangeable.

Utility is a business and economic concept that applies to a wide range of services. A utility is a broadly consumed service that is offered to the public. Utilities are standardized, at least in following a de facto standard. Therefore, consumers can switch relatively easily among utility providers.1

Utilities often do not yield value directly to their consumers; the utility has to be transformed before the consumer gets value. For example, electricity from the service panel only has value to a homeowner when it is connected to a light bulb or some other appliance.

Utilities are often government-regulated to enforce uniform interfaces and safety standards. For example, building codes enforce standard 240-volt receptacles that will not accept a standard 120-volt plug. This prevents damage to 120-volt appliances and injury to users.

When utilities are monopolies, their pricing structure is often government-controlled. They may be publicly or privately owned. Some electric companies are public, often operated by municipalities. Others are private, but regulations limit the prices that can be charged—which benefits consumers where there is no alternative to a single electricity supplier.

COMMODITY DEFINED

A commodity is a good (as opposed to a service) that is assumed to be of uniform quality so that one can be exchanged for another without effect. Commodities are more or less fungible. Fungible assets are those which can be exchanged freely and have the same value. Money is frequently held up as the exemplar of fungible assets. A dollar debt can be paid with any dollar; it does not have to be the exact bit of currency that changed hands when the debt was incurred.

Utilities are closely related to the idea of commodity. Commodity can refer to any good, as opposed to service, but it has the more specialized meaning of a good that is sold with the assumption that it meets an agreed-upon standard that is applied to all goods of that kind, no matter what the source may be. Commodities are fungible; a commodity may always be exchanged for an equal quantity of a commodity of the same type. For example, a bushel of commodity wheat purchased today could be resold six months later at the prevailing price, because today’s wheat is interchangeable with any other wheat of the same grade, whatever its source.

Utilities deal in commodity services. A utility provider is separate from a utility consumer: the utility is supplied uniformly over a wide consumer base, and the economic incentive for using a utility stems from an economy of scale and the transfer of risk to the provider. Consumers are usually charged based on the service consumed, and are not charged directly for the investment in the infrastructure needed to generate the service. These concepts, inherent in the nature of utilities, translate into questions that can be asked about the potential of a service as a utility:

  • Can the service be effectively supplied by a provider? Or would consumers be better off supplying the service themselves?
  • Do economies of scale exist? Is it substantially cheaper to supply the service to many consumers?
  • Will consumers benefit substantially from being charged only for the service used? Can the service be metered? Does the benefit offset the cost and effort of metering the service?
  • Are there meaningful risks that the utility could mitigate, such as physical dangers in generating the service or financial risks in equipment investments?
  • Is the service a fungible commodity that is used identically by all consumers without regard to the source? Is the service uniform? Or must it be customized for most consumers?

An example of a service that would probably not be a good candidate for becoming a utility is a dry-cleaning service. All the questions above but the last can be answered affirmatively. Dry cleaning is a specialized task that uses dangerous chemicals and requires expensive equipment. Large dry-cleaning plants with automated equipment can do the job cheaply. Then why isn’t dry cleaning a utility? The reason is that customers who accept one dry-cleaning service may not accept another. People are fussy about their dry cleaning. They search for a dry cleaner that cleans their clothing just the way they like it. They have special requests. Occasional rush jobs are important. Consequently, few consumers would be satisfied with a utility dry-cleaning service that allowed few options or custom service.
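The five questions can be thought of as a rough checklist that a candidate service must pass in full. As a sketch only: the question labels below are paraphrases of the list above, and the yes/no judgments are illustrative apart from the dry-cleaning verdict drawn from the text.

```python
# The five utility-candidacy questions as a rough checklist. The labels are
# paraphrases of the list in the text; the answers are illustrative.
QUESTIONS = (
    "provider_can_supply",   # can a separate provider supply it effectively?
    "economies_of_scale",    # substantially cheaper to serve many consumers?
    "metering_pays_off",     # does pay-per-use justify the cost of metering?
    "risk_transferred",      # are meaningful risks shifted to the provider?
    "fungible_commodity",    # uniform service, regardless of source?
)

def utility_candidate(answers: dict[str, bool]) -> bool:
    """A service is a plausible utility only if every question gets a yes."""
    return all(answers[q] for q in QUESTIONS)

dry_cleaning = dict.fromkeys(QUESTIONS, True)
dry_cleaning["fungible_commodity"] = False  # customers insist on custom service

print(utility_candidate(dry_cleaning))  # False: fails on fungibility alone
```

The checklist makes the dry-cleaning point concrete: four affirmative answers are not enough, because a single failed question disqualifies the service.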

These concepts are independent of the technical implementation of the utility. Technology typically makes a utility possible, but technology alone will not qualify a service as a candidate for becoming a utility.

When a service is transformed into a utility, the technology may change. Small generators might be replaced by huge dynamos driven by huge hydroelectric projects, mountains of coal, or nuclear reactors. The business and financial basis of the service will change also. Consumers will sign contracts for service. The utility will guarantee minimum quality of service. Regulators may become involved. Small entrepreneurial ventures may be consolidated into larger utility suppliers.

The notion of computing as a utility arose naturally when timesharing began to become common (Chapter 1). It was a short step from customers sharing time on a large central computer to the idea that timeshare terminals might become widely available and treated as a utility. This vision did not materialize. Timesharing thrived for a time, but terminals did not appear everywhere. Instead, new devices called personal computers (PCs) began to pop up everywhere. The first PCs were little more than expensive toys not connected to any other computer, let alone the massive hub envisioned for the timesharing utility. But with the aid of ever more powerful integrated circuits and miniature processors, they rapidly rose in importance.

The Birth of a Utility: The Electrical Grid

It is a commonplace that IT is undergoing transformation to a utility, in a process analogous to the emergence of electric power as a public utility around the turn of the twentieth century.2

Early Adoption of Electricity

The transformation of power generation to a utility has parallels with the progress of IT. Before electric power utilities, large electricity consumers like manufacturers had their own private generating plants. These dedicated private industrial facilities were among the largest generation plants, much like the private mainframes owned by large institutions and corporations. Smaller concerns and residences relied on small generation companies that supplied electric power to clusters of consumers in limited areas. A small generation plant might supply a few small businesses and a handful of residences within a mile radius.

As electrical engineering advanced, the small generation companies expanded to supply larger and larger areas at lower and lower prices. Gradually, concerns began to shut down their private generators and rely on cheap and reliable power from the generation companies specializing in generating and distributing comparatively cheap electrical power over wide areas. Eventually even the largest electricity consumers such as aluminum smelters shut down their private generators and plugged into the growing grid.

Productivity and Electricity

Surprisingly, the adoption of electricity did not result in immediate gains in productivity. Prior to electricity, power distribution in a single plant was done through a system of rotating overhead shafts, pulleys, and belts. This arrangement dictated the layout of factories. It was much easier to transmit power upward than to spread it out horizontally. Multi-floor factories with limited square footage on each floor used less power than factories with the corresponding square footage on a single floor. Placing machines close together also made more efficient use of power. Layouts like this were inflexible and awkward to work in. Moving or replacing a machine required moving precisely aligned shafts and pulleys. Changes required highly skilled millwrights and could stop production in the entire plant for days. Narrow aisles made moving materials difficult. Equipment positions could not be adjusted as needed to speed up work.

These limitations caused inefficiency. Modern plants tend to be single-floor structures that cover large areas. Most machines have their own electric motors, and power is transmitted to the machine via electric cables and wiring. This arrangement permits ample space for moving materials using equipment such as forklifts and conveyers. There is room for maintenance. The plant configurations can be changed to accommodate new processes, equipment, and products. Running a cable and plugging it into an electric panel usually has little effect on the rest of the plant, so maintenance and changes seldom disrupt production elsewhere. These flexible layouts depend on the ease of running electrical cables and moving electric motors to where they are needed.

When electric motors first replaced steam engines, they were treated as substitutes for central steam engines connected to manufacturing equipment with shafts and pulleys. Instead of immediately installing an electric motor for each machine, the shaft-and-pulley system was preserved and only a few central electric motors were installed. This meant that individual machines would not have to be modified or replaced to accommodate an electric motor.3

Adoption in this manner resulted in some immediate reduction in production costs, but large increases in efficiency did not appear until the old plant designs were replaced or changed to take advantage of the innovation. Replacing and redesigning plants and equipment was a slow process that involved rethinking plant design. It was not complete a decade or even two decades after electric power had become prevalent. Consequently, the real economic impact of electrification did not appear until electrical power had been available for some time.

This is important to keep in mind when examining the adoption of computing in the Information Age. If the pattern of the electrical industry holds true, the first appearance of innovation—such as the von Neumann computer or the application of computers to business—is not likely to result in the immediate transformation of an economy. The first stage is to insert the innovation into existing structures and processes with as few changes as possible. But a significant innovation, such as electricity, will eventually drive a complete transformation, and this transformation is what determines the ultimate significance of the initial spark. Unfortunately, the ultimate significance is hard to identify except in retrospect.

Pre-Utility Electricity and Computing

The pre-utility stage of electricity generation has similarities to IT departments and datacenters today. Although there is some outsourcing of IT services, similar to the early stages of the electrical transformation when an occasional manufacturer used electricity from a local public generation plant, most computing consumers own their own computing equipment. Large enterprises have datacenters; small consumers may only have a smartphone—but almost all own their computing resources. However, with the advent of cloud computing and the ancillary technologies such as high-bandwidth networks that make the cloud possible, consumers are beginning to use computing equipment as a sort of portal to computing utilities accessed remotely, similar to the wall receptacles and power cords that electricity consumers use to tap into the electrical power grid.

The concept of utility computing is exciting to business because it promises lower costs, spending proportional to the utilization of resources, and reduction of capital investment in computing hardware. Utility computing is also exciting to the engineer, just as designing and building the colossal electrical infrastructure and power grid were exciting to electrical engineers of the twentieth century.

Analogies Are Dangerous

Thinking by analogy is often helpful, but it is also risky. The high school physics analogy between water in a pipe and electricity is helpful in understanding Ohm’s Law, but the water-electricity analogy can be deceptive.4 If the aptness of the water analogy when applied to Ohm’s Law tempts you to stretch it to include gravity and conclude that electricity runs downhill, you will be wrong. IT is similar to electrical systems and may be in the midst of a transformation similar to that of electricity at the beginning of the twentieth century, but a computing grid is not the same as an electrical grid. IT is also similar in some ways to railroad systems and various other systems pundits have compared with IT, but, as the stock fund prospectus says, “Past performance does not guarantee future results.”

The history of analogous utilities may lead us to questions about computing utilities, but analogies will not answer the questions. The pattern of the adoption of electricity certainly has similarity to mainframes taking over back-office functions such as billing without changing the essential nature of business. The analogy suggests that we ought to look for something similar to the redesign of manufacturing to take advantage of the properties of an electric motor. Internet commerce is one candidate; online advertising is another. But there is no assurance that the Information Age will follow the pattern of previous ages. For example, we cannot argue that social networking will bring about economic change because electricity changed manufacturing. We are forced to wait and observe the changes that occur.

In the meantime, it is the job of the IT architect to work with business and IT management to respond to the changes that are occurring now.

IT Utilities

Utility computing gets a lot of attention, but there are also many commodities and utilities within IT. For example, anyone can pull a Serial Advanced Technology Attachment (SATA) standard commodity hard drive off the shelf and expect it to perform as advertised in any computer that has a SATA interface. These drives come from several different manufacturers that compete on read-write speeds, data capacity, and mean time between failures. Periodically the standard is revised to support innovations in the industry, but the drives on the same version of the standard remain interchangeable. When possible, standards are designed to be backward compatible, so that drives built to different standards remain compatible.5

There are many other commodities in IT. Network equipment and protocols connect together computers and computer systems with as little regard as possible for the kinds of systems that connect to the network. Although there are large differences in capacity and speed, networking equipment is also interchangeable. Domain Name Service (DNS) is a utility. It is a service that is offered by many different providers and is freely used by everyone who connects to the Internet.
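As a minimal sketch of DNS as a consumable service, Python’s standard socket module asks whatever resolver the host is configured to use; the name localhost is chosen here only so the example does not depend on network access.

```python
import socket

# DNS as a utility: the consumer asks whichever resolver the host is
# configured with, and any provider's answer is as good as another's.
# "localhost" is resolved here so the sketch needs no network access.
address = socket.gethostbyname("localhost")
print(address)  # typically 127.0.0.1
```

The caller neither knows nor cares which provider answered, which is exactly the interchangeability that makes DNS utility-like.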

The largest IT utility is the Internet. Consumers establish wired connections to the Internet by entering into a business relationship with an Internet service provider (ISP), which acts as a gateway to the Internet. Consumers are free to switch from provider to provider, although they may be limited by the providers available in their geographical location. In most cases, there is a clear demarcation point that distinguishes the provider’s equipment from the consumer’s equipment and premises.

IT Utility Providers and Consumers

IT differs from other utilities in important ways. Fundamentally, computers are designed to be used for many different purposes with great flexibility. The electrical grid, on the other hand, exposes electrical power in fixed forms with little flexibility. On the electrical supply side, innovation and progress come from producing more electricity at less cost delivered more efficiently. People don’t buy more electricity from suppliers with more exciting flavors or colors of electricity. If consumers have alternate providers, which they seldom do, they only shop for a lower price, greater reliability, or better customer service. The product is always the same.

Utility Computing Providers

In the early days of electricity, the product of generators was always electricity. The voltage varied, some was alternating, some was not, but the generators all produced electricity. By setting standards for voltage and phase, generation plants became interchangeable. Electric current is a physical force that can be measured and controlled, not an abstract concept. A utility does not deal in abstract concepts; the utility charges for something that can be metered and used.

Computing is an abstract concept. In the days of timesharing as a utility, measuring the computing abstraction was relatively easy because it could be equated to a few measurements such as connection duration, central processing unit (CPU) time, and data stored. These were measurable by user account, and consumers could be billed based upon them. Because there was only one central computer with a limited number of CPUs in a timesharing system, the measurements were straightforward. When distributed computing became prevalent, measuring computing became much more complex.

User accounts are not as meaningful in a distributed system, where a server may not be aware of the user account associated with a service request from a client. Measuring computing in this environment is possible, but it is more challenging than on a timesharing system based on a limited number of central computers. Cloud computing also approaches metrics somewhat differently than timesharing did. Clouds abstract the notion of computing consumption from the underlying hardware. Instead of directly measuring the consumption of hardware resources, clouds measure consumption of the computing abstraction. The abstraction hides the complexity of the underlying hardware. Metrics based on the abstraction at least appear to be understandable, even though the underlying measurements and calculations may be quite complex.
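The timesharing metrics named above—connection duration, CPU time, and data stored—reduce to simple arithmetic per user account, which is what made metering tractable on a central computer. The rates and usage figures in this sketch are hypothetical.

```python
# Timesharing-style metering: bill a user account from the three metrics
# named in the text. The rates and usage figures are hypothetical.
RATES = {
    "connect_hours": 2.00,     # dollars per hour of connection
    "cpu_seconds": 0.05,       # dollars per CPU-second consumed
    "megabytes_stored": 0.50,  # dollars per megabyte-month of storage
}

def monthly_bill(usage: dict[str, float]) -> float:
    """Multiply each metered quantity by its rate and sum, per account."""
    return sum(quantity * RATES[metric] for metric, quantity in usage.items())

usage = {"connect_hours": 40, "cpu_seconds": 1200, "megabytes_stored": 10}
print(f"${monthly_bill(usage):.2f}")  # $145.00
```

The contrast with cloud metering is that a cloud bills against abstract units (instance-hours, requests, and the like) rather than directly against hardware quantities like these.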

Measurement was one more obstacle to utility computing in an environment where the cost of computers had plummeted. Small businesses and individuals found it more economical and convenient to buy new, cheaper computers than to subscribe to timesharing utilities. This undercut the economic basis for timesharing.

Unlike electricity, where establishment of a utility power grid was a slow but linear progression, utility computing has developed in fits and starts, encountering technical difficulties and economic diversions along the way.

Utility Computing Consumers

On the consumer side of the electrical service panel demarcation point, there is an enormous range of competitive products that can be plugged or wired in (Figure 2-2). The innovation and diversity in electrical products are on the consumer premises. Ordinary electricity consumers have only one control over the electricity they receive: they can cancel electrical service. They cannot signal the utility to reduce the voltage, increase the frequency, or change the phase by a few degrees. The types of output from the utility are very limited.

9781430261667_Fig02-02.jpg

Figure 2-2. Utilities are separated from consumers by a distinct demarcation point. The commodity is uniform at the demarcation point, but it is used in different ways by consumers.

Other utilities such as railroads are similar. A railroad moves cargo in a limited set of railcar types, but the contents of those railcars may vary greatly. The contents of a hopper car may be coal or limestone, but the car is the same. When you go to a gasoline station, you are offered three types of gas that can be burned in any gasoline-powered car.

IT Progress toward Utility Computing

In the days of timesharing, there was a vision of utility computing. In the vision of those days, there would be one enormous computer that would share resources such as data, and all users would log on, forming a great pool of simultaneous users of a single computer. That vision crumbled with distributed computing. Instead of one enormous central computer, each user got a private computer on his desk. Computing would have to evolve further to get back to the utility computing vision.

9781430261667_Fig02-03.jpg

Figure 2-3. Progress toward utility computing was not linear

When distributed computing became prevalent, computing was anything but a utility. It was as if the consumer electronics industry had preceded the electrical grid. The computing consumer had a range of platforms, each with its own set of software products, but there was nothing that they all plugged into to make them work. The Internet appeared, but the Internet was primarily a connecting medium that computers used to exchange information. Most computing occurred on single computers. Instruction sets varied widely between vendors. Higher-level languages such as FORTRAN, COBOL, and C improved portability, but differences in input and output systems and other peripherals almost guaranteed that moving programs from one platform to another would have problems.

UNIX was one step toward utility computing, because it was ported to different platforms. This is still evident today. The Big Three in server hardware—IBM, HP, and Sun (now Oracle)—all run UNIX. Unfortunately, although the companies all wrote UNIX operating systems—IBM wrote AIX, HP wrote HP-UX, and Sun wrote Solaris—each did so in a slightly different way. The result is almost a utility, in the same way that a horse is almost a camel inasmuch as both have four legs. To produce utility-like portability, developers had to write different code for each UNIX platform to account for tiny but critical differences in low-level functions such as sockets and thread handling. It was an improvement over the days before UNIX because almost all the code stayed the same, but a handful of critical differences made porting difficult.

Java was another step forward. Java isolates the platform idiosyncrasies in a virtual machine (VM). The virtual machine is ported to different platforms, but the code for an application stays the same. This yields greater portability than C or C++ on UNIX, but a separate computer for each user is still required.

At the same time, Microsoft and Windows made steps toward utility computing by sheer dominance in the computing arena. Microsoft Windows was an early winner in the desktop operating system wars, and Microsoft made progress toward establishing Windows as the de facto standard for distributed servers.

Microsoft had competition. Linux is an open-source UNIX-like operating system that has won great favor as a platform-independent alternative to the IBM, HP, and Sun versions of UNIX. A few years ago, it looked like a David and Goliath battle was being waged between tiny Linux and mighty Microsoft.

A new twist, virtualization, has dimmed the operating systems’ spotlight. Virtualization separates the operating system from the underlying hardware. A virtualization layer, usually called a hypervisor or virtualization platform, simulates the hardware of a computer, and operating systems run on the simulated hardware. Each operating system running on the hypervisor acts as an individual virtual computer, and a single hypervisor can run many virtual computers simultaneously.6 Adding to the flexibility, a hypervisor can run on several physical computers at once, moving VMs from physical computer to physical computer and optimizing the use of the physical system.

Large systems can be deployed as VMs. Virtual deployments have many advantages, but one of the most important is that hypervisors can run most operating systems virtually. Consumers don’t have to make all-or-nothing choices when choosing operating systems and hardware. The mix of virtual hardware and operating systems available on hypervisors makes a new kind of computing utility possible. The uniform commodity is made available in the form of VMs and other virtual hardware. This uniform commodity is the foundation of clouds.
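At bottom, placing and moving VMs across physical computers is a packing problem. The sketch below assumes a simple first-fit policy over a single CPU dimension; it is not any vendor’s algorithm, and real hypervisor schedulers also weigh memory, I/O, affinity, and the cost of live migration:

```python
# First-fit placement of VMs onto physical hosts by CPU demand: a toy
# version of the optimization a hypervisor cluster performs when it
# distributes virtual machines across physical computers.
def place_vms(vm_cpus, host_capacity):
    """Assign each VM (a CPU demand) to the first host with room,
    opening a new host when none fits. Returns per-host VM lists."""
    hosts = []
    for vm in vm_cpus:
        for host in hosts:
            if sum(host) + vm <= host_capacity:
                host.append(vm)
                break
        else:
            hosts.append([vm])  # no existing host had room: start another
    return hosts

print(place_vms([4, 2, 7, 3, 5], host_capacity=8))
```

Because the VMs are abstractions, the consumer never sees this packing; the provider is free to rearrange the physical layer at will.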

Utilities and Cloud Service Models

Cloud and utility computing are not the same thing. Some uses of cloud fit the utility definition well. Other uses do not fit so well.

Of the cloud service models (described in the next section), infrastructure as a service (IaaS) is the most likely to become a utility. In some countries, IaaS could be a candidate to become a regulated utility like the power grid or telephone. An IaaS cloud could become a government monopoly like the old postal service. It is tempting to imagine a future landscape where every IaaS provider offers a standard interface that consumers can use interchangeably. More likely, market forces may force IaaS providers to agree on a reasonably uniform de facto interface, not unlike the network of gasoline stations that blanket the country having settled on three choices of gasoline. There is no regulatory body dictating what gas may be pumped, but when every petroleum refiner and gasoline station owner wants to sell to every vehicle on the road and all automobile manufacturers want to produce cars that can be gassed up anywhere, the industry has arrived at great uniformity.

Platform as a service (PaaS) and software as a service (SaaS) differ from traditional utilities. Since they compete on the uniqueness of their service, they are less likely to become utilities in the traditional sense.

Utility providers follow many of the same practices as all service providers do, but there are also differences. Supply and demand in the market tend to force utility prices to converge on a market price. Utilities can be very profitable because they have an assured market and great demand, but their profitability is more dependent on supply, demand, and the regulatory climate. They benefit from innovation in the way their product is produced, and not so much from innovation in the product itself. Non-utility services compete more often by offering innovative services that are attractive to consumers and garner a premium price.

Cloud Computing Models

The three National Institute of Standards and Technology (NIST) models of cloud service—IaaS, PaaS, and SaaS—illustrate some aspects of clouds as utilities.

Infrastructure as a Service

The most basic NIST service model is IaaS. It provides the consumer with a virtual infrastructure made up of a few basic virtual entities such as virtual computers, networks, and storage. Consumers are free to deploy and use this infrastructure as they see fit. This is closest to the electricity model. IaaS competition has begun to look like competition between electric utilities. The supplied infrastructure is quite uniform. There are a few variations, but the range is limited.

Platform as a Service

PaaS provides platform utilities for developing and supporting applications such as integrated development environments (IDEs), databases, and messaging queues. This is in addition to the infrastructure an IaaS provides. Competition still focuses on the basic variables of IaaS: price and performance. However, the offered platforms vary greatly and PaaS services also compete on the effectiveness and ease of use of their development-and-support platform.

Software as a Service

SaaS competition is similar to that of PaaS. The consumer of SaaS is not exposed to the infrastructure at all, and there is little direct competition on that level.  Consumers see only running software. The software is unique to the SaaS provider. If a budding SaaS customer relationship management (CRM) startup were to copy the Salesforce.com CRM service, the startup would probably be quickly bound up in a lawsuit. To compete, the startup has to develop better CRM software.

Do IT Utilities Matter?

Whether or not computing becomes a utility has great significance to both business and IT. Carr’s article “IT Doesn’t Matter” raised the possibility that computing will become a utility.7 In such a discussion, it is important to make clear the demarcation point between provider and consumer. For the electrical grid, the demarcation point is the service panel. A commodity—a standardized electric current—passes through the service panel and is used by a lot of different appliances and devices. In the century since electricity became a commodity in the United States, the electrical product industry has boomed. The millions of devices that are powered from the electrical grid are unique and often innovative products, not utilities. Vacuum cleaners, for example, are not interchangeable. Some are good for carpets; some better for hardwood floors. Some are lightweight and easy to use; others are heavy-duty and require strength and skill.

Virtualized computing is similar to electric current and can be standardized into an interchangeable commodity.8 Software is not as easily characterized as a utility. The most important characteristic of software goes back to the day when von Neumann thought of inputting processing instructions instead of wiring them in. Software was invented to make computers easily changed. From that day, computers were general-purpose machines.

When discussing computing utility, the distinction between hardware and software is important. Software is malleable: it originated in the desire to transform single-purpose computers into multipurpose machines that could easily and quickly be restructured to address new problems. Software invites change and innovation. It resembles the electrical appliances on the consumer side of the electrical service panel.

Computing hardware is certainly continually innovative and flexible, but software is expected to change more rapidly than hardware. In addition, hardware designers have tended more toward compatibility than software designers have. For example, the Intel x86 instruction set, dating from the late 1970s, is still present in processors built today, though it has been expanded and adapted to more complex and powerful chips. The same computer hardware is used to run many different software applications. Enterprises run many different software applications, many of them bespoke applications built for a single organization, whereas enterprise hardware infrastructures vary in size and capacity but are all similar compared to the software that runs on them.

The analogy between the development of electrical utilities and computing breaks down a bit when we compare software to electrical appliances and hardware to utility electricity. A lot of software will run on any hardware built around an x86 processor. It is as if the electric service panel had been designed and widely deployed before electrical utilities were designed. A consequence is that utility computing may not follow the same sequence as electrical utilities. The great gains from the electrification of manufacturing occurred when factory builders began to use many small motors instead of shafts and pulleys. Many large gains have already occurred without utility computing, and these gains have some similarity to the flexibility offered by electric motors.

This difference can be seen in what has happened since Carr asked his provocative question. Carr made a number of recommendations to CIOs based on his prediction that IT would soon become a utility. His recommendation was to be conservative with IT: Don’t overinvest, don’t rush into new technology, and fix existing problems before acquiring new ones. This is cautious advice and businesses that followed it certainly avoided some losses, but they may also have missed out on opportunities.

At the same time, IT departments were dismayed by this attitude. Carr’s recommendations boiled down to tight IT budgets justified by an analysis that was both business and technical—a business recommendation based on a prediction of an impending technical change. In the best of all worlds, such a decision should come from collaboration between the business and technical sides of the house.

IT departments that followed Carr’s advice, anticipated cloud utility computing, and were cautious about investing in and expanding their physical infrastructure have probably already begun to use IaaS utility computing. Such departments consequently reduced their capital investment in IT without compromising their capacity to expand the services that they offer to their enterprise. This is a clear benefit from Carr’s advice.

But following Carr’s advice may not have been all good. Departments that were also reluctant to expand or modify their software base may have fared less well. There have been important software innovations in the past ten years that these organizations may have missed while waiting for software to become a utility. They may be behind the curve of big data analysis (Chapter 1) of their customer base, use of social networking tools for fostering customer relationships, or even the more mundane improvements in integrating different segments of the business. Again, software is closer to the electrical appliances on the consumer side of the electrical panel. It changes more rapidly and is more closely tailored to the needs of individual customers.

IT clearly does still matter, but it has changed since Carr wrote his article in 2003. Carr predicted the change toward utility computing, but there are some aspects that have gone in different directions. No one is likely to say that the IT services provided by most IT departments have all become interchangeable commodities. Email, for example, is often treated as a commodity because email providers are largely interchangeable, but service desk services delivered as SaaS are not particularly interchangeable among providers. SaaS has moved the maintenance and administration of the service desk out of the local IT department, but it has not converted the service desk into a uniform commodity. On the other hand, the infrastructure that supports the back end of SaaS service desks has become a commodity. SaaS consumers are purchasing that infrastructure indirectly through their SaaS fees. Many other enterprise services are similar.

The move from an on-premises application to a SaaS service or deploying on-premises applications on a PaaS or IaaS service is likely to be a business decision, but it is also a technical decision. For business, reducing the capital investment in hardware by replacing it with cloud services has significant financial implications. Not only does it affect the cost of IT services, it also affects the distribution of capital expenditures and operating expenses.

At the same time, these decisions have a highly technical component. For example, the performance of a cloud application is unlikely to be the same as that of the same application deployed on-premises. It may improve or it may get worse—or, most likely, it will be better in some places and worse in others. Predicting and responding to the behavior of a migrated system requires technical knowledge. Even a self-contained SaaS application may have effects on network traffic distribution that will require tuning of the enterprise network.

Although some technologists may regret it, moving the infrastructure and administration of a service to an IaaS provider is often a good use of utility computing, provided it does not compromise security or governance. Switching from an in-house application to a SaaS application may be a good decision, but not if no available SaaS application fits the needs of the enterprise, just as a home vacuum cleaner may not serve a cleaning contractor. These decisions can only be made wisely when there is both business and technical input.

Conclusion

IT decisions—especially strategic decisions that affect the long-term direction of IT and the enterprise—always have both a technical and a business component. Decisions made entirely by one side or the other are likely to have compromised outcomes. Even highly technical decisions about areas deep in the infrastructure or code eventually affect business outcomes. This problem—the coordination of business and technology—is a fundamental aspect of the service management approach to IT, which is the subject of the next chapter.

EXERCISES

  1. Define architecture in your own terms. What is the relationship between architecture and enterprise planning? What makes IT architectures different from other enterprise architectures?
  2. What makes a utility different from an ordinary service?
  3. Compare and contrast the development of the electrical power grid to utility computing.
  4. Explain why the decision to move to a cloud computing utility model is a joint technical and business decision.
  5. Does IT matter?

1Changing your cell-phone provider is difficult, because the providers have an interest in keeping their existing customers. However, the difficulties are far less than the obstacles to switching your brand of printer and continuing to use the supply of print cartridges for your old printer.

2A prominent proponent of this theory is Nicholas Carr, whose article “IT Doesn’t Matter” (Harvard Business Review, May 2003, http://www.nicholascarr.com/?page_id=99) has provoked a great deal of discussion, both for and against his proposition, since it was published in 2003. Carr included all aspects of computing in his prediction that most enterprises would outsource their current IT activities to computing utility providers, which is more than simply outsourcing computing.

3Nicholas Crafts, “The Solow Productivity Paradox in Historical Perspective” (CEPR Discussion Paper 3142, London: Centre for Economic Policy Research, January 2002), http://wrap.warwick.ac.uk/1673/. This paper presents a rigorous discussion of the consequences of adopting electricity and compares it to computer adoption.

4Ohm’s Law is a basic electrical principle that states that voltage equals amperage times resistance. Most people learn Ohm’s law through an analogy with water in a pipe. The resistance corresponds to the diameter of the pipe, the voltage corresponds to the pressure, and the amperage corresponds to the volume of water that flows. If the pressure is increased, the water flowing will increase. If the diameter of the pipe is decreased, the pressure must be increased to get the same volume. This analogy is excellent because it leads to an intuitive understanding of one aspect of electricity and is used frequently for teaching Ohm’s Law.
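In code, the law is a one-line formula; the example numbers below simply confirm that doubling the voltage across a fixed resistance doubles the current:

```python
# Ohm's law: voltage = current * resistance, so current = voltage / resistance.
def current(voltage, resistance):
    return voltage / resistance

print(current(120, 60))  # a 60-ohm load on a 120 V circuit draws 2.0 A
print(current(240, 60))  # doubling the voltage doubles the current: 4.0 A
```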

5If the standard is backward compatible, a drive built to an older standard can plug into a system built to a newer standard. If the standard is forward compatible, a drive built to a new standard can be plugged into a system built to an older standard.

6These virtual computers are usually called virtual machines (VMs) or virtual systems (VSs). The Java VM referred to above is a VM that runs on operating systems rather than a hypervisor.

7Carr, supra.

8The Distributed Management Task Force (DMTF) standard—Cloud Infrastructure Management Interface (CIMI)—is an example of such standardization (http://dmtf.org/standards/cmwg). See my book Cloud Standards for discussions of many standards that contribute to IT and clouds in particular.
