Chapter 2. A BRIEF HISTORY OF MODERN TECHNOLOGY

Technological advancements abound throughout history and have helped push civilization forward—sometimes faster, sometimes slower—to greater wealth and better health. From cavemen mastering fire and using rocks as tools, to movable type, the Industrial Revolution, radios, light bulbs, cars, and microchips—all were mind-bending breakthroughs in their time and paved the way toward further innovation and technological advancement.

The last few decades have heralded a new historical turning point on par with the Industrial Revolution: the Information Age. This is a period of instant, near-unlimited proliferation of and access to information—a modern-day renaissance where information is sought and shared freely. But how did it come to be, and what might it mean for investors? Good, informed investing decisions often require the context of history. No, you don't need to become an expert on the history of technology, but it's worth taking a stroll through the past to see where we've been to get a sense of where we may go.

Semiconductors, computers, communication devices, and the Internet all played a role in fostering the Information Age. This chapter details a brief history of their development. It also covers the build-up to the 1990s Tech bubble, how it burst, and its aftermath.

THE SNOWBALL EVENT: HISTORY OF SEMICONDUCTORS

We'll start with chips. Sure, it would be great fun (and maybe a bit tiresome) to begin with the wheel as the first technology and work all the way up to today, but as investors we're more concerned with the more relevant history of today's investible technologies. Without semiconductors, there would be no cell phones, computers, or Internet—no Intel, Microsoft, Hewlett-Packard, Oracle, or Apple! Semiconductors are the bedrock foundation for much of today's technology—among the first and most important building blocks in manufacturing electronic devices. Their development is one of the most important milestones on the road to the Information Age. But where did semiconductors come from?

Eureka! A Half-Way Conductor!

In 1833, natural philosopher Michael Faraday stumbled upon the first known "semiconductor effect" while investigating the impact of changes in temperature on silver sulfide.[14] He noticed the material's conducting power increased with heat and fell when heat dissipated. This contradicted the known effect of temperature on metals, whose conductivity falls as they heat up. We know now this is a property of most semiconductors. An interesting discovery, but what do you do with a material that gets more conductive the hotter it gets? Not much, until you discover rectification and transistors.

Rectification restricts electrical current to flowing in a single direction—a basic but vital effect for functional electronic devices. It was discovered in 1874, and its first broad application was detecting radio signals in the early 1900s.[15] This is a classic feature of many technologies—a principle is discovered but may not become applicable in a mass (or profitable) way until years later. Around the same time, silicon entered the picture. An engineer with American Telephone and Telegraph (now AT&T) tested rectification on thousands of minerals, with silicon crystals and germanium performing better than others.[16]

Building on earlier work from John Bardeen and Walter Brattain, William Shockley of Bell Labs first conceived the junction transistor in 1948. The transistor could be used to switch and amplify electrical currents—another task vital to electronic devices and one previously performed by vacuum tubes. A major benefit over vacuum tubes was the transistor's much smaller size—which has only gotten smaller over time. Early transistors were made with germanium, but silicon emerged as the material of choice due to lower electrical leakage and ability to withstand extreme temperatures.[17]

The Dawn of Integrated Circuits

As electronics grew more complex and required thousands of transistors, interconnecting them became increasingly difficult. In the search for an easier solution, integrated circuits were born in 1958. This breakthrough is credited to two individuals: Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor. Integrated circuits (ICs) were a giant leap forward in the development of electronic devices. Instead of producing a single transistor at a time, multiple transistors and components could be built on a single piece of semiconductor material. This enabled more automated manufacturing and the ability to produce smaller and more powerful chips. However, producing these powerful chips was initially cost-prohibitive.

Initial Applications

Defense was one of the few markets where the benefit of an IC's small size outweighed its high cost of production. The US Air Force used ICs in computers and Minuteman missiles in the 1960s. But the potential for integrated circuits was much greater. Prior to ICs, electronics like computers and calculators were made with vacuum tubes. This hindered the devices' wide-scale adoption because vacuum tubes are much bulkier. A single computer could be as large as an entire room. ICs allowed electronics manufacturers to scale down size and overcome these limitations.

In the following years, ICs increased in complexity and functionality. The move was driven by economics: Unit costs fell as the number of components per chip increased[19]—getting more bang for your buck, so to speak. It was during this time that former Intel CEO Gordon Moore observed the number of components on an IC doubled roughly every two years—now referred to as "Moore's Law."
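To see how quickly that kind of doubling compounds, here is a minimal sketch in Python. It simply projects component counts forward under the two-year doubling rule of thumb, starting from the Intel 4004's roughly 2,300 transistors (introduced in 1971 and discussed below); the projection is purely illustrative, not a record of actual chip designs.

```python
# Illustrative sketch only: compound the "doubling every two years"
# rule of thumb forward from an assumed starting chip.

def moores_law_projection(start_count, start_year, end_year, doubling_years=2):
    """Yield (year, projected component count) pairs under Moore's Law."""
    for year in range(start_year, end_year + 1, doubling_years):
        doublings = (year - start_year) // doubling_years
        yield year, start_count * 2 ** doublings

# Starting from the Intel 4004's roughly 2,300 transistors in 1971:
for year, count in moores_law_projection(2_300, 1971, 1991):
    print(f"{year}: ~{count:,} transistors")
```

After just ten doublings (20 years), the projected count grows by a factor of about 1,000, which is why each new generation of chips could pack in dramatically more functionality at falling unit costs.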

A Giant in the Making

In 1968, Robert Noyce and Gordon Moore left Fairchild Semiconductor to start a new company with approximately $2.5 million in funding. The company was originally called NM Electronics, for "Noyce Moore," but the two soon purchased the rights to use another name—Integrated Electronics, or Intel for short.[20] At the time, Intel focused primarily on memory, intent on making chips practical for mass adoption. They also wanted the best-performing and most reliable products on the market.

Research and development (R&D) and economies of scale became vital cornerstones of successful semiconductor firms. The industry is characterized by rapid technological advancement—firms need to invest heavily in R&D or their products risk becoming obsolete. Production costs were equally important. (What good is the best chip on the market if no one can afford it?) Intel recognized this and, to this day, R&D and economies of scale remain two of its core strengths—it is now the largest semiconductor manufacturer in the world.

The Microprocessor

The year 1971 brought another breakthrough: Intel went public and launched the world's first microprocessor—the 4004. The Intel 4004 had the same computing power as the 1946 ENIAC—the first electronic computer built.[21] Produced on a two-inch silicon wafer, this single chip contained 2,300 transistors and all the parts needed for a working computer.[22] Though the chip was designed for a calculator, it was fully programmable. Functionality could be customized with different software, allowing use in myriad electronic devices.

Intel's success led to new iterations of the microprocessor, sparking a revolution in computing devices, including the personal computer, but also creating competition. Over the next decade, Fairchild Semiconductor, Texas Instruments, RCA, Motorola, IBM, and Advanced Micro Devices all manufactured microprocessors. By the end of the 1970s, a saturated market led to price wars—an environment favoring those with the best economies of scale.[23] Recognizing the need to lower production costs, Intel was the first to move production overseas in 1972. It also took a unique approach and built massive chip manufacturing plants, which lowered unit costs relative to competitors operating smaller facilities. This helped Intel win one of its greatest victories—being chosen as the microprocessor supplier for IBM's PC in 1981. Its chips weren't superior to competitors', but they could be made at lower costs.

Rise of Japan and Asia

By the 1980s, Japan had invested heavily in semiconductor production. Focused on memory, the country excelled at producing high-quality chips through finely tuned manufacturing processes. Japan gobbled up market share and, by the middle of the decade, surpassed the Americas in semiconductor billings. This almost bankrupted Intel, forcing the firm to sell a portion of itself to IBM in 1982.[24] It also led Intel to exit the memory business in 1985 and focus solely on microprocessors.

But Japan's reign began to fade. By the early 1990s, PCs were penetrating further into businesses and homes, and Intel was a primary beneficiary. Its economies of scale and R&D in the microprocessor market were unrivaled. But a new region was on the rise—Asia, particularly South Korea and Taiwan, was investing in chip production with a focus on memory. Following Intel's example, Asian companies departed from Japan's strategy of superior quality and placed a stronger emphasis on cost. Asian manufacturers began mass producing memory and were able to take market share from Japan. As seen in Figure 2.1, Asia surpassed Japan in aggregate semiconductor billings by the late 1990s and even surpassed the Americas by the early 2000s.

Asia had another competitive advantage: The cost of building a manufacturing plant and employing labor was significantly lower than in the US and Japan. Some firms, such as Intel, saw this and opened international production facilities. This also led to the creation of outsourced third-party semiconductor manufacturers domiciled in Asia.

The New Millennium

Relative to previous years, semiconductor advancement over the last decade has been less dramatic. R&D will continue to play an important role in the industry. However, the incremental performance benefit of new ICs is less significant today than it once was. In other words, current microprocessors are already powerful enough for most software, so hardware improvements are not as noticeable to users. This makes a low cost structure ever more critical.

Figure 2.1. Global Semiconductor Billings by Region. Source: Thomson Reuters.

Advancement has come primarily through manufacturing processes. One such example is the transition from 200mm to 300mm wafers. The larger size allows more chips to be produced on a single wafer, as sketched below. Another is the increasing number of transistors crammed onto chips by shrinking feature sizes (Moore's Law). Both trends can improve production scale, and those able to make the transitions faster have enjoyed advantages over peers.
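As a rough back-of-the-envelope illustration, the gain from the wafer transition comes down to simple geometry, since wafer area scales with the square of the diameter. The 100 mm² die size in the sketch below is purely an assumption, and real manufacturing also loses dies to edge effects and defects.

```python
# Back-of-the-envelope sketch: approximate dies per wafer as wafer
# area divided by die area. Ignores edge loss, scribe lines, and
# defect yield; real fab math is more involved.
import math

def approx_dies(wafer_diameter_mm, die_area_mm2):
    """Approximate usable dies on a circular wafer."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

DIE_AREA = 100  # assumed 100 mm^2 die, purely for illustration
for diameter in (200, 300):
    print(f"{diameter}mm wafer: ~{approx_dies(diameter, DIE_AREA)} dies")

# Area scales with the square of the diameter:
print("Area ratio:", (300 / 200) ** 2)  # 2.25
```

Under these assumptions, a single 300mm wafer holds roughly 2.25 times as many dies as a 200mm wafer, which is the scale advantage driving the transition.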

HISTORY OF COMPUTERS

The computer, one of the most important inventions of the last century, has a history and timeline intimately intertwined with semiconductors since breakthroughs in semiconductor technology made today's computers possible. Computers are now small, smart, fast, and everywhere—but how did we get from the abacus to the ultra-skinny laptop?

Early Origins

A computer's function is simply to make computations. By this definition, calculators were technically early iterations. But the earliest machines we'd understand as true computers in the pre-integrated circuit days were huge because of bulky vacuum tubes. They were just too big for mass adoption. Deemed the first "computer," the ENIAC contained almost 18,000 vacuum tubes, weighed 30 tons, and filled an entire room.[26] It was designed to calculate military artillery firing tables, but the war ended before it was completed in 1946.

Many of the early computers were developed for government and defense purposes, simply because the government was one of the few customers who could afford them. But the discovery and ongoing development of ever smaller and cheaper integrated circuits allowed computers to be developed as general purpose machines, leading to wider adoption.

Big Blue

In 1952, Thomas Watson, Jr. was elected president of a firm called IBM, which specialized in punched-card tabulating equipment. He led the transition from punched cards to computers, and in 1959, IBM released its first set of transistorized computers. The company's focus on superior technology and aggressive pricing tactics made it difficult for competitors to gain market share, and, by 1961, it's estimated IBM held over 80 percent of the computer market.[27]

IBM took a risk in 1964, developing the System/360—computers with interchangeable software and peripherals—a significant departure from the existing industry model of large, all-encompassing mainframes. Demand was strong.

But tides began turning in the late 1960s. Part of IBM's initial success came from its ability to bundle System/360 products, since machines and components purchased individually carried much higher prices. This backfired when customers were ready to upgrade components and did not want to pay for entirely new bundles. It created an opportunity for competitors to offer cheaper IBM-compatible equipment—eventually forcing IBM to lower prices to maintain leadership.

The PC

In the 1970s, smaller "personal" computers were springing up. Companies like Apple, Northgate, Zenith, ZEOS, Atari, and Commodore were all in the market, while IBM, which failed to identify the trend early, lost market share. The firm turned itself around in 1981 with the IBM PC. Superior distribution and service made the PC an instant success and industry standard. The PC's small size and lower price point blew the market wide open, leading to widespread adoption by enterprises—its intended target market. It was even cheap enough to enter homes. In a marked change from IBM's previous vertically integrated approach, the PC used third-party components, running on an Intel microprocessor and Microsoft's MS-DOS operating system.

Attack of the Clones

Success of the IBM PC made it a benchmark for imitators, and in the 1980s myriad firms produced IBM PC-compatible computers. Because each PC was built with similar off-the-shelf parts, the industry became increasingly commoditized. Superior hardware and software were no longer differentiating factors. Offered at lower prices, clones eroded IBM's market share and the firm lost its lead in PCs.

Apple emerged as a major player during this time and released the Macintosh in 1984. Instead of requiring users to type coded instructions, the "Mac" incorporated a graphical user interface (GUI). The GUI used icons and windows—this user-friendly approach helped drive its success. But despite innovative technology, Apple kept its software and hardware design proprietary and charged a premium over competitors. With no low-cost compatible alternatives, it failed to become an industry standard like the PC.

A New Revolution in PC Manufacturing

While businesses and consumers rapidly adopted PCs in the 1990s and the market matured, a firm called Dell emerged with a unique strategy. Instead of charging a premium for superior products, Dell focused on service and cost management. The firm cut out middlemen and shipped products directly to customers. It built customized computers to order, improving customer service and reducing costs since there was little in-process inventory.

Dell's supply-chain management strategy was remarkably successful and helped establish it as the world's leading PC manufacturer—a distinction it traded with Hewlett-Packard (HP) on occasion. (HP took the lead in 2006 and has maintained it since.) Dell's core market was in desktop computers, particularly in the mature US market. The firm failed to gain a strong enough presence in emerging markets and in notebook computers—the two largest growth drivers in recent years. These, however, were areas of strength for HP, allowing it to gain market share.

Upgrade Cycles

Computer technology evolved rapidly, but demand had less to do with new PC models and more to do with the goods inside. The two largest drivers historically were new generations of microprocessors and software, specifically operating systems.

Role of Microprocessors Intel's rise as the globally dominant manufacturer of microprocessors was partially aided by its scale and R&D, but marketing was perhaps equally as important. You may recall its "Intel Inside" branding campaign, launched in 1991, which successfully enlightened the public on the importance of the microprocessor in their computers. This direct marketing approach was previously used by PC manufacturers, but it was new for component manufacturers.

Differences between microprocessor generations in the 1990s were more significant than they are today. The benefits of increased speed and efficiency outweighed the cost of upgrading—and when coupled with Intel's successful advertising, demand for PCs would usually increase with each new generation of processors.

Role of Software Evolving software also impacted demand for new computers. The IBM PC used MS-DOS as its operating system, but Microsoft retained licensing rights. This meant PC-compatible manufacturers could adopt MS-DOS to mimic the IBM machine as closely as possible. The rapid growth of these compatibles quickly established Microsoft as the industry standard.

Under Bill Gates' guidance, Microsoft released many iterations of its operating system. The first version of Windows came out in 1985 and was similar to Apple's Mac OS. Its GUI used icons and tiled windows to display applications. Newer versions were released in the following years, and, as with microprocessors, demand for PCs would increase because updated operating systems often required more robust hardware.

HISTORY OF COMMUNICATIONS

Most investors today easily understand why mobile phones and related equipment fall in the Technology sector rather than Telecom. Today's phones aren't really phones—they're powerful, credit card-sized computers you can use to retrieve and send e-mails, store contacts, manage your schedule, write a doctoral thesis, play games, and, oh yes, talk. But how did we get from smoke signals to bulky Princess phones to super cool tiny "gotta-have-'em" handhelds?

Mobile Phones

Personal mobile phones made their entrance in the 1970s but were primarily for experimental and trial purposes. The first known cellular phone call was made in 1973 by Dr. Martin Cooper of Motorola, who set up a base station in New York and called his longtime rival at Bell Labs.[29] It wasn't until the early 1980s that the technology was launched commercially in the US. Motorola was first to the game in 1983 with its DynaTAC mobile phone—the size of a brick—boasting one hour of talk time and eight hours of standby.[30]

Other mobile phones were springing up internationally from Finnish and Korean giants Nokia and Samsung. These revolutionary devices cost thousands of dollars, putting them out of reach of the mass market.

Battle for the Top

The 1990s set the stage for what are now the big five in mobile phones: Nokia, Samsung, LG Electronics, Motorola, and Sony Ericsson. While still somewhat expensive by today's standards, the cost of cell phones and wireless service was falling, and they were on their way to becoming ubiquitous in the developed world. Nokia and Motorola were the dominant players during this time, each with a unique strategy.

Finland's Nokia made an early strategic decision to focus on telecommunications. It shed other business lines and built a vast distribution and manufacturing network. This generated economies of scale still unrivaled today. Cost had become its core strength. The firm used this to its advantage in competitive pricing environments to gain market share over peers. By 1998, Nokia had become the world's leading mobile phone manufacturer, a title it still holds today.[31]

Motorola was a strong competitor with a different strategy. A pioneer in the mobile phone market, the firm put innovation at its core. In the cell phone industry, smaller is generally seen as better, and Motorola consistently beat peers to market with innovative and smaller models. In 2004, the Motorola RAZR set the standard for slim phone design and became one of the world's best-selling handsets ever. However, the firm grew overly dependent on this phone and failed to release compelling new lines. Its previous strength became its weakness, ending in heavy market share losses.

Smartphones

The definition of smartphones has changed over time, but it generally refers to phones with more advanced features—and higher price tags. Many early versions were personal digital assistants (PDAs), but their popularity faded as traditional cell phones began offering similar features. Nokia made an early entrance into smartphones in the mid-1990s and, to this day, is the world's largest provider.

However, this lucrative market attracted new entrants. Research in Motion (RIM) developed a unique service allowing users to synchronize corporate e-mail to its secure business servers. This led to significant market share gains, particularly with business customers. Apple is another, albeit more recent, newcomer with its iPhone. This popular touchscreen device put Apple on the smartphone map almost instantly. Its success led almost every major phone manufacturer to release a similar touchscreen handset. It also demonstrated that a non-traditional handset manufacturer could penetrate the market, encouraging more new entrants from other industries.

HISTORY OF THE INTERNET

The Internet is both a product of the Information Age and an important ingredient in furthering it. It's hard for most folks today to remember a time when memos weren't delivered instantly and information wasn't readily available 24/7—all from your desk, lap, or telephone. How did we get from those barbaric, not-so-long-ago days to today's 24/7 wired world?

Early Origins

The Internet's origins trace back to the 1960s and the Advanced Research Projects Agency (now the Defense Advanced Research Projects Agency, or DARPA). Commissioned by the US Department of Defense, this agency created ARPANET: a network of computers connecting US universities and research institutions. It used packet switching technology and interface message processors (IMPs), basically modern-day routers, to share information—small amounts of it, slowly, and with frequent interruptions.

"E-mail," such as it was, was developed in the 1970s, but even more important was establishing the transmission control protocol/Internet protocol (TCP/IP) as the standard method for interfacing. Developed by Robert Kahn and Vint Cerf, this allowed computers and networks of various kinds to communicate with one another—a kind of universal language and the same protocol used today.

Commercialization

In the mid-1980s, the National Science Foundation (NSF) developed the NSFNET for faster communication between research and academic institutions. It was designed to be a "network of networks" that would also connect to the ARPANET.[32] However, non-restricted academic access led to significant traffic growth that the NSFNET could not support. As a result, the NSF solicited bids to upgrade the network, and the winning proposal came from IBM and MCI.[33] The private sector was now involved.

Managing the network, however, was cumbersome, as there were vast amounts of information in different forms. This made it challenging to organize and find data. Tim Berners-Lee found a solution in 1989. While working at the European Organization for Nuclear Research (CERN) in Switzerland, he proposed a hypertext-based system, built on a hypertext markup language, for sharing information over the Internet. It was a vast improvement over the previous system, and he called it the World Wide Web (WWW).

Recognizing a better browser was necessary for navigating the Web, the team at CERN started looking for outside help. A group of students at the University of Illinois responded with a program called Mosaic. The user-friendly browser grew in popularity and was later released for PC and Macintosh computers. This quickly led to widespread adoption of the World Wide Web. By 1994, there were 10 million Web users on 10,000 servers, a fifth of which were commercial.[34] The Information Age kicked into high gear.

The Dot-com Era

Conditions in the mid to late 1990s were ripe for growth. Demand for Internet access was rapidly increasing and Web business opportunities were plentiful. To accommodate the significant rise in demand for Internet service, there was also a period of heavy infrastructure investment. Sales of copper and fiber optic cables, routers, switches, and computers increased. This period helped bolster demand for hardware and software alike, leading to a broad rise in Technology stock prices. At points, the Technology sector represented over one-third of the weight of the S&P 500 Index.

The Boom

Infrastructure suppliers were initial beneficiaries of the Internet boom. The physical network needed to expand if businesses and consumers were to gain access. It began when the NSF commissioned private firms to build Internet access points. Controlled by WorldCom, Pacific Bell, Sprint, and Ameritech, these points were designated for public use.[35] However, they couldn't handle the increasing amount of network traffic and additional infrastructure was needed. This came from large telecommunications firms like AT&T that built out their own private Internet access points. In attempts to attract businesses, many cities across the US also built out extensive fiber optic networks.

Demand for routing equipment skyrocketed during this period. Cisco Systems was the dominant supplier. Founded in 1984, Cisco had an early jump on networking technology. Over the years, it invested heavily in R&D and consistently produced industry-leading products. It also adopted an acquisition strategy in the early 1990s that diversified its product portfolio, making it a one-stop shop for virtually all network communications products. The firm generated $714 million in annual revenue in 1993, which increased to $22.3 billion by 2001.[36]

Internet service providers (ISPs) were next in line to benefit. Demand for access was growing fast. There were approximately 160 US commercial ISPs in early 1995, increasing to about 4,000 by mid-1997 (including Canada).[37] In this increasingly crowded market space, some even offered free service as the battle transitioned from profitability to market share.

Then came the dot-coms. Rapid proliferation of Internet service into businesses and homes opened a whole new channel of opportunities. Firms were offering business-to-business (B2B), business-to-consumer (B2C), and consumer-to-consumer (C2C) Web-based services. There were literally thousands of startups, and many of today's most prominent Internet companies (e.g., Google, Amazon, eBay) were started in the 1990s. As detailed in Figure 2.2, the Technology sector was producing more IPOs than any other.

Figure 2.2. Initial Public Offerings: Technology Sector vs. Non-Tech Sectors. Source: Bloomberg Finance, L.P.

Benefits of the rapidly expanding sector weren't solely confined to Technology. Rapid proliferation of new, innovative businesses and technology helped bolster the entire economy. Semiconductors, computers, mobile handsets, and the Internet improved efficiency in business and communications. This led to significant productivity gains, with US labor productivity (output per hour) increasing 2.75 percent annually from 1995 to 2000—nearly double the pace of the previous 25 years.[38] Moreover, the 1990s were characterized by one of the longest economic expansions in US history.

The Bust

The party eventually ended as many fledgling firms lacked viable business plans. Focus was placed on share gains through heavy marketing, technology investment, and expansion—all of which required significant capital. The firms couldn't generate enough revenue to cover expenses, burned through cash, and eventually went bust. Webvan is a notable example. This online grocer went public in 1999, raising $375 million in its IPO. It quickly expanded into eight US cities and made a $1 billion order for a group of warehouses.[39] Unable to pay its expenses, the firm went out of business in 2001, less than two years after going public. Hundreds of other Technology firms in the late 1990s shared similar fates.

Valuations across the sector reached astronomical levels on massively euphoric sentiment. Many investors believed Technology's run was only beginning. Many would increase relative portfolio exposure on "dips," claiming they were excellent buying opportunities. The sector generated above-average returns for years, leading investors to "chase heat." In March 2000, the Technology bubble finally burst. Internet companies fell by the wayside, creating sympathy selling throughout the sector.

The Aftermath

The MSCI World Technology sector represented almost 24 percent of the broad index in March 2000, but by March 2003, Technology's share fell to 11 percent.[40] When the dust finally settled, it was clear only the strong survived. Firms with poor business strategies were gone—acquired or liquidated. However, some firms with sound strategies were gone, too—simply recession victims.

The bust also helped shape business strategies over the following years. Historically, many Technology firms had large cash balances to protect against the sector's typical volatility. The bubble bursting and the ensuing recession took this volatility to an entirely new level as demand fell drastically. Numerous firms shifted to more conservative business strategies by increasing reserves and sitting on cash. This worked to the sector's benefit in recent years as global credit markets froze and liquidity dried up. Many sectors were starved for capital, but Technology firms were in a healthier position, able to make acquisitions, buy back shares, or simply weather the downturn. In fact, Technology reclaimed its spot from Financials as the largest S&P 500 sector in May 2008.
