Chapter 6. A DEEPER LOOK AT CURRENT AND EMERGING TECHNOLOGIES

Technology is in a never-ending state of change. In the spirit of competition, companies pursue a common goal: Offer the best and most cutting-edge products or services possible. This has fueled the fire of innovation over the Technology sector's entire history. Advancement has been profound, particularly in the last 30 years, be it manufacturing processes, new products, or industry-wide technology standards.

This chapter digs deeper into some of the major technologies used today, as well as how they're evolving. For the semiconductor production process, some important emerging technologies include:

Semiconductor Production Process

  • Immersion Lithography

  • High-k Dielectrics

And emerging technologies for computers include:

Computer Technology

  • Solid State Drives

  • Data Deduplication

  • Cloud Computing

  • Virtualization

  • Web 2.0

SEMICONDUCTOR PRODUCTION PROCESS

Semiconductors have evolved remarkably quickly. Over the years, chips have dramatically shrunk in size while computing power has exponentially increased—a trend largely attributed to manufacturing process advancements. But just how are chips produced?

First Things First—Silicon Wafers

Chips start out as silicon wafers. Why silicon? Silicon is an abundant element, found in raw materials like sand, with natural semiconductor properties—meaning it can act as both a conductor and an insulator and be used at higher temperatures than many alternatives. (For more, revisit Chapter 2.) Because raw materials like sand and quartz are forms of silica (which contains oxygen), wafer producers must first source silica and strip away the oxygen in order to get purified polysilicon necessary for wafer production.

Silicon Valley

Polysilicon is then formed into ingots—crystalline cylinders. This is done by melting polysilicon down in a large crucible. A high-purity silicon rod is dipped into the mixture, and silicon crystals begin forming on the rod as it's rotated and slowly lifted from the mixture—forming an ingot. Ingots are sliced into thin wafers using a diamond saw blade, then smoothed and polished.

Wafers, measured in diameter, differ in size. The standard for much of the 1990s was 200mm, but most chip manufacturers have begun using larger 300mm wafers. The larger size allows more chips to be produced on a single wafer, improving manufacturing efficiency. Producers are already beginning to talk about transitioning to larger 450mm wafers—odds are wafers will continue getting larger.
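The efficiency gain from larger wafers follows directly from geometry: chip capacity scales roughly with wafer area. A minimal sketch (ignoring edge loss and real-world die layouts):

```python
import math

def wafer_area_mm2(diameter_mm):
    """Area of a circular wafer in square millimeters."""
    return math.pi * (diameter_mm / 2) ** 2

# Area gained in each wafer-size transition
ratio_300 = wafer_area_mm2(300) / wafer_area_mm2(200)
ratio_450 = wafer_area_mm2(450) / wafer_area_mm2(300)
print(f"300mm vs. 200mm: {ratio_300:.2f}x the area")  # 2.25x
print(f"450mm vs. 300mm: {ratio_450:.2f}x the area")  # 2.25x
```

Each transition more than doubles the usable surface per wafer, which is why the industry keeps pushing toward larger diameters despite the retooling cost.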

Chip Production Today

Once produced, silicon wafers are sent to semiconductor fabrication plants globally where they're used to manufacture chips. Chip production is a complicated, exacting process—done in a room free of dust and other particulates that can contaminate chips. The old Intel commercials with employees dancing around in space-like suits weren't off the mark—those futuristic ensembles are necessary to prevent skin particles and hair from entering the air.

Production is split into front-end and back-end processes. Included in the front-end are thermal oxidation, patterning, etching, and doping/diffusion—generally occurring in that order. The back-end is testing and assembly. Equipment for this process (shown in Figure 6.1) generated over $42 billion in sales for Semiconductor Equipment firms in 2007,[101] with about 80 percent of semiconductor capital equipment spending on front-end equipment.[102]

Front-End Production First, in thermal oxidation, silicon wafers are cleaned and heated to 1000°C in an oxidation furnace. The process forms a layer of silicon dioxide (insulator) on the surface of the wafer.

Patterning, often called photolithography or photomasking, is next. This is the step where circuit designs are imprinted on the wafer in a process very similar to taking a picture. The wafer is coated with a layer of photoresist, or light-sensitive film. It's then placed under a photomask—a quartz plate containing microscopic cutouts of the circuitry. An ultraviolet light is then used to project the photomask's image onto the wafer where it's imprinted on the light-sensitive film.


Figure 6.1. Stages of the Semiconductor Production Process

Etching is the next step—the design imprinted on the photoresist is transferred to the actual wafer. This is done by hardening the image and etching away remaining portions of the photoresist with chemicals until only the circuit pattern on the wafer is left.

Once the circuit pattern has been successfully transferred to the wafer, its electrical properties must be altered—through doping—to make certain areas conduct electricity (the circuit pattern) while others insulate.

The front-end production process is repeated for each layer of circuitry needed on a semiconductor device—some requiring over 20 layers.

Back-End Production While not technically part of front-end production, there are a few additional steps before back-end production can commence. These are referred to as metallization/dielectric deposition and passivation. The former is a process where metal wires are used to connect conductive portions of the semiconductor and complete the desired electrical circuit. These metal layers are separated by dielectric films serving as insulators. The latter is the final step before back-end production. It involves depositing an insulating layer on the chip to protect it from contamination.

Back-end production is testing and assembly. Testing is performed on the entire wafer by automated computer systems. The wafer is then sliced into individual chips and defectives are removed. (The percent of functional chips on the wafer is referred to as the yield.) Remaining semiconductors are then visually inspected under powerful microscopes. Then, they're encapsulated into plastic, or sometimes ceramic, packages (assembly) and shipped.

EMERGING MANUFACTURING TECHNOLOGIES

At some point, Moore's Law (components on an IC should double every two years as features get smaller) may no longer apply—component growth will slow—due to sheer physical limitations (though scientists have been predicting such a slowdown for years, that's never come ... yet). But where is the limit? This has been an ongoing issue facing chip manufacturers. However, new techniques have staved off the slowdown of Moore's Law—at least for now.
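Moore's Law is an exponential, which is why its eventual end is a physical certainty. A quick sketch of the projection, using a hypothetical starting count:

```python
def moores_law(initial_count, years, doubling_period=2):
    """Projected component count if density doubles every doubling_period years."""
    return initial_count * 2 ** (years / doubling_period)

# Hypothetical chip with 1 billion transistors today, projected 10 years out
print(f"{moores_law(1e9, 10):,.0f}")  # 32,000,000,000
```

Five doublings in a decade means a 32-fold increase—growth that can only continue as long as features can keep shrinking.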

Immersion Lithography

The patterning phase has been particularly challenging in the quest to make ever-smaller features. Feature sizes on chips can only be as small as the circuit pattern imprinted on the wafer—done by shining light through a photomask. But features are already smaller than certain light wavelengths—traditional patterning methods have reached their limit.

The emergence of immersion technology extended the life of optical lithography. In traditional patterning, there's a gap of air between the light's lens and the wafer surface—creating a physical limit on the angles at which light can be focused. Water, with its higher refractive index, bends light more sharply and permits steeper focusing angles. So semiconductor manufacturers replaced the air gap with water, allowing further feature shrinkage and higher feature density on chips.

But once again, physics spoiled the party. Water has a refractive index of only 1.33—creating another barrier to resolution improvements.[103] The solution—using fluids and lenses with higher refractive indexes—eventually hit barriers, too.
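The role of the refractive index can be seen in the standard lithography resolution relation (the Rayleigh criterion), which is not stated in the text but underlies it: minimum feature size = k1 × wavelength / NA, where NA (numerical aperture) is the medium's refractive index times the sine of the focusing angle. A sketch, assuming 193nm light and illustrative values for k1 and the lens angle:

```python
def min_feature_nm(wavelength_nm, refractive_index, sin_theta, k1=0.3):
    """Rayleigh criterion: feature size = k1 * wavelength / NA, with NA = n * sin(theta)."""
    numerical_aperture = refractive_index * sin_theta
    return k1 * wavelength_nm / numerical_aperture

# Same lens angle, dry (air, n=1.00) vs. immersion (water, n=1.33)
dry = min_feature_nm(193, 1.00, 0.9)
wet = min_feature_nm(193, 1.33, 0.9)
print(f"dry: {dry:.1f}nm, immersion: {wet:.1f}nm, gain: {dry / wet:.2f}x")
```

The improvement factor is capped at the fluid's refractive index—hence water's 1.33 became the next barrier once lens angles were maxed out.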

The next solution was double patterning—the patterning phase is performed twice on a single layer of the chip, which can further reduce feature sizes. This technique allows manufacturing at the 32 and 22 nanometer levels[104]—just 9 times wider than a strand of DNA. The drawback? Double patterning—of which there are various forms, but all use two different photomasks—often doubles the cost.

So what's next in the world of lithography? Extreme ultraviolet (EUV) technology is one likely candidate (at the time of this writing). It involves a light source with a 13.4 nanometer wavelength.[105] At this point, this technique is still extremely challenging for manufacturers and widespread commercial use is likely years down the road. (Top lithography equipment manufacturers include the Netherlands' ASML Holdings and Japan's Canon and Nikon.)

High-k Dielectrics

Dielectrics are materials that do not conduct electrical currents—more or less synonymous with insulators. They are a key component in semiconductors because they separate conducting portions of the chip.

Silicon dioxide has historically been used in semiconductor production as a dielectric. But just as with lithography, problems arose as feature sizes shrank. The space between wire interconnects and transistors (conducting portions) narrowed so much that silicon dioxide became ineffective. If the material was too thin, it allowed electrical leakage, reducing reliability and significantly draining power.

In early 2007, after spending years evaluating hundreds of materials, Intel found a solution in a high-k material based on hafnium.[106] (High-k stands for high dielectric constant—denoted by the Greek letter kappa.) These materials can be made thicker and can reduce electrical leakage by a factor of more than 100 compared with silicon dioxide.[107] Hafnium-based materials should allow further downscaling in the near term, but the laws of physics cannot be broken. One day a new technology will be required if feature sizes are to shrink further. (Intel is the pioneer and current leader in this technology, as of this writing.)

According to Gordon Moore himself: "It [Moore's Law] can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens."[108] Semiconductor manufacturing may be closer to this limit than some think. The logical barrier would be when chip features approach the size of atoms—which is almost the case today. (One nanometer can be represented by a string of approximately 3 to 10 atoms.[109]) As of 2005, Gordon Moore believed it would be another 10 to 20 years before this limit is reached.[110] Only time will tell.

COMPUTER TECHNOLOGY

Be it PCs, servers, or storage devices, computer technology has reshaped and redefined how we operate as a society, creating efficiencies across virtually every business sector and driving significant productivity gains. And the computer evolution continues with a variety of emerging trends.

Emerging PC Technology—Solid State Drives

Though PCs come in many brand "flavors," they're all made with the same basic components—the microprocessor, motherboard, memory, hard disk drive (HDD), and liquid crystal display (LCD). PC evolution has largely been driven by advancement in these components.

Solid state drives, built on memory chips like NAND flash, have existed for some time, but they have a relatively new role in PCs—replacing hard disk drives as the primary form of storage. Capacities have yet to match hard disk drives, but flash memory offers some compelling advantages, like speed.

NAND flash acts simultaneously as the hard disk drive and DRAM, improving speed and efficiency of accessing and running software programs. For example, computers using NAND flash as primary storage can boot up almost instantly. The downfall is limited storage capacity, but this continues to improve as Moore's Law is pushed further. Only time will tell if and when solid state drives completely replace hard disks. (Top memory producers include Korea's Samsung Electronics, Japan's Toshiba, and Korea's Hynix Semiconductor.)

Enterprise Storage Technology

Enterprise storage hardware can be deployed in multiple ways. Some businesses optimize their storage networks for speed, while others prefer a more seamless integration with existing hardware and software. All methods can offer some benefit depending on specific needs, but emerging technologies like data deduplication can help improve storage technology for all purposes.

Current Storage Network Standards Storage systems can be set up in multiple ways depending on the type of computing environment. Direct-attached storage (DAS) was the first established. The key differentiator for this model is its absence of a storage network. For example, a hard disk drive on a PC can only be accessed by the PC it is attached to. On an enterprise level, the storage device might be connected directly to a server to boost that individual server's storage capacity.

In network environments, network-attached storage (NAS) and storage area networks (SANs) are currently the two primary and most commonly employed technologies. Both are a form of networked storage, but NAS incorporates file systems in the actual storage device itself, whereas in SAN the file system is separate from the storage device. Moreover, each utilizes different protocols and network standards for data transfer—NAS uses Ethernet while SAN uses Fibre Channel. The two are currently expected to converge into a new standard called Fibre Channel over Ethernet (FCoE).

Regardless of which environment is employed, the benefits are significantly boosted total storage capacity, allowing the sharing of information and resources across multiple end-users.

Emerging Storage Technologies—Data Deduplication The most prominent recent breakthrough in storage technology is in data management—a task performed by software. Data deduplication offers significant benefits regardless of a firm's underlying storage network technology (NAS or SAN).

Firms have a few ways to manage the challenge of a growing amount of digital data including compression, file deduplication, and data deduplication, with data deduplication currently being the most efficient. Compression removes redundant bits of file information to shrink that file's size. File deduplication removes entire redundant files. For example, an email attachment sent to 20 people in an office, when backed up, is stored 20 times on the storage hardware. File deduplication frees up space by removing these duplicate attachments.

Data deduplication goes further. If the attachment is a book chapter that is edited and forwarded on to others, who also make edits and forward it on, file deduplication only removes chapters that were exactly the same. But data deduplication looks at data in smaller blocks and determines what information is the same and what is different. It then removes redundant pieces of the chapter and saves one updated version in a single location. While compression might reduce storage by about 2 to 1 and file deduplication by about 3 to 1, data deduplication can reduce storage by about 20 to 1 or better.[111]
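The block-level approach described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual algorithm: each file is split into fixed-size blocks, each block is identified by its hash, and duplicate blocks are stored only once (the file names, contents, and 8-byte block size are all hypothetical):

```python
import hashlib

def dedupe_blocks(files, block_size=8):
    """Block-level deduplication: store each unique block once, keyed by its hash."""
    store = {}    # hash -> block contents (each unique block stored once)
    recipes = {}  # filename -> ordered block hashes needed to reassemble the file
    for name, data in files.items():
        hashes = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)  # duplicate blocks are skipped
            hashes.append(digest)
        recipes[name] = hashes
    return store, recipes

# Two near-identical versions of a "chapter": edits change only one block
files = {
    "chapter_v1.txt": b"The quick brown fox jumps over the lazy dog",
    "chapter_v2.txt": b"The quick brown fox leaps over the lazy dog",
}
store, recipes = dedupe_blocks(files)
raw = sum(len(d) for d in files.values())
kept = sum(len(b) for b in store.values())
print(f"raw: {raw} bytes, stored: {kept} bytes")  # raw: 86 bytes, stored: 51 bytes
```

Only the block containing the edit is stored twice; the shared blocks are stored once and referenced by both file recipes—which is why savings compound as near-duplicate copies accumulate.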

Data Domain and Quantum are niche players in this market. But at the time of this writing, EMC Corp. is acquiring Data Domain after outbidding rival NetApp—a reflection of this technology's value.

Other Notable Emerging Technologies

Computer technology is a highly diverse market evolving on many fronts. Of the vast number of remaining trends, cloud computing, virtualization, and Web 2.0 are the technologies most worth mentioning.

Cloud Computing "Cloud computing," a particularly hot emerging technology, is a fundamental shift from the historical computing model in which the user's PC housed the required software and hardware. In this new model, much of the required software and hardware can be stored in the "cloud" (i.e., the Internet). Each PC function can be provided by third parties as a service.

For example, computers could be manufactured without hard disk drives, DRAM, or even NAND flash. Instead, storage could be provided for a fee by firms like Google and Microsoft, which operate huge data centers with massive amounts of infrastructure. Even CPUs could be offered as a service. This is already occurring in the software industry. Software-as-a-Service (SaaS—covered in Chapter 4) is a form of cloud computing.

Cloud computing could cause PCs to grow "dumber"—evolving into simple Internet portals. Netbooks may be the first phase of this shift. The implications could be significant. Think of the music industry. Compact discs used to be the industry standard (never mind vinyl records before them). However, the emergence of digital files like MP3s rocked the music world—music is packaged, sold, and promoted radically differently now—and the implications touched everyone in that world, from the artists themselves to their labels to the distributors to even concert venues and tour promoters. The long-form album was blown apart—consumers can buy single songs from any location with an Internet connection. Cloud computing has the ability to drive similar change in literally thousands of different industries, many of which fall outside the Technology sector. It can benefit businesses and consumers alike in time and cost.

But despite its many potential benefits, cloud computing has risks. For instance, is information stored at a Google data center more or less secure than if it were stored directly on company controlled infrastructure? Because of these risks, wide-scale adoption could take years.

Virtualization Virtualization is software that helps improve hardware efficiency. It addresses a problem that began with the standardization of the client-server architecture. Under this model, firms often added separate servers for each additional application, many of which ran on different operating systems. The benefits were significantly higher computing capacity, but much of the hardware was underutilized.

Virtualization software can decouple the entire software environment from the hardware infrastructure, creating "virtual" instead of physical machines. Then hardware resources can be shared, regardless of different operating systems. The end result is significantly improved efficiency and utilization levels. Firms deploying virtualization software are often able to scale back hardware needs and reduce total IT costs. VMware is currently the leader in this market, but many others are jumping on the bandwagon, including industry giants Microsoft and Oracle.
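The consolidation math behind these savings is straightforward. A sketch with hypothetical numbers—20 dedicated servers each running one lightly loaded application, repacked onto shared hosts at a target utilization level:

```python
import math

def hosts_needed(workloads, target_utilization=0.8):
    """Physical hosts required, where each workload is a fraction of one server's capacity."""
    total_load = sum(workloads)
    return math.ceil(total_load / target_utilization)

# Hypothetical: 20 dedicated servers, each application using ~10% of its machine
workloads = [0.10] * 20
print(f"physical servers before: {len(workloads)}, after: {hosts_needed(workloads)}")
```

In this illustration, 20 underutilized machines consolidate onto 3 well-utilized hosts—the kind of reduction in hardware, power, and space that drives virtualization adoption.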

Web 2.0 While better technology is enabling the transition, Web 2.0 is more societal than technology-based. The term refers to the so-called second phase of the World Wide Web. The transition has been underway for quite some time, but it is an ongoing evolution and a concept worth mentioning. The overriding theme is the transition from simple text and graphics to feature-rich multimedia content that can be generated and shared by all users, making the Web a more interactive and social medium (think YouTube, blogs, etc.). The transition, however, can create severe stress on existing network infrastructure due to the vast amount of content generated and shared (e.g., video). So while it may be difficult to pick out the next Facebook, investing in network equipment manufacturers is another way to ride the Web 2.0 wave.
