Chapter 2

Layers and the Evolution of Communications Networks

Abstract

Cloud computing is forcing change beyond just the IT strategies of enterprises; it is core to the businesses of the Internet content providers while also enabling telcos to transform their businesses. The enablers of cloud include data centers crammed with servers, and optical interconnect. Optics is used to link IT equipment within the data center and to provide the high-capacity traffic transport between data centers. This interworking of telecom and datacom technologies is complex but can be simplified by segmenting the networking into layers. This chapter introduces a four-layer model—from wide area networks down to the chip—as a way to describe the hierarchy of interconnects used. It concludes by addressing the role of silicon photonics in this layered networked model.

Keywords

Cloud; silicon photonics; Moore’s law; Internet content providers; layering; telcos; wide area networks; servers; backplane; chips

Layers seem to be everywhere in Mother Nature.

Michael S. Gazzaniga [1]

Everything that has happened in the telecom network is now being replicated inside the data center. And then everything that is happening in the data center is going to be on the board, and then everything on the board is going to be in the package, and then everything in the package is going to be on the chip.

Lionel Kimerling [2]

2.1 Introduction

I [Rubenstein] have my father’s slide rule on my desk. It sits in a dark green box whose worn corners hint at years of use.

For readers familiar only with the digital age: the slide rule was a constant companion of engineers and academics. An ingenious mathematical device, it was used to multiply, divide, calculate roots, and work out logarithms by aligning numbered rulers against each other with the aid of a movable glass window inscribed with a line.

Fast-forward half a century and the WolframAlpha application runs on my tablet computer. I tap in numbers or a function, press a key, and the answer appears in multiple forms: numerically, as functions, and graphically, plotted in several ways.

The tablet does not perform the calculations, although you would not know it, given how quickly the answers appear. And the tablet itself is no mathematical slouch. It is an extremely powerful computer with a microprocessor chip made up of two ARM 64-bit cores, each working to a clock beating over a billion times a second.

Instead, the calculation is sent, via the local service provider, through the Internet to find WolframAlpha’s computers—servers—housed in a data center. The servers do the calculations and the results are returned to the sender. How do I know? Well, if I disconnect Wi-Fi and go offline, a polite screen message appears: “Sorry, you need an Internet connection to do that.” Another reason to keep the slide rule close at hand.

If the tablet is a powerful computer, why send the calculation to some remote location hosting computers and wait for the answer to return? The glib answer is that the network is now sufficiently advanced to enable this. Using optical networking, data can be sent, computed, and returned in a fraction of a second. Here, data in the form of an electrical signal is converted into light and sent over an optical fiber before being restored as an electrical signal at the servers. The same signal translation process is undertaken when the data is returned to the tablet. And WolframAlpha offers more knowledge resources than just math.

2.1.1 The Cloud as an Information Resource

This model of a “dumb terminal” talking to centralized computing is not new. An early example was the mainframe computer where requests were sent via a terminal into a central computing room located in the same building or campus. Now, networking allows for the computing to be hundreds or even thousands of kilometers away from the user, while the computing resources—based on tens of thousands of computers linked in a data center and even between data centers—bear little resemblance to the early mainframes.

Nearly all the operations you undertake online involve interactions with remote computers or servers. The name server reflects how the computer unit “serves up” data and remote applications. Through such user requests—e.g., Internet searches or hitting a “Like” button—a wealth of data is accumulated, including your habits, preferences, and interests.

Such data fuels the businesses of Amazon, Google, and Facebook—referred to as Internet content providers—and explains the rapid rise of new industries such as Big Data. Companies in many segments are building data centers to host and “mine” this treasure trove of information. Car manufacturers, e.g., want data not only about the state of your connected vehicle but how you are driving it, your routes, and where you stop for coffee. In other words, they want to know your lifestyle—and that of everyone else driving their vehicles [3].

The cloud, as described, is characterized largely by human-triggered transactions. But soon it will be common for machines to be connected to the Internet, a development known as the Internet of Things.

Imagine an airborne drone delivering a parcel. To succeed, the autonomous drone must navigate its way to the destination, requiring reliable networking and rapid interaction with remote computing. And the drone will be one of many in the sky.

Estimates suggest that Internet-connected machines could number between 20 and 30 billion as early as 2020 [4]. The amount of data generated, transmitted, and analyzed will accelerate as more and more items become connected, placing new demands on data centers and the network.

Having cloud-based centralized IT resources is also changing the way enterprises work. Instead of enterprises employing their own engineering staff to purchase and maintain hardware and software, cloud providers can do it for them. By centralizing the task for multiple enterprises, the cloud providers can achieve economies of scale that only the largest of enterprises can match.

2.1.2 Optical Networking is Central to the Internet

Two huge industries, telecommunications and data communications, are required for the WolframAlpha application to work. Communications service providers—telcos—operate the networking infrastructure that underpins the Internet, while the Internet businesses create the clever services like the tablet application using cloud computing that run on top of the telecom networks.

Within the data center, cabling and networking switches are used to connect the IT equipment such as servers and storage (Fig. 2.1). The networking also includes Internet Protocol routers and optical transmission equipment to connect the data center to another data center or to the general network, referred to as the wide area network or WAN.

Figure 2.1 Data center networking schematic showing servers, routers, and storage networked with cables and switches.

Some of the largest data center operators own optical fiber and operate networking equipment, but the telcos are required for some part of the data’s journey between users and the information they receive.

2.1.3 Silicon Photonics: A Technology to Tackle the Industry’s Challenges

Both the telecom and datacom industries have pressing requirements that are stretching technologies to the limit. This book concentrates on the business and technology trends, explains the growing role of optical technology in networks, and discusses why silicon photonics is well-positioned to deliver solutions. Moreover, datacom and telecom are just two markets for silicon photonics, albeit two very important ones.

Before reflecting on the industry challenges, it is important to look at telecom and datacom more closely by segmenting these two increasingly intertwined worlds into layers. By highlighting the role of optics at each of these layers, the emerging market opportunities for silicon photonics become clearer.

2.2 The Concept of Layering

Cells, bacteria, and the brain are all examples of complex systems in nature, and they all share a layered structure. Layering turns out to be a useful tool to analyze complex systems, whether natural or man-made [1]. Telecom and datacom combined can be viewed as one such complex system.

Fig. 2.2 shows a four-layered model of cloud and telecoms. What becomes clear is that the layers share common characteristics. For example, the three pillars of the information age—communications, computing, and storage—are found at each of the four layers. What distinguishes the layers is their scale or, more accurately, their reach—an important metric in communications.

Figure 2.2 The four-layered model for cloud and telecoms: chips, platforms, the data center, and telecommunications networks. From: Layer 1 Synopsys, Layer 2 Cisco, Layer 3 Google, and Layer 4 GTT Communications.

Each layer can be viewed distinctly, with its own implementations and rules. This is a common characteristic of layered systems and explains the power of the approach: breaking down the system into its layered elements and then analyzing each layer allows a complex system to be simplified. In turn, details can be encapsulated and abstracted at the individual layers.

The layers are now described starting with Layer 4, the telecom network, and drilling down to the shortest reach, Layer 1: the chip level.

2.3 The Telecom Network—Layer 4

The telecom network is the topmost layer, Layer 4. This is not one network but a collection of networks, run and managed by the telcos. The networks range from the link between your home and the local exchange to global communication links that carry traffic across continents and even between continents using submarine cables (Fig. 2.3).

Figure 2.3 A telco network including long-haul, metro core, metro access, and premises, and their typical reaches. From EXFO, http://www.exfo.com/solutions/metro-core-networks/bu2-bu3-packet-optical-transport/technology-overview.

Telcos may also provide cloud-based services, but for this discussion we focus on their role in enabling the cloud’s workings in general by overseeing the complex networks that underpin the Internet.

Telcos’ networks serve large enterprises and billions of individual users. The telcos also offer wholesale services to other operators. Their businesses generate huge revenues: the world’s 15 largest telecom operators have annual revenues that, when combined, exceed one trillion (one thousand billion) US dollars, while the total telecom operator market is worth nearly $2 trillion annually.

The networks in Layer 4 can be categorized into three types: long-haul, metropolitan (metro) and regional, and access.

• Long-haul networks span distances from 1000 km to as many as 13,000 km in the case of pan-Pacific submarine cabling. The networks use dense wavelength-division multiplexing, an important optical technology first used in the network in the 1990s. Dense wavelength-division multiplexing sends light at different wavelengths or colors down an optical fiber. Each color is like a unique traffic lane. With modern optical transmission technology, the highways carrying this traffic are often designed with 96 lanes, each one carrying 100 Gb—100 billion bits—or even 200 Gb of data each second. Since optical transmission usually sends the data in one direction down a fiber, a fiber pair is needed to send traffic in both directions.

• Metro and regional networks serve a city or a region, respectively. Distances here range from 40 to 1000 km, with dense wavelength-division multiplexing the predominant technology used. These networks are important for cloud computing because they link data centers sprinkled across a metropolitan area. Data center interconnect is growing in importance, but it is just one traffic type of many carried by metro and regional networks.

• The access network refers to the edge of the network, closest to the user or an enterprise. Distances here span to 40 km, although most access links are shorter. The radio access network—the radio part of the mobile network linking the cellular mast and your handset—and the fixed or wired network that brings broadband to your neighborhood and home are located here. Radio technologies, copper wires such as the legacy telephone network, and optical broadband technologies like fiber-to-the-home are used in the access network.
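A quick back-of-envelope calculation shows the scale these long-haul figures imply. This is a sketch only: the 96-channel and 100-200 Gb/s numbers are the illustrative values quoted above, not a specification of any particular system.

```python
# Back-of-envelope DWDM fiber capacity, using the illustrative
# figures above: 96 wavelengths ("lanes") at 100 or 200 Gb/s each.

def fiber_capacity_tbps(channels: int, rate_gbps: int) -> float:
    """Total one-direction capacity of a DWDM fiber, in terabits per second."""
    return channels * rate_gbps / 1000  # 1 Tb/s = 1000 Gb/s

print(fiber_capacity_tbps(96, 100))  # 9.6 Tb/s per fiber
print(fiber_capacity_tbps(96, 200))  # 19.2 Tb/s per fiber
```

A fiber pair, needed for two-way traffic, doubles the deployed fiber but not the one-direction capacity figure.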

Telcos also operate aggregation points—buildings that house lots of equipment—in their networks, known as points-of-presence. They also run central offices, where equipment that aggregates traffic—e.g., radio and broadband data—resides. And as the operators increasingly embrace servers to deliver services, these buildings are now also housing data center equipment.

2.3.1 Cloud Computing is Driving the Need for High-Bandwidth Links Between Data Centers

The linking of data centers owned by the large Internet companies has become an important subcategory of optical networking. One reason is the rapid construction of data centers worldwide and the central role they play for Internet businesses, a topic addressed in Chapter 6, The Data Center: A Central Cog in the Digital Economy. These players are some of the most profitable companies in the world. Another reason why linking data centers is important is the significant and growing traffic they need to transport, which is leading to new equipment requirements; this is addressed in Chapter 5, Metro and Long-Haul Network Growth Demands Exponential Progress.

Statistics shared by two Internet giants reveal how the worlds of telecom and datacom have become intertwined.

Google has stated that a single Internet search query travels on average 2400 km before it is resolved [5]. The huge resources used by, and the sophistication of, Internet search algorithms are well documented. But what is revealing is how resolving a query can involve distributed data centers. Moreover, no matter how advanced Google’s data center architecture and search algorithms are, dense wavelength-division multiplexing communications must play a role given the distances involved.

A second example comes from Facebook, where a single user request (a hypertext transfer protocol or http request) to one of the company’s web servers generates a near 1000-fold increase in internal server-to-server traffic [6]. What this highlights is how much internal data center traffic is generated from a simple transfer across the wide area network. Clearly, the amount of traffic in and between a company’s data centers is far greater than the flow of traffic from the wide area network to and from the data centers. Indeed, estimates suggest that three quarters of data center traffic remains inside the data center [7].
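Facebook’s figure can be made concrete with a toy calculation. Only the roughly 1000-fold amplification factor comes from the statistic above; the incoming traffic rate is a hypothetical input chosen for illustration.

```python
# Toy illustration of east-west (server-to-server) traffic amplification.
# Only the ~1000x factor is from the text; the incoming rate is hypothetical.

incoming_gbps = 1      # hypothetical WAN-facing request traffic, in Gb/s
amplification = 1000   # near 1000-fold server-to-server increase per request

internal_gbps = incoming_gbps * amplification
print(internal_gbps / 1000, "Tb/s of internal traffic")  # 1.0 Tb/s
```

Even a modest trickle of user-facing traffic thus implies terabit-scale flows inside the data center, consistent with the estimate that three quarters of data center traffic never leaves it.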

Data centers can be distributed across several buildings in the same industrial park (access network distances), across a metropolitan area (metro distances), or, as we have seen, across countries or continents (long-haul distances). The amount of capacity needed to link data centers between local buildings or across a metro is huge: tens and even hundreds of terabits of capacity. A terabit is 1000 Gb. Links between data centers over long-haul distances are more akin to traditional long-haul dense wavelength-division multiplexing traffic. But unlike a telco’s more general traffic, these tend to be point-to-point links.

2.3.2 The Importance of Optics for Communications

The discussion here has focused on Layer 4 as a communications layer, but storage and computing are also required. Storage is used, e.g., in IP routers to buffer the incoming IP packet streams until they are processed and routed. The network also needs to be controlled, and that requires software running on general-purpose microprocessors used in telecom platforms and, increasingly as telcos adopt datacom practices, on servers too.

From a communications perspective, optics is fundamental in enabling telecom networking. Electrical cables do not have the same reach as fiber, nor can they match fiber’s information-carrying capacity, i.e., its bandwidth. Telecom started out over a century ago sending telephone calls over copper wire as analog signals. Aggregating all the calls coupled with other services running on the network soon required large-capacity pipes, and this accelerated further with the advent of digital data and services, causing telcos to turn to optical technology to send the summed traffic over long distances. Now phone calls—even high-definition voice calls—are but a trickle in the deluge of digital data types sent across the global telecom network.

Optical technology is the foundational layer on which the telecom network is constructed. And whenever optics is involved, so exists a market opportunity for silicon photonics.

2.4 The Data Center—Layer 3

The data center is a cavernous warehouse crammed with information technology equipment. Walk inside one and you will see rows and rows of racks, each hosting tens of servers stacked on top of each other (Fig. 2.4). The largest data centers can host 100,000–200,000 servers.

Figure 2.4 Equipment inside a Facebook data center. © Copyright Facebook & Steve Tague Studios.

Other key IT equipment in the data center includes storage and networking. Storage is needed to hold the data processed by the servers, while networking connects the storage with the servers. Because workloads require huge computational resources—servers organized in clusters—how the data center’s equipment is networked is key to its processing performance, efficiency, and overall costs. This is the subject of Chapter 7, Data Center Architectures and Opportunities for Silicon Photonics.

Data centers have several important metrics. One is size, measured in terms of the floor space a data center occupies. The largest Internet businesses run huge data centers that can occupy more than 1 million square feet of floor area, and are referred to as mega- or hyperscale data centers.

Another important data center measure is power consumption. This includes the power consumed by all the equipment plus the additional power needed for the cooling systems to extract the heat generated by the equipment. The goal is to maximize the power efficiency of equipment while minimizing the power consumed for cooling. The issue of power consumption in the data center is also addressed in Chapter 6, The Data Center: A Central Cog in the Digital Economy.

A further distinction associated with data centers is the type of company operating them. The data center is at the core of the Internet business players such as Amazon, Facebook, Google, and Microsoft. These companies architect their data centers to maximize the efficiency of their operations. But since their operations differ, so do their requirements. IT is fundamental to all their businesses, though, and each introduced efficiency directly impacts their bottom line.

Large enterprises typically run smaller-sized data centers. Such data centers are clearly important for these companies’ operations but are not their central businesses. As such, they will not employ as many data center staff as the Internet businesses, nor will they be at the forefront of driving technological progress. Indeed, these enterprises will be the main beneficiaries once the technological advances driven by the Internet businesses become mainstream.

2.4.1 Data Center Networking is a Key Opportunity for Silicon Photonics

The data center is one of the most dynamic arenas driving technological innovation, whether in servers, storage, networking, or software. As such, the data center represents a key market opportunity for silicon photonics. Server racks continue to evolve and need faster networking between them. And while copper cabling plays an important role in linking equipment in the data center—what we refer to as Layer 3—optical interconnect is gaining in importance as the amount of traffic rattling around the data center grows.

Copper cabling can link equipment up to 10 m apart, but it is bulky, which can restrict equipment airflow and impede cooling. Copper cabling is also heavy: the sheer weight of connections on the front panel of servers or switches has been known to cause accidental disconnections.

Optical connections are lighter, less bulky, and cover much greater distances—500 m, 2 km, and 10 km—more than enough to span the largest data centers.

Such connections are situated on the front of the equipment, referred to as the faceplate. The faceplate typically supports a mix of interfaces and technologies: electrical interfaces using copper cabling as well as optical links using fiber connectors and optical modules. Optical modules are units that plug into the faceplate. Modules support all sorts of speeds and distances within the data center (see Appendix 1: Optical Communications Primer). Speeds include 1, 10, 40, and 100 Gb/s; copper links may range up to a few meters, while optical links span up to 10 km. The optical modules, pluggable or fixed, can also use dense wavelength-division multiplexing to reach Layer 4 metro and long-haul distances. Such optical connections have become an early and obvious market for the silicon photonics players to target.

2.5 Platforms—Layer 2

The next layer down, Layer 2, is the equipment itself. For servers, these are typically boxes or rack units containing a printed circuit board on which sit the chips and optics. Rack units can be stacked in a chassis up to 2 m high (Fig. 2.5). Data center managers can populate the chassis with as many rack units as required, while individual units can be changed without affecting the platform as a whole.

Figure 2.5 Cisco Systems’ Nexus 7700 switch rack. From Cisco.

We define Layer 2 as communications within equipment—between boards or between rack units in a chassis. Typical distances are a few meters.

Telecom equipment is built differently from the IT equipment used in the data center. Telecom systems have slots across a shelf, each slot housing a printed circuit board vertically. The platform can be made up of one shelf or several stacked shelves. Telecom equipment is also built to a more stringent standard and must undergo lengthy qualification before deployment. This is because telecom equipment may be deployed in the network for 20 years or more, whereas equipment in the data center may be replaced every 2–3 years typically.

The backplane of a platform refers to the internal bus that connects the cards within the chassis. There has been repeated talk of systems moving from an electrical to an optical-based backplane, and such platforms have started to emerge commercially [8]. Oracle is one vendor that has announced switches—used to connect servers in the data center—that have an optical backplane. Another company is Ericsson, which has developed a server that uses optics to connect various elements making up the system. Such a design is known as a disaggregated server and is discussed in Chapter 7, Data Center Architectures and Opportunities for Silicon Photonics.

But Oracle and Ericsson are trailblazers; electrical signaling continues to be the dominant approach. Backplane electrical signaling is already at 28 Gb/s, and work is underway to double the rate to 56 Gb/s. At such high speeds, electrical backplane interfaces are limited to distances of up to a meter.

Needless to say, silicon photonics’ opportunities at Layer 2 include all applications of optics for communications within a system, including the backplane.

2.6 The Silicon Chip—Layer 1

At the bottommost layer sits the silicon chip. Inside the chip, the internal signaling is sent across electrical busses to other digital logic building blocks. Microprocessors, the chips at the heart of servers, are made up of multiple central processing units or cores that are connected and operate in parallel, as well as memory (storage) blocks and peripheral circuits.

Examples of chips for telecom include the network processor, a specialized processor designed for processing traffic in the form of Ethernet frames and IP packets, and the coherent digital signal processor used for high-speed metro and long-distance connections. A digital signal processor is a chip designed to do lots of math—multiplies and adds—as quickly as possible. Layer 1 interconnect covers communications within chips, between chips on the same card, or between the chip and the faceplate (Fig. 2.6).

Figure 2.6 Line card showing connectivity to chips, faceplate, and backplane. Synopsys.

Layer 1 is probably the most exciting opportunity for silicon photonics—exciting in the sense that it promises larger volumes and because optics will bring to chips far more information-carrying capacity or bandwidth than electrical input–output. And that promises new styles of system designs.

Longer term still, the technology could even be used to connect the functional blocks within a chip. If the data center with its huge computing, storage, and interconnect resources can be seen as a hyperscale distributed computer, silicon photonics linking multiple processor cores and memory within a chip can be viewed as performing a similar function, just on a scale two layers down. However, these are the shortest distances and copper has enough performance headroom to be the technology of choice for the foreseeable future.

The layered model representation of datacom and telecom highlights how optics is present at each of the four layers. What differs are the reaches, which diminish with each descending layer. Other differences include prices and unit volumes. Simply put, the use of optics over shorter distances equates to larger volumes overall, optics within chips being the extreme example.

2.7 Telecom and Datacom Industry Challenges

The leading US telco, AT&T, describes its role as providing fast, secure connectivity to everything on the Internet, independent of location and device [9]. This is an apt description for the role of the telecom industry in general.

The telcos provide mobile and fixed connectivity, and their networks carry IP traffic that continues to grow exponentially. Cisco Systems, a leading provider of IP core routers, forecasts that globally, IP traffic will nearly triple between 2015 and 2020, a compound annual growth rate of 22% [10].
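The two numbers in the Cisco forecast are consistent with each other, as a quick check of the compound growth shows:

```python
# Check that a 22% compound annual growth rate over 2015-2020
# does indeed "nearly triple" traffic.
cagr = 0.22
years = 2020 - 2015             # five growth periods
growth = (1 + cagr) ** years
print(round(growth, 2))         # prints 2.7: nearly triple
```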

With such rapid growth, it does not take many years before the consequences for network traffic are felt. For telcos, traffic growth means having to invest continually in network infrastructure, addressing pinch points as they arise, whether in the long-haul, metro, or access networks that collectively make up Layer 4.

The telcos also operate in a competitive and regulated marketplace. The fierce competition comes not just from other telcos but from nimble Internet giants that deliver their services over the telcos’ networks. The complaint often heard from telcos is that they have all the costs of running and investing in their networks while Internet businesses make lots of money with services that ride on top. Examples include streaming video service providers and messaging companies, known as over-the-top players.

Given that telcos have annual revenues totaling $2 trillion, not everyone is sympathetic. The New York Times in an editorial called this argument disingenuous. Telcos make money by charging customers monthly fees for accessing the network; if revenues were inadequate, operators would raise prices or become insolvent, it said [11].

A positive aspect of the competition is that the telcos are transforming their services and embracing practices pioneered by the Internet businesses. Telcos are also acquiring IT technology services companies, building their own data centers, and offering their own cloud-based services.

2.7.1 Approaching the Traffic-Carrying Capacity of Fiber

The telcos are being challenged financially to invest in their networks as demand for capacity grows faster than their revenues (Fig. 2.7). What they need are technologies that help them add capacity to their networks at a lower cost-per-bit.

Figure 2.7 Telco revenues are not growing as fast as Internet traffic. Based on revenue figures from Ovum, traffic volumes from Cisco.

But telcos must address the problem of optical fiber reaching its capacity limit. They will no longer be able to keep scaling capacity as they have over the last 20 years. New technologies, perhaps even new fiber, will be needed to support the growth in bandwidth demand.

These problems will not be solved by continuing with the present setup; they require technology innovation—hence the rallying cry for solutions that extend optics beyond what is used today. This issue is explored in Chapter 5, Metro and Long-Haul Network Growth Demands Exponential Progress.

2.7.2 Internet Businesses Have Their Own Challenges

Internet businesses’ very success is forcing them to intervene to spur technological developments. What helps is that their growth and buzz mean that their requests are met by a receptive vendor community.

Internet content providers have the financial clout to develop their own solutions or entice vendors to change direction and develop solutions for them ahead of the market. Indeed, the current period has brought about the most fundamental rethink of IT in decades—everything is up for review.

There are good reasons for this. One is that data center workloads are evolving quickly. The largest data center operators employ custom equipment configurations tailored to their core business tasks. Efficiency is key; the software algorithms and data center equipment the Internet content providers run are their engines of growth and they will do everything—including spurring the making of custom equipment—to achieve an edge.

The changing workloads are not just leading to a rethink regarding computing but also a reimagining of networking and data storage. And it is happening on many levels: where data centers are located, their interconnect requirements, how servers are designed, whether data transfers between the server processors and storage can be reduced to conserve power, and how best to connect servers, not just locally but across and between data centers.

2.7.3 Approaching the End of Moore’s law

Evolving workloads are not the only force for change. The fundamental driver of first the microelectronics industry and now the nanoelectronics industry, described by chip pioneer Gordon E. Moore in his famous law 50 years ago, is running out of steam. The chip giants continue to shrink the transistor, the elemental switch used to make integrated circuits. But the performance benefits described by Moore’s law, as will be explained in Chapter 3, The Long March to a Silicon-Photonics Union, no longer scale.

Shrinking the transistor further is becoming exorbitantly expensive, which means the most basic economic engine of the information age is slowing down. The end of Moore’s law is the backdrop to the changes taking place in the cloud, and inevitably it will make cloud optimization more challenging.

The Internet content providers are therefore thinking anew, and there is no agreement on the best approach to improving computing efficiency. It also explains why the web giants are developing industry initiatives such as Facebook’s Open Compute Project [12], spurring the development of additional Ethernet rates, and creating industry consortia for such tasks as developing optics to connect chips [13]. In the future this period will be viewed as one of great upheaval, but it is also a period of significant opportunity, for new players and new technologies.

2.8 Silicon Photonics: Why the Technology Is Important for All the Layers

Silicon photonics is a technology that has the potential to tackle applications across the layers: from long-distance, high-capacity telecom links at Layer 4 to compact optical devices doing battle with copper in the data center and down to Layer 1 distances.

Silicon photonics is coming to prominence because of its potential to tackle key emerging systems issues at the chip level (Layer 1) and for equipment (Layer 2 and Layer 3), namely their growing bandwidth requirements and power consumption.

The issues of connectivity and power consumption have become hugely important considerations regarding the data center. Interconnect costs in the data center and between data centers are considerable, and the market is already feeling the effects of Moore’s law coming to an end. New technologies and approaches are needed to drive chip and system design forward—systems in the data center and equipment for Layer 4.

Silicon photonics, which shares a common silicon base for optics and electronics, is one important technology in the toolkit engineers are considering for what is referred to as Beyond or More than Moore’s law. This includes the promise of merging optics and electronics, thereby avoiding the bandwidth-reach limitations and the power required to send signals between chips, and thus improving overall system performance. Silicon photonics can also reduce total power consumption: long resistive electrical traces require energy to drive the signal, and the energy requirement rises with data rate.

Yet another promise of silicon photonics is integration: the ability to combine numerous optical functions. Subsuming more and more functionality on a chip has been fundamental to the success of the chip industry. Optics is not like chip-making, as will be explained in Chapter 3, The Long March to a Silicon-Photonics Union, but optical integration has been fundamental in reducing system costs. And it is integration that will enable novel silicon photonics applications beyond telecom and datacom, such as sensors.

We see silicon photonics as a hugely promising optical technology that will play an integral role in the evolution of cloud computing and telecom at each of the four layers outlined. Silicon photonics also benefits integrated optical designs and the commingling of optics and electronics. Optical and electrical integration promises to lower cost—purchasing multiple parts is more expensive than buying fewer integrated devices because prices are marked up for each part, known as margin stacking.

In Chapter 3, The Long March to a Silicon-Photonics Union, we look at the origins of silicon photonics, how it differs from the chip industry, and how a slow but increasing union between electronics and optics is taking place.

Key Takeaways

• Telecom and datacom form a complex system that can be analyzed using a four-layered model.

• Each of the layers can be viewed as self-contained. The layers also share common attributes: all use communications, computing, and storage. What distinguishes the layers is distance or reach, pricing, and volumes.

• Optical technology plays an important role at each of the layers and, as such, so will silicon photonics.

• Silicon photonics will bring several important benefits to the layers including size, cost, and power consumption efficiencies for components and equipment.

• The greatest contribution of silicon photonics, however, will be to advance performance metrics beyond what is possible with existing technologies. With this will come new, more efficient system designs for networking, computing, and storage—attributes demanded by the telcos and especially the Internet content providers.
