Chapter 5

Metro and Long-Haul Network Growth Demands Exponential Progress

Abstract

The relentless growth in traffic is presenting designers with the challenge of impending network capacity exhaust. Network operators have experienced a continual reduction in the cost of optical transport, measured in cost per transported bit. But with network traffic now filling fibers, this cost will rise. Internet content providers, experiencing the fastest traffic growth as a consequence of their large-scale data centers, are set to meet this challenge first.

The Internet content providers and telcos will do everything they can to make best use of their existing capacity, while equipment vendors will focus on cost-reducing optical transport equipment at each end of the link. Optical transport systems can already fill a fiber with data. The next step will be to make such systems more compact, more power-efficient, and cheaper. This will require innovation at the system, optical module, and circuit levels. Silicon photonics is already being used for optical transport for telcos and the Internet content providers; the rise of transport costs will further spur the technology’s use.

Keywords

Dense wavelength-division multiplexing; data center interconnect; modulation; optical transport; photonic integrated circuit; silicon photonics; traffic growth; cost-per-bit; coherent optics; direct detection

Growth is what matters. People don’t take big risks and do interesting things to attack flat or contracting markets.

Andrew Schmitt [1]

With the relentless growth in traffic, telecom needs something new. You can’t just take what we have and continue to cost-reduce.

Karen Liu [2]

5.1 The Changing Nature of Telecom

The first transcontinental telephone call was made just over a century ago. The conversation took place between inventor Alexander Graham Bell in New York City and his San Francisco–based former assistant, Thomas Augustus Watson [3].

One hundred years later, voice remains a core telecom service, although it is now a stream of 1s and 0s. And in this new digital world, the world’s leading communications service providers (CSPs) or telcos—the likes of AT&T, China Mobile, Deutsche Telekom, and NTT—offer much more than voice. The telcos deliver entertainment services such as streamed video and music as well as business services distributed over their sophisticated fixed and mobile networks. Indeed, perhaps the telcos’ most prized subscriber offering is not so much a service as a piece of hardware: the smartphone, as powerful as a laptop computer yet small enough to be inseparable from its user.

Telcos recognize the importance of owning the connection linking a subscriber to their networks. Their strategy is to secure a subscriber via a handset or a broadband residential gateway and then sell them services. But such connectivity is also central to the businesses of the Internet content providers, such as Google, Apple, Facebook, Microsoft, and Amazon, that deliver “over-the-top” services over those same telcos’ networks.

Fixed and mobile telcos dominate the service providers’ revenues, based on data from market research firm Ovum, as shown in Fig. 5.1. But their revenues are largely flat and are expected to remain so through 2020. There are good reasons for this: the telecommunications industry is fiercely competitive, markets are regulated heavily, and there is a limit to how much a subscriber or household will pay each month for telecom services.

Figure 5.1 Global service provider revenue forecast (millions), 2008–20. Courtesy of Ovum.

The ability to deliver services over a secure network to millions of users at home, at work, and when on the move remains a key telco strength. And telcos are seeking new opportunities to grow their businesses. These include investing in TV, video, and content [4], cloud services [5], Internet businesses [6] and, in the case of Softbank, acquiring the leading chip player, ARM Holdings [7]. Another development that promises to have a huge impact across many industries is the Internet of Things, an emerging market that will expand greatly the number of machines linked to the Internet.

The Internet content providers, in contrast to the telcos, are much younger companies and are experiencing sharp growth with their massive customer numbers and targeted advertising, the subject of Chapter 6, The Data Center: A Central Cog in the Digital Economy. Their growing revenues and continual striving to advance their businesses using datacom and telecom technologies are an attractive lure for equipment makers and optical module and component companies. As analyst Andrew Schmitt notes, technology firms do not take big risks and do interesting things to attack flat or contracting markets [1].

Fig. 5.2 shows the capital expenditures of the different types of service providers, forecasted through to 2020. Capital expenditure, or capex, refers to how much service providers spend on equipment and premises. Mobile CSPs—wireless carriers such as Verizon Wireless and T-Mobile—also include buying radio spectrum as part of their capex.

Figure 5.2 Global service provider capital expenditure forecast (millions), 2008–20. Ovum.

Telcos also account for the bulk of the overall providers’ capex spending, as shown in Fig. 5.2. Approximately 10% of the telcos’ capex is spent on optical networking. This makes telcos the dominant customers for optical communications, having overseen the critical technological steps that have given rise to cost-effective, wavelength-division-multiplexed optical transmission. The telcos’ involvement is not happenstance; their businesses depend on such advances.

Meanwhile, the strong growth of the Internet content providers, forecast to continue in the coming years, means their spending cannot be ignored. The Internet businesses are experiencing rapid growth in traffic, driving their demand for networking capacity. And that is leading them to drive new requirements for optical systems and components.

Equipment vendors are responding to the Internet content providers’ demands with optical transmission gear tailored to their needs. This is a significant development. It is the first time in recent history that custom optical equipment has been made for an end customer other than a telco—an equipment category known as data center interconnect and discussed in Section 5.4.

The continual growth in traffic is presenting telcos and Internet content providers with a new challenge: transport costs are finally set to rise.

In a trend akin to Moore’s law, network operators have experienced a continual reduction in the cost of optical transport, measured in cost per transported bit. But network traffic growth is filling up fibers—for years seen as having boundless capacity—and this will raise the cost of optical transport. Internet content providers, with their large-scale data centers, are experiencing the fastest growth in traffic and will meet this challenge first, as discussed in Section 5.2.

The impending rise in transport costs will have important consequences:

• The Internet content providers and the telcos will do everything they can to fully use their existing installed fiber.

• Optical transport equipment vendors will focus on cost-reducing their platforms. Systems can already fill a fiber with data. The next step will be to make such systems more compact, more power-efficient, and cheaper.

Both developments will require innovation at the system, optical module, and component levels. Optical integration—whether in the form of smaller pluggable modules, embedded modules, new superchannel transponder designs, or even space-division multiplexing—will play a role.

This chapter highlights how silicon photonics has already established itself as a design alternative to indium phosphide, a notable achievement given that the technology has only recently been deployed for long-distance optical transport.

Silicon photonics has yet to demonstrate a telling advantage. Line-side optics volumes are relatively modest, making it harder for silicon photonics to differentiate itself. But the advancement in signaling schemes and the close interworking of optics and electronics, coupled with the trend to develop ever more compact platforms, indicate opportunities for silicon photonics.

In turn, leading optical transport vendors now have silicon photonics technology in-house. And it is the systems vendors, with their larger investment budgets and need for product differentiation, that will likely advance silicon photonics for optical transport more than the optical component and optical module makers.

5.2 Internet Businesses Have the Fastest Network Traffic Growth

Internet traffic carried by the telcos’ networks is growing exponentially each year, at an estimated average rate of 20–30%. There are several reasons for such growth: more subscribers and machines are connecting to the network each year, and the nature of the interactions is evolving. Enterprises’ data is increasingly being stored remotely—in the cloud—while the use of video, a particularly demanding service in terms of network capacity, is growing.

Although the telcos’ 20–30% annual growth is significant, it turns out to be at the modest end of the spectrum. Cable operators are experiencing 60% growth annually, whereas the Internet content providers are seeing 80–100% growth year-on-year [8]. With an annual near-doubling of traffic, it does not take many years to stretch the capabilities of optical transport systems and the network.
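To put this in concrete terms, the short Python sketch below estimates how many annual growth cycles it takes to exhaust a fixed amount of fiber capacity at the growth rates quoted above. It is a back-of-envelope illustration only; the assumption that a link starts with ten times headroom is ours, not drawn from any operator data.

import math

def years_to_fill(headroom_factor, annual_growth):
    # Years until traffic grows by headroom_factor at the given annual growth rate.
    return math.log(headroom_factor) / math.log(1 + annual_growth)

headroom = 10.0  # assume, purely for illustration, a link running at 10% of fiber capacity
for label, growth in [("telco (30%)", 0.30), ("cable (60%)", 0.60), ("Internet content provider (100%)", 1.00)]:
    print(f"{label}: fills the fiber in about {years_to_fill(headroom, growth):.1f} years")

# Output: roughly 8.8 years at 30% growth, 4.9 years at 60%, and 3.3 years at 100% growth.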

In the early 2000s, typical dense wavelength-division multiplexing systems supported up to 80 10-Gb wavelengths, or 0.8 Tb. Nortel Networks, then the leader in such systems, first announced in 1999 its OPTera 1600G system that supported as many as 160 10-Gb wavelengths, a huge capacity at the time. But this was during the optical boom and bandwidth exuberance; no telco needed such capacity.

Today’s long-haul systems support 96 100-Gb wavelengths, nearly 10 Tb of capacity overall. And state-of-the-art systems implementing advanced modulation schemes that support 200 Gb on an optical carrier, or wavelength, further boost a system’s optical transport capacity to 20–25 Tb, albeit over shorter link distances than systems operating at 100 Gb/s. Nokia’s PSE-2-based systems can support up to 35 Tb over a fiber in the C band, but only when the distances are short: 100–150 km [9]. Nokia has also announced that its system will support the L band, doubling the capacity to 70 Tb: 35 Tb across the C band and 35 Tb across the L band [10].

The optical industry is thus rising to the challenge of accommodating the exponential growth of Internet traffic. But hurdles remain. Commercial systems are approaching the upper limit of how much can be carried on a fiber, known as the nonlinear Shannon limit [11,12], for transmission distances of hundreds and thousands of kilometers.

The first thing to note about the nonlinear Shannon limit is that the traffic-carrying capacity of a fiber varies with distance, as shown in Fig. 5.3. Much more data can be sent over shorter spans of a few hundred kilometers than over distances of several thousands of kilometers.

Figure 5.3 How the nonlinear Shannon limit of fiber varies with distance. Based on information from Bell Labs.

The y-axis of Fig. 5.3 measures how efficiently a transport system uses the fiber’s available bandwidth. This is referred to as the spectral efficiency and is measured in bits/s/Hz. Note that the nonlinear Shannon limit defines the maximum spectral efficiency possible over the fiber for any given distance.
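For reference, the idealized Shannon limit for a linear channel puts an upper bound on spectral efficiency. This is a textbook relation rather than a formula from this chapter, with the factor of two accounting for the two polarizations of light:

\[
\mathrm{SE}_{\max} = 2\log_{2}\!\left(1 + \mathrm{SNR}\right) \ \text{bits/s/Hz}
\]

In fiber, the signal-to-noise ratio cannot be raised indefinitely because increasing the launch power also increases nonlinear distortion, which is why the achievable spectral efficiency in Fig. 5.3 falls as distance—and hence accumulated noise—grows.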

Lastly, Fig. 5.3 also shows the upper boundary of the capacity-filling performance achieved by leading-edge optical systems demonstrated in research laboratories, confirming that systems are already doing an excellent job in exploiting the capacity of fiber.

Current commercial optical transport systems carry 100 Gb over a 50-GHz-wide channel, a spectral efficiency of 2 bits/s/Hz. Fig. 5.3 shows how distances in excess of 10,000 km can be supported using such a scheme.
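These figures can be cross-checked with a couple of lines of arithmetic; the usable C-band spectrum below is simply inferred from 96 channels of 50 GHz each.

channel_width_ghz = 50.0
bit_rate_gb = 100.0
spectral_efficiency = bit_rate_gb / channel_width_ghz          # 2.0 bits/s/Hz
c_band_ghz = 96 * channel_width_ghz                            # ~4800 GHz of usable C-band spectrum (inferred)
fiber_capacity_tb = spectral_efficiency * c_band_ghz / 1000.0  # ~9.6 Tb, the "nearly 10 Tb" cited earlier
print(spectral_efficiency, fiber_capacity_tb)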

The need to transport more traffic has led to the adoption of higher transmission rates and higher spectral efficiencies. But these benefits come at the expense of transmission distance. Note, this is a relatively new problem; previous generations of optical transport systems used transmission schemes with a spectral efficiency well below the nonlinear Shannon limit.

Designers are working to reduce the cost of transmission by better using the available capacity. This involves using more complex signaling schemes to effectively increase the spectral efficiency. But because much of the existing capacity is already being exploited, the scope for cost reduction of future systems is limited. This is the significance of optical transport systems approaching the nonlinear Shannon limit.

5.3 The Market Should Expect Cost-per-Transmitted-Bit to Rise

Once a fiber’s transmission band reaches its capacity limit, the cost jumps for the very next bit transmitted. That is because the two options available to network planners today both involve upfront investment when adding more link capacity.

The planners can either light a separate fiber, or they can use an additional part of the fiber’s spectrum—the L band—alongside the C band, the spectral band used currently for optical transport based on dense wavelength-division multiplexing. However, both cases increase the cost-per-bit, a scenario not experienced during the recent history of optical transmission.

Fig. 5.4 summarizes the impact of the per-bit transmission capital cost as data rates increase to meet growing bandwidth requirements.

Figure 5.4 Relative transmission cost by data rate. Ovum.

To be able to compare the relative costs of the different transmission approaches that have been adopted, several assumptions are made. The first is that the optical transport systems are fully loaded with line cards used for optical transmission.

The analysis compares the cost of a 1000-km link and assumes optical amplifiers are spaced every 80 km along the link. Other elements used include multidegree reconfigurable add-drop multiplexers (ROADMs) that support newer 100- and 400-Gb wavelengths or lightpaths, simpler ROADMs for 10- and 40-Gb lightpaths, and fixed optical add-drop multiplexers for the older 2.5-Gb wavelengths. A ROADM is a network element used to switch optical wavelengths between fibers as well as to add new wavelengths or drop wavelengths at a network hub site. A multidegree ROADM allows for such lightpaths to be switched between multiple fiber pairs in a flexible way.

Other assumptions include the use of 40 channels for the 2.5-Gb network, 88 channels at 10 Gb, and 96 channels for the rest. Cost is based on 2015 prices and normalized with respect to 2.5 Gb. The data comes from Ovum’s 2015 Optical Networking forecast [13].

The levers used to increase fiber capacity are the data rate, the spectral width a wavelength occupies, and the number of wavelength-division-multiplexed channels. At 2.5 Gb/s, the spectral width is 100 GHz and the maximum channel count is 40, for a total capacity of 100 Gb.

Using 10-Gb lightpaths presents technical challenges. Simply put, the wavelengths are dispersed as they travel across the fiber, and dispersion compensators—extra components—are needed, thereby increasing equipment cost. But the increased data rate, narrower (50 GHz) spectral width, and the network’s ability to transmit 88 wavelength-division-multiplexed channels results in a per-bit cost that is lower than at 2.5-Gb/s transmission.

The 40-Gb data rate had a short deployment life, from 2005 to 2009, before being superseded by 100-Gb lightpaths, yet it still drove down the per-bit cost. Systems supported 96 channels at 50-GHz spacing for transmission in the C band. The latest network deployments, using 100- and 200-Gb lightpaths, have a cost based on multidegree ROADMs and 96 channels in the C band.
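The per-fiber capacities implied by these assumptions can be tabulated directly from the stated channel counts and data rates; this reproduces the capacity side of the comparison, not Ovum’s cost data.

generations = {
    "2.5 Gb": (2.5, 40),   # 100-GHz channel spacing, fixed optical add-drop multiplexers
    "10 Gb":  (10, 88),    # 50-GHz spacing, dispersion compensation required
    "40 Gb":  (40, 96),    # 50-GHz spacing
    "100 Gb": (100, 96),   # coherent detection, multidegree ROADMs
    "200 Gb": (200, 96),   # coherent, higher-order modulation, shorter reach
}
for name, (rate_gb, channels) in generations.items():
    print(f"{name}: {rate_gb * channels / 1000:.2f} Tb per fiber in the C band")

# 0.10, 0.88, 3.84, 9.60 and 19.20 Tb, respectively.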

Once a network planner needs to transmit more data than a fiber’s C band can support, they must either light a new fiber or transmit in the L band alongside the C band. Both scenarios increase the cost of transmission. Both capital and operational costs increase if current technology is used, since the operator is effectively running two networks to accommodate traffic growth. The cost increases because all the common equipment—the amplifiers and the ROADMs—must be duplicated to support the extra traffic. And extra equipment is needed at both ends of the fiber.

To continue reducing transmission cost, what is needed is to integrate common equipment so that it can be shared among multiple transmission bands. To reduce operational costs, the amount of transmission equipment, the space it occupies, and the power it consumes must all come down. This implies that greater integration will be needed to reduce transmission cost. This is where silicon photonics can play a role.

5.4 Data Center Interconnect Equipment

Section 5.1 mentioned the significance of the Internet content providers in terms of their spending on equipment and how this has resulted in optical equipment vendors developing data center interconnect platforms tailored to those providers’ needs. These data center interconnect platforms highlight several important trends:

• Optical transport equipment is becoming denser in terms of the transmission capacity that can be fitted in a box, and this trend will continue.

• A single, albeit stackable, platform is now capable of filling several fibers’ worth of capacity.

• Data center interconnect vendors are already offering platforms that use both the C and the L bands to expand dense wavelength-division-multiplexed optical transport.

• Silicon photonics is now being used to link data centers.

• Internet content providers are looking for even cheaper solutions than those offered by the relatively new data center interconnect platforms.

5.4.1 New Requirements and New Optical Platform Form Factors

Data center interconnect platforms are a recent development; Infinera was first to market in 2014 with its Cloud Xpress platform [14]. Other optical transport vendors were already offering equipment that was being used to connect data centers, but Infinera’s Cloud Xpress was the first example of a style of platform tailored to the data center that differed from traditional telecom equipment, as is now explained.

Data center managers want to connect sites using point-to-point links, whereas the telcos’ Layer 4 networks carry a variety of traffic types and services between many locations, requiring a multipoint-to-multipoint network topology.

Data center managers thus want equipment that can send lots of data without the common equipment technologies telcos require. Common equipment such as ROADMs is not needed when the link is point-to-point, and optical amplification is not needed if the data center link distances are short enough. Other attributes required for the data center include power efficiency and compactness; power and floor space are key operational expenses that concern data center managers.

Data center managers are used to platforms that scale by simply adding and stacking more slim boxes in a rack. Such boxes are referred to as “pizza boxes” (some deep pan, some thin-based) due to their dimensions. This is how data center servers are designed, and it is how managers want their optical transmission equipment.

Cisco Systems’ NCS 2015 chassis-based optical transport platform for telcos, and the NCS 1002, its first data center interconnect platform, highlight these differences. The NCS 2015 chassis has 15 card slots, each supporting 200 Gb of traffic. The resulting 3 Tb of optical transport capacity occupies 12 rack units, a rack unit being a standard measure of the height a box or stacked platform consumes in a rack. In contrast, Cisco’s NCS 1002 data center interconnect product crams 2 Tb of line-side capacity into just two rack units, improving capacity density fourfold compared to its telco chassis.
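The fourfold figure follows directly from the capacity packed into each rack unit, as the simple check below shows.

ncs2015_density = (15 * 200) / 12  # 15 slots x 200 Gb over 12 rack units = 250 Gb per rack unit
ncs1002_density = 2000 / 2         # 2 Tb over 2 rack units = 1000 Gb per rack unit
print(ncs1002_density / ncs2015_density)  # 4.0: the fourfold improvement in capacity density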

Fig. 5.5 shows the Cisco NCS 2015 chassis and NCS 1002 data center interconnect platforms.

Figure 5.5 Cisco Systems’ NCS 2015 and NCS 1002 platforms. From Cisco.

5.4.2 From Cloud Xpress to Cloud Xpress 2

It is informative to look at how other optical equipment makers have responded with their data center interconnect platform designs.

Infinera’s first-to-market Cloud Xpress platform uses the vendor’s own 500-Gb photonic integrated circuit (PIC) chipset in a 2-rack-unit (2RU) box. These boxes can be stacked 16 high to create an 8-Tb line-side capacity optical transport platform [14].

Ciena introduced its Waveserver, which supports 400-Gb line-side capacity using its own optical design and coherent digital signal processing chips, known as DSP-ASICs [15]. Coherent optics is discussed in Section 5.5.1 and in Appendix 2, Optical Transmission Techniques for Layer 4 Networks. The 400 Gb/s is achieved using two carrier signals, each carrying 200 Gb/s. The Waveserver achieves 200 Gb per wavelength by using a more advanced modulation scheme than Infinera’s first-generation Cloud Xpress.

The complete platform accommodates up to 44 Waveserver stackable units, yielding 88 wavelengths, a total capacity of 17.6 Tb. Ciena says it can achieve more—up to 25 Tb of capacity—by creating narrower channels using its coherent DSP-ASIC to shape the optical pulses before transmission.

ADVA Optical Networking has advanced data center interconnect further with its CloudConnect platform, which comes in different configurations [16]. All the CloudConnect platforms use a QuadFlex card whose line-side interface supports 100-Gb ultralong-haul to 400-Gb metro-regional transmission, in increments of 100 Gb. Two carriers are used, but different modulation schemes are implemented to achieve the 200-, 300-, and 400-Gb line rates.
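A rough sketch of how the same two carriers can yield different line rates is given below. The symbol rate of about 32 GBd, dual-polarization transmission, and the roughly 28% coding and framing overhead are our illustrative assumptions—ADVA does not disclose these figures here—but they show why stepping from QPSK to 8QAM to 16QAM moves the two-carrier rate from 200 to 300 to 400 Gb.

def net_line_rate_gb(carriers, bits_per_symbol, baud_gbd=32, polarizations=2, overhead=0.28):
    # Approximate net line rate of a coherent interface (illustrative assumptions only).
    raw = carriers * polarizations * baud_gbd * bits_per_symbol
    return raw / (1 + overhead)

print(net_line_rate_gb(1, 2))  # one carrier, QPSK (2 bits/symbol):   ~100 Gb
print(net_line_rate_gb(2, 2))  # two carriers, QPSK:                  ~200 Gb
print(net_line_rate_gb(2, 3))  # two carriers, 8QAM (3 bits/symbol):  ~300 Gb
print(net_line_rate_gb(2, 4))  # two carriers, 16QAM (4 bits/symbol): ~400 Gb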

The CloudConnect is notable in that it can double total line-side capacity by also using the L band. Using two platforms, a total of 51.2-Tb line-side capacity can be achieved: 25.6 Tb across each of the bands.

Yet another platform, Coriant’s Groove G30, hosts eight CFP2-Analog Coherent Optics (CFP2-ACO) pluggable optical modules on a 1-rack-unit card. Each CFP2-ACO supports 100, 150, and 200 Gb using different modulation schemes. Up to 128 wavelengths fit in the C band for a total capacity of 25.6 Tb [17].

A single G30 platform supports 42 cards, resulting in a total line-side capacity of 67.2 Tb. This exceeds ADVA’s two-platform capacity, but three fiber pairs are needed to carry the full 67-Tb capacity on a single platform.

In effect, vendors can now design pizza boxes with terabit line-side capacities and stack them so high in one platform that multiple fibers are needed to benefit from all the line-side capacity.

Cisco Systems’ NCS 1002 platform supports 250 Gb/s on a single wavelength, more than the other vendors’ 200 Gb. This reduces the number of line-side pluggable modules needed on the equipment’s faceplate: for Cisco’s platform, four 250-Gb CFP2-ACO optical modules can deliver a terabit of capacity instead of five 200-Gb CFP2-ACOs. Overall, Cisco uses 96 wavelengths instead of 120 to deliver a total platform capacity of 24 Tb [18].

Infinera has since launched its second-generation Cloud Xpress 2. It is the vendor’s first commercial platform to use its latest-generation PIC and DSP-ASIC that support 1.2 Tb of capacity [19].

Fig. 5.6 summarizes the data center interconnect platforms discussed in terms of their line-side capacity normalized to a single rack unit and plotted over time.

Figure 5.6 Data center interconnect introduction dates and per-rack-unit capacity.

What is evident is that platform density is increasing, a trend that will continue. Moreover, the two highest density platforms, Coriant’s Groove G30 and Infinera’s Cloud Xpress 2, are based on PICs, with the highest capacity platform using silicon photonics.

5.5 The Role of Silicon Photonics for Data Center Interconnect

The platforms shown in Fig. 5.6 use various line-side photonics approaches. These include PIC technology, custom line-side optics designed on a line card, and pluggable optical modules in the form of the CFP2-ACO. With the CFP2-ACO, the module houses the optics used for coherent transmission while the accompanying DSP-ASIC resides on the line card.

Infinera’s platform uses its 1.2-Tb PIC: six wavelengths each supporting 200 Gb/s of traffic. Note that this is not the maximum capacity of Infinera’s latest PIC technology: it has a DSP-ASIC and PIC combination—its Infinite Capacity Engine—that can support up to 12 channels, each at 200 Gb [20,21]. However, the company is using a trimmed-down six-wavelength version for its Cloud Xpress 2.

In contrast, Coriant’s G30 uses eight CFP2-ACOs which, when operated at 200 Gb/s, equate to a line-side density of 1.6 Tb per rack unit. Coriant says that it has its own silicon photonics technology while also collaborating with strategic partners. The company says that its platform uses both silicon photonics and indium phosphide CFP2-ACO pluggable modules.

What can be concluded is that silicon photonics is already competitive for data center interconnect applications in terms of reach and capacity density but is up against the incumbent technology of indium phosphide.

Infinera’s latest-generation indium phosphide PIC was 4 years in development. With its use in the Cloud Xpress 2, Infinera has increased line-side density nearly fivefold compared to its first-generation product. Meanwhile, both indium phosphide and silicon photonics designs are being used for the CFP2-ACO optical module in Coriant’s platform.

Infinera is unique in that it has reserved its indium phosphide PIC expertise for use in its own platforms; it does not sell its integrated photonics chips to third parties. Selling to third parties is important for silicon photonics vendors if they want to be profitable, given the relatively low volumes associated with line-side interfaces. But several optical transport vendors—Ciena, Coriant, Huawei, Cisco Systems, and Nokia—have their own silicon photonics expertise and can develop their own solutions for the data center interconnect market.

Currently the most integrated silicon photonics design is Acacia’s single-chip transceiver that supports up to 200 Gb. This is only a sixth of the capacity of Infinera’s newest PIC in the Cloud Xpress 2, although Acacia’s design integrates the transmitter and receiver on one chip while Infinera uses separate transmitter and receiver PICs. As of this writing, 18 months have passed since Acacia announced its chip; one can assume the company is well advanced in developing its next-generation photonic integrated circuit with even more advanced features.

Meanwhile, the next line-side pluggable optical module development after the CFP2-ACO will be the CFP8-ACO, a module that is flatter but comparable to the CFP2 in size. However, it will support a wider interface such that the module will support up to four wavelengths, each at speeds up to 400 Gb/s, for a total line-side capacity of 1.6 Tb per module [22].

5.5.1 Direct Detection Opportunity for Silicon Photonics

The data center interconnect platforms outlined in Section 5.4 all use coherent optics, the de facto transmission scheme used to transmit 100-Gb and faster wavelengths over distances of hundreds and thousands of kilometers.

But many connections linking data centers span less than 100 km. Microsoft, for example, classifies its data center links into two categories: those linking the switch buildings that make up a large-scale data center spread across a campus, and those linking buildings across a metropolitan area. Linking adjacent buildings typically requires a 2-km link, whereas links in a metro area must span up to 70 km.

As Brad Booth, principal architect for Microsoft’s Azure Global Networking Services, explains, coherent devices have been designed for ultralong-haul transmission, with all kinds of extra features. Such coherent devices have a relatively high power consumption, and need to be housed in a separate platform. When Microsoft looked at coherent optics for the data center, it concluded such optics was extremely costly, says Booth.

Instead, what Microsoft wanted was a pluggable optical module that fits into its switch equipment to link data centers over both reaches—a connection that is very low cost but very high bandwidth, says Booth.

Accordingly, Microsoft dismissed coherent optics, choosing a simpler direct detection scheme instead. Coherent detection allows the receiver to recover all the information associated with the transmitted signal, such as its phase and amplitude. Digital signal processing algorithms can then use this signal information to counter the impairments introduced on the channel [23]. This is why coherent optics always has an associated DSP-ASIC chip.

Direct detection has been the traditional scheme used for dense wavelength-division multiplexing optical transmission at 2.5, 10, and 40 Gb. With direct detection, the photodetector at the receiver outputs an electrical current proportional to the square of the received optical signal’s electric field magnitude—that is, to the optical power. What is important here is the inherent squaring introduced by the photodetector: the squaring destroys signal information, most notably the phase, that coherent detection preserves.
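In simplified, textbook form—this is not specific to any vendor’s implementation—the two detection schemes recover different information from the received optical field E(t):

\[
I_{\text{direct}}(t) \propto |E(t)|^{2}
\qquad\text{versus}\qquad
I_{\text{coherent}}(t) \propto \mathrm{Re}\!\left\{ E(t)\,E_{\mathrm{LO}}^{*}(t) \right\}
\]

The direct-detection photocurrent depends only on the optical power, so the phase is lost in the squaring; mixing with a local oscillator field E_LO in a coherent receiver preserves both amplitude and phase.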

This information is important and explains why 100-Gb optical transport based on coherent optical transmission achieves a longer reach than 10-Gb direct detection despite the higher speed. But direct detection does the job for data center interconnect, where the link distances in question are 100 km or shorter.

Microsoft is working with chip company Inphi to develop a 100-Gb QSFP28 pluggable direct-detect module. Note, the QSFP28 module is mainly used for short-reach optics within the data center, not line-side optics for dense wavelength-division multiplexing optical transport.

The QSFP28 uses two wavelengths at 25 GBd combined with a modulation scheme (4-level pulse amplitude modulation, also known as PAM4) that encodes two bits on each signal duration, or symbol. The result is 50 Gb per wavelength, or 100 Gb overall. Inphi is using silicon photonics to implement what it calls the ColorZ QSFP28 module. Using the ColorZ will enable up to 4 Tb of capacity on a single fiber over a reach of up to 80 km. Microsoft refers to this design as Madison Phase 1.0.
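The module’s 100 Gb follows from the symbol rate and the modulation; the arithmetic below uses the figures just quoted, while the number of modules needed to fill a fiber to 4 Tb is our own inference.

baud_gbd = 25          # 25 gigabaud per wavelength
bits_per_symbol = 2    # PAM4 encodes two bits per symbol
wavelengths = 2
module_rate_gb = wavelengths * baud_gbd * bits_per_symbol  # 100 Gb per ColorZ QSFP28 module
fiber_capacity_gb = 4000                                   # up to 4 Tb per fiber, per the text
print(module_rate_gb, fiber_capacity_gb // module_rate_gb) # 100 Gb; roughly 40 modules (80 wavelengths) per fiber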

Microsoft is in discussion with several vendors about developing a second-generation design, Madison 1.5, that will achieve more capacity in the C band by using 100 Gb over a single wavelength. Using multiple modules, capacity will be extended from 6.4 to 7.2 Tb over the C band.

A third design, Madison 2.0, will be a “coherent-lite” design, according to Booth, achieving speeds beyond the 100 Gb possible with direct detection and PAM4. It will use 400-Gb lightpaths and achieve a total capacity of 38 Tb over the C band. The coherent optics will not fit inside a QSFP28 pluggable but will be implemented using on-board optics.

Microsoft is also leading an industry initiative known as the Consortium of On-Board Optics, or COBO, to develop standardized on-board optics. Such designs bring optics closer to a card’s chips and increase the interface density of platforms—just what is needed for linking data centers. The COBO module would be placed next to the coherent-lite DSP-ASIC, or potentially the optics and the coherent chip could be built together [24,25].

Microsoft’s Madison initiative is an example of an Internet content provider spurring an initiative that is leading to novel optical designs. The initiative also highlights how a semiconductor company—Inphi—can suddenly become a silicon photonics player. Lastly, it shows how the optical industry is delivering capacity in ever smaller form factors to drive down cost.

The Inphi product is one of many expected direct-detection module solutions.

Silicon photonics start-up Ranovus has announced a 200-Gb/s interface in a CFP2 form factor that will support links of up to 130 km. The design uses four wavelengths, each at 50 Gb/s, based on 25-GBd optics and PAM4 modulation. Up to 96 50-Gb channels can be fitted in the C band to achieve a total transmission bandwidth of 4.8 Tb [26].

5.6 Tackling Continual Traffic Growth

We have highlighted the importance of increasing line-side capacity for data center interconnect applications, but the issue of making best use of fiber capacity is also central for the telcos. This section looks at the approaches being considered to cope with continual traffic growth, both by increasing the capacity available within existing fiber networks and by developing new fiber optics.

The starting assumption is that the service providers, telcos and Internet players alike, will always seek to make best use of the equipment and fiber assets they already own; service providers only spend money when they have to.

5.6.1 Flexible Grid

One approach telcos are adopting is to move to a “flexible” grid. Operators traditionally have used fixed-size channels across the fiber’s C-band spectrum, inside which sit the wavelengths or lightpaths that carry the data being transmitted (see Fig. 5.7). For dense wavelength-division multiplexing, these fixed channels are typically 50 GHz wide, allowing 96 wavelengths to fit across the C band. By having the flexibility to position where these channel boundaries reside, narrower channels—37.5 GHz wide, for example—can be used, allowing more wavelengths to be squeezed across the fiber’s spectrum.
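The gain from a flexible grid can be seen with a simple channel count; the usable C-band width below is inferred from the conventional 96 channels at 50 GHz, and real systems will differ at the margins.

c_band_ghz = 96 * 50.0  # ~4800 GHz, inferred from the fixed 50-GHz grid
for spacing_ghz in (50.0, 37.5, 33.3):
    print(f"{spacing_ghz} GHz channels: {int(c_band_ghz // spacing_ghz)} wavelengths across the C band")

# 50 GHz -> 96 channels, 37.5 GHz -> 128 channels, 33.3 GHz -> about 144 channels.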

Figure 5.7 (A) Conventional and (B) flexible grid illustrated. Japanese Journal of Applied Physics.

We have already encountered this with Ciena’s Waveserver data center interconnect product that supports 88 wavelengths for a total capacity of 17.6 Tb. Ciena achieves up to 25 Tb of capacity by using a flexible grid (as discussed in Section 5.4.2).

A major simulation study by the telecom operator Telefonica showed that moving to a flexible-grid network and abandoning rigid 50-GHz-wide channels would delay the need to upgrade its network in Spain. Instead of an upgrade in 2019, introducing flexible-grid ROADMs would support traffic growth until 2024 [27].

The UK telco BT conducted a trial with Chinese equipment vendor Huawei and showed that a 200-Gb lightpath can fit in a 33.3-GHz wide channel. BT expects that the C band can accommodate as much as 30 Tb but believes that this is close to the limit [28].

5.6.2 Flexible-Rate Transponders

One research development is a flexible-rate transponder that can adapt the modulation scheme used, and hence the data carried over lightpaths, depending on the communication channel. This concept, known as the sliceable bit-rate variable transponder, is still at an early stage [29]. The transponder would generate a very high-capacity transmission stream, 10 Tb or greater, in the form of superchannels—optical channels made up of multiple wavelengths—provisioned on demand.

The large multiterabit superchannel would be segmented within the network using flexible grid ROADMs that would direct parts of the superchannel to different destinations. Such a sliceable transponder promises several benefits:

• The multiterabit slice could be repartitioned based on demand. This would occur occasionally rather than in a highly dynamic way, adding extra capacity between destinations when needed. Accordingly, the sliced multiterabit superchannel would end up going to fewer destinations over time as a result of continual traffic growth.

• The sliceable transponder promises cost reduction through greater component integration. This is where indium phosphide and silicon photonics would come in, integrating multiple optical modulators on one large transponder, e.g., and even the lasers and receivers. Such an integrated transponder would clearly benefit data center interconnect platform density.

The sliceable transponder, should it become adopted, would only be deployed after 2020 at the earliest [29]. Much development work would be needed first to integrate 10 Tb into a transponder.

5.6.3 Using More Fiber Bands

Beyond these measures, additional spectral bands in the fiber will need to be used. Using the L band is the next obvious step to increase the amount of data that can be sent across a fiber. Recall that Nokia and ADVA have already taken this approach for their platforms, supporting data transmission in the C and L bands. Telcos foresee using spectrum even wider than the C and L bands combined. This approach could extend the capacity of a single-mode fiber from 30 to 100 Tb (Table 5.1).

Table 5.1

Silica Optical Fiber Transmission Bands

O band: 1260–1360 nm (original band)
E band: 1360–1460 nm (water peak band; loss reduced to below 0.5 dB/km)
S band: 1460–1530 nm
C band: 1530–1565 nm (erbium amplifier)
L band: 1565–1625 nm (specially designed erbium amplifier)

But once the nonlinear Shannon limit is reached in a transmission band, the only option is to light up a new fiber or band. Lighting a new fiber is fine if you are a telco that owns plenty of spare fiber, but it does not lead to greater efficiencies and the transport cost-per-bit rises, as detailed in Section 5.3. Optical research labs are also grappling with this issue and are already thinking about what can be done.

5.6.4 Space-Division Multiplexing: The Next Big Thing?

Space-division multiplexing promises to boost the capacity of a fiber by a factor of between 10 and 100 using parallel transmission paths. With space-division multiplexing, signals are physically aligned to the fiber and launched along these multiple paths (see Fig. 5.8).

Figure 5.8 Conventional single-mode fiber and multicore fiber.

The simplest way to create parallel paths is to bundle several standard fibers together. Note that this is different from simply adding and lighting a new fiber. The spatial-multiplexing equipment would use all the fibers, whereas lighting a new fiber would require a new optical transport system to be added at each end.

Alternatively, new types of fiber could be used that support multiple paths (cores) and even multiple light transmissions (optical modes) down each path. But this requires a new fiber type to be developed and installed, a hugely expensive endeavor.

Nokia Bell Labs claimed an industry first in late 2015 when it demonstrated a 60-km span of what it calls coupled 3-core fiber [30]. For the demo, Bell Labs generated twelve 2.5-Gb signals that were split down three paths, each path carrying in effect 10 Gb of data.

As suggested by the name—coupled 3-core fiber—the three signals in these cores interact strongly with each other. Digital signal processing is needed to restore the multiple transmissions at the receiver. To do this, Bell Labs employs a technique called multiple input, multiple output, or MIMO, that is already used in other forms of communications, such as cellular and digital subscriber line broadband. Bell Labs’ industry-first achievement with this demo was performing the MIMO processing for optical signals in real time. Until now, such data rates required capturing a brief transmission and restoring the signals offline, owing to the intensive computation involved.

In effect, the transmission rotates the signals arbitrarily, and MIMO at the receiver rotates them back. The signals are scrambled by the rotation, and undoing that rotation is what MIMO does, explains Bell Labs’ Peter Winzer.
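A minimal numerical sketch of that idea is given below. It assumes, unrealistically, a single fixed and known rotation with no noise; a real receiver must estimate a coupling that varies with time and frequency, which is what makes real-time MIMO processing so demanding.

import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=(3, 8))     # three transmitted streams, one per core

# The coupled cores act like an unknown rotation (an orthogonal mixing) of the three streams.
mixing, _ = np.linalg.qr(rng.normal(size=(3, 3)))
received = mixing @ symbols

# MIMO equalization applies the inverse rotation; here the mixing is known, so we invert it directly.
recovered = np.linalg.solve(mixing, received)
print(np.allclose(recovered, symbols))             # True: the original streams are restored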

Will operators adopt such advanced technology and deploy a new style of fiber? Spatial multiplexing is a topic that to date has largely been dismissed as an extravagant research activity rather than a technology that promises to solve, or at least postpone, a looming capacity crunch.

Even if it does get deployed, it will not be for another decade and likely longer, and it will be expensive. Yet besides adding spectral bands and then more fiber, the industry is not offering an alternative to keep scaling capacity.

Telcos are not about to start replacing their huge worldwide fiber cable investments with novel fiber that has yet to even be specified. But simply lighting a new single-mode fiber and adding optical transport equipment at each end will not reduce the cost-per-bit, as discussed. Only by integrating spatial multiplexing systems—sharing equipment across the multiple parallel transmission paths—will the cost come down.

This is what Bell Labs is tackling. Winzer says that to get the cost-per-bit down, new levels of integration will be needed. Integration will happen first across the transponders and amplifiers; fiber will come last, he says.

This is just the sort of opportunity where silicon photonics can lead. This development is a decade out and will require novel designs, and 10 years is plenty of time for silicon photonics to become more commercially mature.

5.7 Pulling It All Together

Network capacity is becoming a key challenge facing service providers due to the continual growth in traffic. Fiber capacity, while vast, is finite and is now being exhausted. Transport cost is set to rise as new network investments will be required for common equipment shared by all the wavelengths, not just new equipment at the fiber end points. Capacity exhaust is the main issue, not the typical system design issues of power, cost, and size reduction, although all these play a role too.

The strategy available to the telcos and Internet content providers is to exploit the full capacity of their installed fiber and only then pay for a network upgrade. From that point, the story becomes cost optimization.

This will likely be done in two stages. First, cost-reducing the end equipment based on power consumption and size will require continual integration to bring more capacity into smaller and smaller form factors. This means working to fit the current capacity of a fiber into a single shelf and ultimately into a module and a chip. Second, new techniques to expand the transmission capacity of current fiber will be needed. Spatial multiplexing, a promising technique but still far from commercialization, will require unprecedented degrees of integration to feed light down a fiber’s multiple paths and modes.

Silicon photonics has a role in all these developments. As has been highlighted, the Internet content providers have become important technology drivers. But cost is key. If silicon photonics can enable cheaper integrated solutions, it will finally leapfrog indium phosphide for what many consider the endgame of line-side transmission.

Key Takeaways

• The Internet content providers are shaking up the traditional telecom market. Unlike the telcos, these newer players are experiencing sharp revenue growth.

• The Internet content providers are also experiencing far faster traffic growth than the telcos. The telcos’ core network traffic is growing annually at between 20% and 30%, while for the Internet content providers it is more like 80–100%.

• The requirements of the Internet content providers have given rise to a new category of optical transport platform, known as data center interconnect. These platforms condense as much line-side capacity into as small a space as possible.

• The industry should prepare for the cost per transported bit to rise once the C band fills up.

• Optical designers need to develop equipment that drives down the cost of transporting bits. This requires greater performance efficiencies, made all the more challenging as systems use more complex signaling techniques to cope with the continual growth in traffic.

• To advance capacity further, other fiber spectral bands besides the C band will be needed. After that, the next option is moving to new fiber, either conventionally by lighting fresh fiber or using spatial multiplexing and parallel transmission paths.

• Space-division multiplexing requires a new style of optical design, with integration taking place across these parallel transmission paths. To make such systems cost-effective and commercial will require significant investment, development work, and integration.

• Silicon photonics is already being deployed for line-side optics. It is also being used to link data centers: in coherent optics for data center interconnect platforms and in direct-detection modules with a reach of up to 130 km.

• Silicon photonics has yet to demonstrate a telling advantage when compared to indium phosphide. And given that line-side optics volumes will be relatively modest, silicon photonics will find it hard to differentiate itself, at least in the coming years.

• System vendors are better positioned than the optical component vendors to drive forward silicon photonics development for line-side optics. This is also true in the data center, as will be described in Chapter 7, Data Center Architectures and Opportunities for Silicon Photonics.

• We expect silicon photonics to play an increasingly important role, but we do not yet see it displacing indium phosphide. The Layer 4 telecom network trends will only benefit the technology going forward, and silicon photonics could gain an edge longer term.
