© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
C. Richard, Understanding Semiconductors, https://doi.org/10.1007/978-1-4842-8847-4_10

10. The Future of Semiconductors and Electronic Systems

Corey Richard, San Francisco, CA, USA

As transistors continue to shrink, confronting their physical limitations becomes unavoidable. Lithographic “stencils” can only etch patterns so small, and molecules, after all, can only be divided so many times. Yet, the doubling of computing power every 18–24 months, as Moore predicted, does not have to end any time soon. You could have made a lot of money over the years betting against all the prognosticators claiming the “end of Moore’s Law.” There are currently many promising research areas, both within existing technology architectures and brand-new sources of computing power, that will continue the technological march of the semiconductor industry for years to come.
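To get a feel for what that doubling cadence implies, here is a rough back-of-the-envelope sketch in Python (the helper function and the ten-year horizon are illustrative assumptions, not industry data):

```python
# A rough sketch of what Moore's prediction implies: if computing power
# doubles every 18-24 months, capability grows exponentially with time.

def projected_growth(years: float, months_per_doubling: float) -> float:
    """Return the growth multiple after `years`, given one doubling
    every `months_per_doubling` months."""
    doublings = (years * 12) / months_per_doubling
    return 2.0 ** doublings

# Over a decade, the 18- and 24-month doubling rates bracket the outcome:
fast = projected_growth(10, 18)   # roughly 101x
slow = projected_growth(10, 24)   # exactly 32x

print(f"10 years at 18-month doublings: {fast:.0f}x")
print(f"10 years at 24-month doublings: {slow:.0f}x")
```

Even the slower end of Moore's range compounds to a thirtyfold improvement in a single decade, which is why predictions of the law's demise have been so costly to bet on.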

Prolonging Moore’s Law – Sustaining Technologies

Within traditional silicon engineering, focus has shifted from shrinking component sizes toward improving design efficiency and integration, as well as exploring new materials and design methods to prolong our ability to keep pace with Moore’s prediction. This commitment can be seen in the industry’s consistent investment in research and development, which trails only pharmaceuticals and biotech as a percentage of sales.

2.5D and 3D Die Stacking is a promising technology already in use for memory and certain processing applications that enables hardware designers to place multiple die on top of one another, connected by vertical metal interconnects called through-silicon vias (TSVs). We saw examples of die stacking in Figures 5-2 and 6-13. By building up instead of out, the data transfer rate between die is vastly improved, costs are reduced, power is conserved, and space is preserved, enabling more transistors to fit on a given substrate. Though promising, 2.5D/3D systems greatly increase the complexity of existing designs. Stacking unique processing centers on top of one another can enable tighter integration and boost system performance but will require much more intricate data flow schemas and system architectures. Designing a 3nm SoC already costs between $500 million and $1.5 billion, according to research and consulting firm IBS (International Business Strategies) (Lapedus, 2019). Stacked logic will undoubtedly increase design costs further, though high-performance applications like AI will likely continue to drive demand for increasingly complex system architectures. To scale such design techniques, new development tools that encompass all aspects of the design flow will be needed. This presents unique challenges to EDA companies, which are better equipped to solve pointed problems at individual design steps.

Gate-All-Around (GAA) Transistors and new channel materials are a promising next step in the evolution of transistor technology and offer the most concrete path to prolonging geometric scaling. From their roots in the 1960s, planar transistors were the dominant transistor structure until about a decade ago. At the 20nm process node and below, planar transistors suffer from crippling leakage issues, seeping current even when turned off and costing significant overall system power. Starting in 2011, these planar devices have been phased out in favor of FinFETs for most advanced ICs (Lapedus, 2021). Instead of controlling current flow from source to drain across a 2D plane, FinFETs wrap the gate around three sides of a raised channel, which reduces leakage and enables greater control at lower voltages (Lapedus, 2021). In FinFET transistors, however, the bottom of the channel is still connected to the underlying silicon substrate, which allows some current to leak out even when the transistor is turned off (Ye et al., 2019). As fabs approach the 3nm and 2nm process nodes, leakage has become a critical issue once again. With gate control on all four sides of the channel, Nanowire and Nanosheet Gate-All-Around (GAA) Transistors aim to solve these issues. GAA transistors completely wrap the channel with a surrounding gate structure, achieving greater control and resolving many of the FinFET leakage problems (Lapedus, 2021). New CMOS technologies are increasingly expensive at smaller nodes, and the process technology required to implement GAA transistors is no different. GAA transistors present unique deposition and etch challenges that likely require new channel materials like strained silicon germanium (SiGe) to mitigate electron mobility issues (Angelov et al., 2019). We can see the structural differences between Planar, FinFET, and GAA Transistors illustrated in Figure 10-1.

An illustration of the evolution of transistors. The standard MOSFET and FinFET are labeled with source, gate, drain, and channel. Transistors of the future include the planar FET, FinFET, nanowire GAA FET, and nanosheet MBCFET.

Figure 10-1

Evolution of Transistors – Current vs. Future

As transistors become smaller and more densely packed, heat dissipation becomes a major performance constraint, forcing many devices to run below their maximum speed to avoid overheating (Angelov et al., 2019). Traditionally, silicon has been used as the main channel material, but it has power density constraints (limits on how much heat an IC can dissipate per unit of area) that erode many of the performance advantages of smaller GAA transistors (Ye et al., 2019). One solution to these power density issues is to use channel materials with greater electron mobility. In strain engineering, atoms in materials like strained silicon germanium (SiGe) are stretched apart from one another, which allows electrons to pass through more easily while reducing the amount of heat released by a circuit (Cross, 2016). Other promising alternative channel materials include gallium arsenide (GaAs), gallium nitride (GaN), and other III-V compounds (Ye et al., 2019). Together, GAA transistor structures and new channel materials like SiGe can both enable geometric scaling of transistors and boost functional performance. Companies have already invested billions into the technology, with Samsung planning to introduce the world’s first Nanosheet GAAs at the 3nm process node in 2022 or 2023 (Lapedus, 2021).

Custom Silicon and Specialized Accelerators may not boost the nominal computing power of entire circuit generations, but they have proven a successful method for functional scaling of application categories and performance improvements of specific products. A great example is the functional scaling of GPUs, whose power has been increasing at an exponential pace. While CPUs are good at doing a wide range of generalized tasks in sequence, GPUs are particularly good at doing vast numbers of repetitive calculations at the same time, making them well suited for artificial intelligence and computer vision, which require quick processing of such calculations. Since 2012, NVIDIA GPUs’ ability to perform key AI calculations has approximately doubled every year, increasing 317 times through May 2020 (Mims, 2020). The phenomenon has been dubbed Huang’s Law, after NVIDIA’s CEO Jensen Huang. Functional improvements in key areas like memory and GPUs can continue improving IC performance, even if geometric scaling slows down. In addition to functional scaling of widely adoptable subsystems, custom silicon design can boost performance through tighter integration using existing technology. Companies like Apple, Google, Facebook, and Tesla are all developing custom chips to power everything from VR headsets to autonomous driving systems. Though custom design remains cost-prohibitive for smaller players, which depend on fabless design companies for their chips, large product companies are increasingly building custom chips in-house. These companies can afford the cost of building internal engineering groups, and by shifting from commodification to full customization, they are able to sustain competitive advantages in performance that may not be possible when working with third-party providers. In-house design teams also provide quality control advantages and reduce the need to disclose sensitive information.
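The claimed 317x improvement over roughly eight years is consistent with annual doubling, as a quick Python calculation shows (the helper function here is our own, purely for intuition):

```python
# Back-of-the-envelope check of Huang's Law: a 317x improvement between
# 2012 and May 2020 (about 8 years) implies an annual growth factor
# slightly above 2, consistent with "approximately doubling every year."

def implied_annual_factor(total_growth: float, years: float) -> float:
    """Annual multiplier that compounds to `total_growth` over `years`."""
    return total_growth ** (1.0 / years)

factor = implied_annual_factor(317, 8)
print(f"Implied annual growth factor: {factor:.2f}")  # about 2.05
```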

Structures made from new materials like graphene, carbon nanotubes, and other 2D materials offer another promising way to extend Moore’s Law. As we move toward the point where transistors are only a couple of atoms wide, there is a risk that transistor gates (which control current flow) are no longer wide enough to prevent electrons from passing straight through them. This is due to a phenomenon known as quantum tunneling, whereby an electron can disappear on one side of a physical barrier and turn up on the other (Fisk, 2020). In anticipation of this problem, scientists have been exploring materials thin enough to enable continued shrinking of ICs without suffering from such tunneling interference. There are numerous candidate materials under development, though two have received considerable attention. Graphene, the strongest known material at a thickness of only one atom, is well suited to resist quantum tunneling (Kingatua, 2020). Future transistors could be made of carbon nanotubes, cylindrical structures of rolled-up graphene sheets that take advantage of graphene’s unique properties (Bourzac, 2019). MIT, Stanford, IBM, and other researchers have already built functional chips using graphene and carbon nanotubes (Shulaker et al., 2013). Though promising, carbon nanotubes are difficult to manufacture and will require plenty of additional research before they might be ready for market. We can see graphene sheets and carbon nanotubes illustrated in Figure 10-2.

An illustration of two graphene sheets, one lying flat and one rolled up into a carbon nanotube.

Figure 10-2

Graphene Sheets and Carbon Nanotubes

Optical Chips and Optical Interconnects aim to harness light instead of electrons as the main signal carrier within and between electronic devices (Minzioni et al., 2019). While a copper wire is limited to one data signal at a time, a single optical fiber can transmit multiple data signals using different wavelengths of light (Kitayama et al., 2019). Venture-backed Ayar Labs and teams of researchers at MIT and UC Berkeley working on a DARPA-funded project called the Photonically Optimized Embedded Microprocessors (POEM) Project have already begun commercializing photonic chip technology (Matheson, 2018). Ayar has targeted chip-to-chip communication, creating input-output (I/O) optical interconnects that are much faster and more power efficient than traditional copper wiring (Matheson, 2018). The significance and breadth of this work has the potential to be huge. You can have the most powerful car engine on earth, but if the pipe from the gas tank to the engine takes too long to deliver fuel, your car will accelerate slowly. By the same token, processors can only process information as fast as it can be retrieved and delivered to the rest of the system. An SoC may be twice as powerful as its predecessor, but if the circuitry connecting it to other parts of the system is slow, then the extra power is rendered useless. The research on Optical Chips and Interconnects conducted by the POEM Program and at Ayar Labs is aimed at alleviating these data-transmission bottlenecks and fully unlocking the power of Moore’s Law (Matheson, 2018).
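The advantage of sending many wavelengths down one fiber can be put in simple numbers. The sketch below uses assumed figures (one electrical signal per copper trace, eight wavelengths per fiber, 25 Gbps per signal) purely for illustration, not as vendor specifications:

```python
# A simplified model of why wavelength-division multiplexing gives
# optical links an edge: a copper trace carries one signal at a time,
# while one fiber can carry one signal per wavelength simultaneously.

def link_capacity(signals_per_channel: int, gbps_per_signal: float) -> float:
    """Aggregate capacity of a single physical link in Gbps."""
    return signals_per_channel * gbps_per_signal

copper = link_capacity(1, 25.0)    # one electrical signal per trace
optical = link_capacity(8, 25.0)   # e.g., 8 wavelengths on one fiber

print(f"Copper trace: {copper} Gbps, optical fiber: {optical} Gbps")
```

With the same per-signal rate, the optical link in this toy model moves eight times the data over a single physical connection, which is the bottleneck-widening effect the POEM research targets.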

Overcoming Moore’s Law – New Technologies

Outside of advances in traditional silicon engineering techniques, there are some truly fascinating technologies in development that could launch us into a post-Moore computing renaissance.

Quantum computing is one technology that gets plenty of press, though experts have many reservations about its practical limitations. In digital computers, a bit must be either a 0 (the transistor is off, no current passes through) or a 1 (the transistor is on, current passes through). In quantum computing, on the other hand, a qubit can exist as a 0, a 1, or a combination of both at any given time (Brant, 2020). This property of quantum particles, which enables them to exist in two states (in this case 0 and 1) at the same time, is called superposition. The physics of quantum computing are complicated, but in essence, quantum computers harness the superposition of qubits in conjunction with a phenomenon called quantum entanglement, where two particles are tied to one another over a distance, to execute exponentially more complex calculations than modern computers can handle (Jazaeri et al., 2019). Researchers and developers of the technology are striving to achieve quantum supremacy – the point at which a quantum computer, functioning according to the laws of quantum physics, outperforms at a given task a classical computer, which functions according to the laws of classical physics like Isaac Newton’s laws of motion (Gibney, 2019).
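The bit-versus-qubit distinction can be made concrete in a few lines of Python. This is a minimal sketch of the standard textbook formalism (the function name is ours): a qubit's state is a pair of amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1.

```python
import math

# A classical bit is 0 or 1. A qubit's state is alpha|0> + beta|1>,
# where |alpha|^2 + |beta|^2 = 1; measurement yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.

def measure_probabilities(alpha: complex, beta: complex) -> tuple:
    """Return (P(0), P(1)) for a qubit state alpha|0> + beta|1>."""
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0), "state must be normalized"
    return p0, p1

# An equal superposition: a 50/50 chance of measuring 0 or 1.
amp = 1 / math.sqrt(2)
print(measure_probabilities(amp, amp))  # approximately (0.5, 0.5)
```

A classical bit can only ever be in one of the two states; the superposed qubit above is, in a precise sense, in both at once until measured.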

Quantum Transistors provide an interesting twist to quantum computing that could theoretically harness current fabrication technologies to create small quantum devices capable of delivering vastly increased computational power. Quantum transistors harness the power of quantum tunneling and quantum entanglement to process and store information (Benchoff, 2019). In quantum mechanics, tunneling occurs when a particle slips directly through a physical barrier, unlike in classical mechanics, where particles cannot pass through such barriers. Tunneling becomes more likely at the subatomic level and, as we discussed above, is one of the main challenges to further shrinking transistors: at smaller geometries, the barriers become thin enough for electrons to pass through easily. We can see quantum tunneling illustrated in Figure 10-3. Tunneling and entanglement are still not fully understood, and maintaining the atomic-scale control necessary to harness such forces is an incredibly difficult engineering challenge in need of considerable investment and inquiry. Quantum Transistors are still in early development, though researchers have been able to develop working prototypes as proof of concept.

Two illustrations of quantum tunneling, showing barriers of different shapes: in classical mechanics a ball follows a parabolic arc over the barrier, while in quantum mechanics the particle passes straight through it.

Figure 10-3

Quantum Tunneling

Potential applications of Quantum Technology include data security, medical research, complex simulations, and other problems requiring exponential increases in processing power. Though the tech has great potential, it is prone to frequent errors and requires ultra-precise environmental conditions to function, needing advanced cryogenic technology that can cool quantum devices to temperatures approaching absolute zero (Emilio, 2020). While such constraints may not sound promising, just remember: the computers of the 1950s that barely fit in a large auditorium now fit in your pocket. Venture-backed companies like Rigetti Computing, as well as big players like Google and IBM, have invested considerable resources in this technology (Shieber & Coldewey, 2020). In 2019, a team of physicists led by John Martinis, working with Google and the University of California, Santa Barbara, claimed to have achieved quantum supremacy after an experimental quantum computer they built calculated the solution to a problem that would have taken a supercomputer an estimated 10,000 years to complete (Savage, 2020). We can see a picture of a live IQM Quantum Computer in Espoo, Finland (left) and a better lit rendering of a quantum computer (right) in Figure 10-4 (IQM Quantum Computers, 2020).

A photo and an illustration of cylindrical quantum computers, shown spanning four levels in 3D and 2D views.

Figure 10-4

Quantum Computers - IQM Quantum Computer in Espoo, Finland (Left) vs. Rendering of Quantum Computer (Right)

Assuming quantum computers are one day viable, it is highly unlikely they will displace the classical computers we use today – the two are uniquely suited to solving different sets of problems and would be more complements to one another than interchangeable replacements. In Cryptography, for example, Quantum Computers can be used to thwart hackers and make cloud computing more secure. In Medicine and Materials, Quantum Computing power can help make simulations more powerful, accelerating development of new treatments and compounds. In Machine Learning, Quantum Computers can help train ML models more quickly, shortening the time it takes to process massive amounts of data. Finally, as the volume of data grows year over year, Quantum Computers can help with search functions, turning vast seas of individual data points into useful information. Google, the leader in search, has been researching and developing quantum computing technology since 2016 and hopes to have a useful quantum computer by 2029 (Porter, 2021). We can see these various application areas depicted in Figure 10-5.
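The search advantage mentioned above has a concrete theoretical basis in Grover's algorithm, a well-known quantum search method that finds a marked item among N unsorted entries in on the order of the square root of N queries, versus on the order of N classically. The sketch below is illustrative query-count arithmetic only, not a quantum simulation:

```python
import math

# Order-of-magnitude query counts for unstructured search over n items.

def classical_queries(n: int) -> int:
    """Worst case for classical search: examine every entry."""
    return n

def grover_queries(n: int) -> int:
    """Grover's algorithm needs on the order of sqrt(n) queries."""
    return math.ceil(math.sqrt(n))

n = 1_000_000
print(f"Classical search: ~{classical_queries(n)} queries")
print(f"Quantum (Grover) search: ~{grover_queries(n)} queries")
```

For a million items, the quadratic speedup shrinks a million-query search to roughly a thousand queries, which is why searching big data appears on the application map in Figure 10-5.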

An illustration of four quantum computing application areas: cryptography, medicine and materials, machine learning, and searching big data.

Figure 10-5

Quantum Computing Applications

Neuromorphic Computing Technologies are modeled after living processing structures like neurons and present a fascinating potential avenue for computing progress beyond Moore’s Law. Research published in the journal Nature Electronics claims that scientists have successfully created a neurotransistor that can simultaneously store and process information, greatly improving processing speed, while also exhibiting characteristics of plasticity – a breakthrough that enables neurotransistors to learn from and change tasks like a human brain (Baek et al., 2020). Additional research at Intel aims to apply neuromorphic computing to AI and other applications using spiking neural networks (SNNs), in which silicon-based neurons are connected in networks resembling those within the human brain (Intel Labs, n.d.). The Human Brain Project (HBP), a widely recognized research project focused on building our understanding of neuroscience and computing, has invested significant resources in neuromorphic computing, one of its twelve target focus areas (Human Brain Project, n.d.). The HBP is a collaborative research effort that allows anyone to register and request compute time on one of its neuromorphic machines. It has used both a SpiNNaker (SNN) system, similar to Intel’s, which leverages numerical models deployed on custom digital circuitry, and a BrainScaleS system employing analog and mixed-signal components to emulate neurons and synapses (Human Brain Project, n.d.). DNA, which stores information millions of times more efficiently than the most complicated SoCs, has also been studied as a potential alternative for data storage (Linder, 2020). Figure 10-6 highlights the connection between neuromorphic technology and the neurons in our brain.

An illustration of a neuromorphic algorithm alongside a closeup view of neurons in the human nervous system.

Figure 10-6

Neuromorphic Technology

We can think of prolonging and overcoming technologies in terms of geometric and functional scaling. Prolonging technologies focus on developing existing transistor technologies that can geometrically scale by making components smaller and more efficient. Overcoming technologies, on the other hand, focus on deriving greater performance for a given feature size or rendering feature sizes irrelevant by transforming computing structures to new base components and system architectures. We will need both in the coming decades to fuel the relentless pace of technological development and innovation.

In Figure 10-7, we revisit our functional vs. geometric diagram to compare the potential impacts of technologies that may prolong Moore’s Law versus those that seek to overcome it. While Geometric Scaling technologies aim to make transistors smaller, traditional Functional Scaling technologies squeeze more performance out of a given node or feature size. Unlike functional scaling using existing transistor technology, functional scaling here can also be achieved by paradigm shifts in computing performance, irrespective of nm size.

A line graph of frequency in hertz versus size, with a decreasing curve divided into two regions labeled functional scaling and geometric scaling.

Figure 10-7

Geometric vs. Functional Scaling Technologies

Chapter Ten Summary

In this chapter, we first tackled sustaining technologies that aim to prolong Moore’s law:
  1. 2.5D/3D Die Stacking allows us to build up instead of out, increasing connectivity (performance), boosting energy efficiency (power), and conserving valuable real estate (area).

  2. Gate-All-Around (GAA) Nanosheet Transistors are a promising next step in the evolution of CMOS technology and are set to pick up where FinFETs leave off in the upcoming years.

  3. Custom Silicon like highly integrated ASICs and IC accelerators can be optimized for specific applications and functions.

  4. 2D Graphene Transistors and Carbon Nanotubes have unique properties that hold promise of prolonging geometric scaling.

  5. Optical Chips and Optical Interconnects harness light photons to accelerate data transfer speeds and reduce latency throughout electronic systems.
We next tackled new technologies meant to overcome Moore’s Law:
  1. Quantum Computers and Quantum Transistors use qubits, superposition, and quantum entanglement to address specific problem sets that even the most powerful digital supercomputers are incapable of or inefficient at solving.

  2. Neuromorphic Computing Technology models biological processing centers like the nervous system in an effort to create new computing paradigms.

Details aside, what is most impressive about the future of semiconductors and electrical systems is that we are much closer to the beginning of this technology than to where it will ultimately lead us. Brilliant minds across the world are collaborating daily on incredible breakthroughs. Though many of them might seem inconceivable today, tomorrow they could be part of our daily lives.

Your Personal SAT (Semiconductor Awareness Test)

To be sure that your knowledge builds exponentially throughout the book, here are five questions relating to the previous chapter.
  1. What advantages does stacking have over traditional “two-dimensional” ICs? Disadvantages?

  2. Compare and contrast the structure of planar, FinFET, and GAA transistors. What gives GAA an advantage?

  3. Name three promising channel materials. Why are these materials so important?

  4. Describe the difference between a bit and a qubit. Which applications might quantum computers be well suited for over traditional digital devices?

  5. What questions do geometric scaling and functional scaling ask us? Classify each of the technologies covered in this chapter as more geometrically oriented or functionally oriented in nature.