CHAPTER 1
Quantum Inflection Points

Jim Gable, President, Anametric

Inflection is a wonderfully nuanced word, denoting a change of pitch or tone or even simply a modulation of the voice. In mathematics, the definition is less ambiguous: an inflection point is where a curve changes from concave to convex, or vice versa. In a roundabout way, we have carried this mathematical definition back to everyday meaning, where we see inflection points as transitions—heralding significant changes in our lives, our industries, even our history.

In considering the implications of quantum computing and AI, it’s reasonable to pause and ask whether we are close to useful quantum computers at all. If we are not, there are few implications for AI. Are quantum computers no more realistic than floating cities? No, but how can we tell? Truth be told, even after tremendous investments of time and funding around the world, today’s quantum computers offer few practical benefits. Yet these same investments of money and careers by some of the world’s brightest people indicate their extraordinary faith that useful quantum computers will emerge, perhaps within the current decade.

While acknowledging the potential of quantum computers, we should also note their limitations. Quantum computers are not universally superior to classical computers. It makes absolutely no sense to try to run PowerPoint on a quantum computer. In fact, one of the more promising quantum algorithms, HHL,* a core component of many proposed quantum machine learning accelerations, is only partially quantum in nature. HHL is likely to play a significant role deep inside future AI software accelerated by quantum computing. Even the mighty Shor’s algorithm is mostly classical in operation. Thus, the future, if it is to be quantum in nature, will be dominated by hybrid architectures: partly quantum and partly classical.
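
To make the hybrid pattern concrete, here is a minimal sketch in Python of the loop at the heart of variational quantum algorithms: a classical optimizer repeatedly calling a quantum subroutine. The “quantum” step is simulated with NumPy on a single qubit; the circuit, learning rate, and step count are illustrative choices, not anything prescribed by a particular machine.

    # A classical optimizer wrapped around a (simulated) quantum subroutine.
    # On real hardware, quantum_expectation() would dispatch a circuit to a
    # quantum processor and estimate <Z> from repeated measurement shots.
    import numpy as np

    def quantum_expectation(theta):
        """Prepare Ry(theta)|0> on one simulated qubit and return <Z>."""
        state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        z = np.array([[1.0, 0.0], [0.0, -1.0]])
        return float(state @ z @ state)

    theta, lr = 0.3, 0.4
    for _ in range(25):
        # Parameter-shift rule: the gradient is itself two quantum evaluations.
        grad = 0.5 * (quantum_expectation(theta + np.pi / 2)
                      - quantum_expectation(theta - np.pi / 2))
        theta -= lr * grad  # the classical half of the hybrid loop

    print(f"theta = {theta:.3f}, <Z> = {quantum_expectation(theta):.3f}")
    # Drifts toward theta = pi, where <Z> reaches its minimum of -1.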

So how do we measure progress toward this not-entirely-mythical but not-entirely-present new branch of computing? People use various proxies to gauge progress: investments by government, industry, and venture capitalists; patent counts; jobs and startup companies; and academic papers and articles. Such proxies represent a mostly inadequate stand-in for the generally accepted scientific and engineering benchmarks common in classical computing, such as instructions per second, memory size, and storage capacities.

This metric failure is not due to a lack of trying. Industry pundits continue to attempt to recast the most basic units of classical computing, things like the binary digit (bit), operations per second, and terabytes of storage, into the quantum realm. Logically, we turn to quantum computing’s most basic datum, the qubit, for analogies, so unsurprisingly the most common benchmark for today’s quantum computers is the number of qubits. While this may seem entirely reasonable and obvious, the closer one looks, the weaker it appears—almost as if measuring the quantum industry itself is subject to a kind of uncertainty principle.

Many people describe the current state of quantum computing as being similar to the 1940s in classical computing. The huge, supercooled chandelier-style quantum computers at IBM, Google, and elsewhere may someday look as quaint as the ENIAC computer looks to us today. Much like the early days of classical computing, it’s not even clear which kind of qubit will be the ultimate “winner” in the future. We are still in the “my qubit is better than your qubit” stage. While supercooled transmon-based quantum computers dominate today, ion-trap-based quantum computers have recently announced possibly more advanced hardware. In the wings wait a range of potentially superior technologies based on neutral atoms, spin qubits, topological states, and photonics. This industry could witness waves of leapfrogging technologies before a long-term winner emerges.

Additionally, just counting qubits in a quantum computer today doesn’t tell you enough. Different quantum computers have different error rates, decoherence times, internal connectivity, and entanglement structures. These design factors are so fundamental that they can outweigh raw qubit count entirely. Today, a 30-qubit quantum computer may be much more useful for many tasks than a 50-qubit version, depending in large part on these other design factors.
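
A back-of-the-envelope calculation shows why. If every gate fails independently with probability p, the chance that a circuit of g gates runs cleanly is roughly (1 - p) to the power g. The qubit counts, depths, and error rates below are illustrative round numbers, not measurements of any real machine.

    # Why a smaller machine with better gates can beat a bigger, noisier one.
    def success_probability(qubits, depth, gate_error):
        gates = qubits * depth             # crude estimate of total gate count
        return (1 - gate_error) ** gates   # chance of zero errors in one run

    print(success_probability(qubits=30, depth=40, gate_error=0.001))  # ~0.30
    print(success_probability(qubits=50, depth=40, gate_error=0.005))  # ~4e-5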

If we cannot easily count the number of qubits, what else can we do to track industry progress? What are useful quantum inflection points? An example is the “Quantum Supremacy” milestone reached late in 2019, when Google demonstrated its quantum computer performing a task dramatically faster than any classical computer of any size. Some disputed the magnitude of the quantum advantage over classical computing, but it was clearly dramatic. No doubt this was an important milestone, although the word supremacy proved to be a bit misleading. While this was the first demonstrated algorithm that ran far faster than even the largest supercomputers, it did not represent a generally useful application.

The world wants useful quantum computers and results that matter in practical terms. That won’t happen overnight, but there are future milestones to track. A true inflection point is a turning: an event that marks a sea change in the industry. With this in mind, here are five future quantum inflection points to watch for:

  • Quantum advantage: A useful algorithm outperforms any classical computer.
  • Quantum repeater: Quantum communications at extended distances.
  • Quantum memory: Quantum information stored for longer periods.
  • Room temperature operation: Quantum technology escapes the lab environment.
  • Y2Q: The year a “cryptographically relevant” quantum computer appears.

Quantum advantage is a more modest phrase for a more significant milestone: the demonstration of a practical, useful algorithm that runs much more efficiently, and faster, on a quantum computer than on any purely classical computer. Such algorithms have been proven to exist in theory, but they require much more capable hardware than exists today. Quantum advantage could change everything if it leads to a provable economic advantage for business or government applications. The catch is that no one today really knows how many stable, logical qubits this will take. Forecasts range between 80 and 200 stable, logical qubits, but it also depends heavily on the fine details mentioned earlier. It’s worth noting too that this first demonstration of a useful quantum algorithm might not be one for machine learning or AI.
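
To see why the hardware gap remains large, note that each stable, logical qubit must be woven from many noisy physical qubits through error correction. The overhead used below is a commonly quoted surface-code ballpark, included purely for illustration; real overheads depend heavily on error rates and code choices.

    # Translating logical-qubit forecasts into physical qubits, under an
    # assumed error-correction overhead (an illustrative ballpark figure).
    PHYSICAL_PER_LOGICAL = 1_000   # assumption; varies widely in practice

    for logical in (80, 200):
        physical = logical * PHYSICAL_PER_LOGICAL
        print(f"{logical} logical qubits -> ~{physical:,} physical qubits")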

A quantum repeater is a concept from quantum networking. These repeaters will someday allow qubits to travel long distances—a major limitation today. Quantum communications can use existing telecom fiber cables, but quantum information cannot be transmitted much farther than 100 km, and not at all through any of the existing routers, switches, or classical repeaters. This limitation stems from the no-cloning theorem, which states that it’s not possible to copy a quantum state in its entirety. This concept is so different from classical binary information that it can be hard to grasp. Computers and networks today constantly copy data in order to transfer and manipulate it. A quantum repeater needs to overcome this obstacle. In fact, the term repeater is a misnomer, since we simply can’t copy and repeat quantum data like classical data. But by any name, a true quantum repeater would transform today’s networks, especially for confidential communications. Effective quantum networks could also allow small quantum computers to communicate directly with each other, enabling scaling beyond the limitations of a single chip in the profound cold of a dilution refrigerator. This might be an important milestone for machine learning and AI, which require much larger quantum computers.
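
The no-cloning theorem follows from linearity alone, in just a few lines. The sketch below is the standard textbook argument, written out in LaTeX; it is not drawn from the chapter itself.

    % No-cloning in three lines: any quantum operation is linear, but
    % cloning a superposition is a nonlinear demand.
    \documentclass{article}
    \usepackage{amsmath,braket}
    \begin{document}
    Suppose a unitary $U$ could copy any state: $U\ket{\psi}\ket{0} = \ket{\psi}\ket{\psi}$.
    For $\ket{\psi} = \alpha\ket{0} + \beta\ket{1}$, linearity of $U$ gives
    \[
      U\ket{\psi}\ket{0} = \alpha\,U\ket{0}\ket{0} + \beta\,U\ket{1}\ket{0}
                         = \alpha\ket{00} + \beta\ket{11},
    \]
    while cloning would instead require
    \[
      \ket{\psi}\ket{\psi} = \alpha^2\ket{00} + \alpha\beta\ket{01}
                           + \alpha\beta\ket{10} + \beta^2\ket{11}.
    \]
    The two agree only when $\alpha\beta = 0$, so no universal cloner exists.
    \end{document}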

A true quantum memory would represent a fundamental building block for future quantum repeaters and quantum computers. The central problem with qubits is that they don’t last very long, typically under a tenth of a second. Again, the contrast to classical memory is sobering, since we store hundreds of gigabytes of data on our phones indefinitely. Long-lasting qubits will change the science and the industry. There are many potential solutions in labs today, such as trapped ions, supercooled semiconductors, and diamond vacancies. Quantum memory could, conceptually, allow recursive computation in quantum computers, enabling far-reaching capabilities. Essentially, today’s quantum algorithms must all run in a single pass. You can run only the number of quantum operations that can be completed before the qubits expire, in a fraction of a second. And after you read the results, the quantum information is gone. Long-lived quantum memory could allow much longer iterative computation in quantum computers. Fully quantum memory, with both data and address bits (especially important for HHL) held in superposition, holds incredible potential. What should be understood here is that true quantum memory removes most of the constraints on quantum computing. Machine learning, chemical modeling, process optimization, and advanced AI would all be in reach.
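
The single-pass constraint is easy to quantify: with a coherence time T and a gate duration t, roughly T / t sequential operations fit in one run. The figures below are illustrative round numbers, not the specifications of any particular machine.

    # How many gates fit in a single pass before the qubits expire?
    coherence_time_s = 100e-6   # assume ~100 microseconds of coherence
    gate_time_s = 50e-9         # assume ~50 nanoseconds per gate

    budget = int(coherence_time_s / gate_time_s)
    print(f"~{budget:,} sequential gates per run")   # ~2,000 operations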

Room temperature operation could drive mass adoption of quantum processing in a wide range of devices. The mainstream approaches to quantum computing require temperatures colder than deep space or require vacuum chambers, neither of which scales well. It’s hard to imagine an iPhone when all you have to work with are vacuum tubes, even tiny ones. If one of the outlier technologies such as photonics proves practical, the scalability it enables might supplant the other approaches. Maybe it will be the last jump in the quantum leapfrog race. Room temperature operation could also take quantum computing out of the cloud and into devices like laptops, phones, and—naturally—robots.

Y2Q is possibly the most famous, and most feared, inflection point in quantum technology. This is the day that a sufficiently advanced quantum computer exists—one capable of using Shor’s algorithm to break some of the world’s most commonly used encryption protocols. Potentially catastrophic, this breakthrough could expose all of our currently private communications, such as encrypted emails, secured websites, and personal health information, and even allow someone to undetectably impersonate anyone else, including a government or a bank. Some people compare this to a race for the first nuclear weapon. Matching the concern and potential for global catastrophe, much work is needed to shore up our current encryption standards with advanced technologies like post-quantum cryptography (PQC) and quantum-safe communications like quantum key distribution (QKD). While there is currently a great deal of investment in various mitigations, some present good arguments that, for some situations, it is already too little and too late. While this seems separate from AI advances, it’s actually the combination of machine learning, AI concepts, and Shor’s algorithm that can create a constantly probing, always morphing, computerized threat. AI already plays a growing role in cybersecurity, for both offense and defense. And the winner of that struggle can determine the shape of our future society.
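
To see what Shor’s algorithm actually asks of a quantum computer, here is a toy factoring sketch in Python. Everything in it runs classically; a real attack would replace only find_period(), the single step that is intractable classically at RSA scale. The tiny modulus 15 and the function names are illustrative, not from the chapter.

    # Toy sketch of Shor's factoring recipe. A quantum computer would
    # replace only find_period(); the surrounding steps stay classical.
    from math import gcd
    from random import randrange

    def find_period(a, n):
        """Order of a modulo n, by brute force. This is the quantum step."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(n):
        while True:
            a = randrange(2, n)
            if gcd(a, n) != 1:
                return gcd(a, n)            # lucky guess shares a factor
            r = find_period(a, n)
            if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
                f = gcd(pow(a, r // 2, n) - 1, n)
                if 1 < f < n:
                    return f                # classical post-processing

    print(shor_factor(15))  # prints 3 or 5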

If we are truly reliving the 1940s with respect to quantum technologies, then it could be too early to matter—or the most exciting time possible. There is much uncertainty but also amazing progress. By looking beyond simple qubit counts, we can quickly find many dimensions of quantum information science to track. They all touch each other in one way or another, building a foundation for future revolutions in our understanding of basic science and also changing our world.

Note

  * HHL is a quantum algorithm for linear systems formulated in 2009 by Aram Harrow, Avinatan Hassidim, and Seth Lloyd.