Quantinuum and Microsoft achieve breakthrough that unlocks a new era of reliable quantum computing

Quantinuum’s System Model H2 enabled Microsoft’s qubit-virtualization system to generate the most reliable logical qubits ever recorded – a breakthrough with wide-ranging implications for everyone in quantum computing, accelerating progress and challenging current assumptions about the timeline toward large-scale, reliable quantum computing.

April 3, 2024

By Ilyas Khan, Chief Product Officer, and Jenni Strabley, Senior Director of Offering Management


Quantinuum and Microsoft have announced a vital breakthrough in quantum computing that Microsoft described as “a major achievement for the entire quantum ecosystem.”

By combining Microsoft’s innovative qubit-virtualization system with the unique architectural features and fidelity of Quantinuum’s System Model H2 quantum computer, our teams have demonstrated the most reliable logical qubits on record, with logical circuit error rates 800 times lower than the corresponding physical circuit error rates.


This achievement is not just monumental for Quantinuum and Microsoft; it is a major advancement for the entire quantum ecosystem. It is a crucial milestone on the path to building a hybrid supercomputing system that can truly transform research and innovation across many industries for decades to come. It also further bolsters H2’s title as the highest-performing quantum computer in the world.

Entering a new era of quantum computing

Historically, there have been widely held assumptions about the number of physical qubits needed for large-scale fault-tolerant quantum computing and about the timeline to quantum computers delivering real-world value. It was previously thought that an achievement like this one was still years away from realization – but together, Quantinuum and Microsoft have proved that fault-tolerant quantum computing is in fact a reality.

With today’s announcement, Quantinuum’s System Model H2 becomes the first quantum computer to advance to Microsoft’s Level 2 – Resilient phase of quantum computing – an incredible milestone. Until now, no quantum computer had been capable of producing reliable logical qubits.

Using Microsoft’s qubit-virtualization system, our teams ran 14,000 individual instances of a quantum circuit on reliable logical qubits with no errors – an unprecedented result. Microsoft also demonstrated multiple rounds of active syndrome extraction, an essential error correction capability for measuring and detecting errors without destroying the quantum information encoded in the logical qubit.
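
To make “syndrome extraction” concrete, here is a minimal sketch of the idea using the 3-qubit bit-flip code and pytket, Quantinuum’s open-source SDK. This toy code is far simpler than the one used in the demonstration, and the circuit below is illustrative only: ancilla qubits pick up the parities of the data qubits, so measuring the ancillas reveals where an error occurred without measuring (and destroying) the encoded state.

```python
# Illustrative syndrome extraction with the 3-qubit bit-flip code (a toy
# stand-in, NOT the code used in the Quantinuum/Microsoft demonstration).
from pytket import Circuit

circ = Circuit(5, 2)   # qubits 0-2 hold data, qubits 3-4 are ancillas

# Encode a|0> + b|1> on qubit 0 into the entangled state a|000> + b|111>.
circ.CX(0, 1)
circ.CX(0, 2)

# Copy the parities Z0Z1 and Z1Z2 onto the ancillas. A single bit flip on
# a data qubit flips one or both parities, pinpointing its location.
circ.CX(0, 3)
circ.CX(1, 3)
circ.CX(1, 4)
circ.CX(2, 4)

# Measure only the ancillas: the encoded quantum information survives.
circ.Measure(3, 0)
circ.Measure(4, 1)
```

Repeating this measure-decode-correct cycle while the computation runs is what “multiple rounds of active syndrome extraction” refers to.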

As we prepare to bring today’s logical quantum computing breakthrough to commercial users, there is palpable anticipation about what this new era means for our partners, customers, and the global quantum computing ecosystem that has grown up around our hardware, middleware, and software. 

Collaborating to reach a new era

To understand this achievement, it is helpful to shed some light on the joint work that went into it. Our breakthrough would not have been possible without the close collaboration of the two exceptional teams at Quantinuum and Microsoft over many years.

Building on a relationship that stretches back five years, we collaborated deeply with Microsoft Azure Quantum to best execute their innovative qubit-virtualization system, including error diagnostics and correction. The Microsoft team optimized their error correction innovation, reducing an original estimate of 300 required physical qubits ten-fold: four logical qubits were created from only 30 physical qubits, bringing the experiment into scope for the 32-qubit H2 quantum computer.

This massive compression of the code and efficient virtualization challenges a consensus view about the resources needed for fault-tolerant quantum computing, in which it is routinely stated that a single logical qubit will require hundreds, even thousands, of physical qubits. Microsoft’s far more efficient encoding was made possible by architectural features unique to the System Model H2, including our market-leading 99.8% two-qubit gate fidelity, 32 fully connected qubits, and compatibility with Quantum Intermediate Representation (QIR).
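
The arithmetic behind that claim, using only the figures quoted above, is worth spelling out:

```latex
\frac{300\ \text{physical qubits (original estimate)}}{30\ \text{physical qubits (used on H2)}} = 10\times\ \text{reduction},
\qquad
\frac{30\ \text{physical qubits}}{4\ \text{logical qubits}} = 7.5\ \text{physical qubits per logical qubit}
```

That ratio of 7.5 physical qubits per logical qubit compares with the hundreds-to-thousands commonly assumed.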

Thanks to this powerful combination of collaboration, engineering excellence, and resource efficiency, quantum computing has taken a major step into a new era, introducing reliable logical qubits which will soon be available to industrial and research users.

Understanding today’s error correction breakthrough

It is widely recognized that for a quantum computer to be useful, it must be able to compute correctly even when errors (or faults) occur – this is what scientists and engineers describe as fault-tolerance. 

In classical computing, fault-tolerance is well understood, and we have come to take it for granted: we simply assume that our computers will be reliable and fault-free. Decades of advances have led to this state of affairs, including hardware that is incredibly robust, with very low error rates, and classical error correction schemes that copy information across multiple bits to create redundancy.
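
As a quick illustration of classical redundancy, here is a minimal sketch of the 3-bit repetition code in Python: copy each bit three times and recover it by majority vote. (The flip probability and trial count are arbitrary illustrative values.)

```python
# Toy illustration of classical redundancy: a 3-bit repetition code.
# Copy each bit three times, then recover it by majority vote.
import random

def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def noisy_channel(bits: list[int], p_flip: float = 0.05) -> list[int]:
    # Each copy independently flips with probability p_flip (arbitrary value).
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits: list[int]) -> int:
    return int(sum(bits) >= 2)   # majority vote

trials = 100_000
errors = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
print(f"decoded error rate: {errors / trials:.3%}")
# With p_flip = 0.05, roughly 3*p^2 ~ 0.7% of trials decode wrongly -
# far below the 5% raw error rate of a single unprotected bit.
```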

Getting to the same point in quantum computing is more challenging, although the solution to this problem has been known for some time. Qubits are incredibly delicate: one must control the precise quantum states of single atoms, and those states are prone to errors. Additionally, we must abide by a fundamental law of quantum physics known as the no-cloning theorem, which says that you can’t simply copy qubits – meaning some of the techniques used in classical error correction are unavailable in quantum machines.
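
The no-cloning theorem has a famously short proof, sketched below: assuming a machine that copies arbitrary quantum states leads directly to a contradiction.

```latex
% Assume a "cloning" unitary U with U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle
% for every state |\psi\rangle. Unitaries preserve inner products, so for any
% two states |\psi\rangle and |\phi\rangle:
\langle\phi|\psi\rangle
  = \bigl(\langle\phi|\langle 0|\bigr)\, U^{\dagger} U \,\bigl(|\psi\rangle|0\rangle\bigr)
  = \bigl(\langle\phi|\langle\phi|\bigr)\bigl(|\psi\rangle|\psi\rangle\bigr)
  = \langle\phi|\psi\rangle^{2}
% This forces \langle\phi|\psi\rangle \in \{0, 1\}: only identical or orthogonal
% states could ever be copied, so no machine can clone arbitrary qubits.
```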

The solution involves entangling groups of physical qubits (thereby creating a logical qubit), storing the relevant quantum information in the entangled state, and performing computations on that state with error correction running throughout. The sole purpose of this machinery is to make the logical error rate lower than the error rate at the physical level.
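
The following self-contained numpy sketch walks through one full cycle for the simplest such code, the 3-qubit bit-flip code: encode, inject an error, correct it, and decode. It is a classical simulation of a toy code, not the scheme used on H2.

```python
# Self-contained numpy walk-through of one error correction cycle for the
# 3-qubit bit-flip code (a toy model, not the code or scale used on H2).
import numpy as np

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def cx(control, target, n=3):
    """Matrix of a CNOT acting on an n-qubit statevector (qubit 0 leftmost)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for basis in range(dim):
        bits = [(basis >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(b << (n - 1 - q) for q, b in enumerate(bits)), basis] = 1.0
    return U

# Start with |psi> = 0.6|0> + 0.8|1> on qubit 0, ancillas in |0>.
a, b = 0.6, 0.8
state = kron(np.array([[a], [b]]), [[1.0], [0.0]], [[1.0], [0.0]]).ravel()

# Encode: |psi>|00>  ->  a|000> + b|111>, an entangled logical qubit.
state = cx(0, 1) @ cx(0, 2) @ state

# Inject a single bit-flip error on qubit 1.
state = kron(I, X, I) @ state

# The flipped parities Z0Z1 and Z1Z2 would flag qubit 1; apply the fix.
state = kron(I, X, I) @ state

# Decode and confirm the original amplitudes survived on qubit 0.
state = cx(0, 2) @ cx(0, 1) @ state
print(np.round(state.reshape(2, 4)[:, 0], 6))   # -> [0.6, 0.8]
```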

However, implementing quantum error correction requires a significant number of qubit operations, and unless the underlying physical fidelity is good enough, an error-correcting code will add more noise to your circuit than it takes away – no matter how cleverly the code is implemented. Once your physical fidelity is good enough (i.e., once the physical error rate is “below threshold”), the error-correcting code starts to actually help, producing logical errors below the physical errors.
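
A standard heuristic makes this threshold behavior concrete: below threshold, the logical error rate is often modeled as falling like a power of the ratio between the physical error rate and the threshold. The threshold and prefactor below are assumed round numbers for illustration, not H2 measurements.

```python
# Illustrative threshold behavior (a standard heuristic scaling law, with
# assumed round numbers for the threshold and prefactor; not H2 data):
#   p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) // 2)
p_th = 1e-2   # assumed threshold error rate
A = 0.1       # assumed prefactor

def logical_error(p_phys: float, d: int) -> float:
    """Logical error rate for a distance-d code under the heuristic model."""
    return A * (p_phys / p_th) ** ((d + 1) // 2)

for p in (2e-2, 5e-3, 1e-3):
    rates = {d: f"{logical_error(p, d):.1e}" for d in (3, 5, 7)}
    print(f"p_phys = {p:.0e} ->", rates)

# Above threshold (2e-2): larger code distance makes the logical error WORSE.
# Below threshold (5e-3, 1e-3): each step up in distance suppresses it further.
```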

[Image: System Model H2 ion-trap quantum computer chip, showing the “racetrack” trap design]
Quantinuum’s fault-tolerance roadmap

Today’s results are an exciting marker on the path to fault-tolerant quantum computing. The focus must – and will – now shift from quantum computing companies simply stating the number of qubits they have to explaining their qubits’ connectivity, the underlying quality of those qubits with reference to gate fidelities, and their approach to fault-tolerance.

Our H-Series hardware roadmap has focused not only on scaling qubit counts but also on developing usable quantum computers that are part of a vertically integrated stack. Our work across the full stack includes major advances at every level: just last month, for instance, we demonstrated that our qubits can scale when we announced solutions to the wiring problem and the sorting problem. By combining high qubit counts with world-class fidelity, we enable our customers and partners to advance further and faster in fields such as materials science, drug discovery, AI, and finance.

In 2025, we will introduce a new H-Series quantum computer, Helios, that takes the very best the H-Series has to offer, improving both physical qubit count and physical fidelity. This will take us and our users below threshold for a wider set of error-correcting codes and make that device capable of supporting at least 10 highly reliable logical qubits.

A path to real-world impact

As we build upon today’s milestone and lead the field on the path to fault-tolerance, we are committed to continuing to make significant strides in the research that enables the rapid advance of our technologies. We were the first to demonstrate real-time quantum error correction (meaning a fully fault-tolerant QEC protocol), a result that made us the first to show repeated real-time error correction, quantum "loops" (repeat-until-success protocols), and real-time decoding to determine the corrections during the computation. We were also the first to create non-Abelian topological quantum matter and braid its anyons, a key step toward topological qubits.

The native flexibility of our QCCD architecture has allowed us to efficiently investigate a large variety of fault-tolerant methods, and our best-in-class fidelity means we expect to lead the way in achieving reduced error rates with additional error correcting codes – and supporting our partners to do the same. We are already working on making reliable quantum computing a commercial reality so that our customers and partners can unlock the enormous real-world economic value that is waiting to be unleashed by the development of these systems. 

In the short term, with a hybrid supercomputer powered by a hundred reliable logical qubits, we believe organizations will start to see scientific advantage and will accelerate valuable progress on some of the most important problems humanity faces, such as modelling the materials used in batteries and hydrogen fuel cells or accelerating the development of meaning-aware AI language models. Over the long term, if we are able to scale closer to ~1,000 reliable logical qubits, we will be able to unlock the commercial advantages that can ultimately transform the commercial world.

Quantinuum customers have always had access to the most cutting-edge quantum computing, and we look forward to seeing how they, and our own world-leading teams, drive ahead in developing new solutions based on the state-of-the-art tools we continue to put into their hands. We were early leaders in quantum computing, and now we are thrilled to be positioned at the forefront of fault-tolerant quantum computing. We are excited to see what today’s milestone unlocks for our customers in the days ahead.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

Blog
January 22, 2025
Quantum Computers Will Make AI Better
Today’s LLMs are often impressive by past standards – but they are far from perfect

Quietly and determinedly, we have been working on Generative Quantum AI since 2019. Our early focus on building natively quantum systems for machine learning has benefitted from, and been accelerated by, access to the world’s most powerful quantum computers – quantum computers that cannot be classically simulated.

Our work additionally benefits from close proximity to our Helios-generation quantum computer, built in Colorado, USA. Helios is 1 trillion times more powerful than our H2 System, which is already significantly more advanced than all other quantum computers available.

While tools like ChatGPT have already made a profound impact on society, a critical limitation to their broader industrial and enterprise use has become clear. Classical large language models (LLMs) are computational behemoths, prohibitively huge and expensive to train, and prone to errors that damage their credibility.

Training models like ChatGPT requires processing vast datasets with billions, even trillions, of parameters. This demands immense computational power, often spread across thousands of GPUs or specialized hardware accelerators. The environmental cost is staggering—simply training GPT-3, for instance, consumed nearly 1,300 megawatt-hours of electricity, equivalent to the annual energy use of 130 average U.S. homes.

This doesn’t account for the ongoing operational costs of running these models, which remain high with every query. 

Despite these challenges, the push to develop ever-larger models shows no signs of slowing down.

Enter quantum computing. Quantum technology offers a more sustainable, efficient, and high-performance solution—one that will fundamentally reshape AI, dramatically lowering costs and increasing scalability, while overcoming the limitations of today's classical systems. 

Quantum Natural Language Processing: A New Frontier

At Quantinuum we have been maniacally focused on “rebuilding” machine learning (ML) techniques for Natural Language Processing (NLP) using quantum computers. 

Our research team has worked on translating key innovations in natural language processing — such as word embeddings, recurrent neural networks, and transformers — into the quantum realm. The ultimate goal is not merely to port existing classical techniques onto quantum computers but to reimagine these methods in ways that take full advantage of the unique features of quantum computers.

We have a deep bench working on this. Our Head of AI, Dr. Steve Clark, previously spent 14 years as a faculty member at Oxford and Cambridge, and over 4 years as a Senior Staff Research Scientist at DeepMind in London. He works closely with Dr. Konstantinos Meichanetzidis, who is our Head of Scientific Product Development and who has been working for years at the intersection of quantum many-body physics, quantum computing, theoretical computer science, and artificial intelligence.

A critical element of the team’s approach to this project is avoiding the temptation to simply “copy-paste”, i.e., taking the math from a classical version and directly implementing it on a quantum computer.

This is motivated by the fact that quantum systems are fundamentally different from classical systems: their ability to leverage quantum phenomena like entanglement and interference ultimately changes the rules of computation. By ensuring these new models are properly mapped onto the quantum architecture, we are best poised to benefit from quantum computing’s unique advantages. 

These advantages are not so far in the future as we once imagined – partially driven by our accelerating pace of development in hardware and quantum error correction.

Making computers “talk” - a short history

The problem of making a computer understand a human language isn’t unlike trying to learn a new language yourself – you must hear, read, and speak lots of examples, memorize lots of rules and their exceptions, and learn words and their meanings. It’s more complicated than that, however, when the “brain” is a computer. Computers naturally speak their native languages very well: everything from machine code to Python has a meaningful structure and set of rules.

In contrast, “natural” (human) language is very different from the strict compliance of computer languages: idioms confound any sense of structure, humor and poetry play with semantics in creative ways, and the language itself is always evolving. Still, people have been considering this problem since the 1950s (Turing’s original “test” of intelligence involves the automated interpretation and generation of natural language).

Up until the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction of machine learning algorithms for language processing. 

Initial ML approaches were largely “statistical”: by analyzing large amounts of text data, one can identify patterns and probabilities. There were notable successes in translation (like translating French into English), and the birth of the web led to more innovations in learning from and handling big data.

What many consider “modern” NLP was born in the late 2000s, when expanded compute power and larger datasets enabled the practical use of neural networks. Being mathematical models, neural networks are “built” out of the tools of mathematics: specifically, linear algebra and calculus.

Building a neural network, then, means finding ways to manipulate language using the tools of linear algebra and calculus. This means representing words and sentences as vectors and matrices, developing tools to manipulate them, and so on. This is precisely the path that researchers in classical NLP have been following for the past 15 years, and the path that our team is now speedrunning in the quantum case.
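
A toy example shows what “words as vectors” means in practice. The three-dimensional vectors below are made up for illustration (real embeddings are learned and typically have hundreds of dimensions), but the mechanics – comparing words by the cosine of the angle between their vectors – are the same.

```python
# Toy "words as vectors": compare words via cosine similarity in R^3.
# (Hand-picked 3-d vectors for illustration; real embeddings are learned.)
import numpy as np

embedding = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embedding["king"], embedding["queen"]))   # high: related words
print(cosine(embedding["king"], embedding["apple"]))   # low: unrelated words
```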

Quantum Word Embeddings: A Complex Twist

The first major breakthrough in neural NLP came roughly a decade ago, when vector representations of words were developed, using the frameworks known as Word2Vec and GloVe (Global Vectors for Word Representation). In a recent paper, our team, including Carys Harvey and Douglas Brown, demonstrated how to do this in quantum NLP models – with a crucial twist. Instead of embedding words as real-valued vectors (as in the classical case), the team built it to work with complex-valued vectors.

In quantum mechanics, the state of a physical system is represented by a vector residing in a complex vector space, called a Hilbert space. By embedding words as complex vectors, we are able to map language into parameterized quantum circuits, and ultimately onto the qubits in our processor. This is a major advance that was largely underappreciated by the AI community but is now rapidly gaining interest.

Using complex-valued word embeddings for QNLP means that from the bottom-up we are working with something fundamentally different. This different “geometry” may provide advantage in any number of areas: natural language has a rich probabilistic and hierarchical structure that may very well benefit from the richer representation of complex numbers.
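
As a sketch of the idea (with made-up numbers, and skipping the circuit-parameterization step used in the actual models): a complex-valued embedding, once normalized, can be read directly as the amplitudes of a multi-qubit quantum state.

```python
# Sketch: a complex-valued word embedding read as 2-qubit amplitudes.
# (Made-up vector; real models learn such parameters inside quantum circuits.)
import numpy as np

word_vec = np.array([0.5 + 0.2j, -0.3 + 0.7j, 0.1 - 0.1j, 0.4 + 0.0j])

# A quantum state needs unit norm, so rescale the embedding...
state = word_vec / np.linalg.norm(word_vec)
assert np.isclose(np.vdot(state, state).real, 1.0)

# ...its 4 complex amplitudes now define a 2-qubit state over |00>..|11>.
print(np.abs(state) ** 2)   # measurement probabilities for each basis state
```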

The Quantum Recurrent Neural Network (RNN)

Another breakthrough comes from the development of quantum recurrent neural networks (RNNs). RNNs are commonly used in classical NLP to handle tasks such as text classification and language modeling. 

Our team, including Dr. Wenduan Xu, Douglas Brown, and Dr. Gabriel Matos, implemented a quantum version of the RNN using parameterized quantum circuits (PQCs). PQCs allow for hybrid quantum-classical computation, where quantum circuits process information and classical computers optimize the parameters controlling the quantum system.
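
The hybrid loop itself is easy to sketch. The toy below is a one-parameter, one-qubit stand-in, not the team’s quantum RNN: numpy plays the role of the quantum processor, and a simple finite-difference optimizer plays the classical half, nudging the circuit parameter to steer a measured expectation value toward a target.

```python
# Toy hybrid quantum-classical loop in the spirit of a PQC (a one-parameter,
# one-qubit stand-in for illustration, NOT the team's quantum RNN).
import numpy as np

def ry(theta):
    """Single-qubit Ry rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation_z(theta):
    """'Quantum' half: run Ry(theta)|0> and return <Z> (here via numpy)."""
    state = ry(theta) @ np.array([1.0, 0.0])
    return float(state[0] ** 2 - state[1] ** 2)

# 'Classical' half: steer <Z> toward a target with finite-difference descent.
target, theta, lr, eps = -1.0, 0.1, 0.5, 1e-4
for _ in range(500):
    loss = (expectation_z(theta) - target) ** 2
    grad = ((expectation_z(theta + eps) - target) ** 2 - loss) / eps
    theta -= lr * grad

print(theta, expectation_z(theta))   # theta ~ pi, <Z> ~ -1
```

On real hardware, the expectation value would come from repeated circuit executions rather than a statevector, but the division of labor is the same.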

In a recent experiment, the team used their quantum RNN to perform a standard NLP task: classifying movie reviews from Rotten Tomatoes as positive or negative. Remarkably, the quantum RNN performed as well as classical RNNs, GRUs, and LSTMs, using only four qubits. This result is notable for two reasons: it shows that quantum models can achieve competitive performance using a much smaller vector space, and it demonstrates the potential for significant energy savings in the future of AI.

In a similar experiment, our team partnered with Amgen to use PQCs for peptide classification, which is a standard task in computational biology. Working on the Quantinuum System Model H1, the joint team performed sequence classification (used in the design of therapeutic proteins), and they found competitive performance with classical baselines of a similar scale. This work was our first proof-of-concept application of near-term quantum computing to a task critical to the design of therapeutic proteins, and helped us to elucidate the route toward larger-scale applications in this and related fields, in line with our hardware development roadmap.

Quantum Transformers - The Next Big Leap

Transformers, the architecture behind models like GPT-3, have revolutionized NLP by enabling massive parallelism and state-of-the-art performance in tasks such as language modeling and translation. However, transformers are designed to take advantage of the parallelism provided by GPUs, something quantum computers do not yet do in the same way.

In response, our team, including Nikhil Khatri and Dr. Gabriel Matos, introduced “Quixer”, a quantum transformer model tailored specifically for quantum architectures. 

By using quantum algorithmic primitives, Quixer is optimized for quantum hardware, making it highly qubit efficient. In a recent study, the team applied Quixer to a realistic language modeling task and achieved results competitive with classical transformer models trained on the same data. 

This is an incredible milestone achievement in and of itself. 

This paper also marks the first quantum machine learning model applied to language on a realistic rather than toy dataset. 

This is a truly exciting advance for anyone interested in the union of quantum computing and artificial intelligence – and one at risk of being lost in the growing ‘noise’ from a quantum computing sector in which organizations trying to raise capital often highlight somewhat trivial advances that are frequently duplicative.

Quantum Tensor Networks: A Scalable Approach

Carys Harvey and Richie Yeung from Quantinuum in the UK worked with a broader team that explored the use of quantum tensor networks for NLP. Tensor networks are mathematical structures that efficiently represent high-dimensional data, and they have found applications in everything from quantum physics to image recognition. In the context of NLP, tensor networks can be used to perform tasks like sequence classification, where the goal is to classify sequences of words or symbols based on their meaning.

The team performed experiments on our System Model H1, finding comparable performance to classical baselines. This marked the first time a scalable NLP model was run on quantum hardware – a remarkable advance. 

The tree-like structure of quantum tensor models lends itself incredibly well to features inherent to our architecture, such as mid-circuit measurement and qubit re-use, allowing us to squeeze big problems onto few qubits; the sketch below shows the pattern.
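
Here is a minimal illustration of that measure-and-reuse pattern in pytket (a toy circuit with placeholder angles, not the published tensor-network model): a 4-token sequence is processed with just two qubits, because the token qubit is measured and reset between tokens.

```python
# Toy mid-circuit measurement + qubit-reuse pattern in pytket (Quantinuum's
# SDK). Angles and structure are placeholders, not the published model.
from pytket.circuit import Circuit, OpType

n_tokens = 4
circ = Circuit(2, n_tokens)        # 2 qubits: one per token, one for context

for t in range(n_tokens):
    circ.Ry(0.2 * (t + 1), 0)      # "load" token t onto the token qubit
    circ.CX(0, 1)                  # entangle it with the carried context
    circ.Measure(0, t)             # mid-circuit measurement of the token qubit
    circ.add_gate(OpType.Reset, [0])   # reset so the same qubit serves token t+1

print(circ.get_commands())         # 4 tokens processed with only 2 qubits
```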

Since quantum theory is inherently described by tensor networks, this is another example of how fundamentally different quantum machine learning approaches can look – again, there is a sort of “intuitive” mapping of the tensor networks used to describe the NLP problem onto the tensor networks used to describe the operation of our quantum processors.

What we’ve learned so far

While it is still very early days, we have good indications that running AI on quantum hardware will be more energy efficient. 

We recently published a result on “random circuit sampling”, a task used to compare quantum and classical computers. We beat the classical supercomputer in both time to solution and energy use: our quantum computer used 30,000x less energy to complete the task than Frontier, the classical supercomputer we compared against.

We may see, as our quantum AI models grow in power and size, that there is a similar scaling in energy use: it’s generally more efficient to use ~100 qubits than it is to use ~10^18 classical bits.

Another major insight so far is that quantum models tend to require significantly fewer parameters to train than their classical counterparts. In classical machine learning, particularly in large neural networks, the number of parameters can grow into the billions, leading to massive computational demands. 

Quantum models, by contrast, leverage the unique properties of quantum mechanics to achieve comparable performance with a much smaller number of parameters. This could drastically reduce the energy and computational resources required to run these models.

The Path Ahead

As quantum computing hardware continues to improve, quantum AI models may increasingly complement or even replace classical systems. By leveraging quantum superposition, entanglement, and interference, these models offer the potential for significant reductions in both computational cost and energy consumption. With fewer parameters required, quantum models could make AI more sustainable, tackling one of the biggest challenges facing the industry today.

The work being done by Quantinuum reflects the start of the next chapter in AI, and one that is transformative. As quantum computing matures, its integration with AI has the potential to unlock entirely new approaches that are not only more efficient and performant but can also handle the full complexities of natural language. The fact that Quantinuum’s quantum computers are the most advanced in the world, and cannot be simulated classically, gives us a unique glimpse into the future.

The future of AI now looks very much to be quantum and Quantinuum’s Gen QAI system will usher in the era in which our work will have meaningful societal impact.

Blog
December 9, 2024
Q2B 2024: The Roadmap to Quantum Value

At this year’s Q2B Silicon Valley conference, December 10-12 in Santa Clara, California, the Quantinuum team will participate in plenary and case study sessions to showcase our quantum computing technologies.

Schedule a meeting with us at Q2B

Meet our team at Booth #G9 to discover how Quantinuum is charting the path to universal, fully fault-tolerant quantum computing. 

Join our sessions: 

Tuesday, Dec 10, 10:00 - 10:20am PT

Plenary: Advancements in Fault-Tolerant Quantum Computation: Demonstrations and Results

There is industry-wide consensus on the need for fault-tolerant QPUs, but demonstrations of these abilities are less common. In this talk, Dr. Hayes will review Quantinuum’s long list of meaningful demonstrations of fault-tolerance, including real-time error correction, a variety of codes from the surface code to exotic qLDPC codes, logical benchmarking, and beyond-break-even behavior on multiple codes and circuit families.

View the presentation

Wednesday, Dec 11, 4:30 – 4:50pm PT

Keynote: Quantum Tokens: Securing Digital Assets with Quantum Physics

Mitsui’s Deputy General Manager, Quantum Innovation Dept., Corporate Development Div., Koji Naniwada, and Quantinuum’s Head of Cybersecurity, Duncan Jones will deliver a keynote presentation on a case study for quantum in cybersecurity. Together, our organizations demonstrated the first implementation of quantum tokens over a commercial QKD network. Quantum tokens enable three previously incompatible properties: unforgeability guaranteed by physics, fast settlement without centralized validation, and user privacy until redemption. We present results from our successful Tokyo trial using NEC's QKD commercial hardware and discuss potential applications in financial services.

Details on the case study

Wednesday, Dec 11, 5:10 – 6:10pm PT

Quantinuum and Mitsui Sponsored Happy Hour

Join the Quantinuum and Mitsui teams in the expo hall for a networking happy hour. 
