Blog

Discover how we are pushing the boundaries in the world of quantum computing

July 2, 2025
Cracking the code of superconductors: Quantum computers just got closer to the dream

In a new paper in Nature Physics, we've made a major breakthrough in one of quantum computing’s most elusive promises: simulating the physics of superconductors. A deeper understanding of superconductivity would have an enormous impact: greater insight could pave the way to real-world advances, like phone batteries that last for months, “lossless” power grids that drastically reduce your bills, or MRI machines that are widely available and cheap to use.  The development of room-temperature superconductors would transform the global economy.

A key promise of quantum computing is that it has a natural advantage when studying inherently quantum systems, like superconductors. In many ways, it is precisely the deeply ‘quantum’ nature of superconductivity that makes it both so transformative and so notoriously difficult to study.

Now, we are pleased to report that we just got a lot closer to that ultimate dream.

Making the impossible possible

To study something like a superconductor with a quantum computer, you need to first “encode” the elements of the system you want to study onto the qubits – in other words, you want to translate the essential features of your material onto the states and gates you will run on the computer.

For superconductors in particular, you want to encode the behavior of particles known as “fermions” (like the familiar electron). Naively simulating fermions using qubits will result in garbage data, because qubits alone lack the key properties that make a fermion so unique.

Until recently, scientists used something called the “Jordan-Wigner” encoding to properly map fermions onto qubits. People have argued that the Jordan-Wigner encoding is one of the main reasons fermionic simulations have not progressed beyond simple one-dimensional chain geometries: it requires too many gates as the system size grows.  
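To make the gate-count problem concrete, here is a minimal pure-Python sketch of how a single hopping term becomes a Pauli string under the Jordan-Wigner encoding. This is an illustration of the textbook mapping only, not the encoding or compiler from the paper:

```python
# Illustrative sketch of the Jordan-Wigner mapping (textbook version, not
# Quantinuum's compiler). A hopping term between fermionic modes i < j maps
# to Pauli strings acting on qubits i and j, with a chain of Z operators in
# between to track fermionic parity, so the cost grows with |i - j|.

def jw_hopping_string(i, j, n_modes):
    """One of the Pauli strings for a_i^dag a_j + h.c. under Jordan-Wigner."""
    assert i < j < n_modes
    paulis = ["I"] * n_modes
    paulis[i] = "X"
    paulis[j] = "X"
    for k in range(i + 1, j):   # the parity (Z) chain between the two modes
        paulis[k] = "Z"
    return "".join(paulis)

def weight(pauli_string):
    """Number of non-identity factors: a rough proxy for gate cost."""
    return sum(p != "I" for p in pauli_string)

# Nearest-neighbour hopping on a 1D chain stays cheap...
print(jw_hopping_string(0, 1, 6))   # XXIIII
# ...but a "vertical" bond on a 2D lattice, flattened into a 1D mode
# ordering, picks up a long Z chain:
print(jw_hopping_string(0, 5, 6))   # XZZZZX
```

This is why simulations stayed stuck on 1D chains: in two dimensions, many bonds acquire long Z chains, and the gate count per term grows with the system's width.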

Even worse, the Jordan-Wigner encoding has the nasty property that it is, in a sense, maximally non-fault-tolerant: one error occurring anywhere in the system affects the whole state, which generally leads to an exponential overhead in the number of shots required. Due to this, until now, simulating relevant systems at scale – one of the big promises of quantum computing – has remained a daunting challenge.

Theorists have addressed the issues of the Jordan-Wigner encoding and have suggested alternative fermionic encodings. In practice, however, the circuits created from these alternative encodings come with large overheads and have so far not been practically useful.

We are happy to report that our team developed a new way to compile one of these alternative encodings that dramatically improves both efficiency and accuracy, overcoming the limitations of older approaches. The new compilation scheme is the most efficient yet, slashing the cost of simulating fermionic hopping by an impressive 42%. On top of that, the team also introduced new, targeted error-mitigation techniques that ensure even larger systems can be simulated with far fewer computational "shots"—a critical advantage in quantum computing.

Using their innovative methods, the team was able to simulate the Fermi-Hubbard model—a cornerstone of condensed matter physics—at a previously unattainable scale. By encoding 36 fermionic modes into 48 physical qubits on System Model H2, they achieved the largest quantum simulation of this model to date.
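For a sense of what "36 fermionic modes" means, here is a back-of-the-envelope sketch of the Fermi-Hubbard model's term structure. The lattice shape below (6 x 3 sites with spin-up and spin-down modes) is an illustrative assumption chosen to give 36 modes, not necessarily the geometry used in the paper:

```python
# Back-of-the-envelope term counting for the Fermi-Hubbard Hamiltonian
#   H = -t * sum(hopping over bonds and spins) + U * sum_sites(n_up * n_dn).
# The 6 x 3 lattice here is an illustrative assumption yielding 36 modes.

def hubbard_terms(lx, ly):
    """Return (fermionic modes, hopping terms, interaction terms) for an lx-by-ly grid."""
    bonds = lx * (ly - 1) + ly * (lx - 1)   # nearest-neighbour bonds on the grid
    hopping = 2 * bonds                     # one hopping term per bond, per spin
    interaction = lx * ly                   # one U n_up n_dn term per site
    modes = 2 * lx * ly                     # two spin modes per site
    return modes, hopping, interaction

modes, hop, u = hubbard_terms(6, 3)
print(modes, hop, u)   # 36 54 18
```

Every one of those hopping terms must be compiled into qubit gates through the fermionic encoding, which is why the 42% reduction in hopping cost matters so much at this scale.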

This marks an important milestone in quantum computing: it demonstrates that large-scale simulations of complex quantum systems, like superconductors, are now within reach.

Unlocking the Quantum Age, One Breakthrough at a Time

This breakthrough doesn’t just show how we can push the boundaries of what quantum computers can do; it brings one of the most exciting use cases of quantum computing much closer to reality. With this new approach, scientists can soon begin to simulate materials and systems that were once thought too complex for the most powerful classical computers alone. And in doing so, they’ve unlocked a path to potentially solving one of the most exciting and valuable problems in science and technology: understanding and harnessing the power of superconductivity.

The future of quantum computing—and with it, the future of energy, electronics, and beyond—just got a lot more exciting.

July 1, 2025
Quantinuum with partners Princeton and NIST deliver seminal result in quantum error correction

In an experiment led by Princeton and NIST, we’ve just delivered a crucial result in Quantum Error Correction (QEC), demonstrating key principles of scalable quantum computing developed by Drs Peter Shor, Dorit Aharonov, and Michael Ben-Or. In this latest paper, we showed that by using “concatenated codes” noise can be exponentially suppressed — proving that quantum computing will scale.

When noise is low enough, the results are transformative

Quantum computing is already producing results, but high-profile applications like Shor’s algorithm—which can break RSA encryption—require error rates about a billion times lower than what today’s machines can achieve.

Achieving such low error rates is a holy grail of quantum computing. Peter Shor was the first to hypothesize a way forward, in the form of quantum error correction. Building on his results, Dorit Aharonov and Michael Ben-Or proved that by concatenating quantum error correcting codes, a sufficiently high-quality quantum computer can suppress error rates arbitrarily at the cost of a very modest increase in the required number of qubits. Without that insight, building a truly fault-tolerant quantum computer would be impossible.
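The power of concatenation is easy to see numerically. Below is a sketch of the standard threshold-theorem scaling, in which k levels of concatenation suppress errors double-exponentially while qubit overhead grows only polynomially. The numbers (physical error rate 1e-3, threshold 1e-2) are illustrative, not measured values from this experiment:

```python
# Sketch of the threshold theorem's scaling: with k levels of code
# concatenation, the logical error rate behaves roughly as
#   p_k ~ p_th * (p / p_th)^(2^k)
# whenever the physical error rate p is below the threshold p_th.
# The numbers here (p = 1e-3, p_th = 1e-2) are illustrative only.

def concatenated_error(p, p_th, k):
    """Approximate logical error rate after k levels of concatenation."""
    return p_th * (p / p_th) ** (2 ** k)

p, p_th = 1e-3, 1e-2
for k in range(4):
    print(f"level {k}: ~{concatenated_error(p, p_th, k):.0e}")
# Three levels already push the error rate from 1e-3 down to ~1e-10,
# the regime needed for applications like Shor's algorithm.
```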

Their results, now widely referred to as the “threshold theorem”, laid the foundation for realizing fault-tolerant quantum computing. At the time, many doubted that the error rates required for large-scale quantum algorithms could ever be achieved in practice. The threshold theorem made clear that large scale quantum computing is a realistic possibility, giving birth to the robust quantum industry that exists today.

Realizing a legendary vision

Until now, nobody has realized the original vision for the threshold theorem. Last year, Google performed a beautiful demonstration of the threshold theorem in a different context (without concatenated codes). This year, we are proud to report the first experimental realization of that seminal work—demonstrating fault-tolerant quantum computing using concatenated codes, just as they envisioned.

The benefits of concatenation

The team demonstrated that their family of protocols achieves high error thresholds—making them easier to implement—while requiring minimal ancilla qubits, meaning lower overall qubit overhead. Remarkably, their protocols are so efficient that fault-tolerant preparation of basis states requires zero ancilla overhead, making the process maximally efficient.

This approach to error correction has the potential to significantly reduce qubit requirements across multiple areas, from state preparation to the broader QEC infrastructure. Additionally, concatenated codes offer greater design flexibility, which makes them especially attractive. Taken together, these advantages suggest that concatenation could provide a faster and more practical path to fault-tolerant quantum computing than popular approaches like the surface code.

We’re always looking forward

From a broader perspective, this achievement highlights the power of collaboration between industry, academia, and national laboratories. Quantinuum’s commercial quantum systems are so stable and reliable that our partners were able to carry out this groundbreaking research remotely—over the cloud—without needing detailed knowledge of the hardware. While we very much look forward to welcoming them to our labs before long, it’s notable that they never needed to step inside to harness the full capabilities of our machines.

As we make quantum computing more accessible, the rate of innovation will only increase. The era of plug-and-play quantum computing has arrived. Are you ready?

June 26, 2025
Quantinuum Overcomes Last Major Hurdle to Deliver Scalable Universal Fault-Tolerant Quantum Computers by 2029

Quantum computing companies are poised to exceed $1 billion in revenues by the close of 2025, according to McKinsey & Company, underscoring how today’s quantum computers are already delivering customer value in their current phase of development.

This figure is projected to reach upwards of $37 billion by 2030, rising in parallel with escalating demand, the scale of the machines, and the complexity of the problems they will be able to address.

Several systems on the market today are fault-tolerant by design, meaning they are capable of suppressing error-causing noise to yield reliable calculations. However, the full potential of quantum computing to tackle problems of true industrial relevance, in areas like medicine, energy, and finance, remains contingent on an architecture that supports a fully fault-tolerant universal gate set with repeatable error correction—a capability that, until now, has eluded the industry.  

Quantinuum is the first—and only—company to achieve this critical technical breakthrough, universally recognized as the essential precursor to scalable, industrial-scale quantum computing. This milestone provides us with the most de-risked development roadmap in the industry and positions us to fulfill our promise to deliver our universal, fully fault-tolerant quantum computer, Apollo, by 2029.

In this regard, Quantinuum is the first company to step from the so-called “NISQ” (noisy intermediate-scale quantum) era towards utility-scale quantum computers.

Unpacking our achievement: first, a ‘full’ primer

A quantum computer uses operations called gates to process information in ways that even today’s fastest supercomputers cannot. The industry typically refers to two types of gates for quantum computers:

  • Clifford gates, which can be easily simulated by classical computers, and are relatively easy to implement; and
  • Non-Clifford gates, which are usually harder to implement, but are required to enable true quantum computation (when combined with their siblings).

A system that can run both gates is classified as universal and has the machinery to tackle the widest range of problems. Without non-Clifford gates, a quantum computer is non-universal and restricted to smaller, easier sets of tasks - and it can always be simulated by classical computers. This is like painting with a full palette of primary colors, versus only having one or two to work with. Simply put, a quantum computer that cannot implement ‘non-Clifford’ gates is not really a quantum computer.
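The Clifford/non-Clifford divide has a crisp mathematical signature: a Clifford gate maps every Pauli operator to another Pauli operator under conjugation, which is exactly what makes it classically simulable, while a non-Clifford gate like T does not. The following is a self-contained illustration of that textbook fact (not code from our stack):

```python
# Minimal 2x2 linear algebra illustrating the Clifford / non-Clifford divide:
# a Clifford gate maps every Pauli operator to another Pauli (up to phase)
# under conjugation; the non-Clifford T gate does not. Textbook illustration.
import cmath
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

def conj_by(g, p):
    """g P g^dagger: how gate g transforms Pauli P in the Heisenberg picture."""
    return matmul(matmul(g, p), dagger(g))

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def equal_up_to_phase(a, b):
    """True if a == phase * b for some unit phase."""
    for i in range(2):
        for j in range(2):
            if abs(b[i][j]) > 1e-9:
                phase = a[i][j] / b[i][j]
                return (abs(abs(phase) - 1) < 1e-9 and
                        all(abs(a[r][c] - phase * b[r][c]) < 1e-9
                            for r in range(2) for c in range(2)))
    return False

def is_pauli_up_to_phase(m):
    return any(equal_up_to_phase(m, p) for p in (I, X, Y, Z))

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]                           # Hadamard: a Clifford gate
T = [[1, 0], [0, cmath.exp(1j * math.pi / 4)]]  # T gate: non-Clifford

print(is_pauli_up_to_phase(conj_by(H, X)))   # True:  H X H† = Z
print(is_pauli_up_to_phase(conj_by(T, X)))   # False: T X T† is not a Pauli
```

Because conjugation by a Clifford gate just shuffles Pauli operators around, a classical computer can track the state symbolically; once T gates enter, that bookkeeping breaks down and genuine quantum computation begins.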

A fault-tolerant, or error-corrected, quantum computer detects and corrects its own errors (or faults) to produce reliable results. Quantinuum has the best and brightest scientists dedicated to keeping our systems’ error rates the lowest in the world.

For a quantum computer to be fully fault-tolerant, every operation must be error-resilient—across both Clifford and non-Clifford gates—meaning the machine performs a “full gate set” with error correction. While some groups have performed fully fault-tolerant gate sets in academic settings, these demonstrations were done with only a few qubits and error rates near 10%—too high for any practical use.

Today, we have published two papers that establish Quantinuum as the first company to develop a complete solution for a universal, fully fault-tolerant quantum computer with repeatable error correction and error rates low enough for real-world applications.

This is where the magic happens

The first paper describes how scientists at Quantinuum used our System Model H1-1 to perfect magic state production, a crucial technique for achieving a fully fault-tolerant universal gate set. In doing so, they set a record for magic state infidelity (7x10^-5), 10x better than any previously published result.

Our simulations show that our system could reach a magic state infidelity of 10^-10, or about one error per 10 billion operations, on a larger-scale computer with our current physical error rate. We anticipate reaching 10^-14, or about one error per 100 trillion operations, as we continue to advance our hardware. This means that our roadmap is now de-risked.
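To see why those infidelity levels matter, here is some rough failure-probability arithmetic, assuming independent errors across magic states. This is an illustrative model, not a claim about any specific hardware run:

```python
# Rough failure-probability arithmetic for a circuit consuming many magic
# states, assuming independent errors (an illustrative model, not a claim
# about any specific hardware run).

def circuit_failure_prob(infidelity, n_magic_states):
    """Probability that at least one magic state in the circuit is faulty."""
    return 1 - (1 - infidelity) ** n_magic_states

# At the record infidelity of 7x10^-5, a 10,000-magic-state circuit still
# fails about half the time:
print(circuit_failure_prob(7e-5, 10_000))    # ~0.50
# At the projected 10^-10, the same circuit is essentially error-free:
print(circuit_failure_prob(1e-10, 10_000))   # ~1e-6
```

Large algorithms consume magic states by the millions, which is why pushing infidelity from 10^-5 toward 10^-10 and beyond is what separates demos from applications.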

Setting a record magic state infidelity was just the beginning. The paper also presents the first break-even two-qubit non-Clifford gate, demonstrating a logical error rate below the physical one. In doing so, the team set another record for two-qubit non-Clifford gate infidelity (2x10^-4, almost 10x better than our physical error rate). Putting everything together, the team ran the first circuit that used a fully fault-tolerant universal gate set, a critical moment for our industry.

Flipping the switch

In the second paper, co-authored with researchers at the University of California at Davis, we demonstrated an important technique for universal fault-tolerance called “code switching”.

Code switching means moving between different quantum error-correcting codes during a computation. The team used the technique to demonstrate the key ingredients for universal computation, this time using a code where we’ve previously demonstrated full error correction alongside the other ingredients for universality.

In the process, the team set a new record for magic states in a distance-3 error correcting code, over 10x better than the best previous attempt with error correction. Notably, this process only cost 28 qubits instead of hundreds. This completes, for the first time, the ingredient list for a universal gate set in a system that also has real-time and repeatable QEC.
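The logic of code switching can be captured in a few lines: neither code's transversal gate set is universal on its own, but their union is. The gate sets below are typical of 2D/3D color-code pairs and are illustrative assumptions, not a statement of exactly which codes the paper used:

```python
# Conceptual sketch of why code switching unlocks universality. The gate
# sets below are typical of 2D/3D color-code pairs and are illustrative
# assumptions, not necessarily the codes used in the paper.

code_2d_transversal = {"H", "S", "CNOT"}   # 2D code: the full Clifford group...
code_3d_transversal = {"T", "CNOT"}        # 3D code: ...the non-Clifford T gate

universal_generators = {"H", "S", "CNOT", "T"}    # Clifford + T is universal

assert not universal_generators <= code_2d_transversal   # 2D alone: not universal
assert not universal_generators <= code_3d_transversal   # 3D alone: not universal
assert universal_generators <= (code_2d_transversal | code_3d_transversal)
print("switching between the two codes yields a universal gate set")
```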

To perform "code switching", one can implement a logical gate between a 2D code and a 3D code, as pictured above. This type of advanced error correcting process requires Quantinuum's reconfigurable connectivity.

Fully equipped for fault-tolerance

Innovations like those described in these two papers can reduce estimates for qubit requirements by an order of magnitude, or more, bringing powerful quantum applications within reach far sooner.

With all of the required pieces now in place, we are ‘fully’ equipped to become the first company to perform universal, fully fault-tolerant computing—just in time for the arrival of Helios, our next-generation system launching this year. Helios is likely to remain the most powerful quantum computer on the market until the launch of its successor, Sol, in 2027.

June 10, 2025
Our Hardware is Now Running Quantum Transformers!

If we are to create ‘next-gen’ AI that takes full advantage of the power of quantum computers, we need to start with quantum-native transformers. Today, we are demonstrating concrete progress—advancing from theoretical models to real quantum deployment.

The future of AI won't be built on yesterday’s tech. If we're serious about creating next-generation AI that unlocks the full promise of quantum computing, then we must build quantum-native models—designed for quantum, from the ground up.

Around this time last year, we introduced Quixer, a state-of-the-art quantum-native transformer. Today, we’re thrilled to announce a major milestone: one year on, Quixer is now running natively on quantum hardware.

Why this matters: Quantum AI, born native

This marks a turning point for the industry: realizing quantum-native AI opens a world of possibilities.

Classical transformers revolutionized AI. They power everything from ChatGPT to real-time translation, computer vision, drug discovery, and algorithmic trading. Now, Quixer sets the stage for a similar leap — but for quantum-native computation. Because quantum computers differ fundamentally from classical computers, we expect a whole new host of valuable applications to emerge.  

Achieving that future requires models that are efficient, scalable, and actually run on today’s quantum hardware.

That’s what we’ve built.

What makes Quixer different?

Until Quixer, quantum transformers were the result of a brute force “copy-paste” approach: taking the math from a classical model and putting it onto a quantum circuit. However, this approach does not account for the considerable differences between quantum and classical architectures, leading to substantial resource requirements.

Quixer is different: it’s not a translation – it's an innovation.

With Quixer, our team introduced an explicitly quantum transformer, built from the ground up using quantum algorithmic primitives. Because Quixer is tailored for quantum circuits, it's more resource efficient than most competing approaches.

As quantum computing advances toward fault tolerance, Quixer is built to scale with it.

What’s next for Quixer?

We’ve already deployed Quixer on real-world data: genomic sequence analysis, a high-impact classification task in biotech. We're happy to report that its performance is already approaching that of classical models, even in this first implementation.

This is just the beginning.

Looking ahead, we’ll explore using Quixer anywhere classical transformers have proven useful, such as language modeling, image classification, quantum chemistry, and beyond. More excitingly, we expect quantum-specific use cases to emerge—applications impossible on classical hardware.

This milestone isn’t just about one model. It’s a signal that the quantum AI era has begun, and that Quantinuum is leading the charge with real results, not empty hype.

Stay tuned. The revolution is only getting started.

June 9, 2025
Join us at ISC25

Our team is participating in ISC High Performance 2025 (ISC 2025) from June 10-13 in Hamburg, Germany!

As quantum computing accelerates, so does the urgency to integrate its capabilities into today’s high-performance computing (HPC) and AI environments. At ISC 2025, meet the Quantinuum team to learn how the highest performing quantum systems on the market, combined with advanced software and powerful collaborations, are helping organizations take the next step in their compute strategy.

Quantinuum is leading the industry across every major vector: performance, hybrid integration, scientific innovation, global collaboration and ease of access.

  • Our industry-leading quantum computer holds the record for performance with a Quantum Volume of 2²³ = 8,388,608, along with the highest fidelity of any commercially available QPU—available to our users every time they access our systems.
  • Our systems have been validated by a #1 ranking against competitors in a recent benchmarking study by Jülich Research Centre.
  • We’ve laid out a clear roadmap to reach universal, fully fault-tolerant quantum computing by the end of the decade and will launch our next-generation system, Helios, later this year.
  • We are advancing real-world hybrid compute with partners such as RIKEN, NVIDIA, SoftBank, STFC Hartree Center and are pioneering applications such as our own GenQAI framework.

Exhibit Hall

From June 10–13, in Hamburg, Germany, visit us at Booth B40 in the Exhibition Hall or attend one of our technical talks to explore how our quantum technologies are pushing the boundaries of what’s possible across HPC.

Presentations & Demos

Throughout ISC, our team will present on the most important topics in HPC and quantum computing integration—from near-term hybrid use cases to hardware innovations and future roadmaps.

Multicore World Networking Event

  • Monday, June 9 | 7:00 – 9:00 PM at Hofbräu Wirtshaus Esplanade
    In partnership with Multicore World, join us for a Quantinuum-sponsored Happy Hour to explore the present and future of quantum computing with Quantinuum CCO, Dr. Nash Palaniswamy, and network with our team.
    Register here

H1 x CUDA-Q Demonstration

  • All Week at Booth B40
    We’re showcasing a live demonstration of NVIDIA’s CUDA-Q platform running on Quantinuum’s industry-leading quantum hardware. This new integration paves the way for hybrid compute solutions in optimization, AI, and chemistry.
    Register for a demo

HPC Solutions Forum

  • Wednesday, June 11 | 2:20 – 2:40 PM
    “Enabling Scientific Discovery with Generative Quantum AI” – Presented by Maud Einhorn, Technical Account Executive at Quantinuum, discover how hybrid quantum-classical workflows are powering novel use cases in scientific discovery.

See You There!

Whether you're exploring hybrid solutions today or planning for large-scale quantum deployment tomorrow, ISC 2025 is the place to begin the conversation.

We look forward to seeing you in Hamburg!

May 27, 2025
Teleporting to new heights

Quantinuum has once again raised the bar—setting a record in teleportation, and advancing our leadership in the race toward universal fault-tolerant quantum computing.

Last year, we published a paper in Science demonstrating the first-ever fault-tolerant teleportation of a logical qubit. At the time, we outlined how crucial teleportation is to realize large-scale fault tolerant quantum computers. Given the high degree of system performance and capabilities required to run the protocol (e.g., multiple qubits, high-fidelity state-preparation, entangling operations, mid-circuit measurement, etc.), teleportation is recognized as an excellent measure of system maturity.

Today we’re building on last year’s breakthrough, having recently achieved a record logical teleportation fidelity of 99.82% – up from 97.5% in last year’s result. What’s more, our logical qubit teleportation fidelity now exceeds our physical qubit teleportation fidelity, passing the break-even point that establishes our H2 system as the gold standard for complex quantum operations.

Figure 1: Fidelity of two-bit state teleportation for physical qubit experiments and logical qubit experiments using the d=3 color code (Steane code). The same QASM programs that were run during March 2024 on Quantinuum's H2-1 device were rerun on the same device in March and April 2025. Thanks to the improvements made to H2-1 from 2024 to 2025, physical error rates have been reduced, leading to increased fidelity in both the physical- and logical-level teleportation experiments. The results imply a logical error rate that is 2.3 times smaller than the physical error rate, with the two statistically well separated, indicating that logical teleportation is performing beyond break-even.
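Some quick arithmetic puts these numbers in perspective. The calculation below is an inference from the published figures, not an official hardware specification:

```python
# Back-of-the-envelope check on the teleportation numbers quoted above
# (an inference from published figures, not an official hardware spec).
fid_2024 = 0.975     # logical teleportation fidelity, 2024 result
fid_2025 = 0.9982    # logical teleportation fidelity, this result

err_2024 = 1 - fid_2024
err_2025 = 1 - fid_2025
print(f"logical teleportation error reduced ~{err_2024 / err_2025:.0f}x")  # ~14x

# A logical error rate 2.3x below the physical one implies a physical
# teleportation error of roughly 2.3 * 0.0018 ~ 0.4% (inferred).
implied_physical_err = 2.3 * err_2025
print(f"implied physical error ~{implied_physical_err:.2%}")
```

In other words, a year of hardware improvements cut the logical teleportation error by roughly an order of magnitude, carrying it past break-even with the physical operation.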

This progress reflects the strength and flexibility of our Quantum Charge Coupled Device (QCCD) architecture. The native high fidelity of our QCCD architecture enables us to perform highly complex demonstrations like this that nobody else has yet matched. Further, our ability to perform conditional logic and real-time decoding was crucial for implementing the Steane error correction code used in this work, and our all-to-all connectivity was essential for performing the high-fidelity transversal gates that drove the protocol.

Teleportation schemes like this allow us to “trade space for time,” meaning that we can do quantum error correction more quickly, reducing our time to solution. Additionally, teleportation enables long-range communication during logical computation, which translates to higher connectivity in logical algorithms, improving computational power.

This demonstration underscores our ongoing commitment to reducing logical error rates, which is critical for realizing the promise of quantum computing. Quantinuum continues to lead in quantum hardware performance, algorithms, and error correction—and we’ll extend our leadership come the launch of our next generation system, Helios, in just a matter of months.