Quantinuum’s H-Series hits 56 physical qubits that are all-to-all connected, and departs the era of classical simulation

In collaboration with JPMorgan Chase & Co., Quantinuum’s H2-1 achieved a massive uplift in an iconic demonstration

June 5, 2024

The first half of 2024 will go down as the period when we shed the last vestiges of the “wait and see” culture that has dominated the quantum computing industry. Thanks to a run of recent achievements, we have helped to lead the entire quantum computing industry into a new, post-classical era.

Today we are announcing the latest of these achievements: a major upgrade to our flagship System Model H2 quantum computer, from 32 to 56 qubits. We are also revealing meaningful results from work with our partner JPMorgan Chase & Co. that showcase a significant lift in performance.

But to understand the full importance of today’s announcements, it is worth recapping the succession of breakthroughs that confirm that we are entering a new era of quantum computing in which classical simulation will be infeasible.

A historic run

Between January and June 2024, Quantinuum’s pioneering teams published a succession of results that accelerate our path to universal fault-tolerant quantum computing. 

Our technical teams first presented a long-sought solution to the “wiring problem”, an engineering challenge that affects all types of quantum computers. In short, most current designs would require an impossible number of wires connected to the quantum processor in order to reach large qubit numbers. Our solution allows us to scale to high qubit numbers without a corresponding explosion in wiring, demonstrating that our QCCD architecture can scale.

Next, we became the first quantum computing company in the world to hit “three 9s” (99.9%) two-qubit gate fidelity across all qubit pairs in a production device. This level of fidelity in two-qubit gate operations was long thought to herald the point at which error-corrected quantum computing could become a reality, and it has accelerated and intensified our focus on quantum error correction (QEC). Our scientists and engineers are working with our customers and partners to achieve multiple breakthroughs in QEC in the coming months, many of which will be incorporated into products such as the H-Series and our chemistry simulation platform, InQuanto™.

Following that, with our long-time partner Microsoft, we hit an error correction performance threshold that many believed was still years away. The System Model H2 became the first – and only – quantum computer in the world capable of creating and computing with highly reliable logical (error-corrected) qubits. In this demonstration, H2-1, configured with 32 physical qubits, supported the creation of four highly reliable logical qubits operating at “better than break-even”, with logical circuit error rates up to 800x lower than the corresponding physical circuit error rates. No other quantum computing company is close to matching this achievement (despite many feverish claims in the past 12 months).

Pushing to the limits of supercomputing … and beyond

The quantum computing industry is departing the era when quantum computers could be simulated by a classical computer. Today, we are making two milestone announcements. The first is that our H2-1 processor has been upgraded from 32 to 56 trapped-ion qubits, placing it beyond exact classical simulation while preserving its market-leading fidelity, all-to-all qubit connectivity, mid-circuit measurement, qubit reuse, and feed-forward.

The second is that the upgrade of H2-1 from 32 to 56 qubits makes our processor capable of challenging the world’s most powerful supercomputers. This demonstration was achieved in partnership with our long-term collaborator JPMorgan Chase & Co. and researchers from Caltech and Argonne National Lab.

Our collaboration tackled a well-known algorithm, Random Circuit Sampling (RCS), and measured the quality of our results with a suite of tests including the linear cross-entropy benchmark (XEB) – an approach first made famous by Google in 2019 in its bid to demonstrate “quantum supremacy”. An XEB score close to 0 indicates the results are dominated by noise and do not utilize the full potential of quantum computing; the closer an XEB score is to 1, the more the results demonstrate the power of quantum computing. The results on H2-1 are excellent, revealing, and worth exploring in a little detail. The complete data is available on GitHub.
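To make the metric concrete, here is a minimal sketch of how a linear XEB score can be estimated from measured samples, assuming the ideal output probabilities of the circuit have already been computed classically (the function and variable names are illustrative, not taken from the experiment):

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark (XEB) fidelity estimator.

    samples     : list of measured bitstrings, e.g. ["0110...", ...]
    ideal_probs : dict mapping each bitstring to its ideal probability
                  |<x|U|0...0>|^2, obtained from classical simulation
    Returns ~0 when the device outputs uniformly random (fully noisy)
    bitstrings, and ~1 when it samples from the ideal distribution of
    a typical random circuit.
    """
    p = np.array([ideal_probs[x] for x in samples])
    return (2 ** n_qubits) * p.mean() - 1.0
```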

Better qubits, better results

Our results show how far quantum hardware has come since Google’s initial demonstration. Google originally ran circuits on 53 superconducting qubits that were deep enough to severely frustrate high-fidelity classical simulation at the time, achieving an estimated XEB score of ~0.002. While Google showed that this small value was statistically inconsistent with zero, improvements in classical algorithms and hardware have steadily raised the XEB scores achievable by classical computers, to the point that classical computers can now match Google’s scores on the original circuits.

Figure 1. At N=56 qubits, the H2 quantum computer achieves over 100x higher fidelity on computationally hard circuits compared to earlier superconducting experiments. This means orders of magnitude fewer shots are required for high confidence in the fidelity, resulting in comparable total runtimes

In contrast, we have been able to run circuits on all 56 qubits in H2-1 that are deep enough to challenge high-fidelity classical simulation while achieving an estimated XEB score of ~0.35. This >100x improvement implies the following: even for circuits large and complex enough to frustrate all known classical simulation methods, the H2 quantum computer produces results without making a single error about 35% of the time. Unlike in past XEB announcements, where scores sat barely above zero, 35% marks a significant step toward the idealized 100% fidelity limit, at which the computational advantage of quantum computers is clearly in sight.
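The practical payoff of a higher XEB score, reflected in the figure caption above, can be seen with a rough statistical model: the uncertainty of an XEB estimate scales roughly as 1/(F√shots), so the number of shots needed for a given relative precision scales as 1/F². A back-of-the-envelope sketch (a simplified model, not the paper’s analysis):

```python
# Rough shot-count comparison under a simplified noise model:
# shots needed for a fixed relative precision scale as 1/F^2.
f_h2, f_google = 0.35, 0.002
print(f"~{(f_h2 / f_google) ** 2:,.0f}x fewer shots")  # ~30,625x fewer
```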

This huge jump in quality is made possible by Quantinuum’s market-leading fidelity combined with our unique all-to-all connectivity. This flexible connectivity, a product of our QCCD architecture, lets us implement circuits with much more complex geometries than the 2D geometries supported by superconducting-based quantum computers, which means our quantum circuits become difficult to simulate classically with significantly fewer operations (or gates). These capabilities have an enormous impact on how our computational power scales as we add more qubits: since noisy quantum computers can only run a limited number of gates before returning unusable results, needing fewer gates ultimately translates into solving complex tasks with consistent and dependable accuracy.
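One way to see the connectivity advantage is to count the SWAP gates a 2D-grid device must insert to bring distant qubits together before each two-qubit gate, overhead that all-to-all hardware avoids entirely. The toy model below is an idealized sketch (real routing algorithms are more sophisticated, and the numbers here are illustrative only):

```python
import random

def swaps_needed_2d(q1, q2, width):
    """SWAPs to make qubits q1, q2 adjacent on a width x width grid:
    each SWAP moves a qubit one step, so (Manhattan distance - 1)
    swaps suffice to bring them next to each other."""
    r1, c1 = divmod(q1, width)
    r2, c2 = divmod(q2, width)
    manhattan = abs(r1 - r2) + abs(c1 - c2)
    return max(manhattan - 1, 0)

# Average routing overhead of random two-qubit gates on a 7x7 grid
# (49 qubits); on all-to-all hardware the overhead is always zero.
random.seed(0)
width, n_gates = 7, 1000
pairs = [random.sample(range(width * width), 2) for _ in range(n_gates)]
extra = sum(swaps_needed_2d(a, b, width) for a, b in pairs)
print(f"grid: ~{extra / n_gates:.1f} extra SWAPs per gate; all-to-all: 0")
```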

This is a vitally important moment for companies and governments watching this space and deciding when to invest in quantum: these results underscore both the performance capabilities and the rapid rate of improvement of our processors, especially the System Model H2, as a prime candidate for achieving near-term value.

So what of the comparison between the H2-1 results and a classical supercomputer? 

A direct comparison can be made between the time it took H2-1 to perform RCS and the time it took a classical supercomputer. However, classical simulations of RCS can be made faster by building a larger supercomputer (or by distributing the workload across many existing supercomputers). A more robust comparison is to consider the amount of energy that must be expended to perform RCS on either H2-1 or on classical computing hardware, which ultimately controls the real cost of performing RCS. An analysis based on the most efficient known classical algorithm for RCS and the power consumption of leading supercomputers indicates that H2-1 can perform RCS at 56 qubits with an estimated 30,000x reduction in energy consumption. These early results should be seen as very attractive for data center owners and supercomputing facilities looking to add quantum computers as “accelerators” for their users.
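For readers who want to see the shape of such an energy comparison, the sketch below walks through the arithmetic. Every number in it is a hypothetical placeholder chosen for illustration only; none are values from the study.

```python
# Illustrative energy comparison for one RCS workload.
# All numbers below are hypothetical placeholders, NOT study figures.
supercomputer_power_w = 20e6   # assumed ~20 MW leading supercomputer
classical_runtime_s   = 5e4    # assumed classical simulation time
quantum_power_w       = 25e3   # assumed wall-plug power of an ion-trap system
quantum_runtime_s     = 2e3    # assumed time to collect the same samples

e_classical = supercomputer_power_w * classical_runtime_s
e_quantum   = quantum_power_w * quantum_runtime_s
print(f"energy ratio: {e_classical / e_quantum:,.0f}x")  # 20,000x here
```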

Where we go next

Today’s milestone announcements are clear evidence that the H2-1 quantum processor can perform computational tasks with far greater efficiency than classical computers. They underpin the expectation that as our quantum computers scale beyond today’s 56 qubits to hundreds, thousands, and eventually millions of high-quality qubits, classical supercomputers will quickly fall behind. As scrutiny continues to grow of the power consumed by classical computers on highly intensive workloads such as simulating molecules and material structures (tasks widely expected to be amenable to a quantum speedup), Quantinuum’s quantum computers are likely to become the device of choice.

With this upgrade in our qubit count to 56, we will no longer offer a commercial “fully encompassing” emulator – a mathematically exact simulation of our H2-1 quantum processor is now impossible, as it would exceed the memory of the world’s best supercomputers. With 56 qubits, the only way to get exact results is to run on the actual hardware, a trend the leaders in this field have already embraced.

More generally, this work demonstrates that connectivity, fidelity, and speed are all interconnected when measuring the power of a quantum computer. Our competitive edge will persist in the long run; as we move to running more algorithms at the logical level, connectivity and fidelity will continue to play a crucial role in performance.

“We are entirely focused on the path to universal fault tolerant quantum computers. This objective has not changed, but what has changed in the past few months is clear evidence of the advances that have been made possible due to the work and the investment that has been made over many, many years. These results show that whilst the full benefits of fault tolerant quantum computers have not changed in nature, they may be reachable earlier than was originally expected, and crucially, that along the way, there will be tangible benefits to our customers in their day-to-day operations as quantum computers start to perform in ways that are not classically simulatable. We have an exciting few months ahead of us as we unveil some of the applications that will start to matter in this context with our partners across a number of sectors.”
– Ilyas Khan, Chief Product Officer

Stay tuned for results in error correction, physics, chemistry and more on our new 56-qubit processor.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

Blog
March 25, 2025
Untangling the Mysteries of Knots with Quantum Computers

One of the greatest privileges of working directly with the world’s most powerful quantum computer at Quantinuum is building meaningful experiments that convert theory into practice. The privilege becomes even more compelling considering that our current quantum processor – our H2 system – will soon be succeeded by Helios, a quantum computer a stunning trillion times more powerful, due for launch in just a few months. The moment has now arrived when we can build a timeline for applications that quantum computing professionals have anticipated for decades and that are now experimentally supported.

Background

In the 1980s, in the years after Richard Feynman and David Deutsch were working on their initial thoughts around quantum computing, Nicholas Cozzarelli at the University of California, Berkeley, was grappling with a biochemical riddle – “how do enzymes called topoisomerases and recombinases untangle the DNA strands that knot themselves inside cells?”

Cozzarelli teamed up with mathematicians including De Witt Sumners, who recognized that these twisted strands could be modelled using the language of knots.

Knot theory’s equations let them deduce how enzymes snipped, flipped and reattached DNA, demystifying processes essential to life. Decoding the knots in DNA proved crucial to designing better antibiotics and in advancing genetic engineering.

Cozzarelli’s team took advantage of the power of knot invariants—polynomial expressions that remain consistent markers of a knot’s identity, no matter how tangled the loops become. This is just one example of how knot theory has been used to solve real-world problems of practical value. 

Today, knot theory finds practical uses in fields as diverse as chemistry, robotics, fluid dynamics, and drug design. Measuring the invariants that characterize each knot is a challenge that scales exponentially with the complexity of the knots. 

This work shows how a quantum computer can cut through this exponential explosion, indicating that Quantinuum's next-generation systems will offer practical quantum advantage in solving knot theory problems.

In this article, Konstantinos Meichanetzidis, a team leader from Quantinuum’s AI group, explains intriguing and valuable new research into applying quantum computers to addressing problems in knot theory.

Quantifying quantum advantage for knot theory

Quantinuum’s applied quantum algorithms team has published a historic end-to-end algorithm for solving a famous problem in knot theory, via a preprint paper on the arXiv. The research team, led by Konstantinos Meichanetzidis, also included Quantinuum researchers Enrico Rinaldi, Chris Self, Eli Chertkov, Matthew DeCross, David Hayes, Brian Neyenhuis, Marcello Benedetti, and Tuomas Laakkonen of the Massachusetts Institute of Technology.

The project was motivated by building configurable and comprehensive algorithmic tools to pinpoint quantum advantage in practice. This was done by rigorously defining time and error budgets and quantifying both the classical and quantum resource requirements necessary to meet them. Considering realistic quantum and classical processors, they predict that Quantinuum’s forthcoming quantum computers meet those requirements.

Knot theory belongs to a field of mathematics called ‘low-dimensional topology’ and has a rich history, stemming from a wild idea proposed by Lord Kelvin, who conjectured that the chemical elements are different knots formed by vortices in the ether. Of course, we know today that the ether theory did not hold up under experimental scrutiny, but mathematicians have been classifying and studying knots ever since. Knot theory is intrinsically linked with many aspects of physics. For example, it naturally shows up in certain spin models in statistical mechanics. Today, physical properties of knots are important in understanding the stability of macromolecular structures, from DNA and proteins to polymers relevant to materials design. Knots find their way into cryptography. Even the magnetohydrodynamical properties of knotted magnetic fields on the surface of the sun are an important indicator of solar activity.

Most importantly for our context, knot theory has fundamental connections to quantum computation, originally outlined by Witten’s work in topological quantum field theory, concerning spacetimes without any notion of distance but only shape. In fact, this connection formed the very motivation for attempting to build topological quantum computers, where anyons – exotic quasiparticles that live in two-dimensional materials – are braided to perform quantum gates.

Konstantinos Meichanetzidis, who led the project, said: “The relation between knot theory and quantum physics is the most beautiful and bizarre fact you have never heard of.” 

Four equivalent representations of the trefoil knot, the simplest knot.
These knots have different Jones polynomials, so they are not equivalent.

The fundamental problem in knot theory is distinguishing knots, or more generally, links. To this end, mathematicians have defined link invariants, which serve as ‘fingerprints’ of a link. As there are many equivalent representations of the same link, an invariant, by definition, is the same for all of them. If the invariant is different for two links then they are not equivalent. The specific invariant our team focused on is the Jones polynomial.

The mind-blowing fact here is that any quantum computation corresponds to evaluating the Jones polynomial of some link, as shown by the works of Freedman, Larsen, Kitaev, Wang, Shor, Arad, and Aharonov. It reveals that this abstract mathematical problem is truly quantum native. In particular, the problem our team tackled was estimating the Jones polynomial at the 5th root of unity. This is a well-studied case due to its relation to the infamous Fibonacci anyons, whose braiding is capable of universal quantum computation.
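As a concrete anchor for what “estimating the Jones polynomial at the 5th root of unity” means, the snippet below evaluates the textbook Jones polynomial of the trefoil knot at t = e^(2πi/5). The closed form is a standard result from knot tables (conventions differ for the knot’s mirror image); nothing here is drawn from the team’s algorithm itself:

```python
import numpy as np

# Jones polynomial of the trefoil knot in one common convention
# (the mirror-image knot corresponds to t -> 1/t):
#   V(t) = -t^(-4) + t^(-3) + t^(-1)
def jones_trefoil(t):
    return -t**-4 + t**-3 + t**-1

# Evaluate at the 5th root of unity, the case studied by the team,
# which is tied to Fibonacci anyons and universal quantum computation.
t = np.exp(2j * np.pi / 5)
print(jones_trefoil(t))
```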

Demonstration of the quantum algorithm on the H2 quantum computer for estimating the Jones polynomial of a link with ~100 crossings. The raw signal (orange) can be amplified (green) with error detection and mitigated via a problem-tailored method (purple), bringing the experimental estimate closer to the actual value (blue).

Building and improving on the work of Shor, Aharonov, Landau, Jones, and Kauffman, our team developed an efficient quantum algorithm that works end-to-end. That is, given a link, it outputs a highly optimized quantum circuit that is readily executable on our processors and estimates the desired quantity. Furthermore, our team designed problem-tailored error detection and error mitigation strategies to achieve higher accuracy.

In addition to providing a full pipeline for solving this problem, a major aspect of this work was to use the fact that the Jones polynomial is an invariant to introduce a benchmark for noisy quantum computers. Most importantly, this benchmark is efficiently verifiable, a rare property since for most applications, exponentially costly classical computations are necessary for verification. Given a link whose Jones polynomial is known, the benchmark constructs a large set of topologically equivalent links of varying sizes. In turn, these result in a set of circuits of varying numbers of qubits and gates, all of which should return the same answer. Thus, one can characterize the effect of noise present in a given quantum computer by quantifying the deviation of its output from the known result.
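A minimal sketch of the verification idea, with illustrative names and numbers rather than the paper’s data: circuits built from topologically equivalent links should all return the same known invariant, so the deviation of hardware estimates from that value, tracked across circuit sizes, characterizes the noise.

```python
def benchmark_deviation(estimates, known_value):
    """Knot-invariant benchmark: circuits built from topologically
    equivalent links should all return `known_value`; the deviation
    of hardware estimates from it, as circuits grow, quantifies noise.

    estimates : dict mapping (n_qubits, n_gates) -> measured estimate
    Returns the absolute deviation per circuit size.
    """
    return {size: abs(est - known_value) for size, est in estimates.items()}

# Hypothetical usage: deviations grow with circuit size on noisy hardware.
example = {(10, 200): 0.98, (20, 800): 0.91, (40, 3200): 0.72}
print(benchmark_deviation(example, known_value=1.0))
```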

Rigorous resource estimation of our quantum algorithm pinpoints the exponential quantum advantage, quantified as the time-to-solution needed for the best classical algorithms to reach the same error as the quantum algorithm. The crossover happens at large links requiring circuits with ~85 qubits and ~8.5k two-qubit gates, assuming a two-qubit gate fidelity of 99.99%. The classical algorithms are assumed to run on the Frontier supercomputer.

The benchmark introduced in this work allows one to identify the link sizes for which there is exponential quantum advantage in terms of time to solution against the state-of-the-art classical methods. These resource estimates indicate our next processor, Helios, with 96 qubits and at least 99.95% two-qubit gate fidelity, is extremely close to meeting these requirements. Furthermore, Quantinuum’s hardware roadmap includes even more powerful machines that will come online by the end of the decade. Notably, an advantage in energy consumption emerges for even smaller link sizes. Meanwhile, our teams aim to continue reducing errors through improvements in both hardware and software, thereby moving deeper into quantum advantage territory.

The importance of this work, indeed the uniqueness of this work in the quantum computing sector, is its practical end-to-end approach. The advantage-hunting strategies introduced are transferable to other “quantum-easy classically hard” problems. 

Our team’s efforts motivate shifting the focus toward specific problem instances rather than broad problem classes, promoting an engineering-oriented approach to identifying quantum advantage. This involves carefully considering how quantum advantage should be defined and quantified, setting a high standard for quantum advantage in scientific and mathematical domains and instilling confidence in our customers and partners.

Blog
March 20, 2025
Initiating Impact Today: Combining the World’s Most Powerful in Quantum and Classical Compute

Quantinuum and NVIDIA, world leaders in their respective sectors, are combining forces to fast-track commercially scalable quantum supercomputers, further bolstering the announcement Quantinuum made earlier this year about the exciting new potential in Generative Quantum AI. 

Make no mistake about it, the global quantum race is on. With over $2 billion raised by companies in 2024 alone, and over 150 new startups in the past five years, quantum computing is no longer restricted to ‘the lab’.  

The United Nations proclaimed 2025 as the International Year of Quantum Science and Technology (IYQ), and as we march toward the end of the first quarter, the old maxim that quantum computing is still a decade (or two, or three) away is no longer relevant in today’s world. Governments, commercial enterprises and scientific organizations all stand to benefit from quantum computers, led by those built by Quantinuum.

That is because, amid the flurry of headlines and social media chatter filled with aspirational statements of future ambitions shared by those in the heat of this race, we at Quantinuum continue to lead by example. We demonstrate what that future looks like today, rather than relying solely on slide deck presentations.

Our quantum computers are the most powerful systems in the world. Our H2 system, the only quantum computer that cannot be classically simulated, is years ahead of any other system being developed today. In the coming months, we’ll introduce our customers to Helios, a trillion times more powerful than H2, further extending our lead beyond where the competition is still only planning to be. 

At Quantinuum, we have been convinced for years that the impact of quantum computers on the real world will happen earlier than anticipated. However, we have long known that this impact will come when powerful quantum computers and powerful classical systems work together.

This sort of hybrid ‘supercomputer’ has been referenced a few times in the past few months, and there is, rightly, a sense of excitement about what such an accelerated quantum supercomputer could achieve.

The Power of Hybrid Quantum and Classical Compute

In a revolutionary move on March 18th, 2025, at the GTC AI conference, NVIDIA announced the opening of a world-class accelerated quantum research center, with Quantinuum selected as a key founding collaborator to work on projects with NVIDIA at the center.

With details shared in an accompanying press statement and blog post, the NVIDIA Accelerated Quantum Research Center (NVAQC) being built in Boston, Massachusetts, will integrate quantum computers with AI supercomputers to ultimately explore how to build accelerated quantum supercomputers capable of solving some of the world’s most challenging problems. The center will begin operations later this year.

As shared in Quantinuum’s accompanying statement, the center will draw on the NVIDIA CUDA-Q platform, alongside an NVIDIA GB200 NVL72 system containing 576 NVIDIA Blackwell GPUs dedicated to quantum research.

The Role of CUDA-Q in Quantum-Classical Integration  

Integrating quantum and classical hardware relies on a platform that allows researchers and developers to quickly shift context between these two disparate computing paradigms within a single application. The NVIDIA CUDA-Q platform will be the entry point for researchers to exploit the NVAQC’s quantum-classical integration.

In 2022, Quantinuum became the first company to bring CUDA-Q to its quantum systems, establishing a pioneering collaboration that continues today. Users of CUDA-Q are currently offered access to Quantinuum’s System H1 QPU and emulator for 90 days.
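As an illustration of what this integration looks like in practice, here is a minimal CUDA-Q Python sketch that builds a Bell-state kernel and targets a Quantinuum backend. The machine name, credential setup, and current availability are assumptions; consult the CUDA-Q documentation for the specifics of your account:

```python
import cudaq

# Target Quantinuum hardware through CUDA-Q. "H1-1E" is assumed here
# to select the H1-1 emulator; valid machine names and credentials
# depend on your Quantinuum account configuration.
cudaq.set_target("quantinuum", machine="H1-1E")

@cudaq.kernel
def bell():
    q = cudaq.qvector(2)   # allocate two qubits
    h(q[0])                # put qubit 0 into superposition
    x.ctrl(q[0], q[1])     # entangle: CNOT with qubit 0 as control
    mz(q)                  # measure both qubits

# Sample the kernel; expect roughly 50/50 counts of "00" and "11".
result = cudaq.sample(bell, shots_count=1000)
print(result)
```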

Quantinuum’s future systems will continue to support the CUDA-Q platform. Furthermore, Quantinuum and NVIDIA are committed to evolving and improving tools for quantum-classical integration that take advantage of the latest hardware features, for example on our upcoming Helios generation.

The Gen-Q-AI Moment

A few weeks ago, we disclosed high-level details about an AI system that we refer to as Generative Quantum AI, or GenQAI. We highlighted a timeline between now and the end of this year for the first commercial systems that can accelerate both existing AI and quantum computers.

At a high level, an AI system such as GenQAI will be enhanced by access to information that has not previously been accessible: information generated by a quantum computer that cannot be simulated. This information and its effect can be likened to a powerful microscope that brings accuracy and detail to already powerful LLMs, bridging the gap from today’s impressive accomplishments toward truly impactful outcomes in areas such as biology and healthcare, and materials discovery and optimization.

Through the integration of the most powerful in quantum and classical systems, and by enabling tighter integration of AI with quantum computing, the NVAQC will be an enabler for the realization of the accelerated quantum supercomputer needed for GenQAI products and their rapid deployment and exploitation.

Innovating our Roadmap

The NVAQC will foster the tools and innovations needed for fully fault-tolerant quantum computing and will be an enabler of the roadmap Quantinuum released last year.

With each new generation of our quantum computing hardware and accompanying stack, we continue to scale compute capabilities through more powerful hardware and advanced features, accelerating the timeline for practical applications. To achieve these advances, we integrate the best CPU and GPU technologies alongside our quantum innovations. Our long-standing collaboration with NVIDIA drives these advancements forward and will be further enriched by the NVAQC. 

Here are a couple of examples: 

In quantum error correction, error syndromes detected by measuring "ancilla" qubits are sent to a "decoder." The decoder analyzes this information to determine if any corrections are needed. These complex algorithms must be processed quickly and with low latency, requiring advanced CPU and GPU power to calculate and apply corrections that keep logical qubits error-free (a toy sketch of the decoding step follows these examples). Quantinuum has been collaborating with NVIDIA on the development of customized GPU-based decoders that can be coupled with our upcoming Helios system.

In our application space, we recently announced the integration of InQuanto v4.0, the latest version of Quantinuum’s cutting edge computational chemistry platform, with NVIDIA cuQuantum SDK to enable previously inaccessible tensor-network-based methods for large-scale and high-precision quantum chemistry simulations.
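To make the decoding step in the first example concrete, here is a toy lookup-table decoder for the 3-qubit bit-flip repetition code. It is a pedagogical stand-in only, not Quantinuum’s or NVIDIA’s decoder, which handles far larger codes in real time on GPUs:

```python
# Toy decoder for the 3-qubit bit-flip repetition code.
# Syndromes come from ancilla measurements of the parities Z0Z1 and Z1Z2;
# each syndrome pattern points to the data qubit needing an X correction.
SYNDROME_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip detected on data qubit 0
    (1, 1): 1,     # flip detected on data qubit 1
    (0, 1): 2,     # flip detected on data qubit 2
}

def decode(syndrome):
    """Map a measured syndrome to the qubit needing an X correction."""
    return SYNDROME_TABLE[syndrome]

print(decode((1, 1)))  # -> 1: apply X to data qubit 1
```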

Our work with NVIDIA underscores the partnership between quantum computers and classical processors to maximize the speed toward scaled quantum computers. These systems offer error-corrected qubits for operations that accelerate scientific discovery across a wide range of fields, including drug discovery and delivery, financial market applications, and essential condensed matter physics, such as high-temperature superconductivity.

We look forward to sharing details with our partners and bringing meaningful scientific discovery to generate economic growth and sustainable development for all of humankind.

Blog
March 18, 2025
Setting the Benchmark: Independent Study Ranks Quantinuum #1 in Performance

By Dr. Chris Langer

In the rapidly advancing world of quantum computing, to be a leader means not just keeping pace with innovation but driving it forward. It means setting new standards that shape the future of quantum computing performance. A recent independent study comparing 19 quantum processing units (QPUs) on the market today has validated what we’ve long known to be true: Quantinuum’s systems are the undisputed leaders in performance.

The Benchmarking Study

A comprehensive study conducted by a joint team from the Jülich Supercomputing Centre, AIDAS, RWTH Aachen University, and Purdue University compared QPUs from leading companies such as IBM, Rigetti, and IonQ, evaluating how well each executed the Quantum Approximate Optimization Algorithm (QAOA), a widely used algorithm that provides a system-level measure of performance. After thorough examination, the study concluded that:

“...the performance of quantinuum H1-1 and H2-1 is superior to that of the other QPUs.”

Quantinuum emerged as the clear leader, particularly in full connectivity, the most critical category for solving real-world optimization problems. Full connectivity is a huge comparative advantage, offering more computational power and more flexibility in both error correction and algorithmic design. Our dominance in full connectivity—unattainable for platforms with natively limited connectivity—underscores why we are the partner of choice in quantum computing.
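For context on what the benchmark actually runs, below is a minimal depth-1 QAOA for MaxCut on a triangle graph, simulated with a plain numpy statevector. This is a toy sketch; the study’s circuit sizes, graphs, and parameters are not reproduced here.

```python
import numpy as np
from itertools import product

# Depth-1 QAOA for MaxCut on a triangle graph (3 qubits, 3 edges).
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

def cut_value(bits):
    """Number of edges cut by a given bit assignment."""
    return sum(bits[i] != bits[j] for i, j in edges)

# Diagonal of the cost operator C over all 2^n basis states.
costs = np.array([cut_value(b) for b in product((0, 1), repeat=n)], float)

def qaoa_expectation(gamma, beta):
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), complex)  # |+...+>
    state = np.exp(-1j * gamma * costs) * state            # e^{-i*gamma*C}
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])    # e^{-i*beta*X}
    for q in range(n):                                     # mixer on each qubit
        state = state.reshape(2 ** q, 2, -1)
        state = np.einsum("ab,ibj->iaj", rx, state).reshape(-1)
    return float(np.real(np.conj(state) @ (costs * state)))

# Coarse grid search over the two variational angles.
grid = np.linspace(0, np.pi, 40)
best = max((qaoa_expectation(g, b), g, b) for g in grid for b in grid)
print(f"best <C> = {best[0]:.3f} out of a maximum cut of 2")
```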

Leading Across the Board

We take benchmarking seriously at Quantinuum. We lead in nearly every industry benchmark, from best-in-class gate fidelities to a 4000x lead in quantum volume, delivering top performance to our customers.

Our quantum charge-coupled device (QCCD) architecture has been the foundation of our success, delivering consistent performance gains year over year. Unlike other architectures, QCCD offers all-to-all connectivity, world-record fidelities, and advanced features like real-time decoding. Altogether, it’s clear we have superior performance metrics across the board.

While many claim to be the best, we have the data to prove it. This table breaks down industry benchmarks, using the leading commercial spec for each quantum computing architecture.

TABLE 1. Leading commercial spec for each listed architecture or demonstrated capabilities on commercial hardware.

These metrics are the key to our success. They demonstrate why Quantinuum is the only company delivering meaningful results to customers at a scale beyond classical simulation limits.

Our progress builds upon a series of Quantinuum’s technology breakthroughs, including the creation of the most reliable and highest-quality logical qubits, as well as solving the key scalability challenge associated with ion-trap quantum computers — culminating in a commercial system with greater than 99.9% two-qubit gate fidelity.

From our groundbreaking progress with System Model H2 to advances in quantum teleportation and solving the wiring problem, we’re taking major steps to tackle the challenges our whole industry faces, like execution speed and circuit depth. Advancements in parallel gate execution, faster ion transport, and high-rate quantum error correction (QEC) are just a few ways we’re maintaining our lead far ahead of the competition.

This commitment to excellence ensures that we not only meet but exceed expectations, setting the bar for reliability, innovation, and transformative quantum solutions. 

Onward and Upward

To bring it back to the opening message: to be a leader means not just keeping pace with innovation but driving it forward. It means setting new standards that shape the future of quantum computing performance.

We are just months away from launching Quantinuum’s next generation system, Helios, which will be one trillion times more powerful than H2. By 2027, Quantinuum will launch the industry’s first 100-logical-qubit system, featuring best-in-class error rates, and we are on track to deliver fault-tolerant computation on hundreds of logical qubits by the end of the decade. 

The evidence speaks for itself: Quantinuum is setting the standard in quantum computing. Our unrivaled specs, proven performance, and commitment to innovation make us the partner of choice for those serious about unlocking value with quantum computing. Quantinuum is committed to doing the hard work required to continue setting the standard and delivering on our promises. This is Quantinuum. This is leadership.

Dr. Chris Langer is a Fellow, a key inventor and architect for the Quantinuum hardware, and serves as an advisor to the CEO.

_______________________________________

Citations from Benchmarking Table
1 Quantinuum. System Model H2. Quantinuum, https://www.quantinuum.com/products-solutions/quantinuum-systems/system-model-h2
2 IBM. Quantum Services & Resources. IBM Quantum, https://quantum.ibm.com/services/resources
3 Quantinuum. System Model H1. Quantinuum, https://www.quantinuum.com/products-solutions/quantinuum-systems/system-model-h1
4 Google Quantum AI. Willow Spec Sheet. Google, https://quantumai.google/static/site-assets/downloads/willow-spec-sheet.pdf
5 Sales Rodriguez, P., et al. "Experimental demonstration of logical magic state distillation." arXiv, 19 Dec 2024, https://arxiv.org/pdf/2412.15165
6 Quantinuum. H1 Product Data Sheet. Quantinuum, https://docs.quantinuum.com/systems/data_sheets/Quantinuum%20H1%20Product%20Data%20Sheet.pdf
7 Google Quantum AI. Willow Spec Sheet. Google, https://quantumai.google/static/site-assets/downloads/willow-spec-sheet.pdf
8 Sales Rodriguez, P., et al. "Experimental demonstration of logical magic state distillation." arXiv, 19 Dec 2024, https://arxiv.org/pdf/2412.15165
9 Quantinuum. H2 Product Data Sheet. Quantinuum, https://docs.quantinuum.com/systems/data_sheets/Quantinuum%20H2%20Product%20Data%20Sheet.pdf
10 Google Quantum AI. Willow Spec Sheet. Google, https://quantumai.google/static/site-assets/downloads/willow-spec-sheet.pdf
11 Sales Rodriguez, P., et al. "Experimental demonstration of logical magic state distillation." arXiv, 19 Dec 2024, https://arxiv.org/pdf/2412.15165
12 Moses, S. A., et al. "A Race-Track Trapped-Ion Quantum Processor." Physical Review X, vol. 13, no. 4, 2023, https://journals.aps.org/prx/pdf/10.1103/PhysRevX.13.041052
13 Google Quantum AI and Collaborators. "Quantum Error Correction Below the Surface Code Threshold." Nature, vol. 638, 2024, https://www.nature.com/articles/s41586-024-08449-y
14 Bluvstein, Dolev, et al. "Logical Quantum Processor Based on Reconfigurable Atom Arrays." Nature, vol. 626, 2023, https://www.nature.com/articles/s41586-023-06927-3
15 DeCross, Matthew, et al. "The Computational Power of Random Quantum Circuits in Arbitrary Geometries." arXiv, 21 June 2024, https://arxiv.org/pdf/2406.02501
16 Montanez-Barrera, J. A., et al. "Evaluating the Performance of Quantum Process Units at Large Width and Depth." arXiv, 10 Feb. 2025, https://arxiv.org/pdf/2502.06471
17 Evered, Simon J., et al. "High-Fidelity Parallel Entangling Gates on a Neutral-Atom Quantum Computer." Nature, vol. 622, 2023, https://www.nature.com/articles/s41586-023-06481-y
18 Ryan-Anderson, C., et al. "Realization of Real-Time Fault-Tolerant Quantum Error Correction." Physical Review X, vol. 11, no. 4, 2021, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.11.041058
19 Carrera Vazquez, Almudena, et al. "Scaling Quantum Computing with Dynamic Circuits." arXiv, 27 Feb. 2024, https://arxiv.org/html/2402.17833v1
20 Moses, S. A., et al. "A Race-Track Trapped-Ion Quantum Processor." arXiv, 16 May 2023, https://arxiv.org/pdf/2305.03828
21 Garcia Almeida, D., Ferris, K., Kanazawa, N., Johnson, B., Davis, R. "New fractional gates reduce circuit depth for utility-scale workloads." IBM Quantum Blog, IBM, 18 Nov. 2020, https://www.ibm.com/quantum/blog/fractional-gates
22 Ryan-Anderson, C., et al. "Realization of Real-Time Fault-Tolerant Quantum Error Correction." arXiv, 15 July 2021, https://arxiv.org/pdf/2107.07505
23 Google Quantum AI and Collaborators. “Quantum error correction below the surface code threshold.” arXiv, 24 Aug. 2024, https://arxiv.org/pdf/2408.13687v1