Quantinuum H-Series quantum computer accelerates through 3 more performance records for quantum volume

2^17, 2^18, and 2^19

June 30, 2023

In the last six months, Quantinuum H-Series hardware has demonstrated explosive performance improvement. Quantinuum’s System Model H1-1, Powered by Honeywell, has gone from the quantum volume (QV) of 2^15 = 32,768 announced in February 2023 to 2^19 = 524,288 today, with all the details and data released on our GitHub repository for full transparency. At a quantum volume of 524,288, H1-1 sits 1000x above the next best reported quantum volume.

Figure 1: H-Series quantum volume improvement trajectory
Figure 2: Heavy output probability for the quantum volume data on H1-1 at (left) 2^17, (center) 2^18, and (right) 2^19

We set a big goal back in 2020 when we launched our first quantum computer, H0. H0 launched with six qubits and a quantum volume of 2^6 = 64, and at that time we made the bold and audacious commitment to increase the quantum volume of our commercial machines 10x per year for five years, equating to a quantum volume of 2^23 = 8,388,608 by the end of 2025. In an industry that is often accused of being over-hyped, a commitment like this was easy to forget. But we did not forget. Our scientists and engineers diligently achieved world record after world record in a tireless, determined pursuit of systematically better overall performance. As seen in Figure 1, from 2020 to early 2023 we steadily increased the quantum volume, demonstrating that growing the qubit count while reducing errors translates directly into more computational power. Within 2023 alone we have announced multiple quantum volume improvements: in February, H1-1 leapfrogged 2^14 and achieved a quantum volume of 2^15; in May, we launched H2-1 with 32 qubits at a quantum volume of 2^16; and now we are thrilled to announce the sequential records of 2^17, 2^18, and 2^19, all on H1-1.

Importantly, none of these results were “hero results”: no special calibrations were made just to make the system look better. Our quantum volume data is taken on our commercial systems, interwoven with customer jobs, so what we experience is what our customers experience. Instead of improving at 10x per year as we committed back in 2020, the pace of improvement over the past six months has been 30x, putting us at least a year ahead of our five-year commitment. While these demonstrations were made on H1-1, the similarities in design between H1-2 (now upgraded to 20 qubits) and H2-1, our recently released second-generation system, make it straightforward to transfer these improvements from one machine to another and achieve the same results.

In this young and rapidly evolving industry, there are and will be disagreements about which benchmarks are best to use. Quantum volume, developed by IBM, is undeniably rigorous: it can be measured on any gate-based machine, it has been peer-reviewed, and it has well-defined assumptions and procedures for making the measurements. Improvements in QV require consistent reductions in errors, so no matter the application, QV improvements are likely to translate into better performance. In fact, realizing the exponential increase in power that quantum computers promise requires continuing to drive these error rates down. The average two-qubit gate error across these three new QV demonstrations was 0.13%, the best in the industry. We measure many benchmarks, but it is for these reasons that we have adopted quantum volume as our primary system-wide benchmark for reporting performance.
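
For readers who want to see what “passing” a quantum volume test involves, here is a minimal sketch of the acceptance criterion used in the standard, peer-reviewed methodology: run many random square circuits of width and depth n, estimate the probability of producing “heavy” outputs, and claim QV = 2^n only if a lower confidence bound on that probability clears two-thirds. The function below is our own illustration, not Quantinuum’s analysis code, and it uses a simple binomial error bar where the published methodology uses a tighter confidence-interval construction.

```python
# Minimal sketch of the quantum volume pass/fail criterion (illustrative only).
import math

def qv_passes(heavy_counts, shots_per_circuit, num_sigma=2.0):
    """heavy_counts: heavy-output counts observed for each random square circuit;
    shots_per_circuit: number of shots taken per circuit."""
    total_shots = len(heavy_counts) * shots_per_circuit
    h_est = sum(heavy_counts) / total_shots                   # mean heavy-output probability
    std_err = math.sqrt(h_est * (1.0 - h_est) / total_shots)  # simple binomial error bar
    lower_bound = h_est - num_sigma * std_err
    return lower_bound > 2.0 / 3.0, h_est, lower_bound

# Hypothetical data: 200 circuits, 100 shots each, ~74% heavy outputs on average.
passed, h, lb = qv_passes([74] * 200, 100)
print(f"heavy-output probability {h:.3f}, lower bound {lb:.3f}, pass: {passed}")
```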

Putting aside the argument over which benchmark is best, year-over-year improvements on a rigorous benchmark do not happen by accident. They happen because the dedicated, talented scientists and engineers who work on H-Series hardware deeply understand its error model and how to reduce those errors to improve overall performance. Equally important, they have the domain expertise to dream up the improvements and then implement them. These validated error models become the bedrock of future system designs, giving us confidence that those systems will also have well-understood error models and that their performance can likewise be systematically improved until our ultimate performance goals are achieved. Taking nothing away from that team, having perfect, identical qubits and employing our quantum charge-coupled device (QCCD) architecture does give us an advantage that other architectures and modalities do not have.

What should potential users of H-Series quantum computers take away from this write-up (and what do current users already know)?

  1. Quantinuum is committed to systematically improving the core performance of our quantum computing hardware. The better the fundamental performance, the lower the overhead will be when doing error mitigation, error detection, and ultimately error correction. This provides confidence in our ability to deliver fault-tolerant compute capabilities.
  2. Progress on your research, use case, or application can be accelerated by getting access to H-Series technology, because our quantum computers can run circuits that other technologies cannot. “It actually works!” exclaim excited first-time users.
  3. Quantinuum intends to continue to be the quantum computing company that quietly over-delivers, even on big goals.

1. https://github.com/CQCL/quantinuum-hardware-quantum-volume

2. https://quantum-journal.org/papers/q-2022-05-09-707/

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

March 27, 2025
Quantinuum and Google DeepMind Unveil the Reality of the Symbiotic Relationship Between Quantum and AI

The marriage of AI and quantum computing is going to have a widespread and meaningful impact in many aspects of our lives, combining the strengths of both fields to tackle complex problems.

Quantum and AI are the ideal partners. At Quantinuum, we are developing tools to accelerate AI with quantum computers, and quantum computers with AI. According to recent independent analysis, our quantum computers are the world’s most powerful, enabling state-of-the-art approaches like Generative Quantum AI (Gen QAI), where we train classical AI models with data generated from a quantum computer.

We harness AI methods to accelerate the development and performance of our full quantum computing stack as opposed to simply theorizing from the sidelines. A paper in Nature Machine Intelligence reveals the results of a recent collaboration between Quantinuum and Google DeepMind to tackle the hard problem of quantum compilation.

The work shows a classical AI model supporting quantum computing by demonstrating its potential for quantum circuit optimization. An AI approach like this has the potential to lead to more effective control at the hardware level, to a richer suite of middleware tools for quantum circuit compilation, error mitigation, and error correction, and even to novel high-level quantum software primitives and quantum algorithms.

An AI power-up for circuit optimization

The joint Quantinuum-Google DeepMind team of researchers tackled one of quantum computing’s most pressing challenges: minimizing the number of highly expensive but essential T-gates required for universal quantum computation. This matters especially in the fault-tolerant regime, which is becoming increasingly relevant as quantum error correction protocols are explored on rapidly developing quantum hardware. The team adapted AlphaTensor, Google DeepMind’s reinforcement-learning AI system for algorithm discovery, originally introduced to improve the efficiency of linear-algebra computations. The result, AlphaTensor-Quantum, takes a quantum circuit as input and returns a new circuit with exactly the same functionality but fewer T-gates.

AlphaTensor-Quantum outperformed current state-of-the-art optimization methods and matched the best human-designed solutions across a thoroughly curated set of circuits, chosen for their prevalence in applications ranging from quantum arithmetic to quantum chemistry. This breakthrough shows the potential for AI to automate the search for the most efficient quantum circuit, and it is the first time such an AI model has been applied to T-count reduction at this scale.
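
To make “fewer T-gates, same functionality” concrete, here is a toy sketch of the simplest kind of T-count reduction: merging or cancelling adjacent T/T† gates on the same qubit using the identities T·T = S and T·T† = I. This is emphatically not how AlphaTensor-Quantum works (it applies reinforcement learning to a tensor decomposition of the circuit’s non-Clifford part); the gate-list representation and function names below are ours, chosen only to illustrate the optimization target.

```python
# Toy T-count reduction by peephole rewriting (illustration only).
def t_count(circuit):
    return sum(1 for gate, _ in circuit if gate in ("T", "Tdg"))

def merge_adjacent_t_gates(circuit):
    """Merge neighbouring T/Tdg gates acting on the same qubit:
    T;T -> S, Tdg;Tdg -> Sdg, and T;Tdg -> identity."""
    out = []
    for gate, qubit in circuit:
        if out and out[-1][1] == qubit and {out[-1][0], gate} <= {"T", "Tdg"}:
            prev, _ = out.pop()
            if prev == gate:                       # T;T = S (or Tdg;Tdg = Sdg)
                out.append(("S" if gate == "T" else "Sdg", qubit))
            # otherwise T;Tdg cancels, so nothing is appended
        else:
            out.append((gate, qubit))
    return out

circuit = [("T", 0), ("T", 0), ("H", 1), ("T", 1), ("Tdg", 1), ("T", 0)]
optimized = merge_adjacent_t_gates(circuit)
print(t_count(circuit), "->", t_count(optimized))  # 5 -> 1
```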

A quantum power-up for machine learning

The symbiotic relationship between quantum and AI works both ways. When AI and quantum computing work together, quantum computers could dramatically accelerate machine learning algorithms, whether by the development and application of natively quantum algorithms, or by offering quantum-generated training data that can be used to train a classical AI model.

Our recent announcement about Generative Quantum AI (Gen QAI) spells out our commitment to unlocking the value of the data generated by our H2 quantum computer. That value arises from the world-leading fidelity and computational power of our System Model H2, which make it impossible to simulate exactly on any classical computer; the data it generates – which we can use to train AI – is therefore inaccessible by any other means. Quantinuum’s Chief Scientist for Algorithms and Innovation, Prof. Harry Buhrman, has likened access to the first truly quantum-generated training data to the invention of the modern microscope in the seventeenth century, which revealed an entirely new world of tiny organisms thriving unseen within a single drop of water.

Recently, we announced a wide-ranging partnership with NVIDIA. It charts a course to commercial-scale applications built on the combination of high-performance classical computers, powerful AI systems, and quantum computers that push beyond the boundary of what previously could be done. Our President & CEO, Dr. Raj Hazra, spoke to CNBC recently about the partnership. Watch the video here.

As we prepare for the next stage of quantum processor development, with the launch of our Helios system in 2025, we’re excited to see how AI can help write more efficient code for quantum computers – and how our quantum processors, the most powerful in the world, can provide a backend for AI computations.

As in any truly symbiotic relationship, the addition of AI to quantum computing equally benefits both sides of the equation.

To read more about Quantinuum and Google DeepMind’s collaboration, please read the scientific paper here.

March 26, 2025
Quantinuum Introduces First Commercial Application for Quantum Computers

Few things are more important to the smooth functioning of our digital economies than trustworthy security. From finance to healthcare, from government to defense, quantum computers provide a means of building trust in a secure future.

Quantinuum and its partners JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas used quantum computers to solve a known industry challenge: generating the “random seeds” that are essential to the cryptography behind all types of secure communication. As our partner and collaborator JPMorganChase explains in this blog post, true randomness is a scarce and valuable commodity.

This year, Quantinuum will introduce a new product based on this development – one that has long been anticipated but, until now, was thought to be some years away from reality.

It represents a major milestone for quantum computing that will reshape commercial technology and cybersecurity: Solving a critical industry challenge by successfully generating certifiable randomness.

Building on the extraordinary computational capabilities of Quantinuum’s H2 System – the highest-performing quantum computer in the world – our team has implemented a groundbreaking approach that is ready-made for industrial adoption. Nature today reported the results of a proof of concept with JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas alongside Quantinuum. It lays out a new quantum path to enhanced security that can provide early benefits for applications in cryptography, fairness, and privacy.

By harnessing the powerful properties of quantum mechanics, we’ve shown how to generate the truly random seeds critical to secure electronic communication, establishing a practical use-case that was unattainable before the fidelity and scalability of the H2 quantum computer made it reliable. So reliable, in fact, that it is now possible to turn this into a commercial product.

Quantinuum will integrate quantum-generated certifiable randomness into our commercial portfolio later this year. Alongside Generative Quantum AI and our upcoming Helios system – capable of tackling problems a trillion times more computationally complex than H2 – Quantinuum is further cementing its leadership in the rapidly-advancing quantum computing industry.

This Matters Because Cybersecurity Matters

Cryptographic security, a bedrock of the modern economy, relies on two essential ingredients: standardized algorithms and reliable sources of randomness – the stronger the better. Non-deterministic physical processes, such as those governed by quantum mechanics, are ideal sources of randomness, offering near-total unpredictability and therefore the highest cryptographic protection. Google, when it originally announced that it had achieved quantum supremacy, speculated on the possibility of using the random circuit sampling (RCS) protocol for the commercial production of certifiable random numbers. RCS has been used ever since to demonstrate the performance of quantum computers, including a milestone achievement in June 2024 by Quantinuum and JPMorganChase demonstrating the first quantum computer to defy classical simulation, and more recently by Google for the launch of its Willow processor.

In today’s announcement, our joint team used the world’s highest-performing quantum and classical computers to generate certified randomness via RCS. The work was based on advanced research by Shih-Han Hung and Scott Aaronson of the University of Texas at Austin, who are co-authors on today’s paper.

Following a string of major advances in 2024 – solving the scaling challenge, breaking new records for reliability in partnership with Microsoft, and unveiling a hardware roadmap – today’s result proves that quantum technology can create tangible business value beyond what is available with classical supercomputers alone.

What follows is intended as a non-technical explainer of the results in today’s Nature paper.

Certified Randomness: The First Commercial Application for Quantum Computers

For security-sensitive applications, classical random number generation is unsuitable because it is not fundamentally random and carries the risk of being “cracked”. The holy grail is randomness whose source is truly unpredictable, and nature provides just such a solution: quantum mechanics. Randomness is built into the bones of quantum mechanics, where determinism is thrown out the door and outcomes can be true coin flips.

At Quantinuum, we have a strong track record of developing methods for generating certifiable randomness using a quantum computer. In 2021, we introduced Quantum Origin to the market as a quantum-generated source of entropy targeted at hardening classically generated encryption keys, built on well-known quantum technologies that it had not previously been possible to use in this way.

In their theory paper, “Certified Randomness from Quantum Supremacy”, Hung and Aaronson ask: is it possible to repurpose RCS and use it to build an application that moves beyond those earlier quantum technologies and takes advantage of the power of a quantum computer running quantum circuits?

This was the inspiration for the collaboration team led by JPMorganChase and Quantinuum to draw up plans to execute the proposal using real-world technology. Here’s how it worked:

  • The team sent random circuits to Quantinuum’s H2, the world’s highest-performing commercially available quantum computer.
  • The quantum computer executed each circuit and returned the corresponding samples. The response times were short enough to prove that the circuits could not have been simulated classically within that window, even using the best-known techniques on computing resources greater than those available in the world’s most powerful classical supercomputer.
  • The randomness of the returned samples was mathematically certified using Frontier, the world’s most powerful classical supercomputer, by establishing that they exceeded a “passing threshold” on a measure known as the cross-entropy benchmark. The better your quantum computer, the higher you can set the passing threshold, and when the threshold is sufficiently high, “spoofing” the cross-entropy benchmark using only classical methods becomes inefficient.
  • Therefore, if the samples were returned quickly and met the high threshold, the team could be confident that they were generated by a quantum computer – and thus were truly random.

This confirmed that Quantinuum’s quantum computer not only cannot be matched by classical computers, but can also be relied upon to produce a certifiably random seed – without the user needing to build their own device, or even to trust the device they are accessing.
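
A conceptual sketch of the client-side logic implied by the steps above is given below, assuming hypothetical helpers submit_circuit (runs a challenge circuit on the quantum computer) and ideal_probabilities (the classically simulated output distribution, which in the actual experiment required the Frontier supercomputer). The threshold and timing numbers are placeholders, and the real protocol adds a randomness-extraction step; this is an illustration of the idea, not the published protocol.

```python
# Conceptual sketch of RCS-based certification (illustrative, not the real protocol).
import time

XEB_THRESHOLD = 0.3         # example passing threshold on the linear XEB score
MAX_RESPONSE_SECONDS = 2.0  # faster than any plausible classical spoofer

def certify(random_circuits, submit_circuit, ideal_probabilities):
    certified_bits = []
    for circuit in random_circuits:
        start = time.monotonic()
        bitstrings = submit_circuit(circuit)            # samples from the quantum computer
        if time.monotonic() - start > MAX_RESPONSE_SECONDS:
            raise RuntimeError("response too slow to rule out classical spoofing")
        probs = ideal_probabilities(circuit)            # dict: bitstring -> ideal probability
        n_qubits = len(next(iter(probs)))
        # Linear cross-entropy benchmark: mean of 2^n * p_ideal(x) - 1 over the samples.
        xeb = sum(2**n_qubits * probs.get(x, 0.0) for x in bitstrings) / len(bitstrings) - 1.0
        if xeb < XEB_THRESHOLD:
            raise RuntimeError("cross-entropy score below the passing threshold")
        certified_bits.extend(bitstrings)
    return certified_bits  # raw certified-random samples, before extraction into seeds
```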

Looking ahead

The use of randomness in critical cybersecurity environments will gravitate towards quantum resources as the security demands of end users grow in the face of ongoing cyber threats.

The era of quantum utility offers the promise of radical new approaches to solving substantial and hard problems for businesses and governments.

Quantinuum’s H2 has now demonstrated practical value for cybersecurity vendors and customers alike, as classically generated, deterministic sources of randomness may in time be overtaken by nature’s own source of randomness.

In 2025, we will launch our Helios device, capable of supporting at least 50 high-fidelity logical qubits – further extending our lead in the quantum computing sector. We thus continue our track record of disclosing our objectives and then meeting or surpassing them. This commitment is essential: it gives our partners and collaborators faith and conviction that empirical results such as those reported today can lead to successful commercial applications.

Helios, which is already in its late testing phase, ahead of being commercially available later this year, brings higher fidelity, greater scale, and greater reliability. It promises to bring a wider set of hybrid quantum-supercomputing opportunities to our customers – making quantum computing more valuable and more accessible than ever before.

And in 2025 we look forward to adding yet another product, building out our cybersecurity portfolio with a quantum source of certifiably random seeds for a wide range of customers who require this foundational element to protect their businesses and organizations.

March 25, 2025
Untangling the Mysteries of Knots with Quantum Computers

One of the greatest privileges of working directly with the world’s most powerful quantum computer at Quantinuum is building meaningful experiments that convert theory into practice. The privilege becomes even more compelling considering that our current quantum processor – our H2 system – will soon be joined by Helios, a quantum computer potentially a stunning trillion times more powerful and due for launch in just a few months. The moment has now arrived when we can build an experimentally supported timeline for applications that quantum computing professionals have anticipated for decades.

Quantinuum’s applied algorithms team has released an end-to-end implementation of a quantum algorithm for a central problem in knot theory, along with an efficiently verifiable benchmark for quantum processors that allows concrete resource estimates for near-term quantum advantage. The research team included Quantinuum researchers Enrico Rinaldi, Chris Self, Eli Chertkov, Matthew DeCross, David Hayes, Brian Neyenhuis, and Marcello Benedetti, together with Tuomas Laakkonen of the Massachusetts Institute of Technology. In this article, Konstantinos Meichanetzidis, a team leader in Quantinuum’s AI group who led the project, writes about the problem being addressed and how the team, adopting an aggressively practical mindset, quantified the resources required for quantum advantage:

Knot theory is a branch of the field of mathematics called ‘low-dimensional topology’, with a rich history stemming from a wild idea proposed by Lord Kelvin, who conjectured that chemical elements are different knots formed by vortices in the aether. Of course, we know today that the aether theory was falsified by the Michelson-Morley experiment, but mathematicians have been classifying, tabulating, and studying knots ever since. Regarding applications, the pure mathematics of knots finds its way into cryptography, but knot theory is also intrinsically related to many aspects of the natural sciences. For example, it shows up naturally in certain spin models in statistical mechanics when one studies thermodynamic quantities, and the magnetohydrodynamic properties of knotted magnetic fields on the surface of the sun are an important indicator of solar activity, to name a few examples. Remarkably, the physical properties of knots are important for understanding the stability of macromolecular structures. This is highlighted by the work of Cozzarelli and Sumners in the 1980s on the topology of DNA, particularly how it forms knots and supercoils. Their interdisciplinary research helped explain how enzymes untangle and manage DNA topology, crucial for replication and transcription, laying the foundation for using mathematical models to predict and manipulate DNA behavior, with broad implications for drug development and synthetic biology. Serendipitously, this work was carried out during the same decade in which Richard Feynman, David Deutsch, and Yuri Manin formed the first ideas for a quantum computer.

Most importantly for our context, knot theory has fundamental connections to quantum computation, originally outlined in Witten’s work on topological quantum field theory, which concerns spacetimes that have no notion of distance, only shape. In fact, this connection formed the very motivation for attempting to build topological quantum computers, in which anyons – exotic quasiparticles that live in two-dimensional materials – are braided to perform quantum gates. The relation between knot theory and quantum physics is one of the most beautiful and bizarre facts you have never heard of.

The fundamental problem in knot theory is distinguishing knots or, more generally, links. To this end, mathematicians have defined link invariants, which serve as ‘fingerprints’ of a link. Since there are many equivalent representations of the same link, an invariant, by definition, takes the same value on all of them; if the invariant differs for two links, then they are not equivalent. The specific invariant our team focused on is the Jones polynomial.

Figure: Four equivalent representations of the trefoil knot, the simplest non-trivial knot. They all have the same Jones polynomial, as it is an invariant.
Figure: These knots have different Jones polynomials, so they are not equivalent.
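
For a concrete (textbook, not paper-specific) example of how the invariant separates links: the unknot has Jones polynomial equal to 1, while the trefoil does not, so no amount of deformation can turn a trefoil into an unknotted loop.

```latex
% Standard values, in one common convention; the mirror-image trefoil
% has the same polynomial with t replaced by t^{-1}.
V_{\text{unknot}}(t) = 1, \qquad
V_{\text{trefoil}}(t) = -t^{-4} + t^{-3} + t^{-1}
% Since the two polynomials differ, the trefoil is not equivalent to the unknot.
```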

The mind-blowing fact here is that any quantum computation corresponds to evaluating the Jones polynomial of some link, as shown by the works of Freedman, Larsen, Kitaev, Wang, Shor, Arad, and Aharonov. It reveals that this abstract mathematical problem is truly quantum native. In particular, the problem our team tackled was estimating the value of the Jones polynomial at the 5th root of unity. This is a well-studied case due to its relation to the infamous Fibonacci anyons, whose braiding is capable of universal quantum computation.

Building and improving on the work of Shor, Aharonov, Landau, Jones, and Kauffman, our team developed an efficient quantum algorithm that works end-to-end: given a link, it outputs a highly optimized quantum circuit that is readily executable on our processors and estimates the desired quantity. Furthermore, our team designed problem-tailored error detection and error mitigation strategies to achieve higher accuracy.

Figure: Demonstration of the quantum algorithm on the H2 quantum computer, estimating the value of the Jones polynomial of a link with ~100 crossings. The raw signal (orange) can be amplified (green) with error detection and corrected via a problem-tailored error mitigation method (purple), bringing the experimental estimate closer to the actual value (blue).

In addition to providing a full pipeline for solving this problem, a major aspect of this work was to use the fact that the Jones polynomial is an invariant to introduce a benchmark for noisy quantum computers. Importantly, this benchmark is efficiently verifiable – a rare property, since for most applications exponentially costly classical computations are necessary for verification. Given a link whose Jones polynomial is known, the benchmark constructs a large set of topologically equivalent links of varying sizes. These, in turn, yield a set of circuits with varying numbers of qubits and gates, all of which should return the same answer. One can therefore characterize the effect of noise in a given quantum computer by quantifying the deviation of its output from the known result.
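
In pseudocode terms, the benchmark boils down to something like the sketch below: a set of circuits of growing size, all encoding topologically equivalent links, should return the same known value, and the deviation as a function of circuit size characterizes how errors accumulate on a given device. The function and example numbers are hypothetical placeholders, not the paper’s data or API.

```python
# Sketch of the knot-invariant benchmark idea (hypothetical numbers).
def benchmark_deviation(known_value, estimates_by_size):
    """estimates_by_size maps a circuit size (e.g. two-qubit gate count) to the
    value estimated on hardware for a topologically equivalent link."""
    return {size: abs(estimate - known_value)
            for size, estimate in sorted(estimates_by_size.items())}

# A noiseless device would give deviation 0 at every size; growth with size
# quantifies how noise accumulates with deeper circuits.
print(benchmark_deviation(0.25, {100: 0.26, 500: 0.29, 2000: 0.35}))
```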

The benchmark introduced in this work allows one to identify the link sizes for which there is an exponential quantum advantage in time-to-solution over state-of-the-art classical methods. These resource estimates indicate that our next processor, Helios, with 96 qubits and at least 99.95% two-qubit gate fidelity, is extremely close to meeting these requirements. Furthermore, Quantinuum’s hardware roadmap includes even more powerful machines that will come online by the end of the decade. Notably, an advantage in energy consumption emerges at even smaller link sizes. Meanwhile, our teams aim to continue reducing errors through improvements in both hardware and software, moving ever deeper into quantum advantage territory.

Figure: Rigorous resource estimation of our quantum algorithm pinpoints the exponential quantum advantage, quantified in terms of time-to-solution – the time necessary for the classical state-of-the-art to reach the same error as that achieved by the quantum algorithm. The advantage crossover happens at large link sizes, requiring circuits with ~85 qubits and ~8.5k two-qubit gates, assuming 99.99% two-qubit gate fidelity and 30 ms per circuit layer. The classical algorithms are assumed to run on the Frontier supercomputer.

The importance of this work – indeed its uniqueness in the quantum computing sector – is its practical end-to-end approach. The advantage-hunting strategies introduced here are transferable to other “quantum-easy, classically-hard” problems. Our team’s efforts motivate shifting the focus toward specific problem instances rather than broad problem classes, promoting an engineering-oriented approach to identifying quantum advantage. This means first carefully considering how quantum advantage should be defined and quantified, thereby setting a high standard for quantum advantage in scientific and mathematical domains – and, in doing so, instilling confidence in our customers and partners.
