Quantinuum’s H-Series team has hit the ground running in 2023, achieving a new performance milestone. The H1-1 trapped ion quantum computer has achieved a Quantum Volume (QV) of 32,768 (2¹⁵), the highest in the industry to date.
The team previously increased the QV to 8,192 (or 2¹³) for the System Model H1 in September, less than six months ago. The next goal was a QV of 16,384 (2¹⁴). However, continuous improvements to the H1-1's controls and subsystems advanced the system enough to successfully reach 2¹⁴ as expected, and then to go one major step further, and reach a QV of 2¹⁵.
The Quantum Volume test is a full-system benchmark that produces a single-number measure of a quantum computer’s general capability. The benchmark takes into account qubit number, fidelity, connectivity, and other quantities important in building useful devices. While other measures such as gate fidelity and qubit count are significant and worth tracking, neither is as comprehensive as Quantum Volume, which better represents the operational ability of a quantum computer.
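Concretely, a QV test runs random "square" model circuits (width equal to depth) and checks how often the machine's outputs land in the "heavy" set: the bitstrings whose ideal probability is above the median. A minimal numpy sketch of the ideal heavy-output statistic, under our simplified reading of the protocol (small qubit count, random pairings, Haar-random two-qubit gates):

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(dim):
    # Haar-random unitary via QR decomposition of a complex Gaussian matrix
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_two_qubit(state, gate, q0, q1, n):
    # Apply a 4x4 gate to qubits q0, q1 of an n-qubit statevector
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, [q0, q1], [0, 1]).reshape(4, -1)
    psi = gate @ psi
    psi = np.moveaxis(psi.reshape([2, 2] + [2] * (n - 2)), [0, 1], [q0, q1])
    return psi.reshape(-1)

def qv_circuit_heavy_prob(n):
    # One model circuit: n rounds, each pairing qubits at random and
    # applying an independent Haar-random two-qubit gate to every pair
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for _ in range(n):
        perm = rng.permutation(n)
        for i in range(0, n - 1, 2):
            state = apply_two_qubit(state, haar_unitary(4), perm[i], perm[i + 1], n)
    probs = np.abs(state) ** 2
    heavy = probs > np.median(probs)   # "heavy" outputs: above-median probability
    return probs[heavy].sum()          # ideal heavy-output probability

scores = [qv_circuit_heavy_prob(4) for _ in range(50)]
print(np.mean(scores))  # an ideal device approaches ~0.85; the pass mark is 2/3
```

A noiseless device scores close to 0.85 on this statistic, a fully depolarized one scores 0.5, and the test requires beating 2/3 with high confidence at a given size before the corresponding QV is claimed.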
Dr. Brian Neyenhuis, Director of Commercial Operations, credits reductions in the phase noise of the computer’s lasers as one key factor in the increase.
"We've had enough qubits for a while, but we've been continually pushing on reducing the error in our quantum operations, specifically the two-qubit gate error, to allow us to do these Quantum Volume measurements,” he said.
The Quantinuum team improved memory error and elements of the calibration process as well.
“It was a lot of little things that got us to the point where our two-qubit gate error and our memory error are both low enough that we can pass these Quantum Volume circuit tests,” he said.
The work of increasing Quantum Volume means improving all the subsystems and subcomponents of the machine individually and simultaneously, while ensuring all the systems continue to work well together. Such a complex task takes a high degree of orchestration across the Quantinuum team, with the benefits of the work passed on to H-Series users.
To illustrate what this 5-digit Quantum Volume milestone means for the H-Series, here are five perspectives from Quantinuum teams and H-Series users.
Dr. Henrik Dreyer is Managing Director and Scientific Lead at Quantinuum’s office in Munich, Germany. In the context of his work, an improvement in Quantum Volume is important as it relates to gate fidelity.
“As application developers, the signal-to-noise ratio is what we're interested in,” Henrik said. “If the signal is small, I might run the circuits 10 times and only get one good shot. To recover the signal, I have to do a lot more shots and throw most of them away. Every shot takes time."
“The signal-to-noise ratio is sensitive to the gate fidelity. If you increase the gate fidelity by a little bit, the runtime of a given algorithm may go down drastically,” he said. “For a typical circuit, as the plot shows, even a relatively modest 0.16 percentage point improvement in fidelity could mean that it runs in less than half the time.”
To demonstrate this point, the Quantinuum team has been benchmarking System Model H1 performance on circuits relevant to near-term applications. The graph below shows repeated benchmarking of the runtime of these circuits before and after the recent improvement in gate fidelity. The result of this moderate change in fidelity is a 3x reduction in runtime. The runtimes calculated below are based on the number of shots required to obtain accurate results from the benchmarking circuit – the example uses 430 arbitrary-angle two-qubit gates and an accuracy of 3%.
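Henrik's point about shots can be sketched with a crude model (our simplification, not Quantinuum's actual benchmark analysis): treat the signal surviving G two-qubit gates as decaying like f^G, where f is the per-gate fidelity, so the shots needed to resolve it at accuracy ε grow roughly like 1/(ε·f^G)². The "before" fidelity below is our assumption, derived by subtracting 0.16 percentage points from the reported 99.795%:

```python
# Simplified shot-count model (an illustration, not Quantinuum's exact analysis):
# the surviving signal after G two-qubit gates is roughly f**G, and resolving it
# to a target accuracy eps needs on the order of 1/(eps * f**G)**2 shots.
def shots_needed(gate_fidelity, n_gates=430, eps=0.03):
    circuit_fidelity = gate_fidelity ** n_gates
    return 1.0 / (eps * circuit_fidelity) ** 2

before = shots_needed(0.99635)  # assumed two-qubit fidelity before the upgrade
after = shots_needed(0.99795)   # ~0.16 percentage points higher

print(f"runtime reduction: {before / after:.1f}x")
```

Even this toy model yields a runtime reduction of the same order as the measured ~3x, which is the point of Henrik's remark: small fidelity gains compound exponentially over hundreds of gates.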
Dr. Natalie Brown and Dr. Ciaran Ryan-Anderson both work on quantum error correction at Quantinuum. They see the QV advance as an overall boost to this work.
“Hitting a Quantum Volume number like this means that you have low error rates, a lot of qubits, and very long circuits,” Natalie said. “And all three of those are wonderful things for quantum error correction. A higher Quantum Volume most certainly means we will be able to run quantum error correction better. Error correction is a critical ingredient to large-scale quantum computing. The earlier we can start exploring error correction on today’s small-scale hardware, the faster we’ll be able to demonstrate it at large-scale.”
Ciaran said that H1-1's low error rates allow scientists to make error correction better and start to explore decoding options.
“If you can have really low error rates, you can apply a lot of quantum operations, known as gates,” Ciaran said. "This makes quantum error correction easier because we can suppress the noise even further and potentially use fewer resources to do it, compared to other devices.”
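Ciaran's point – that lower physical error rates let you suppress noise further with fewer resources – can be illustrated with the standard textbook scaling for error-correcting codes below threshold (a generic model for illustration, not H1-specific data; the constants here are our assumptions):

```python
# Toy illustration of sub-threshold error suppression (generic textbook scaling,
# with assumed constants -- not H1-specific): a distance-d code suppresses the
# logical error rate roughly as p_logical ~ A * (p / p_th) ** ((d + 1) // 2).
def logical_error(p, d, p_th=0.01, A=0.1):
    return A * (p / p_th) ** ((d + 1) // 2)

def distance_needed(p, target=1e-6):
    # Smallest (odd) code distance reaching the target logical error rate
    d = 3
    while logical_error(p, d) > target:
        d += 2
    return d

for p in (5e-3, 2e-3, 1e-3):
    print(f"physical error {p:.0e} -> distance {distance_needed(p)}")
```

The qualitative takeaway matches the quote: halving the physical error rate shrinks the code distance (and hence the qubit overhead) needed to hit a given logical error target.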
“This accomplishment shows that gate improvements are getting translated to full-system circuits,” said Dr. Charlie Baldwin, a research scientist at Quantinuum.
Charlie specializes in quantum computing performance benchmarks, conducting research with the Quantum Economic Development Consortium (QED-C).
“Other benchmarking tests use easier circuits or incorporate other options like post-processing data. This can make it more difficult to determine what part improved,” he said. “With Quantum Volume, it’s clear that the performance improvements are from the hardware, which are the hardest and most significant improvements to make.”
“Quantum Volume is a well-established test. You really can’t cheat it,” said Charlie.
Dr. Ross Duncan, Head of Quantum Software, sees Quantum Volume measurements as a good way to show overall progress in the process of building a quantum computer.
“Quantum Volume has merit, compared to any other measure, because it gives a clear answer,” he said.
“This latest increase reveals the extent of combined improvements in the hardware in recent months and means researchers and developers can expect to run deeper circuits with greater success.”
Quantinuum’s business model is unique in that the H-Series systems are continuously upgraded through their product lifecycle. For users, this means they continually and immediately get access to the latest breakthroughs in performance. The reported improvements were not done on an internal testbed, but rather implemented on the H1-1 system which is commercially available and used extensively by users around the world.
“As soon as the improvements were implemented, users were benefiting from them,” said Dr. Jenni Strabley, Sr. Director of Offering Management. “We take our Quantum Volume measurement intermixed with customers’ jobs, so we know that the improvements we’re seeing are also being seen by our customers.”
Jenni went on to say, “Continuously delivering increasingly better performance shows our commitment to our customers’ success with these early small-scale quantum computers as well as our commitment to accuracy and transparency. That’s how we accelerate quantum computing.”
This latest QV milestone demonstrates how the Quantinuum team continues to boost the performance of the System Model H1, making improvements to the two-qubit gate fidelity while maintaining high single-qubit fidelity, high SPAM fidelity, and low cross-talk.
The average single-qubit gate fidelity for these milestones was 99.9955(8)%, the average two-qubit gate fidelity was 99.795(7)% with fully connected qubits, and state preparation and measurement fidelity was 99.69(4)%.
For both tests, the Quantinuum team ran 100 circuits with 200 shots each, using standard QV optimization techniques to yield an average of 219.02 arbitrary-angle two-qubit gates per circuit on the 2¹⁴ test, and 244.26 arbitrary-angle two-qubit gates per circuit on the 2¹⁵ test.
The Quantinuum H1-1 successfully passed the Quantum Volume 16,384 benchmark, outputting heavy outcomes 69.88% of the time, and passed the 32,768 benchmark, outputting heavy outcomes 69.075% of the time. The heavy output frequency is a simple measure of how well the measured outputs from the quantum computer match the results from an ideal simulation. Both results are above the two-thirds passing threshold with high confidence. More details on the Quantum Volume test can be found here.
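A rough sense of why these results pass "with high confidence" comes from treating the 100 circuits × 200 shots as independent Bernoulli trials and asking how far the observed heavy-output frequency sits above the 2/3 threshold (a simplification on our part; the official QV test uses a more careful per-circuit confidence bound):

```python
import math

# Rough confidence check: treat all shots as independent Bernoulli trials
# (a simplification -- the official QV analysis bounds confidence per circuit).
def pass_confidence(heavy_freq, circuits=100, shots=200, threshold=2 / 3):
    n = circuits * shots
    se = math.sqrt(heavy_freq * (1 - heavy_freq) / n)   # standard error
    z = (heavy_freq - threshold) / se                   # sigmas above threshold
    p_fail = 0.5 * math.erfc(z / math.sqrt(2))          # one-sided tail prob.
    return z, p_fail

for qv, freq in ((2 ** 14, 0.6988), (2 ** 15, 0.69075)):
    z, p = pass_confidence(freq)
    print(f"QV {qv}: {z:.1f} sigma above 2/3, failure probability ~{p:.1e}")
```

Under this crude model both results sit many standard deviations above the threshold, consistent with the high-confidence claim.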
Quantum Volume data and analysis code can be accessed on Quantinuum’s GitHub repository for quantum volume data. Contemporary benchmarking data can be accessed at Quantinuum’s GitHub repository for hardware specifications.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
The most common question in the public discourse around quantum computers has been, “When will they be useful?” We have an answer.
Very recently in Nature we announced a successful demonstration of a quantum computer generating certifiable randomness, a critical underpinning of our modern digital infrastructure. We explained how we will be taking a product to market this year, based on that advance – one that could only be achieved because we have the world’s most powerful quantum computer.
Today, we have made another huge leap in a different domain, providing fresh evidence that our quantum computers are the best in the world. In this case, we have shown that our quantum computers can be a useful tool for advancing scientific discovery.
Our latest paper shows how our quantum computer rivals the best classical approaches in expanding our understanding of magnetism. This provides an entry point that could lead directly to innovations in fields from biochemistry to defense to new materials. These are tangible and meaningful advances that will deliver real-world impact.
To achieve this, we partnered with researchers from Caltech, Fermioniq, EPFL, and the Technical University of Munich. The team used Quantinuum’s System Model H2 to simulate quantum magnetism at a scale and level of accuracy that pushes the boundaries of what we know to be possible.
As the authors of the paper state:
“We believe the quantum data provided by System Model H2 should be regarded as complementary to classical numerical methods, and is arguably the most convincing standard to which they should be compared.”
Our computer simulated the quantum Ising model, a model for quantum magnetism that describes a set of magnets (physicists call them ‘spins’) on a lattice that can point up or down, and prefer to point the same way as their neighbors. The model is inherently “quantum” because the spins can move between up and down configurations by a process known as “quantum tunneling”.
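For readers who want the model in concrete terms, here is a minimal exact simulation of a few-spin transverse-field Ising chain (a toy version of what H2 simulated at far larger scale; the couplings and quench chosen here are illustrative assumptions):

```python
import numpy as np

# Transverse-field Ising chain: H = -J * sum_i Z_i Z_{i+1} - h * sum_i X_i
# The -J ZZ term makes neighboring spins prefer to align; the -h X term is the
# "quantum tunneling" that mixes up and down configurations.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(site_ops, n):
    # Tensor product placing the given single-site ops, identity elsewhere
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, site_ops.get(i, I))
    return out

def ising_hamiltonian(n, J=1.0, h=1.0):
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n - 1):
        H -= J * op_on({i: Z, i + 1: Z}, n)
    for i in range(n):
        H -= h * op_on({i: X}, n)
    return H

n = 6
H = ising_hamiltonian(n)
evals, evecs = np.linalg.eigh(H)

# Quench dynamics: start from all spins up and evolve under exp(-iHt)
psi0 = np.zeros(2 ** n)
psi0[0] = 1.0
t = 1.0
psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

# Magnetization <Z_0> decays from 1 as tunneling mixes the configurations
Z0 = op_on({0: Z}, n)
m = np.real(psi_t.conj() @ Z0 @ psi_t)
print(m)
```

Exact diagonalization like this is only feasible for a handful of spins; the matrices grow as 2ⁿ × 2ⁿ, which is precisely the classical cost wall that makes larger-scale dynamics a natural target for quantum hardware.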
Researchers have struggled to simulate the dynamics of the Ising model at larger scales due to the enormous computational cost of doing so. Nobel laureate physicist Richard Feynman, who is widely considered to be the progenitor of quantum computing, once said, “it is impossible to represent the results of quantum mechanics with a classical universal device.” When attempting to simulate quantum systems at comparable scales on classical computers, the computational demands can quickly become overwhelming. It is the inherent ‘quantumness’ of these problems that makes them so hard classically, and conversely, so well-suited for quantum computing.
These inherently quantum problems also lie at the heart of many complex and useful material properties. The quantum Ising model is an entry point to confront some of the deepest mysteries in the study of interacting quantum magnets. While rooted in fundamental physics, its relevance extends to wide-ranging commercial and defense applications, including medical test equipment, quantum sensors, and the study of exotic states of matter like superconductivity.
Instead of tailored demonstrations that claim ‘quantum advantage’ in contrived scenarios, our breakthroughs announced this week prove that we can tackle complex, meaningful scientific questions that are difficult for classical methods to address. In the work described in this paper, we have shown that quantum computing could become the gold standard for materials simulations. These developments are critical steps toward realizing the potential of quantum computers.
With only 56 qubits in our commercially available System Model H2, the most powerful quantum system in the world today, we are already testing the limits of classical methods, and in some cases, exceeding them. Later this year, we will introduce our massively more powerful 96-qubit Helios system – breaching the boundaries of what until recently was deemed possible.
The marriage of AI and quantum computing is going to have a widespread and meaningful impact in many aspects of our lives, combining the strengths of both fields to tackle complex problems.
Quantum and AI are the ideal partners. At Quantinuum, we are developing tools to accelerate AI with quantum computers, and quantum computers with AI. According to recent independent analysis, our quantum computers are the world’s most powerful, enabling state-of-the-art approaches like Generative Quantum AI (Gen QAI), where we train classical AI models with data generated from a quantum computer.
We harness AI methods to accelerate the development and performance of our full quantum computing stack as opposed to simply theorizing from the sidelines. A paper in Nature Machine Intelligence reveals the results of a recent collaboration between Quantinuum and Google DeepMind to tackle the hard problem of quantum compilation.
The work shows a classical AI model supporting quantum computing by demonstrating its potential for quantum circuit optimization. An AI approach like this has the potential to lead to more effective control at the hardware level, to a richer suite of middleware tools for quantum circuit compilation, error mitigation and correction, even to novel high-level quantum software primitives and quantum algorithms.
The joint Quantinuum-Google DeepMind team of researchers tackled one of quantum computing’s most pressing challenges: minimizing the number of highly expensive but essential T-gates required for universal quantum computation. This is important specifically for the fault-tolerant regime, which is becoming increasingly relevant as quantum error correction protocols are being explored on rapidly developing quantum hardware. The joint team of researchers adapted AlphaTensor, Google DeepMind’s reinforcement learning AI system for algorithm discovery, which was introduced to improve the efficiency of linear algebra computations. The team introduced AlphaTensor-Quantum, which takes as input a quantum circuit and returns a new, more efficient one in terms of number of T-gates, with exactly the same functionality!
AlphaTensor-Quantum outperformed current state-of-the-art optimization methods and matched the best human-designed solutions across multiple circuits in a thoroughly curated set of circuits, chosen for their prevalence in many applications, from quantum arithmetic to quantum chemistry. This breakthrough shows the potential for AI to automate the process of finding the most efficient quantum circuit. This is the first time that such an AI model has been put to the problem of T-count reduction at such a large scale.
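To make the objective concrete: the quantity AlphaTensor-Quantum minimizes is the T-count of a circuit. The toy pass below (purely illustrative on our part; AlphaTensor-Quantum's reinforcement-learning search is far more powerful than any peephole rule) shows the flavor of T-count reduction, using the identity that two adjacent T gates on the same qubit equal a single cheap Clifford S gate:

```python
# Toy T-count reduction (illustration only -- not AlphaTensor-Quantum's method):
# two adjacent T gates on the same qubit merge into one Clifford S gate (T*T = S),
# which is far cheaper than a T gate in the fault-tolerant regime.
def t_count(circuit):
    return sum(1 for gate, _ in circuit if gate == "T")

def merge_adjacent_ts(circuit):
    out = []
    for gate, qubit in circuit:
        if gate == "T" and out and out[-1] == ("T", qubit):
            out[-1] = ("S", qubit)   # T followed by T on the same qubit -> S
        else:
            out.append((gate, qubit))
    return out

# A hypothetical example circuit: gates as (name, qubit(s)) pairs
circ = [("T", 0), ("T", 0), ("H", 1), ("T", 1), ("CX", (0, 1)), ("T", 1), ("T", 1)]
print(t_count(circ), "->", t_count(merge_adjacent_ts(circ)))  # prints: 5 -> 1
```

Real optimizers go far beyond adjacency, exploiting commutation rules, gadgetization, and global circuit structure, which is exactly the large search space where a learned agent can outperform hand-crafted heuristics.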
The symbiotic relationship between quantum and AI works both ways. When AI and quantum computing work together, quantum computers could dramatically accelerate machine learning algorithms, whether by the development and application of natively quantum algorithms, or by offering quantum-generated training data that can be used to train a classical AI model.
Our recent announcement about Generative Quantum AI (Gen QAI) spells out our commitment to unlocking the value of the data generated by our H2 quantum computer. This value arises from the world-leading fidelity and computational power of our System Model H2, which make it impossible to simulate exactly on any classical computer – so the data it generates, which we can use to train AI, is inaccessible by any other means. Quantinuum’s Chief Scientist for Algorithms and Innovation, Prof. Harry Buhrman, has likened accessing the first truly quantum-generated training data to the invention of the modern microscope in the seventeenth century, which revealed an entirely new world of tiny organisms thriving unseen within a single drop of water.
Recently, we announced a wide-ranging partnership with NVIDIA. It charts a course to commercial-scale applications arising from the partnership between high-performance classical computers, powerful AI systems, and quantum computers that breach the boundaries of what was previously possible. Our President & CEO, Dr. Raj Hazra, spoke to CNBC recently about our partnership. Watch the video here.
As we prepare for the next stage of quantum processor development, with the launch of our Helios system in 2025, we’re excited to see how AI can help write more efficient code for quantum computers – and how our quantum processors, the most powerful in the world, can provide a backend for AI computations.
As in any truly symbiotic relationship, the addition of AI to quantum computing equally benefits both sides of the equation.
To read more about Quantinuum and Google DeepMind’s collaboration, please read the scientific paper here.
Few things are more important to the smooth functioning of our digital economies than trustworthy security. From finance to healthcare, from government to defense, quantum computers provide a means of building trust in a secure future.
Quantinuum and its partners JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas used quantum computers to solve a known industry challenge: generating the “random seeds” that are essential for the cryptography behind all types of secure communication. As our partner and collaborator JPMorganChase explains in this blog post, true randomness is a scarce and valuable commodity.
This year, Quantinuum will introduce a new product based on this development – one that has long been anticipated but, until now, was thought to be some years away from reality.
It represents a major milestone for quantum computing that will reshape commercial technology and cybersecurity: Solving a critical industry challenge by successfully generating certifiable randomness.
Building on the extraordinary computational capabilities of Quantinuum’s H2 System – the highest-performing quantum computer in the world – our team has implemented a groundbreaking approach that is ready-made for industrial adoption. Nature today reported the results of a proof of concept with JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas alongside Quantinuum. It lays out a new quantum path to enhanced security that can provide early benefits for applications in cryptography, fairness, and privacy.
By harnessing the powerful properties of quantum mechanics, we’ve shown how to generate the truly random seeds critical to secure electronic communication, establishing a practical use-case that was unattainable before the fidelity and scalability of the H2 quantum computer made it reliable. So reliable, in fact, that it is now possible to turn this into a commercial product.
Quantinuum will integrate quantum-generated certifiable randomness into our commercial portfolio later this year. Alongside Generative Quantum AI and our upcoming Helios system – capable of tackling problems a trillion times more computationally complex than H2 – Quantinuum is further cementing its leadership in the rapidly-advancing quantum computing industry.
Cryptographic security, a bedrock of the modern economy, relies on two essential ingredients: standardized algorithms and reliable sources of randomness – the stronger the better. Non-deterministic physical processes, such as those governed by quantum mechanics, are ideal sources of randomness, offering near-total unpredictability and therefore the highest cryptographic protection. Google, when it originally announced it had achieved quantum supremacy, speculated on the possibility of using the random circuit sampling (RCS) protocol for the commercial production of certifiable random numbers. RCS has been used ever since to demonstrate the performance of quantum computers, including a milestone achievement in June 2024 by Quantinuum and JPMorganChase, which demonstrated the first quantum computer to defy classical simulation. More recently, RCS was used again by Google for the launch of its Willow processor.
In today’s announcement, our joint team used the world’s highest-performing quantum and classical computers to generate certified randomness via RCS. The work was based on advanced research by Shih-Han Hung and Scott Aaronson of the University of Texas at Austin, who are co-authors on today’s paper.
Following a string of major advances in 2024 – solving the scaling challenge, breaking new records for reliability in partnership with Microsoft, and unveiling a hardware roadmap – today’s result proves that quantum technology can create tangible business value beyond what is available from classical supercomputers alone.
What follows is intended as a non-technical explainer of the results in today’s Nature paper.
For security-sensitive applications, classical random number generation is unsuitable because it is not fundamentally random and there is a risk it can be “cracked”. The holy grail is randomness whose source is truly unpredictable, and Nature provides just the solution: quantum mechanics. Randomness is built into the bones of quantum mechanics, where determinism is thrown out the door and outcomes can be true coin flips.
At Quantinuum, we have a strong track record in developing methods for generating certifiable randomness using a quantum computer. In 2021, we introduced Quantum Origin to the market as a quantum-generated source of entropy, targeted at hardening classically generated encryption keys, using well-known quantum technologies in ways that had not previously been possible.
In their theory paper, “Certified Randomness from Quantum Supremacy”, Hung and Aaronson ask the question: is it possible to repurpose RCS, and use it to build an application that moves beyond benchmarking demonstrations and takes advantage of the power of a quantum computer running quantum circuits?
This question inspired the collaboration, led by JPMorganChase and Quantinuum, to draw up plans to execute the proposal using real-world technology. Here’s how it worked:
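Under our reading of the Aaronson–Hung proposal (the exact circuits, thresholds, and extractor in the Nature paper differ; everything below is an illustrative stand-in), the loop is: the client issues fresh random challenge circuits, the quantum server must return samples before a deadline that rules out classical spoofing, the client verifies the samples via cross-entropy benchmarking on a classical supercomputer, and the accepted samples are condensed into a random seed:

```python
import hashlib
import time
import numpy as np

# Highly simplified sketch of the certified-randomness loop. All names,
# parameters, and thresholds here are illustrative assumptions, not the
# protocol's actual values.
rng = np.random.default_rng(1)

def random_circuit_probs(n):
    # Stand-in for a fresh random challenge circuit: a Haar-random state's
    # outcome distribution plays the role of the ideal output probabilities.
    z = rng.standard_normal(2 ** n) + 1j * rng.standard_normal(2 ** n)
    p = np.abs(z) ** 2
    return p / p.sum()

def honest_server(probs, shots):
    # An honest quantum server effectively samples from the ideal distribution
    return rng.choice(len(probs), size=shots, p=probs)

def xeb_score(probs, samples):
    # Linear cross-entropy benchmark: ~2 for ideal samples, ~1 for uniform noise
    return len(probs) * probs[samples].mean()

n, shots, accepted = 10, 50, []
for _ in range(20):
    probs = random_circuit_probs(n)        # client picks a fresh challenge
    deadline = time.monotonic() + 1.0      # reply must beat classical spoofing
    samples = honest_server(probs, shots)
    if time.monotonic() < deadline and xeb_score(probs, samples) > 1.5:
        accepted.extend(samples)           # verified: keep this entropy

# Condense accepted raw samples into a seed. A hash stands in here for the
# seeded randomness extractor used in the real protocol.
seed = hashlib.sha256(np.array(accepted).tobytes()).hexdigest()
print(seed[:16])
```

The certification argument hinges on the deadline: any server fast enough to pass the cross-entropy check within it must have genuinely sampled a quantum circuit, so its outputs carry entropy that no classical spoofer could have produced in time.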
This confirmed that Quantinuum’s quantum computer not only cannot be matched by classical computers, but can also reliably produce a certifiably random seed – without users needing to build their own device, or even to trust the device they are accessing.
The use of randomness in critical cybersecurity environments will gravitate towards quantum resources, as the security demands of end users grow in the face of ongoing cyber threats.
The era of quantum utility offers the promise of radical new approaches to solving substantial and hard problems for businesses and governments.
Quantinuum’s H2 has now demonstrated practical value for cybersecurity vendors and customers alike, where deterministic sources of entropy may in time be superseded by nature’s own source of randomness.
In 2025, we will launch our Helios device, capable of supporting at least 50 high-fidelity logical qubits – further extending our lead in the quantum computing sector. We thus continue our track record of disclosing our objectives and then meeting or surpassing them. This commitment is essential, as it builds faith and conviction among our partners and collaborators that empirical results such as those reported today can lead to successful commercial applications.
Helios, already in late-stage testing ahead of commercial availability later this year, brings higher fidelity, greater scale, and greater reliability. It promises to bring a wider set of hybrid quantum-supercomputing opportunities to our customers – making quantum computing more valuable and more accessible than ever before.
And in 2025 we look forward to adding yet another product, building out our cybersecurity portfolio with a quantum source of certifiably random seeds for a wide range of customers who require this foundational element to protect their businesses and organizations.