Quantinuum’s recent announcement of its breakthrough on topological qubits garnered headlines in both the specialist scientific media and outlets more broadly interested in the advances that will make quantum computing useful sooner than anticipated. However, hidden in the details was a reference to a technology that is as rare as it is valuable: the topological qubit could only have been generated on Quantinuum’s H-Series quantum processors, thanks to a set of capabilities of which measurement and ‘feed-forward’ are the most critical.
As we know, great advances are often built on the back of little-known utilities - functions and tools that rarely get mentioned. These are sometimes technological constructs that seem simple on the surface but are difficult to create (in the case of feed-forward, very difficult), and without which critical advances would remain merely theoretical.
As detailed in two manuscripts posted to the preprint repository arXiv, Quantinuum researchers and their collaborators successfully demonstrated, for the first time, a large-scale implementation of a long-standing idea in quantum information science: the use of measurement and feed-forward (explained in detail below) to efficiently generate long-range entangled states.
The two experiments, conducted with research partners at the California Institute of Technology, Harvard University, the University of Sydney, the Perimeter Institute for Theoretical Physics and the University of California, Davis, used Quantinuum’s trapped ion quantum computers, Powered by Honeywell, to show how feed-forward dramatically reduces the resources required to produce highly entangled quantum states and topologically ordered phases of matter, one of the most exciting areas of research in modern physics.
Feed-forward uses selective measurements during the execution of a quantum circuit and adapts future operations depending on those measurement results. To be successful in running an adaptive quantum circuit, several challenging requirements must be met: (1) a select group of qubits must be measured in the middle of a circuit with high fidelity, and without accidentally measuring other qubits, and (2) the measurement results must be sent to a classical computer and quickly processed to create instructions to be fed-forward to the quantum computer on the fly - all of which must be done fast enough to prevent the active qubits from decohering.
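The pattern can be made concrete with a toy example. Below is a minimal statevector sketch in plain NumPy (not Quantinuum’s stack or the circuits from the papers): two Bell pairs are fused into a three-qubit GHZ state by measuring one qubit mid-circuit and feeding the outcome forward as a conditional Pauli correction, so the long-range entangled state is prepared deterministically.

```python
import numpy as np

rng = np.random.default_rng(7)

def apply_1q(state, gate, q):
    """Apply a single-qubit gate to qubit q of an n-qubit statevector."""
    state = np.moveaxis(state, q, 0)
    state = np.tensordot(gate, state, axes=1)
    return np.moveaxis(state, 0, q)

def apply_cnot(state, control, target):
    """CNOT: flip the target axis within the control = |1> subspace."""
    state = np.moveaxis(state, (control, target), (0, 1))
    out = np.stack([state[0], state[1][::-1]])
    return np.moveaxis(out, (0, 1), (control, target))

def measure(state, q):
    """Projectively measure qubit q; return (outcome, collapsed state)."""
    state = np.moveaxis(state, q, 0)
    p0 = np.sum(np.abs(state[0]) ** 2)
    outcome = 0 if rng.random() < p0 else 1
    proj = np.zeros_like(state)
    proj[outcome] = state[outcome]
    proj /= np.linalg.norm(proj)
    return outcome, np.moveaxis(proj, 0, q)

X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Two Bell pairs on qubit pairs (0,1) and (2,3).
state = np.zeros((2,) * 4, dtype=complex)
state[0, 0, 0, 0] = 1.0
state = apply_cnot(apply_1q(state, H, 0), 0, 1)
state = apply_cnot(apply_1q(state, H, 2), 2, 3)

# Fuse the pairs: entangle, then measure qubit 2 mid-circuit...
state = apply_cnot(state, 1, 2)
outcome, state = measure(state, 2)

# ...and feed the result forward as a conditional X on qubit 3.
if outcome == 1:
    state = apply_1q(state, X, 3)

# Qubits 0, 1, 3 now hold the GHZ state (|000> + |111>)/sqrt(2),
# regardless of which random measurement outcome occurred.
ghz_amp_000 = state[0, 0, outcome, 0]
ghz_amp_111 = state[1, 1, outcome, 1]
```

Without the conditional correction, the outcome-1 branch would leave a bit-flipped state and the preparation would succeed only probabilistically; the feed-forward step is what makes it deterministic.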
Once these requirements are met, the feed-forward capabilities let quantum computers create long-range entangled states which are emerging as central to various branches of modern physics such as quantum error correction codes and the study of spin liquids in condensed matter. It is also the essential component of topological order and could enable the simulation of quantum systems beyond the reach of classical computation.
In the paper “Topological Order from Measurements and Feed-Forward on a Trapped Ion Quantum Computer”, Quantinuum, working with colleagues from the California Institute of Technology and Harvard University, uses feed-forward to explore topologically ordered phases of matter.
Separately, a different team of scientists from Quantinuum, the University of Sydney, the Perimeter Institute for Theoretical Physics and the University of California, Davis, used feed-forward to explore adaptive quantum circuits in “Experimental Demonstration of the Advantage of Adaptive Quantum Circuits”.
Two of Quantinuum’s physicists who worked on both experiments, Henrik Dreyer and Michael Foss-Feig, offered some observations on the work.
“While it has been clear to theorists that feed-forward would be a useful primitive, doing it with low errors has turned out to be very challenging. The H-Series systems have made it possible to use this primitive efficiently,” said Henrik, managing director and scientific lead at Quantinuum’s office in Munich, Germany.
Michael, who is based at Quantinuum’s world-leading quantum computing laboratory outside of Denver, Colorado, also described feed-forward and adaptive quantum circuits as a jump toward meaningful simulations.
“This capability speeds up the timeline for new scientific discoveries,” he said.
These successful experiments proved that feed-forward operations reduce the quantum resources required for certain algorithms and are a valuable building block for more advanced research.
"I am really excited by the opportunities opened up by this demonstration: using wave-function collapse is a very powerful tool for preparing very exotic entangled states further down the road, where there are no good scalable alternatives," said Dr. Ruben Verresen, a physicist at Harvard University and a co-author of the topological order paper.
The authors note that “the primary technical challenge in implementing adaptive circuits is the requirement to perform partial measurements of a subset of qubits in the middle of a quantum circuit with minimal cross-talk on unmeasured qubits, return those results to a classical computer for processing, and then condition future operations on the results of that processing in real time.”
The paper describes how quantum hardware has now reached a state where adaptive quantum circuits are possible and can outperform unitary circuits. The experiment detailed in the paper “firmly establishes that given access to the same amount of quantum computational resources with respect to available gates and circuit depth, adaptive quantum circuits can perform tasks that are impossible for quantum circuits without feedback.”
Henrik and Michael noted that the adaptive circuit research provides concrete evidence not only that feed-forward works, but that it now works well enough to achieve tasks that would not be possible without it.
“We were trying to find a metric by which somebody can look at our data produced by a shallow adaptive circuit, and convince themselves it could not have been produced with a unitary circuit of the same depth,” Michael said. The metric proposed in the adaptive circuits paper achieved exactly that.
Demonstrating this technique required significant performance from Quantinuum’s H1-1 quantum computer.
“It's a huge challenge to implement this in a way that works well,” Michael said.
Quantinuum’s H-Series has the capabilities that are crucial to this work: high fidelity gates, low state preparation and measurement (SPAM) error, low memory error, the ability to perform mid-circuit measurement, and all-to-all connectivity.
The feed-forward theory has been well-known for years but challenging to execute in practice, and as the paper states:
“While individual elements of this triad have been demonstrated in the context of error correction and topological order, combining all of these ingredients into one experimental platform has proven elusive since the inception of this idea more than a decade ago. Here, we demonstrate for the first time the deterministic, high-fidelity preparation of long-range entangled quantum states using a protocol with constant depth, using Quantinuum’s H-Series programmable Ytterbium ion trap quantum computer.”
The authors also note that “the all-to-all connectivity of the device was vital for the implementation of the periodic two-dimensional geometry and the conditional dynamics.”
In summary – these papers showcase state-of-the-art demonstrations of what can be done with quantum computers today but are only a preview of what will be done tomorrow.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
The most common question in the public discourse around quantum computers has been, “When will they be useful?” We have an answer.
Very recently in Nature we announced a successful demonstration of a quantum computer generating certifiable randomness, a critical underpinning of our modern digital infrastructure. We explained how we will be taking a product to market this year, based on that advance – one that could only be achieved because we have the world’s most powerful quantum computer.
Today, we have made another huge leap in a different domain, providing fresh evidence that our quantum computers are the best in the world. In this case, we have shown that our quantum computers can be a useful tool for advancing scientific discovery.
Our latest paper shows how our quantum computer rivals the best classical approaches in expanding our understanding of magnetism. This provides an entry point that could lead directly to innovations in fields from biochemistry, to defense, to new materials. These are tangible and meaningful advances that will deliver real world impact.
To achieve this, we partnered with researchers from Caltech, Fermioniq, EPFL, and the Technical University of Munich. The team used Quantinuum’s System Model H2 to simulate quantum magnetism at a scale and level of accuracy that pushes the boundaries of what we know to be possible.
As the authors of the paper state:
“We believe the quantum data provided by System Model H2 should be regarded as complementary to classical numerical methods, and is arguably the most convincing standard to which they should be compared.”
Our computer simulated the quantum Ising model, a model for quantum magnetism that describes a set of magnets (physicists call them ‘spins’) on a lattice that can point up or down, and prefer to point the same way as their neighbors. The model is inherently “quantum” because the spins can move between up and down configurations by a process known as “quantum tunneling”.
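For readers who want the model written down, the transverse-field Ising Hamiltonian takes the standard textbook form (generic couplings $J$ and $h$; the specific lattice and parameters studied in the paper may differ):

```latex
H \;=\; -J \sum_{\langle i,j \rangle} Z_i Z_j \;-\; h \sum_i X_i
```

The first sum rewards neighboring spins for aligning, while the transverse-field term drives the quantum tunneling between up and down configurations described above.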
Researchers have struggled to simulate the dynamics of the Ising model at larger scales due to the enormous computational cost of doing so. Nobel laureate physicist Richard Feynman, who is widely considered to be the progenitor of quantum computing, once said, “it is impossible to represent the results of quantum mechanics with a classical universal device.” When attempting to simulate quantum systems at comparable scales on classical computers, the computational demands can quickly become overwhelming. It is the inherent ‘quantumness’ of these problems that makes them so hard classically, and conversely, so well-suited for quantum computing.
These inherently quantum problems also lie at the heart of many complex and useful material properties. The quantum Ising model is an entry point to confront some of the deepest mysteries in the study of interacting quantum magnets. While rooted in fundamental physics, its relevance extends to wide-ranging commercial and defense applications, including medical test equipment, quantum sensors, and the study of exotic states of matter like superconductivity.
Instead of tailored demonstrations that claim ‘quantum advantage’ in contrived scenarios, our breakthroughs announced this week prove that we can tackle complex, meaningful scientific questions that are difficult for classical methods to address. In the work described in this paper, we have shown that quantum computing could become the gold standard for materials simulations. These developments are critical steps toward realizing the potential of quantum computers.
With only 56 qubits in our commercially available System Model H2, the most powerful quantum system in the world today, we are already testing the limits of classical methods, and in some cases, exceeding them. Later this year, we will introduce our massively more powerful 96-qubit Helios system - breaching the boundaries of what until recently was deemed possible.
The marriage of AI and quantum computing is going to have a widespread and meaningful impact in many aspects of our lives, combining the strengths of both fields to tackle complex problems.
Quantum and AI are the ideal partners. At Quantinuum, we are developing tools to accelerate AI with quantum computers, and quantum computers with AI. According to recent independent analysis, our quantum computers are the world’s most powerful, enabling state-of-the-art approaches like Generative Quantum AI (Gen QAI), where we train classical AI models with data generated from a quantum computer.
We harness AI methods to accelerate the development and performance of our full quantum computing stack as opposed to simply theorizing from the sidelines. A paper in Nature Machine Intelligence reveals the results of a recent collaboration between Quantinuum and Google DeepMind to tackle the hard problem of quantum compilation.
The work shows a classical AI model supporting quantum computing by demonstrating its potential for quantum circuit optimization. An AI approach like this has the potential to lead to more effective control at the hardware level, to a richer suite of middleware tools for quantum circuit compilation, error mitigation and correction, even to novel high-level quantum software primitives and quantum algorithms.
The joint Quantinuum-Google DeepMind team of researchers tackled one of quantum computing’s most pressing challenges: minimizing the number of highly expensive but essential T-gates required for universal quantum computation. This matters especially in the fault-tolerant regime, which is becoming increasingly relevant as quantum error correction protocols are explored on rapidly developing quantum hardware. The team adapted AlphaTensor, Google DeepMind’s reinforcement learning AI system for algorithm discovery, originally introduced to improve the efficiency of linear algebra computations. The result, AlphaTensor-Quantum, takes a quantum circuit as input and returns a new, more efficient circuit with fewer T-gates and exactly the same functionality!
AlphaTensor-Quantum outperformed current state-of-the-art optimization methods and matched the best human-designed solutions across a thoroughly curated set of circuits, chosen for their prevalence in applications ranging from quantum arithmetic to quantum chemistry. This breakthrough shows the potential for AI to automate the search for the most efficient quantum circuit, and it is the first time such an AI model has been applied to the problem of T-count reduction at this scale.
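AlphaTensor-Quantum’s search operates far beyond toy scale, but the basic currency it optimizes can be illustrated with a two-gate identity (a plain NumPy sketch of the general idea, not the team’s method): two consecutive T gates act exactly like one S gate, and in the fault-tolerant regime S is a cheap Clifford gate while T is the expensive one.

```python
import numpy as np

# Single-qubit phase gates: T (the pi/8 gate) and S (the phase gate).
T = np.diag([1, np.exp(1j * np.pi / 4)])
S = np.diag([1, 1j])

# Two consecutive T gates equal one S gate, so a rewrite that merges
# them preserves the unitary while lowering the T-count.
assert np.allclose(T @ T, S)

# Toy circuit: H.T.T.H versus H.S.H - same unitary, T-count 2 -> 0.
Hg = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
original = Hg @ T @ T @ Hg
optimized = Hg @ S @ Hg
assert np.allclose(original, optimized)
```

Real T-count optimizers work over many qubits and exploit far less obvious algebraic structure, but every rewrite they find must pass exactly this kind of check: the optimized circuit implements the same unitary with fewer T gates.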
The symbiotic relationship between quantum and AI works both ways. When AI and quantum computing work together, quantum computers could dramatically accelerate machine learning algorithms, whether by the development and application of natively quantum algorithms, or by offering quantum-generated training data that can be used to train a classical AI model.
Our recent announcement about Generative Quantum AI (Gen QAI) spells out our commitment to unlocking the value of the data generated by our H2 quantum computer. That value arises from the world-leading fidelity and computational power of our System Model H2, which make it impossible to simulate exactly on any classical computer; the data it generates, which we can use to train AI, is therefore inaccessible by any other means. Quantinuum’s Chief Scientist for Algorithms and Innovation, Prof. Harry Buhrman, has likened access to the first truly quantum-generated training data to the invention of the modern microscope in the seventeenth century, which revealed an entirely new world of tiny organisms thriving unseen within a single drop of water.
Recently, we announced a wide-ranging partnership with NVIDIA. It charts a course to commercial-scale applications arising from the combination of high-performance classical computers, powerful AI systems, and quantum computers that push past the boundaries of what could previously be done. Our President & CEO, Dr. Raj Hazra, spoke to CNBC recently about the partnership. Watch the video here.
As we prepare for the next stage of quantum processor development, with the launch of our Helios system in 2025, we’re excited to see how AI can help write more efficient code for quantum computers – and how our quantum processors, the most powerful in the world, can provide a backend for AI computations.
As in any truly symbiotic relationship, the addition of AI to quantum computing equally benefits both sides of the equation.
To read more about Quantinuum and Google DeepMind’s collaboration, please read the scientific paper here.
Few things are more important to the smooth functioning of our digital economies than trustworthy security. From finance to healthcare, from government to defense, quantum computers provide a means of building trust in a secure future.
Quantinuum and its partners JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory and the University of Texas used quantum computers to solve a known industry challenge: generating the “random seeds” that are essential for the cryptography behind all types of secure communication. As our partner and collaborator JPMorganChase explains in this blog post, true randomness is a scarce and valuable commodity.
This year, Quantinuum will introduce a new product based on this development, one that has long been anticipated but until now was thought to be some years away from reality.
It represents a major milestone for quantum computing that will reshape commercial technology and cybersecurity: Solving a critical industry challenge by successfully generating certifiable randomness.
Building on the extraordinary computational capabilities of Quantinuum’s H2 System – the highest-performing quantum computer in the world – our team has implemented a groundbreaking approach that is ready-made for industrial adoption. Nature today reported the results of a proof of concept with JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas alongside Quantinuum. It lays out a new quantum path to enhanced security that can provide early benefits for applications in cryptography, fairness, and privacy.
By harnessing the powerful properties of quantum mechanics, we’ve shown how to generate the truly random seeds critical to secure electronic communication, establishing a practical use-case that was unattainable before the fidelity and scalability of the H2 quantum computer made it reliable. So reliable, in fact, that it is now possible to turn this into a commercial product.
Quantinuum will integrate quantum-generated certifiable randomness into our commercial portfolio later this year. Alongside Generative Quantum AI and our upcoming Helios system – capable of tackling problems a trillion times more computationally complex than H2 – Quantinuum is further cementing its leadership in the rapidly-advancing quantum computing industry.
Cryptographic security, a bedrock of the modern economy, relies on two essential ingredients: standardized algorithms and reliable sources of randomness – the stronger the better. Non-deterministic physical processes, such as those governed by quantum mechanics, are ideal sources of randomness, offering near-total unpredictability and therefore the highest cryptographic protection. Google, when it originally announced it had achieved quantum supremacy, speculated on the possibility of using the random circuit sampling (RCS) protocol for the commercial production of certifiable random numbers. RCS has been used ever since to demonstrate the performance of quantum computers, including a milestone achievement in June 2024 by Quantinuum and JPMorganChase, which demonstrated the first quantum computer to defy classical simulation. More recently, RCS was used again by Google for the launch of its Willow processor.
In today’s announcement, our joint team used the world’s highest-performing quantum and classical computers to generate certified randomness via RCS. The work was based on advanced research by Shih-Han Hung and Scott Aaronson of the University of Texas at Austin, who are co-authors on today’s paper.
Following a string of major advances in 2024 – solving the scaling challenge, breaking new records for reliability in partnership with Microsoft, and unveiling a hardware roadmap – today’s result proves that quantum technology is capable of creating tangible business value beyond what is available with classical supercomputers alone.
What follows is intended as a non-technical explainer of the results in today’s Nature paper.
For security-sensitive applications, classical random number generation is unsuitable because it is not fundamentally random, and there is a risk it can be “cracked”. The holy grail is randomness whose source is truly unpredictable, and nature provides just the solution: quantum mechanics. Randomness is built into the bones of quantum mechanics, where determinism is thrown out the door and outcomes can be true coin flips.
At Quantinuum, we have a strong track record in developing methods for generating certifiable randomness using a quantum computer. In 2021, we introduced Quantum Origin to the market: a quantum-generated source of entropy aimed at hardening classically generated encryption keys, built on well-known quantum technologies that it had not previously been possible to use.
In their theory paper, “Certified Randomness from Quantum Supremacy”, Hung and Aaronson ask the question: is it possible to repurpose RCS and use it to build an application that moves beyond earlier quantum randomness technologies and takes advantage of the full power of a quantum computer running quantum circuits?
This proposal inspired the collaboration, led by JPMorganChase and Quantinuum, to draw up plans for executing it with real-world technology.
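In outline, as we understand the Hung-Aaronson proposal: a client sends randomly chosen challenge circuits to the quantum computer, which must return samples faster than any classical spoofer could produce convincing ones; the client then spot-checks the samples with the linear cross-entropy benchmark (XEB) on a classical supercomputer before feeding them into a randomness extractor. The following is a toy NumPy sketch of the XEB check only, not the paper’s actual pipeline; the “ideal” distribution here is a stand-in for classically simulating a challenge circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                 # number of qubits; 2**n possible bitstrings
dim = 2 ** n

# Stand-in for the ideal output distribution of one challenge circuit.
# Real RCS circuits yield Porter-Thomas-like probabilities, which we
# mimic with exponentially distributed weights.
p_ideal = rng.exponential(size=dim)
p_ideal /= p_ideal.sum()

def xeb(samples):
    """Linear cross-entropy benchmark score: near 1 for a faithful
    quantum device, near 0 for a spoofer emitting uniform bitstrings."""
    return dim * p_ideal[samples].mean() - 1.0

honest = rng.choice(dim, size=50_000, p=p_ideal)   # device samples
spoofed = rng.integers(dim, size=50_000)           # uniform guesses
```

A high XEB score, returned within the strict time limit, is what certifies that the samples carry genuine quantum randomness rather than classically precomputed output.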
The experiment confirmed that Quantinuum’s quantum computer cannot be matched by classical computers, and that it can be used reliably to produce a certifiably random seed without the user needing to build their own device, or even trust the device they are accessing.
The use of randomness in critical cybersecurity environments will gravitate toward quantum resources as the security demands of end users grow in the face of ongoing cyber threats.
The era of quantum utility offers the promise of radical new approaches to solving substantial and hard problems for businesses and governments.
Quantinuum’s H2 has now demonstrated practical value for cybersecurity vendors and customers alike, where deterministic sources of encryption randomness may in time be overtaken by nature’s own source of randomness.
In 2025, we will launch our Helios device, capable of supporting at least 50 high-fidelity logical qubits, further extending our lead in the quantum computing sector. We thus continue our track record of disclosing our objectives and then meeting or surpassing them. This commitment is essential: it builds faith and conviction among our partners and collaborators that empirical results such as those reported today can lead to successful commercial applications.
Helios, which is already in its late testing phase, ahead of being commercially available later this year, brings higher fidelity, greater scale, and greater reliability. It promises to bring a wider set of hybrid quantum-supercomputing opportunities to our customers – making quantum computing more valuable and more accessible than ever before.
And in 2025 we look forward to adding yet another product, building out our cybersecurity portfolio with a quantum source of certifiably random seeds for a wide range of customers who require this foundational element to protect their businesses and organizations.