The marriage of AI and quantum computing will have a widespread and meaningful impact on many aspects of our lives, combining the strengths of both fields to tackle complex problems.
Quantum and AI are the ideal partners. At Quantinuum, we are developing tools to accelerate AI with quantum computers, and quantum computers with AI. According to recent independent analysis, our quantum computers are the world’s most powerful, enabling state-of-the-art approaches like Generative Quantum AI (Gen QAI), where we train classical AI models with data generated from a quantum computer.
We harness AI methods to accelerate the development and performance of our full quantum computing stack as opposed to simply theorizing from the sidelines. A paper in Nature Machine Intelligence reveals the results of a recent collaboration between Quantinuum and Google DeepMind to tackle the hard problem of quantum compilation.
The work shows a classical AI model supporting quantum computing by demonstrating its potential for quantum circuit optimization. An AI approach like this could lead to more effective control at the hardware level; to a richer suite of middleware tools for quantum circuit compilation, error mitigation, and error correction; and even to novel high-level quantum software primitives and quantum algorithms.
The joint Quantinuum-Google DeepMind team of researchers tackled one of quantum computing’s most pressing challenges: minimizing the number of highly expensive but essential T-gates required for universal quantum computation. This is important specifically in the fault-tolerant regime, which is becoming increasingly relevant as quantum error correction protocols are explored on rapidly developing quantum hardware. The team adapted AlphaTensor, Google DeepMind’s reinforcement learning AI system for algorithm discovery, originally introduced to find more efficient matrix multiplication algorithms. The resulting system, AlphaTensor-Quantum, takes a quantum circuit as input and returns a new, more efficient circuit with a lower T-gate count and exactly the same functionality!
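As a minimal illustration of the optimization target (emphatically not the AlphaTensor-Quantum method, and using a made-up circuit representation of (gate, qubit) pairs), the sketch below counts T-gates and applies one trivial functionality-preserving rewrite: two adjacent T-gates on the same qubit merge into a single S-gate, since T² = S.

```python
# Toy illustration of T-count reduction (NOT the AlphaTensor-Quantum method).
# A circuit is a list of (gate_name, qubits) pairs -- a made-up format for this sketch.

def t_count(circuit):
    """Number of T gates: the dominant cost in fault-tolerant execution."""
    return sum(1 for gate, _ in circuit if gate == "T")

def merge_adjacent_t(circuit):
    """One peephole rewrite: T immediately followed by T on the same qubit -> S."""
    optimized = []
    for gate, qubit in circuit:
        if gate == "T" and optimized and optimized[-1] == ("T", qubit):
            optimized[-1] = ("S", qubit)  # T . T = S, functionality unchanged
        else:
            optimized.append((gate, qubit))
    return optimized

circuit = [("H", 0), ("T", 0), ("T", 0), ("T", 1), ("CX", (0, 1)), ("T", 1)]
print("T-count before:", t_count(circuit))   # 4
circuit = merge_adjacent_t(circuit)
print("T-count after: ", t_count(circuit))   # 2
```

AlphaTensor-Quantum searches a vastly larger space of far less obvious rewrites via reinforcement learning, but the quantity it minimizes is exactly the one counted here.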
AlphaTensor-Quantum outperformed the current state-of-the-art optimization methods and matched the best human-designed solutions across a thoroughly curated set of circuits, chosen for their prevalence in applications ranging from quantum arithmetic to quantum chemistry. This breakthrough shows the potential for AI to automate the search for the most efficient quantum circuit. It is the first time such an AI model has been applied to the problem of T-count reduction at this scale.
The symbiotic relationship between quantum and AI works both ways. When AI and quantum computing work together, quantum computers could dramatically accelerate machine learning algorithms, whether by the development and application of natively quantum algorithms, or by offering quantum-generated training data that can be used to train a classical AI model.
Our recent announcement about Generative Quantum AI (Gen QAI) spells out our commitment to unlocking the value of the data generated by our H2 quantum computer. This value arises from the world-leading fidelity and computational power of our System Model H2, which make it impossible to simulate exactly on any classical computer; the data it generates – data we can use to train AI – is therefore inaccessible by any other means. Quantinuum’s Chief Scientist for Algorithms and Innovation, Prof. Harry Buhrman, has likened access to the first truly quantum-generated training data to the invention of the modern microscope in the seventeenth century, which revealed an entirely new world of tiny organisms thriving unseen within a single drop of water.
Recently, we announced a wide-ranging partnership with NVIDIA. It charts a course to commercial-scale applications arising from the combination of high-performance classical computers, powerful AI systems, and quantum computers that push beyond the boundaries of what was previously possible. Our President & CEO, Dr. Raj Hazra, spoke to CNBC recently about our partnership. Watch the video here.
As we prepare for the next stage of quantum processor development, with the launch of our Helios system in 2025, we’re excited to see how AI can help write more efficient code for quantum computers – and how our quantum processors, the most powerful in the world, can provide a backend for AI computations.
As in any truly symbiotic relationship, the addition of AI to quantum computing equally benefits both sides of the equation.
To read more about Quantinuum and Google DeepMind’s collaboration, please read the scientific paper here.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
The most common question in the public discourse around quantum computers has been, “When will they be useful?” We have an answer.
Very recently in Nature we announced a successful demonstration of a quantum computer generating certifiable randomness, a critical underpinning of our modern digital infrastructure. We explained how we will be taking a product to market this year, based on that advance – one that could only be achieved because we have the world’s most powerful quantum computer.
Today, we have made another huge leap in a different domain, providing fresh evidence that our quantum computers are the best in the world. In this case, we have shown that our quantum computers can be a useful tool for advancing scientific discovery.
Our latest paper shows how our quantum computer rivals the best classical approaches in expanding our understanding of magnetism. This provides an entry point that could lead directly to innovations in fields from biochemistry, to defense, to new materials. These are tangible and meaningful advances that will deliver real world impact.
To achieve this, we partnered with researchers from Caltech, Fermioniq, EPFL, and the Technical University of Munich. The team used Quantinuum’s System Model H2 to simulate quantum magnetism at a scale and level of accuracy that pushes the boundaries of what we know to be possible.
As the authors of the paper state:
“We believe the quantum data provided by System Model H2 should be regarded as complementary to classical numerical methods, and is arguably the most convincing standard to which they should be compared.”
Our computer simulated the quantum Ising model, a model for quantum magnetism that describes a set of magnets (physicists call them ‘spins’) on a lattice that can point up or down, and prefer to point the same way as their neighbors. The model is inherently “quantum” because the spins can move between up and down configurations by a process known as “quantum tunneling”.
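For readers who like to see the mathematics, the standard transverse-field Ising Hamiltonian makes this competition explicit (a textbook form; the paper’s specific couplings and lattice geometry may differ):

```latex
H \;=\; -J \sum_{\langle i,j \rangle} Z_i Z_j \;-\; h \sum_i X_i
```

The first sum rewards neighboring spins for aligning (for J > 0), while the transverse field h in the second sum flips spins between up and down – the quantum tunneling term that makes the model genuinely quantum.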
Researchers have struggled to simulate the dynamics of the Ising model at larger scales due to the enormous computational cost of doing so. Nobel laureate physicist Richard Feynman, who is widely considered to be the progenitor of quantum computing, once said, “it is impossible to represent the results of quantum mechanics with a classical universal device.” When attempting to simulate quantum systems at comparable scales on classical computers, the computational demands quickly become overwhelming: describing the state of N interacting spins requires tracking up to 2^N complex amplitudes, so the memory required doubles with every spin added. It is the inherent ‘quantumness’ of these problems that makes them so hard classically, and conversely, so well-suited for quantum computing.
These inherently quantum problems also lie at the heart of many complex and useful material properties. The quantum Ising model is an entry point to confront some of the deepest mysteries in the study of interacting quantum magnets. While rooted in fundamental physics, its relevance extends to wide-ranging commercial and defense applications, including medical test equipment, quantum sensors, and the study of exotic states of matter like superconductivity.
Instead of tailored demonstrations that claim ‘quantum advantage’ in contrived scenarios, our breakthroughs announced this week prove that we can tackle complex, meaningful scientific questions that are difficult for classical methods to address. In the work described in this paper, we have shown that quantum computing can serve as the gold standard for materials simulations. These developments are critical steps toward realizing the potential of quantum computers.
With only 56 qubits in our commercially available System Model H2, the most powerful quantum system in the world today, we are already testing the limits of classical methods, and in some cases, exceeding them. Later this year, we will introduce our massively more powerful 96-qubit Helios system – breaching the boundaries of what until recently was deemed possible.
Few things are more important to the smooth functioning of our digital economies than trustworthy security. From finance to healthcare, from government to defense, quantum computers provide a means of building trust in a secure future.
Quantinuum and its partners JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas used quantum computers to solve a known industry challenge: generating the “random seeds” that are essential for the cryptography behind all types of secure communication. As our partner and collaborator JPMorganChase explains in this blog post, true randomness is a scarce and valuable commodity.
This year, Quantinuum will introduce a new product based on this development – one that has long been anticipated, but until now was thought to be some years away from reality.
It represents a major milestone for quantum computing that will reshape commercial technology and cybersecurity: solving a critical industry challenge by successfully generating certifiable randomness.
Building on the extraordinary computational capabilities of Quantinuum’s H2 System – the highest-performing quantum computer in the world – our team has implemented a groundbreaking approach that is ready-made for industrial adoption. Nature today reported the results of a proof of concept with JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas alongside Quantinuum. It lays out a new quantum path to enhanced security that can provide early benefits for applications in cryptography, fairness, and privacy.
By harnessing the powerful properties of quantum mechanics, we’ve shown how to generate the truly random seeds critical to secure electronic communication, establishing a practical use-case that was unattainable before the fidelity and scalability of the H2 quantum computer made it reliable. So reliable, in fact, that it is now possible to turn this into a commercial product.
Quantinuum will integrate quantum-generated certifiable randomness into our commercial portfolio later this year. Alongside Generative Quantum AI and our upcoming Helios system – capable of tackling problems a trillion times more computationally complex than H2 – Quantinuum is further cementing its leadership in the rapidly-advancing quantum computing industry.
Cryptographic security, a bedrock of the modern economy, relies on two essential ingredients: standardized algorithms and reliable sources of randomness – the stronger the better. Non-deterministic physical processes, such as those governed by quantum mechanics, are ideal sources of randomness, offering near-total unpredictability and therefore the highest cryptographic protection. Google, when it originally announced it had achieved quantum supremacy, speculated on the possibility of using the random circuit sampling (RCS) protocol for the commercial production of certifiable random numbers. RCS has been used ever since to demonstrate the performance of quantum computers, including a milestone achievement in June 2024 by Quantinuum and JPMorganChase, which demonstrated the first quantum computer to defy classical simulation. More recently, RCS was used again by Google for the launch of its Willow processor.
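In RCS demonstrations like these, performance is typically scored with linear cross-entropy benchmarking (XEB). For an n-qubit circuit with ideal output distribution p(x), a set of k measured bitstrings x_1, …, x_k gives the fidelity estimate (the standard definition; experimental details vary):

```latex
F_{\mathrm{XEB}} \;=\; \frac{2^n}{k} \sum_{i=1}^{k} p(x_i) \;-\; 1
```

A perfect quantum device scores close to 1, while uniformly random guessing scores 0, which is what makes XEB a useful yardstick for how “quantum” the samples really are.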
In today’s announcement, our joint team used the world’s highest-performing quantum and classical computers to generate certified randomness via RCS. The work was based on advanced research by Shih-Han Hung and Scott Aaronson of the University of Texas at Austin, who are co-authors on today’s paper.
Following a string of major advances in 2024 – solving the scaling challenge, breaking new records for reliability in partnership with Microsoft, and unveiling a hardware roadmap – today’s announcement proves that quantum technology can create tangible business value beyond what is available with classical supercomputers alone.
What follows is intended as a non-technical explainer of the results in today’s Nature paper.
For security-sensitive applications, classical random number generation is unsuitable: it is not fundamentally random, and there is a risk it can be “cracked”. The holy grail is randomness whose source is truly unpredictable, and nature provides just the solution: quantum mechanics. Randomness is built into the bones of quantum mechanics, where determinism is thrown out the door and outcomes can be true coin flips.
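A minimal sketch of such a “true coin flip”, simulated here with numpy purely for illustration: prepare |0⟩, apply a Hadamard gate, and measure. The Born rule assigns each outcome probability exactly 1/2, and on real quantum hardware (unlike in this classical simulation, which uses a pseudorandom generator) the unpredictability is physical.

```python
import numpy as np

# Simulate the simplest quantum random bit: H|0> = (|0> + |1>)/sqrt(2),
# then measure. Each outcome has Born-rule probability |amplitude|^2 = 1/2.

ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = H @ ket0              # equal superposition of |0> and |1>
probs = np.abs(state) ** 2    # Born rule: [0.5, 0.5]

rng = np.random.default_rng() # classical stand-in for physical measurement
bits = rng.choice([0, 1], size=10, p=probs)
print(probs, bits)
```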
At Quantinuum, we have a strong track record in developing methods for generating certifiable randomness using quantum technologies. In 2021, we introduced Quantum Origin to the market: a quantum-generated source of entropy targeted at hardening classically-generated encryption keys, using well-known quantum technologies in a way that had not previously been possible.
In their theory paper, “Certified Randomness from Quantum Supremacy”, Hung and Aaronson ask the question: is it possible to repurpose RCS and use it to build an application that moves beyond existing quantum technologies and takes advantage of the full power of a quantum computer running quantum circuits?
This was the inspiration for the collaboration team, led by JPMorganChase and Quantinuum, to draw up plans to execute the proposal using real-world technology. Here’s how it worked:
1. A classical client generated random challenge circuits and sent them over the internet to our H2 quantum computer.
2. H2 returned measurement samples within a time window too short for even the best classical simulation to produce them.
3. The samples were verified on classical supercomputers at the partner national laboratories, confirming they could only have come from a genuine quantum computer.
4. The verified samples were passed through a classical randomness extractor to distill certifiably random bits.
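In code, the shape of the protocol looks roughly like the sketch below. This is a simplified mock-up rather than the experiment’s software: `sample_on_qpu`, `xeb_score`, `extract`, and the time and score thresholds are stand-ins for the real remote device, the supercomputer verification, and the extractor described in the paper.

```python
import time
import hashlib
import numpy as np

# Schematic mock-up of the certified-randomness protocol shape (NOT the real
# experiment's code). The client trusts only its own challenges, the clock,
# and the classical verification -- never the remote quantum device.

def sample_on_qpu(circuit_seed: int) -> int:
    """Stand-in for the untrusted remote quantum computer returning a sample."""
    return int(np.random.default_rng(circuit_seed).integers(0, 2**16))

def xeb_score(samples) -> float:
    """Stand-in for cross-entropy verification on a classical supercomputer."""
    return 1.0  # the real check recomputes ideal probabilities per sample

def extract(samples, seed: bytes) -> bytes:
    """Stand-in for a seeded randomness extractor distilling near-uniform bits."""
    h = hashlib.sha256(seed)
    for s in samples:
        h.update(s.to_bytes(4, "little"))
    return h.digest()

rng = np.random.default_rng()
samples = []
for _ in range(64):
    challenge = int(rng.integers(0, 2**32))   # 1. fresh random challenge circuit
    t0 = time.monotonic()
    samples.append(sample_on_qpu(challenge))  # 2. untrusted device responds...
    assert time.monotonic() - t0 < 2.0        #    ...faster than classical simulation could

if xeb_score(samples) > 0.3:                  # 3. verify samples are genuinely quantum
    print(extract(samples, b"public-seed").hex())  # 4. distill certified random bits
```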
This confirmed that Quantinuum’s quantum computer not only cannot be matched by classical computers, but can also be used reliably to produce a certifiably random seed – without the need to build your own device, or even to trust the device you are accessing.
The use of randomness in critical cybersecurity environments will gravitate towards quantum resources as the security demands of end users grow in the face of ongoing cyber threats.
The era of quantum utility offers the promise of radical new approaches to solving substantial and hard problems for businesses and governments.
Quantinuum’s H2 has now demonstrated practical value for cybersecurity vendors and customers alike, where deterministic sources of randomness used in encryption may in time be overtaken by nature’s own, truly non-deterministic source.
In 2025, we will launch our Helios device, capable of supporting at least 50 high-fidelity logical qubits – further extending our lead in the quantum computing sector. We thus continue our track record of disclosing our objectives and then meeting or surpassing them. This commitment is essential, as it builds faith and conviction among our partners and collaborators that empirical results such as those reported today can lead to successful commercial applications.
Helios, which is already in its late testing phase ahead of commercial availability later this year, brings higher fidelity, greater scale, and greater reliability. It promises to bring a wider set of hybrid quantum-supercomputing opportunities to our customers – making quantum computing more valuable and more accessible than ever before.
And in 2025 we look forward to adding yet another product, building out our cybersecurity portfolio with a quantum source of certifiably random seeds for a wide range of customers who require this foundational element to protect their businesses and organizations.
One of the greatest privileges of working directly with the world’s most powerful quantum computer at Quantinuum is building meaningful experiments that convert theory into practice. The privilege becomes even more compelling considering that our current quantum processor – our H2 system – will soon be joined by Helios, a quantum computer potentially a stunning trillion times more powerful, due for launch in just a few months. The moment has now arrived when we can build an experimentally supported timeline for applications that quantum computing professionals have anticipated for decades.
Quantinuum’s applied algorithms team has released an end-to-end implementation of a quantum algorithm to solve a central problem in knot theory. Along with an efficiently verifiable benchmark for quantum processors, it allows for concrete resource estimates for quantum advantage in the near term. The research team included Quantinuum researchers Enrico Rinaldi, Chris Self, Eli Chertkov, Matthew DeCross, David Hayes, Brian Neyenhuis, and Marcello Benedetti, as well as Tuomas Laakkonen of the Massachusetts Institute of Technology. In this article, Konstantinos Meichanetzidis, a team leader in Quantinuum’s AI group who led the project, writes about the problem being addressed and how the team, adopting an aggressively practical mindset, quantified the resources required for quantum advantage:
Knot theory belongs to a field of mathematics called ‘low-dimensional topology’, with a rich history stemming from a wild idea proposed by Lord Kelvin, who conjectured that chemical elements are different knots formed by vortices in the aether. Of course, we know today that the aether theory was falsified by the Michelson-Morley experiment, but mathematicians have been classifying, tabulating, and studying knots ever since. Regarding applications, the pure mathematics of knots finds its way into cryptography, but knot theory is also intrinsically related to many aspects of the natural sciences. For example, it naturally shows up in certain spin models in statistical mechanics when one studies thermodynamic quantities, and the magnetohydrodynamical properties of knotted magnetic fields on the surface of the sun are an important indicator of solar activity, to name a few examples. Remarkably, physical properties of knots are important in understanding the stability of macromolecular structures. This is highlighted by the work of Cozzarelli and Sumners in the 1980s on the topology of DNA, particularly how it forms knots and supercoils. Their interdisciplinary research helped explain how enzymes untangle and manage DNA topology, which is crucial for replication and transcription, laying the foundation for using mathematical models to predict and manipulate DNA behavior, with broad implications in drug development and synthetic biology. Serendipitously, this work was carried out during the same decade in which Richard Feynman, David Deutsch, and Yuri Manin formed the first ideas for a quantum computer.
Most importantly for our context, knot theory has fundamental connections to quantum computation, originally outlined by Witten’s work in topological quantum field theory, which concerns spacetimes without any notion of distance, only shape. In fact, this connection formed the very motivation for attempting to build topological quantum computers, in which anyons – exotic quasiparticles that live in two-dimensional materials – are braided to perform quantum gates. The relation between knot theory and quantum physics is one of the most beautiful and bizarre facts you have never heard of.
The fundamental problem in knot theory is distinguishing knots, or more generally, links. To this end, mathematicians have defined link invariants, which serve as ‘fingerprints’ of a link. As there are many equivalent representations of the same link, an invariant, by definition, is the same for all of them. If the invariant is different for two links then they are not equivalent. The specific invariant our team focused on is the Jones polynomial.
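For readers who want the flavor: the Jones polynomial V_L(t) can be defined through a skein relation connecting three links that differ at a single crossing (a standard definition, independent of this work):

```latex
t^{-1}\, V_{L_+}(t) \;-\; t\, V_{L_-}(t) \;=\; \left(t^{1/2} - t^{-1/2}\right) V_{L_0}(t),
\qquad V_{\text{unknot}}(t) = 1
```

The unknot takes the value 1, and any two links whose Jones polynomials differ can never be deformed into one another.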
The mind-blowing fact here is that any quantum computation corresponds to evaluating the Jones polynomial of some link, as shown by the works of Freedman, Larsen, Kitaev, Wang, Shor, Arad, and Aharonov. This reveals that the abstract mathematical problem is truly quantum native. In particular, the problem our team tackled was estimating the value of the Jones polynomial at the 5th root of unity. This is a well-studied case due to its relation to the famous Fibonacci anyons, whose braiding is capable of universal quantum computation.
Building and improving on the work of Shor, Aharonov, Landau, Jones, and Kauffman, our team developed an efficient quantum algorithm that works end-to-end. That is, given a link, it outputs a highly optimized quantum circuit that is readily executable on our processors and estimates the desired quantity. Furthermore, our team designed problem-tailored error detection and error mitigation strategies to achieve higher accuracy.
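Algorithms in this lineage reduce the estimation to a Hadamard test: the real part of ⟨ψ|U|ψ⟩, where U is a unitary built from the braid, is read off from measurements of a single ancilla qubit. Below is a minimal numpy sketch of that estimator, with a toy single-qubit U standing in for the much larger braid unitary; the circuit details of the actual algorithm are in the paper.

```python
import numpy as np

# Hadamard test: estimate Re<psi|U|psi> from single-ancilla measurements.
# The circuit is: ancilla in |+>, controlled-U, Hadamard on ancilla, measure.
# It satisfies P(ancilla = 0) = (1 + Re<psi|U|psi>) / 2.

rng = np.random.default_rng(7)

theta = 0.7
U = np.array([[np.exp(-1j * theta), 0], [0, np.exp(1j * theta)]])  # toy unitary
psi = np.array([1.0, 0.0])                                         # target state |0>

overlap = np.vdot(psi, U @ psi)        # exact value, for comparison
p0 = (1 + overlap.real) / 2            # ancilla outcome probability

shots = 100_000
outcomes = rng.random(shots) < p0      # True = ancilla measured 0
estimate = 2 * outcomes.mean() - 1     # invert P(0) = (1 + Re)/2

print("exact  Re<psi|U|psi> =", overlap.real)
print("sample estimate      =", estimate)
```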
In addition to providing a full pipeline for solving this problem, a major aspect of this work was to use the fact that the Jones polynomial is an invariant to introduce a benchmark for noisy quantum computers. Most importantly, this benchmark is efficiently verifiable, a rare property since for most applications, exponentially costly classical computations are necessary for verification. Given a link whose Jones polynomial is known, the benchmark constructs a large set of topologically equivalent links of varying sizes. In turn, these result in a set of circuits of varying numbers of qubits and gates, all of which should return the same answer. Thus, one can characterize the effect of noise present in a given quantum computer by quantifying the deviation of its output from the known result.
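To make the idea concrete, here is a toy model of such a benchmark, using synthetic data rather than anything from the paper: circuits of growing gate count should all return the same known value, and an effective per-gate error rate can be read off from how the measured estimates decay away from it.

```python
import numpy as np

# Toy version of an invariant-based benchmark (synthetic data, NOT the paper's):
# topologically equivalent links -> circuits of growing gate count G that should
# all return the same known value v_true. Noise damps the signal roughly like
# (1 - p)^G; fitting the decay yields an effective per-gate error rate p.

rng = np.random.default_rng(0)
v_true = 0.76                    # known invariant value for the reference link
p_true = 2e-3                    # hidden per-gate error rate of the toy device
gate_counts = np.array([50, 100, 200, 400, 800])

# Synthetic noisy estimates: damped signal plus shot noise.
estimates = v_true * (1 - p_true) ** gate_counts \
    + rng.normal(0, 0.01, gate_counts.size)

# Fit log(estimate / v_true) = G * log(1 - p) to recover p.
slope = np.polyfit(gate_counts,
                   np.log(np.clip(estimates, 1e-6, None) / v_true), 1)[0]
p_fit = 1 - np.exp(slope)

print(f"recovered per-gate error rate: {p_fit:.2e} (true: {p_true:.2e})")
```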
The benchmark introduced in this work allows one to identify the link sizes for which there is exponential quantum advantage in terms of time to solution against the state-of-the-art classical methods. These resource estimates indicate our next processor, Helios, with 96 qubits and at least 99.95% two-qubit gate-fidelity, is extremely close to meeting these requirements. Furthermore, Quantinuum’s hardware roadmap includes even more powerful machines that will come online by the end of the decade. Notably, an advantage in energy consumption emerges for even smaller link sizes. Meanwhile, our teams aim to continue reducing errors through improvements in both hardware and software, thereby moving deeper into quantum advantage territory.
The importance of this work – indeed, its uniqueness in the quantum computing sector – is its practical end-to-end approach. The advantage-hunting strategies introduced here are transferable to other “quantum-easy, classically-hard” problems. Our team’s efforts motivate shifting the focus toward specific problem instances rather than broad problem classes, promoting an engineering-oriented approach to identifying quantum advantage. This involves first carefully considering how quantum advantage should be defined and quantified, thereby setting a high standard for quantum advantage in scientific and mathematical domains and instilling confidence in our customers and partners.