When thinking about changes in phases of matter, the first images that come to mind might be ice melting or water boiling. The critical point in these processes is located at the boundary between the two phases – the transition from solid to liquid or from liquid to gas.
Phase transitions like these get right to the heart of how large material systems behave and are at the frontier of research in condensed matter physics for their ability to provide insights into emergent phenomena like magnetism and topological order. In classical systems, phase transitions are generally driven by thermal fluctuations and occur at finite temperature. Quantum systems, by contrast, can exhibit phase transitions even at zero temperature; the residual fluctuations that drive such transitions are due to entanglement and are entirely quantum in origin.
Quantinuum researchers recently used the H1-1 quantum computer to computationally model a group of highly correlated quantum particles at a quantum critical point — on the border of a transition between a paramagnetic state (a state of magnetism characterized by a weak attraction) and a ferromagnetic one (characterized by a strong attraction).
Simulating such a transition on a classical computer is possible using tensor network methods, though it is difficult. However, generalizations of such physics to more complicated systems can pose serious problems to classical tensor network techniques, even when deployed on the most powerful supercomputers. On a quantum computer, on the other hand, such generalizations will likely only require modest increases in the number and quality of available qubits.
In a technical paper submitted to the arXiv, Probing critical states of matter on a digital quantum computer, the Quantinuum team demonstrated how the powerful components and high fidelity of the H-Series digital quantum computers could be harnessed to tackle a 128-site condensed matter physics problem, combining a quantum tensor network method with qubit reuse to make highly productive use of the 20-qubit H1-1 quantum computer.
Reza Haghshenas, Senior Advanced Physicist and the lead author of the paper, said, “This is the kind of problem that appeals to condensed-matter physicists working with quantum computers, who are looking forward to revealing exotic aspects of strongly correlated systems that are still unknown to the classical realm. Digital quantum computers have the potential to become a versatile tool for working scientists, particularly in fields like condensed matter and particle physics, and may open entirely new directions in fundamental research.”
Tensor networks are mathematical frameworks whose structure enables them to represent and manipulate quantum states in an efficient manner. Originally associated with the mathematics of quantum mechanics, tensor network methods now crop up in many places, from machine learning to natural language processing, or indeed any model with a large number of interacting, high-dimensional mathematical objects.
The Quantinuum team described using a tensor network method, the multi-scale entanglement renormalization ansatz (MERA), to produce accurate estimates for the decay of ferromagnetic correlations and the ground-state energy of the system. MERA is particularly well suited to studying scale-invariant quantum states, such as ground states at continuous quantum phase transitions, because each layer in the mathematical model captures entanglement at a different length scale.
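For readers who want a concrete feel for the building blocks, here is a minimal numerical sketch of the isometries at the heart of MERA (in Python with NumPy; this is an illustration of the general idea, not the circuit used in the paper, and the dimensions `d` and `chi` are arbitrary small choices). Each coarse-graining step maps two sites to one effective site, and because the map is an isometry, states embedded from the coarser layer keep their norm exactly:

```python
import numpy as np

# Minimal sketch of one MERA coarse-graining step: an isometry W maps a
# chi-dimensional "effective site" into two physical sites (dimension d*d)
# with W† W = I. A full MERA also interleaves "disentangler" unitaries
# between layers, which this sketch omits.

rng = np.random.default_rng(0)
d, chi = 2, 2  # physical and bond dimensions (illustrative choices)

# Build an isometry via QR decomposition: the columns of W are orthonormal.
A = rng.normal(size=(d * d, chi))
W, _ = np.linalg.qr(A)

# A random normalized state on one effective (coarse) site...
phi = rng.normal(size=chi)
phi /= np.linalg.norm(phi)

# ...embedded into the two-site (fine) layer. The isometry preserves norm.
phi_fine = W @ phi
print(np.linalg.norm(phi_fine))  # 1.0 up to floating-point error
```

Stacking such layers is what lets MERA capture entanglement scale by scale, which is why it suits critical (scale-invariant) states so well.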
“By calculating the critical state properties with MERA on a digital quantum computer like the H-Series, we have shown that research teams can program the connectivity and system interactions into the problem,” said Dave Hayes, Lead of the U.S. quantum theory team at Quantinuum and one of the paper’s authors. “So, it can, in principle, go out and simulate any system that you can dream of.”
In this experiment, the researchers wanted to accurately calculate the ground state of the quantum system in its critical state. This quantum system is composed of many tiny quantum magnets interacting with one another and pointing in different directions, known as a quantum spin model. In the paramagnetic phase, tiny, individual magnets in the material are randomly oriented, and only correlated with each other over small length-scales. In the ferromagnetic phase, these individual atomic magnetic moments align spontaneously over macroscopic length scales due to strong magnetic interactions.
In the computational model, the quantum magnets were initially arranged in one dimension, along a line. To describe the critical point in this quantum magnetism problem, particles along the line needed to be entangled with one another in a complex way, making this a very challenging problem for a classical computer; the difficulty grows further still for higher-dimensional and non-equilibrium systems.
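The canonical example of such a model is the one-dimensional transverse-field Ising chain, which sits at exactly this kind of paramagnet-to-ferromagnet critical point when the transverse field strength equals the coupling. As a rough illustration (brute-force exact diagonalization of a small chain, not the MERA approach the team used, and feasible only because the chain here is tiny), one can check the result against the known critical energy density of -4/π per site:

```python
import numpy as np
from functools import reduce

# Transverse-field Ising chain with periodic boundaries:
#   H = -sum_i Z_i Z_{i+1} - g * sum_i X_i
# The quantum phase transition between ferromagnet and paramagnet is at g = 1.

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op_at(op, site, n):
    """Embed a single-site operator at position `site` in an n-site chain."""
    mats = [op if i == site else I2 for i in range(n)]
    return reduce(np.kron, mats)

def tfim_hamiltonian(n, g):
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        H -= op_at(Z, i, n) @ op_at(Z, (i + 1) % n, n)  # ZZ coupling
        H -= g * op_at(X, i, n)                          # transverse field
    return H

n, g = 10, 1.0  # a 10-site chain at the critical point
e0 = np.linalg.eigvalsh(tfim_hamiltonian(n, g))[0] / n
# Thermodynamic-limit value at criticality is -4/pi ≈ -1.2732 per site.
print(e0)
```

The exponential cost of this brute-force approach (the matrix here is already 1024 × 1024 for 10 sites) is exactly why tensor network methods, and ultimately quantum computers, are needed for larger systems.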
“That's as hard as it gets for these systems,” Dave explained. “So that's where we want to look for quantum advantage – because we want the problem to be as hard as possible on the classical computer, and then have a quantum computer solve it.”
To improve the results, the team used two error mitigation techniques: symmetry-based error heralding, which is made possible by the MERA structure, and zero-noise extrapolation, a method originally developed by researchers at IBM. The first involved enforcing a local symmetry in the model so that errors breaking the symmetry of the state could be detected. The second, zero-noise extrapolation, involves deliberately adding noise to the qubits to measure its impact, then extrapolating the results back to the zero-noise limit.
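Zero-noise extrapolation is simple to sketch. In the toy Python model below (illustrative only: the hardware experiment amplifies real circuit noise, whereas here a synthetic linear noise model stands in for the device), the expectation value is measured at several amplified noise scales and a fit is extrapolated back to scale zero:

```python
import numpy as np

# Toy zero-noise extrapolation: measure an observable at amplified noise
# levels, fit the trend, and evaluate the fit at zero noise.

def noisy_expectation(scale, exact=1.0, noise_slope=-0.3):
    """Stand-in for a hardware measurement whose value degrades with noise."""
    return exact + noise_slope * scale

scales = np.array([1.0, 2.0, 3.0])   # noise amplification factors
values = np.array([noisy_expectation(s) for s in scales])

# Linear fit, evaluated at scale = 0 (the zero-noise limit).
coeffs = np.polyfit(scales, values, deg=1)
zne_estimate = np.polyval(coeffs, 0.0)
print(zne_estimate)  # recovers the exact value, 1.0
```

In practice the extrapolation model (linear, exponential, Richardson) must be chosen to match how the device's noise actually behaves, which is where most of the subtlety lies.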
The Quantinuum team describes this sort of problem as a stepping-stone, which allows the researchers to explore quantum tensor network methods on today’s devices and compare them either to simulations or analytical results produced using classical computers. It is a chance to learn how to tackle a problem really well before quantum computers scale up in the future and begin to offer solutions that are not possible to achieve on classical computers.
“Potentially, our biggest applications over the next couple of years will include studying solid-state systems, physics systems, many-body systems, and modeling them,” said Jenni Strabley, Senior Director of Offering Management at Quantinuum.
The team now looks forward to future work, exploring more complex MERA generalizations to compute the states of 2D and 3D many-body and condensed matter systems on a digital quantum computer – quantum states that are much more difficult to calculate classically.
The H-Series allows researchers to simulate a much broader range of systems than analog devices as well as to incorporate quantum error mitigation strategies, as demonstrated in the experiment. Plus, Quantinuum’s System Model H2 quantum computer, which was launched earlier this year, should scale this type of simulation beyond what is possible using classical computers.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
A team from Quantinuum and the University of Freiburg found that quantum computers outperform classical ones for a workhorse calculation often used at accelerators like the Large Hadron Collider (LHC) at CERN.
Quantinuum’s Ifan Williams worked with the University of Freiburg’s Mathieu Pellen to tackle a pernicious problem in accelerator physics: calculating “cross sections”. Together, they developed a general, scalable approach to calculating cross sections that offers a quadratic speed-up compared to its classical counterpart.
A “cross-section” relates to the probability of a certain interaction happening. Scientists who do experiments in particle accelerators compare real measurements with theoretical cross-section calculations (predictions), using the agreement (or disagreement) to reason about the nature of our universe.
Generally, scientists run Monte Carlo simulations to make their theoretical predictions. Monte Carlo simulations are currently the biggest computational bottleneck in experimental high-energy physics (HEP), costing enormous CPU resources, which will only grow larger as new experiments come online.
It’s hard to put a specific number on exactly how costly calculations like this are, but probing fundamental physics at the LHC probably uses roughly 10 billion CPU-hours per year for data treatment, simulations, and theory predictions. Since theory predictions represent approximately 15-25% of this total, putting even a 10% dent in that figure would be a massive change.
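The classical baseline here is ordinary Monte Carlo integration, whose statistical error shrinks as 1/√N in the number of samples; quantum Monte Carlo integration improves this quadratically, to roughly 1/N. A minimal classical sketch (with a toy integrand standing in for a real cross-section integrand) shows the basic procedure:

```python
import numpy as np

# Classical Monte Carlo integration: estimate an integral as the mean of
# the integrand over random samples. The statistical error scales as
# 1/sqrt(N); halving it costs four times the samples, which is the
# bottleneck QMCI's quadratic speed-up attacks.

rng = np.random.default_rng(42)

def mc_integrate(f, n_samples):
    x = rng.uniform(0.0, 1.0, n_samples)
    return f(x).mean()

f = lambda x: 3.0 * x**2          # integral of 3x^2 over [0, 1] is exactly 1
estimate = mc_integrate(f, 100_000)
print(estimate)  # close to 1, up to statistical error of order 1/sqrt(N)
```

At LHC scales the integrands are vastly more complicated, but the unfavorable 1/√N convergence is the same, which is what makes a quadratic improvement so valuable.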
The collaborators used Quantinuum’s Quantum Monte Carlo integration (QMCI) engine to solve the same problem. Their work is the first published general methodology for performing cross-section calculations in HEP using quantum integration.
Importantly, the team’s methodology is potentially extendable to the problem sizes needed for real-world HEP cross-section calculations (currently done classically). Overall, this work establishes a solid foundation for performing such computations on a quantum computer in the future.
The Large Hadron Collider, the world’s biggest particle accelerator, generates a billion collisions each second, far more data than can be computationally analyzed. Planned future experiments are expected to generate even more. Quantum computers are also accelerating. Quantinuum’s latest H2 System became the highest performing commercially available system in the world when it was launched. When it was upgraded in 2024, it became the first quantum computer that cannot be exactly simulated by any classical computer. Our next-generation Helios, on schedule to launch in 2025, will encode at least a trillion times more information than the H2: this is the power of exponential growth.
We can’t wait to see what’s next with quantum computing and high-energy physics.
The Quantinuum team is looking forward to participating in this year’s SCAsia conference from March 10th – 13th in Singapore. Meet our team at Booth B2 to discover how Quantinuum is bridging the gap between quantum computing and high-performance compute with leading industry partners.
Our team will be participating in workshops and presenting at the keynote and plenary sessions to showcase our quantum computing technologies. Join us at the sessions below:
Workshop: Accelerating Quantum Supercomputing: CUDA-Q Tutorial across Multiple Quantum Platforms
Location: Room P10 – Peony Jr 4512 (Level 4)
This workshop will explore the seamless integration of classical and quantum resources for quantum-accelerated supercomputing. Join Kentaro Yamamoto and Enrico Rinaldi, Lead R&D Scientists at Quantinuum, for an introduction to our integrated full stack for quantum computing, Quantum Phase Estimation (QPE) for solving quantum chemistry problems, and a demonstration of a QPE algorithm with CUDA-Q on Quantinuum Systems. If you're interested in access to our quantum computers and emulator for use on the CUDA-Q platform, register here.
Keynote: Quantum Computing: A Transformative Force for Singapore's Regional Economy
Location: Melati Ballroom (Level 4)
Quantum Computing is no longer a distant promise; it has arrived and is poised to revolutionize several economies. Join our President and CEO, Dr. Rajeeb Hazra, to discover how Quantinuum’s approach to Quantum Generative AI is driving breakthroughs in applications which hold significant relevance for Singapore, in fields like chemistry, computational biology, and finance. Additionally, Raj will discuss the challenges and opportunities of adopting quantum solutions from both technical and business perspectives, emphasizing the importance of collaboration to build quantum applications that integrate the best of quantum and AI.
Industry Breakout Track: Transformative value of Quantum and AI: bringing meaningful insights for critical applications today
Location: Room L1 – Lotus Jr (Level 4)
The ability to solve classically intractable problems defines the transformative value of quantum computing, offering new tools to redefine industries and address complex challenges facing humanity. In this session with Dr. Elvira Shishenina, Senior Director of Strategic Initiatives, discover how Quantinuum’s hardware is leading the way in achieving early fault-tolerance, marking a significant step forward in computational capabilities. By integrating quantum technology with AI and high-performance computing, we are building systems designed to address real-world issues with efficiency, precision, and scale. This approach empowers critical applications from hydrogen fuel cells and carbon capture to precision medicine, food security, and cybersecurity, providing meaningful insights at a commercial level today.
Hybrid Quantum Classical Computing Track: Quantifying Quantum Advantage with an End-to-End Quantum Algorithm for the Jones Polynomial
Location: Room O3 – Orchid Jr 4211-2 (Level 4)
Join Konstantinos Meichanetzidis, Head of Scientific Product Development, for this presentation on an end-to-end reconfigurable algorithmic pipeline for solving a famous problem in knot theory using a noisy digital quantum computer. Specifically, the team estimates the value of the Jones polynomial at the fifth root of unity, within additive error, for any input link, i.e. a closed braid. This problem is DQC1-complete for Markov-closed braids and BQP-complete for Plat-closed braids, and the pipeline accommodates both versions. The researchers demonstrate the quantum algorithm on Quantinuum’s H2 quantum computer and show the effect of problem-tailored error-mitigation techniques. Further, leveraging the fact that the Jones polynomial is a link invariant, they construct an efficiently verifiable benchmark to characterize the effect of noise present in a given quantum processor. In parallel, they implement and benchmark the state-of-the-art tensor-network-based classical algorithms. The practical tools provided in this work allow for precise resource estimation to identify near-term quantum advantage for a meaningful quantum-native problem in knot theory.
Industry Plenary: Quantum Heuristics: From Worst Case to Practice
Location: Melati Ballroom (Level 4)
Which problems allow for a quantum speedup, and which do not? This question lies at the heart of quantum information processing. Providing a definitive answer is challenging, as it connects deeply to unresolved questions in complexity theory. To make progress, complexity theory relies on conjectures such as P≠NP and the Strong Exponential Time Hypothesis, which suggest that for many computational problems, we have discovered algorithms that are asymptotically close to optimal in the worst case. In this talk, Professor Harry Buhrman, Chief Scientist for Algorithms and Innovation, will explore the landscape from both theoretical and practical perspectives. On the theoretical side, he will introduce the concept of “queasy instances”: problem instances that are quantum-easy but classically hard (classically queasy). On the practical side, he will discuss how these insights connect to advancements in quantum hardware development and co-design.
*All times in Singapore Standard Time
BY HARRY BUHRMAN
Quantum computing continues to push the boundaries of what is computationally possible. A new study by Marcello Benedetti, Harry Buhrman, and Jordi Weggemans introduces Complement Sampling, a problem that highlights a dramatic separation between quantum and classical sample complexity. This work provides a robust demonstration of quantum advantage in a way that is not only provable but also feasible on near-term quantum devices.
Imagine a universe of N = 2^n elements, from which a subset S of size K is drawn uniformly at random. The challenge is to sample from the complement S̅ without explicitly knowing S, but having access to samples of S. Classically, solving this problem requires roughly K samples, as the best a classical algorithm can do is guess at random after observing only some of the elements of S.
To better understand this, consider a small example. Suppose N = 8, meaning our universe consists of the numbers {0,1,2,3,4,5,6,7}. If a subset S of size K = 4 is drawn at random—say {1,3,5,7}—the goal is to sample from the complement S̅, which consists of {0,2,4,6}. A classical algorithm would need to collect and verify enough samples from S before it could infer what S̅ might be. However, a quantum algorithm can use a single superposition state over S (a quantum sample) to instantly generate a sample from S̅, eliminating the need for iterative searching.
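The classical side of that small example is easy to simulate. In the sketch below (illustrative Python only, not the paper's construction), a classical guesser can only name a member of the complement with certainty once it has effectively observed all of S; with fewer observations its guess can still land inside S:

```python
import random

# Classical side of Complement Sampling: a hidden subset S of size K is
# drawn from a universe of N elements, and the guesser may only use the
# samples of S it has observed so far.

random.seed(1)
N, K = 8, 4
S = set(random.sample(range(N), K))  # the hidden subset

def classical_guess(samples_seen):
    """Guess an element of the complement, given the samples observed so far."""
    candidates = [x for x in range(N) if x not in samples_seen]
    return random.choice(candidates)

# Having observed all K elements of S, the guess is guaranteed correct;
# with fewer observations, some candidates still lie inside S and the
# guesser can fail.
assert classical_guess(S) not in S
```

This is the gap the quantum algorithm closes: a single quantum sample, a superposition over S, suffices where the classical guesser needs roughly K draws.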
Quantum advantage is often discussed in terms of computational speedups, such as those achieved by Shor’s algorithm for factoring large numbers. However, quantum resources provide advantages beyond time efficiency—they also affect how data is accessed, stored, and processed.
Complement Sampling fits into the category of sample complexity problems, where the goal is to minimize the number of samples needed to solve a problem. The authors prove that their quantum approach not only outperforms classical methods but does so in a way that is provable, robust, and feasible on near-term devices.
At its core, the quantum approach to Complement Sampling relies on the ability to perform a perfect swap between a subset S and its complement S̅. The method draws inspiration from a construction by Aaronson, Atia, and Susskind, which links state distinguishability to state swapping. The quantum algorithm uses a single quantum sample of S to produce a sample from its complement.
This is made possible by quantum interference and superposition, allowing a quantum computer to manipulate distributions in ways that classical systems fundamentally cannot.
A crucial aspect of this work is its robustness. The authors prove that even for subsets generated using strong pseudorandom permutations, the problem remains hard for classical algorithms. This means that classical computers cannot efficiently solve Complement Sampling even with structured input distributions—an important consideration for real-world applications.
This robustness suggests potential applications in cryptography, where generating samples from complements could be useful in privacy-preserving protocols and quantum-secure verification methods.
Unlike some quantum advantage demonstrations that are difficult to verify classically (such as the random circuit sampling experiment), Complement Sampling is designed to be verifiable. The authors propose an interactive quantum-versus-classical game.
While the classical player must resort to random guessing, the quantum player can leverage the swap algorithm to succeed with near certainty. Running such an experiment on NISQ hardware could serve as a practical demonstration of quantum advantage in a sample complexity setting.
This research raises exciting new questions.
With its blend of theoretical depth and experimental feasibility, Complement Sampling provides a compelling new frontier for demonstrating the power of quantum computing.
Complement Sampling represents one of the cleanest demonstrations of quantum advantage in a practical, verifiable, and NISQ-friendly setting. By leveraging quantum information processing in ways that classical computers fundamentally cannot, this work strengthens the case for near-term quantum technologies and their impact on computational complexity, cryptography, and beyond.
For those interested in the full details, the paper provides rigorous proofs, circuit designs, and further insights into the nature of quantum sample complexity. As quantum computing continues to evolve, Complement Sampling may serve as a cornerstone for future experimental demonstrations of quantum supremacy.
We have commenced work on the experiment – watch this space!