Researchers at Honeywell Quantum Solutions have turned micromotion, a problematic effect that jostles trapped-ion qubits out of position, into a plus.
The team recently demonstrated a technique that uses micromotion to shield nearby ions from stray photons released during mid-circuit measurement, a procedure in which lasers are used to check the quantum state of certain qubits and then reset them.
Mid-circuit measurement is a key capability in today’s early-stage quantum computers. Because the qubit’s state can be checked and then re-used, researchers can run more complex algorithms – such as the holoQUADS algorithm – with fewer qubits.
By “hiding” ions behind micromotion, Honeywell researchers significantly reduced the amount of “crosstalk” – errors caused by photons hitting neighboring qubits – that occurred when measuring qubits during an operation. (Details are available in a pre-print publication available on the arXiv.)
“We were able to reduce crosstalk by an order of magnitude,” said Dr. John Gaebler, Chief Scientist of Commercial Products at Honeywell Quantum Solutions, and lead author of the paper. “It is a significant reduction in crosstalk errors. Much more so than other methods we’ve used.”
The new technique represents another step toward reducing errors that occur in today’s trapped-ion quantum computers, which is necessary if the technology is to solve problems too complex for classical systems.
“For quantum computers to scale, we need to reduce errors throughout the system,” said Tony Uttley, President of Honeywell Quantum Solutions. “The new technique the Honeywell team developed will help us get there.”
Today’s quantum computing technologies are still at an early stage and are prone to “noise,” or interference, caused by qubits interacting with their environment and one another.
This noise causes errors to accumulate, corrupts information stored in and between physical qubits, and disrupts the quantum state in which qubits must exist to run calculations. (Scientists call this decoherence.)
Researchers are trying to eliminate or suppress as many of these errors as possible while also creating logical qubits, a collection of entangled physical qubits on which quantum information is distributed, stored, and protected.
By creating logical qubits, scientists can apply mathematical codes to detect and correct errors and suppress noise while calculations are running. This multi-step process is known as quantum error correction (QEC). Honeywell researchers recently demonstrated that they can detect and correct errors in real time by applying multiple full cycles of quantum error correction.
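As a toy illustration of the redundancy-plus-syndrome idea (a classical caricature of the simplest bit-flip repetition code, not the code Honeywell runs on its hardware):

```python
import random

# Toy sketch of error correction with the 3-bit repetition code: redundancy
# plus parity checks ("syndromes") lets us detect and undo a single flip.
# A classical caricature of the simplest quantum bit-flip code, not the
# error-correcting code used on Honeywell hardware.
def encode(bit):
    return [bit, bit, bit]

def noisy(codeword, p=0.1):
    return [b ^ (random.random() < p) for b in codeword]

def correct(codeword):
    s1 = codeword[0] ^ codeword[1]      # parity check on bits 0,1
    s2 = codeword[1] ^ codeword[2]      # parity check on bits 1,2
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get((s1, s2))
    if flip is not None:
        codeword[flip] ^= 1             # undo the single flip the syndrome points to
    return codeword

word = noisy(encode(1))
print(word, "->", correct(word))        # the logical value 1 survives any single flip
```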
Logical qubits and QEC are important elements to improving the accuracy and precision of quantum computers. But, Gaebler said, those methods are not enough on their own.
“Everything has to be working at a certain level before QEC can take you the rest of the way,” he said. “The more we can suppress or eliminate errors in the overall system, the more effective QEC will be and the fewer qubits we need to run complex calculations.”
In classical computing, bit flip errors occur when a binary digit, or bit, inadvertently switches from a zero to one or vice versa. Quantum computers experience a similar bit flip error as well as phase flip errors. Both errors cause qubits to lose their quantum state – or to decohere. In trapped ion quantum computing, one source of errors comes from the lasers used to implement gate operations and qubit measurements.
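In the standard single-qubit picture (textbook algebra, independent of any particular hardware), a bit flip is the Pauli X operation and a phase flip is the Pauli Z operation:

```python
import numpy as np

# Textbook single-qubit picture of the two quantum error types,
# not tied to any particular hardware.
X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase flip
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)

print(X @ zero)   # |0> -> |1>: the quantum analogue of a classical bit flip
print(Z @ plus)   # |+> -> |->: a phase flip, which has no classical counterpart
```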
Though these lasers are highly controlled, unruly photons (small packets of light) still escape and bounce into neighboring ions, causing “crosstalk” and decoherence.
Researchers use a variety of methods to protect these ions from crosstalk, especially during mid-circuit measurement, where only a single qubit or a small subset of qubits is meant to be measured. With its quantum charge-coupled device (QCCD) architecture, the Honeywell team takes the approach of moving neighboring ions away from the qubit being fluoresced by a laser. But there is limited space along the device, which becomes even more crowded as more qubits are added.
“Even when we move them more than 100 microns away, we still get more crosstalk than we prefer,” said Dr. Charlie Baldwin, a senior advanced physicist and co-author of the paper. “There is still some scattered light from the detection laser.”
The team hit on the idea of hiding neighboring ions from stray photons using micromotion, which is caused by the oscillating electric fields used to “trap” these charged atoms. Micromotion is typically regarded as a nuisance in ion trapping: it makes the ions rapidly oscillate back and forth, and it occurs when the ions are pushed away from the center of the trap by additional electric fields.
“Usually, we are trying to eliminate micromotion but in this case, we were able to use it to our benefit,” said Dr. Patty Lee, chief scientist at Honeywell Quantum Solutions.
The team’s goal is to reduce the probability of a neighboring ion absorbing a stray photon at a separation of 110 microns by a factor of 10 million. By moving neighboring ions and hiding them behind micromotion, the Honeywell team is approaching that mark.
In their paper, Honeywell researchers delved into how and why hiding ions with micromotion works, including the ideal frequency of the oscillations. They also identified and characterized errors. (The basic physics behind the concept of hiding ions was first explored by the ion storage group at the National Institute of Standards and Technology.)
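A simplified picture of why hiding works, based on the publicly described physics rather than the paper’s full model: in the ion’s frame, micromotion frequency-modulates the detection light, and when the modulation sidebands sit far off resonance, the resonant “carrier” scattering is suppressed by roughly the square of a zeroth-order Bessel function of the modulation index.

```python
from scipy.special import j0

# Idealized sketch of micromotion "hiding" (not the paper's full model):
# micromotion frequency-modulates the detection laser in the ion's frame,
# and if the sidebands are far off resonance, the resonant scattering rate
# is reduced by roughly J0(beta)^2, where beta is the modulation index
# (laser wavevector times micromotion amplitude).
for beta in (0.0, 1.0, 2.405, 5.0):
    print(f"beta = {beta:5.3f}: carrier suppression ~ {j0(beta)**2:.2e}")
# beta = 0 gives no suppression; near the first Bessel zero (~2.405) the
# carrier scattering is suppressed by orders of magnitude.
```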
“Mid-circuit operations are a new feature in commercial quantum computing hardware, so we had to invent a new way to validate that the micromotion hiding technique was achieving the low level of crosstalk errors that we predicted,” said Baldwin.
Though the new method resulted in a significant reduction of crosstalk errors, the Honeywell team acknowledged there is further to go.
“Crosstalk is one of those scary errors for scaling,” Gaebler said. “It has to be controlled because it becomes more of a problem as you scale and add qubits. This is another tool that will help us scale and help us compact our systems and pack in as many qubits as we can.”
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
In a new paper in Nature Physics, we've made a major breakthrough in one of quantum computing’s most elusive promises: simulating the physics of superconductors. A deeper understanding of superconductivity would have an enormous impact: greater insight could pave the way to real-world advances, like phone batteries that last for months, “lossless” power grids that drastically reduce your bills, or MRI machines that are widely available and cheap to use. The development of room-temperature superconductors would transform the global economy.
A key promise of quantum computing is that it has a natural advantage when studying inherently quantum systems, like superconductors. In many ways, it is precisely the deeply ‘quantum’ nature of superconductivity that makes it both so transformative and so notoriously difficult to study.
Now, we are pleased to report that we just got a lot closer to that ultimate dream.
To study something like a superconductor with a quantum computer, you need to first “encode” the elements of the system you want to study onto the qubits – in other words, you want to translate the essential features of your material onto the states and gates you will run on the computer.
For superconductors in particular, you want to encode the behavior of particles known as “fermions” (like the familiar electron). Naively simulating fermions with qubits produces garbage data, because qubits do not natively obey the antisymmetric exchange statistics that make fermions unique.
Until recently, scientists used something called the “Jordan-Wigner” encoding to properly map fermions onto qubits. People have argued that the Jordan-Wigner encoding is one of the main reasons fermionic simulations have not progressed beyond simple one-dimensional chain geometries: it requires too many gates as the system size grows.
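To make the gate-count problem concrete, here is a minimal sketch of the standard Jordan-Wigner mapping (not the alternative encoding or the compilation scheme from the paper): a single hopping term between fermionic modes i and j becomes Pauli strings whose weight grows linearly with the distance |i - j|.

```python
# Minimal sketch of the standard Jordan-Wigner mapping (not the alternative
# encoding or compilation scheme from the paper): a hopping term
# c_i^dag c_j + h.c. becomes two Pauli strings with a Z "parity string"
# between the endpoints, so the gate cost grows with the distance |i - j|.
def jw_hopping_paulis(i, j, n_modes):
    """The two Pauli strings (each with weight 1/2) that sum to c_i^dag c_j + h.c., i < j."""
    assert i < j < n_modes
    def pauli_string(endpoint):
        ops = ["I"] * n_modes
        ops[i] = ops[j] = endpoint     # X...X or Y...Y at the endpoints
        for k in range(i + 1, j):
            ops[k] = "Z"               # Z parity string in between
        return "".join(ops)
    return [pauli_string("X"), pauli_string("Y")]

for j in (1, 3, 6):
    print(j, jw_hopping_paulis(0, j, 7))
# The non-identity span of each string grows linearly with |i - j|,
# e.g. XZZZZZX for a hop between modes 0 and 6 on a 7-mode register.
```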
Even worse, the Jordan-Wigner encoding has the nasty property that it is, in a sense, maximally non-fault-tolerant: one error occurring anywhere in the system affects the whole state, which generally leads to an exponential overhead in the number of shots required. Due to this, until now, simulating relevant systems at scale – one of the big promises of quantum computing – has remained a daunting challenge.
Theorists have addressed the issues of the Jordan-Wigner encoding and have suggested alternative fermionic encodings. In practice, however, the circuits created from these alternative encodings come with large overheads and have so far not been practically useful.
We are happy to report that our team has developed a new way to compile one of these alternative encodings, dramatically improving both efficiency and accuracy and overcoming the limitations of older approaches. The new compilation scheme is the most efficient yet, slashing the cost of simulating fermionic hopping by an impressive 42%. On top of that, the team also introduced new, targeted error-mitigation techniques that allow even larger systems to be simulated with far fewer computational "shots"—a critical advantage in quantum computing.
Using these methods, the team simulated the Fermi-Hubbard model—a cornerstone of condensed matter physics—at a previously unattainable scale. By encoding 36 fermionic modes into 48 physical qubits on System Model H2, they achieved the largest quantum simulation of this model to date.
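For reference, the Fermi-Hubbard Hamiltonian in its textbook spinful form (the hopping amplitude, interaction strength, and lattice geometry used in the experiment are not reproduced here):

$$H \;=\; -t \sum_{\langle i,j \rangle, \sigma} \left( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \right) \;+\; U \sum_{i} n_{i\uparrow} n_{i\downarrow}$$

Here $t$ sets the strength of hopping between neighboring lattice sites and $U$ the on-site repulsion between opposite-spin fermions.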
This marks an important milestone in quantum computing: it demonstrates that large-scale simulations of complex quantum systems, like superconductors, are now within reach.
This breakthrough doesn’t just show how we can push the boundaries of what quantum computers can do; it brings one of the most exciting use cases of quantum computing much closer to reality. With this new approach, scientists can soon begin to simulate materials and systems that were once thought too complex for the most powerful classical computers alone. And in doing so, they’ve unlocked a path to potentially solving one of the most exciting and valuable problems in science and technology: understanding and harnessing the power of superconductivity.
The future of quantum computing—and with it, the future of energy, electronics, and beyond—just got a lot more exciting.
In an experiment led by Princeton and NIST, we’ve just delivered a crucial result in quantum error correction (QEC), demonstrating key principles of scalable quantum computing developed by Drs. Peter Shor, Dorit Aharonov, and Michael Ben-Or. In this latest paper, we showed that by using “concatenated codes” noise can be exponentially suppressed — proving that quantum computing will scale.
Quantum computing is already producing results, but high-profile applications like Shor’s algorithm—which can break RSA encryption—require error rates about a billion times lower than what today’s machines can achieve.
Achieving such low error rates is a holy grail of quantum computing. Peter Shor was the first to hypothesize a way forward, in the form of quantum error correction. Building on his results, Dorit Aharonov and Michael Ben-Or proved that by concatenating quantum error correcting codes, a sufficiently high-quality quantum computer can suppress error rates arbitrarily, at the cost of a very modest increase in the required number of qubits. Without that insight, building a truly fault-tolerant quantum computer would be impossible.
Their results, now widely referred to as the “threshold theorem”, laid the foundation for realizing fault-tolerant quantum computing. At the time, many doubted that the error rates required for large-scale quantum algorithms could ever be achieved in practice. The threshold theorem made clear that large scale quantum computing is a realistic possibility, giving birth to the robust quantum industry that exists today.
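A back-of-the-envelope illustration of that exponential suppression (the textbook scaling argument, not the error model of this experiment): if the physical error rate p sits below the threshold p_th, each level of concatenation roughly squares the ratio p/p_th, so the logical error rate after k levels behaves like p_th (p/p_th)^(2^k).

```python
# Textbook concatenation scaling (an illustration, not this experiment's
# error model): below threshold, each extra level of concatenation roughly
# squares the ratio p / p_th, giving doubly exponential error suppression
# for only a polynomial increase in qubit count.
p, p_th = 1e-3, 1e-2                     # assumed example values, not measured ones
for level in range(4):                   # level 0 is the bare physical error rate
    logical = p_th * (p / p_th) ** (2 ** level)
    print(f"level {level}: logical error ~ {logical:.0e}")
# prints roughly 1e-03, 1e-04, 1e-06, 1e-10
```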
Until now, nobody has realized the original vision for the threshold theorem. Last year, Google performed a beautiful demonstration of the threshold theorem in a different context (without concatenated codes). This year, we are proud to report the first experimental realization of that seminal work—demonstrating fault-tolerant quantum computing using concatenated codes, just as they envisioned.
The team demonstrated that their family of protocols achieves high error thresholds—making them easier to implement—while requiring minimal ancilla qubits, meaning lower overall qubit overhead. Remarkably, their protocols are so efficient that fault-tolerant preparation of basis states requires zero ancilla overhead, making the process maximally efficient.
This approach to error correction has the potential to significantly reduce qubit requirements across multiple areas, from state preparation to the broader QEC infrastructure. Additionally, concatenated codes offer greater design flexibility, which makes them especially attractive. Taken together, these advantages suggest that concatenation could provide a faster and more practical path to fault-tolerant quantum computing than popular approaches like the surface code.
From a broader perspective, this achievement highlights the power of collaboration between industry, academia, and national laboratories. Quantinuum’s commercial quantum systems are so stable and reliable that our partners were able to carry out this groundbreaking research remotely—over the cloud—without needing detailed knowledge of the hardware. While we very much look forward to welcoming them to our labs before long, it’s notable that they never needed to step inside to harness the full capabilities of our machines.
As we make quantum computing more accessible, the rate of innovation will only increase. The era of plug-and-play quantum computing has arrived. Are you ready?
Quantum computing companies are poised to exceed $1 billion in revenues by the close of 2025, according to McKinsey & Company, underscoring how today’s quantum computers are already delivering customer value in their current phase of development.
This figure is projected to reach upwards of $37 billion by 2030, rising in parallel with escalating demand, the growing scale of the machines, and the complexity of the problems they will be able to address.
Several systems on the market today are fault-tolerant by design, meaning they are capable of suppressing error-causing noise to yield reliable calculations. However, the full potential of quantum computing to tackle problems of true industrial relevance, in areas like medicine, energy, and finance, remains contingent on an architecture that supports a fully fault-tolerant universal gate set with repeatable error correction—a capability that, until now, has eluded the industry.
Quantinuum is the first—and only—company to achieve this critical technical breakthrough, universally recognized as the essential precursor to scalable, industrial-scale quantum computing. This milestone provides us with the most de-risked development roadmap in the industry and positions us to fulfill our promise to deliver our universal, fully fault-tolerant quantum computer, Apollo, by 2029.
In this regard, Quantinuum is the first company to step from the so-called “NISQ” (noisy intermediate-scale quantum) era towards utility-scale quantum computers.
A quantum computer uses operations called gates to process information in ways that even today’s fastest supercomputers cannot. The industry typically refers to two types of gates for quantum computers: Clifford gates (such as the Hadamard, CNOT, and S gates), which on their own can be simulated efficiently on a classical computer, and non-Clifford gates (such as the T gate), which complete the toolkit and enable computations that classical machines cannot efficiently reproduce.
A system that can run both types of gates is classified as universal and has the machinery to tackle the widest range of problems. Without non-Clifford gates, a quantum computer is non-universal and restricted to smaller, easier sets of tasks - and it can always be simulated by classical computers. This is like painting with a full palette of colors versus having only one or two to work with. Simply put, a quantum computer that cannot implement non-Clifford gates is not really a quantum computer.
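One concrete, hardware-agnostic way to see the distinction: a single-qubit gate is Clifford exactly when it maps Pauli operators to Pauli operators under conjugation. A short check using standard definitions (nothing Quantinuum-specific):

```python
import numpy as np

# A gate is Clifford when it maps Pauli operators to Pauli operators under
# conjugation. H and S pass this test; the non-Clifford T gate does not.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])
paulis = [phase * P for P in (I, X, Y, Z) for phase in (1, -1, 1j, -1j)]

def is_clifford_1q(U):
    conjugate = lambda P: U @ P @ U.conj().T
    # Checking the generators X and Z of the Pauli group suffices.
    return all(any(np.allclose(conjugate(P), Q) for Q in paulis) for P in (X, Z))

print(is_clifford_1q(H), is_clifford_1q(S), is_clifford_1q(T))  # True True False
```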
A fault-tolerant, or error-corrected, quantum computer detects and corrects its own errors (or faults) to produce reliable results. Quantinuum has the best and brightest scientists dedicated to keeping our systems’ error rates the lowest in the world.
For a quantum computer to be fully fault-tolerant, every operation must be error-resilient, across both Clifford and non-Clifford gates, so that a “full gate set” runs with error correction. While some groups have performed fully fault-tolerant gate sets in academic settings, those demonstrations used only a few qubits and error rates near 10%—too high for any practical use.
Today, we have published two papers that establish Quantinuum as the first company to develop a complete solution for a universal, fully fault-tolerant quantum computer with repeatable error correction and error rates low enough for real-world applications.
The first paper describes how scientists at Quantinuum used our System Model H1-1 to perfect magic state production, a crucial technique for achieving a fully fault-tolerant universal gate set. In doing so, they set a record magic state infidelity (7x10^-5), 10x better than any previously published result.
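For context, the magic state in question is conventionally the resource state that, consumed via gate teleportation, applies a non-Clifford T gate fault-tolerantly (this is the standard textbook definition; the paper’s exact protocol and state convention are not reproduced here):

$$|T\rangle \;=\; T\,|+\rangle \;=\; \frac{1}{\sqrt{2}}\left(|0\rangle + e^{i\pi/4}\,|1\rangle\right)$$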
Our simulations show that our system could reach a magic state infidelity of 10^-10, or about one error per 10 billion operations, on a larger-scale computer with our current physical error rate. We anticipate reaching 10^-14, or about one error per 100 trillion operations, as we continue to advance our hardware. This means that our roadmap is now de-risked.
Setting a record magic state infidelity was just the beginning. The paper also presents the first break-even two-qubit non-Clifford gate, demonstrating a logical error rate below the physical one. In doing so, the team set another record for two-qubit non-Clifford gate infidelity (2x10^-4, almost 10x better than our physical error rate). Putting everything together, the team ran the first circuit that used a fully fault-tolerant universal gate set, a critical moment for our industry.
In the second paper, co-authored with researchers at the University of California at Davis, we demonstrated an important technique for universal fault-tolerance called “code switching”.
Code switching means moving between different error-correcting codes with complementary strengths, so that gates that are hard to perform fault-tolerantly in one code can be performed natively in another. The team used the technique to demonstrate the key ingredients for universal computation, this time in a code where we have previously demonstrated full error correction and the other ingredients for universality.
In the process, the team set a new record for magic states in a distance-3 error-correcting code, over 10x better than the best previous attempt with error correction. Notably, this process required only 28 qubits instead of hundreds. This completes, for the first time, the ingredient list for a universal gate set in a system that also has real-time and repeatable QEC.
Innovations like those described in these two papers can reduce estimates for qubit requirements by an order of magnitude, or more, bringing powerful quantum applications within reach far sooner.
With all of the required pieces now in place, we are ‘fully’ equipped to become the first company to perform universal, fully fault-tolerant computing—just in time for the arrival of Helios, our next-generation system launching this year, which is very likely to remain the most powerful quantum computer on the market until the launch of its successor, Sol, in 2027.