Quantinuum extends its significant lead in quantum computing, achieving historic milestones for hardware fidelity and Quantum Volume

Quantinuum has raised the bar for the global ecosystem by achieving the historic and much-vaunted “three 9's” 2-qubit gate fidelity in its commercial quantum computer and announcing that its Quantum Volume has surpassed one million – orders of magnitude higher than its nearest competitors.

April 16, 2024

By Ilyas Khan, Founder and Chief Product Officer, and Jenni Strabley, Sr. Director of Offering Management

All quantum error correction schemes depend for their success on physical hardware achieving high enough fidelity. If there are too many errors in the physical qubit operations, the error-correcting code amplifies rather than diminishes the overall error rate. For decades it has been hoped that one day a quantum computer would achieve “three 9's” – an iconic, inherent 99.9% 2-qubit physical gate fidelity – at which point many of the error-correcting codes required for universal fault-tolerant quantum computing could successfully squeeze errors out of the system.

That day has now arrived. Building on several previous laboratory demonstrations [1, 2, 3], Quantinuum has become the first company ever to achieve “three 9's” in a commercially-available quantum computer, demonstrating 99.914(3)% 2-qubit gate fidelity with repeatable performance across all qubit pairs on our H1-1 system, which is constantly available to customers. This achievement in a production environment stands in marked contrast to one-off results recorded under carefully contrived laboratory conditions, and it sets what will fast become the expected standard for the entire quantum computing sector.

Quantinuum is also announcing another milestone, a seven-figure Quantum Volume (QV) of 1,048,576 – or in terms preferred by the experts, 2^20 – reinforcing our commitment to building, by a significant margin, the highest-performing quantum computers in the world.

These announcements follow a historic month that started when we proved our ability to scale our systems to the sizes needed to solve some of the world’s most pressing problems – and in a way that offers the best path to universal quantum computing.  

On March 5th, 2024, Quantinuum researchers disclosed details of our experiments that provide a solution to a totemic problem faced by all quantum computing architectures, known as the wiring problem. Supported by a video showing qubits being shuffled through a 2-dimensional grid ion-trap, our team presented concrete proof of the scalability of the quantum charge-coupled device (QCCD) architecture used in our H-Series quantum computers.

Stop-motion ion transport video showing a chosen sorting operation implemented on an 8-site 2D grid trap with the swap-or-stay primitive. The sort is implemented by discrete choices of swaps or stays between neighboring sites. The numbers (indicated by dashed circles) at the beginning and end of the video mark the initial and final locations of the ions after the sort; e.g., the ion that starts at the top-left site ends at the bottom-right site. The stop-motion video was collected by segmenting the primitive operation and pausing mid-operation so that Yb fluorescence could be detected with a CMOS camera exposure.

On April 3rd, 2024 in partnership with Microsoft, our teams announced a breakthrough in quantum error correction that delivered as its crowning achievement the most reliable logical qubits on record.

In an arXiv pre-print, we revealed detailed demonstrations of the reliability achieved with 4 logical qubits encoded in just 30 physical qubits on our System Model H2 quantum computer. Our joint teams demonstrated logical circuit error rates far below physical circuit error rates – a capability that, today, only our full-stack quantum computer has the fidelity to achieve.

Explaining the importance of 2-qubit gate fidelity

Reaching this level of physical fidelity is not optional for commercial-scale computers – it is essential for error correction to work, and that in turn is a necessary foundation for any useful quantum computer. Our record 2-qubit gate fidelity of 99.914(3)% marks a symbolic inflection point for the industry: at “three 9's” fidelity, we are nearing or surpassing the break-even point (where logical qubits outperform physical qubits) for many quantum error correction protocols, and this will generate great interest among research and industrial teams exploring fault-tolerant methods for tackling real-world problems.

Without hardware fidelity this good, error-corrected calculations are noisier than uncorrected computations. This is why we call it a “threshold”: when gate errors are above threshold, quantum computers will remain noisy no matter what you do. Below threshold, you can use quantum error correction to push error rates way, way down, so that quantum computers eventually become as reliable as classical computers.
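To make the threshold intuition concrete, here is a minimal sketch using a common rule-of-thumb scaling for a distance-d code, p_logical ≈ A·(p_physical/p_threshold)^((d+1)/2). This is an illustrative model, not Quantinuum's or Microsoft's analysis: the prefactor, threshold, and distances below are hypothetical placeholders, not measured values.

```python
# Minimal illustration of error suppression below threshold (rule-of-thumb model,
# not Quantinuum data): p_logical ~ A * (p_physical / p_threshold)^((d+1)/2).
# The prefactor, threshold and code distances here are hypothetical placeholders.

def logical_error_rate(p_physical, p_threshold=1e-2, distance=3, prefactor=0.1):
    """Rough logical error rate for a distance-`distance` code below threshold."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) // 2)

for p in (5e-3, 1e-3, 5e-4):            # physical 2-qubit error rates
    for d in (3, 5, 7):                 # code distances
        print(f"p_phys={p:.0e}, d={d}: p_logical ≈ {logical_error_rate(p, distance=d):.1e}")
```

The point of the exercise: once the physical error rate sits below the threshold, increasing the code distance drives the logical error rate down rapidly; above threshold, the same scaling works against you.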

Four years ago, Quantinuum committed to improving the performance of its H-Series quantum computers by 10x each year for five years, as measured by the industry’s most widely recognized benchmark, QV (an industry standard not to be confused with less comprehensive metrics such as Algorithmic Qubits).

Today’s achievement of a 2^20 QV – which, as with all our demonstrations, was achieved on our commercially-available machine – shows that our team is living up to this audacious commitment. We are completely confident we can continue to overcome the technical problems that stand in the way of even better fidelity and QV performance. Our QV data is available on GitHub, as are our hardware specifications.
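For readers who want to reproduce the pass/fail decision from that published data, the sketch below shows the standard QV acceptance criterion (also described under Figure 1 below): the test passes when the lower two-sigma bootstrap bound on the mean heavy-output probability exceeds 2/3. The per-circuit results used here are synthetic stand-ins, not the actual H-Series data.

```python
# Sketch of the standard Quantum Volume acceptance test on a set of per-circuit
# heavy-output results. The `hops` array below is synthetic stand-in data;
# the real results live in Quantinuum's public QV data repository.
import numpy as np

rng = np.random.default_rng(0)
hops = rng.binomial(1, 0.72, size=500).astype(float)   # 1 = heavy output observed

boot_means = np.array([rng.choice(hops, size=hops.size, replace=True).mean()
                       for _ in range(2000)])          # bootstrap resampling
lower_2sigma = hops.mean() - 2 * boot_means.std()

print(f"mean HOP = {hops.mean():.3f}, lower two-sigma bound = {lower_2sigma:.3f}")
print("QV test passed" if lower_2sigma > 2 / 3 else "QV test not passed")
```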

The combination of high QV and gate fidelities puts the Quantinuum system in a class by itself – it is far and away the best of any commercially-available quantum computer.

Figure 1: Quantum Volume (QV) heavy output probability (HOP) as a function of time-ordered circuit index. The solid blue line shows the cumulative average while the green region shows the two-sigma confidence interval based on bootstrap resampling. A QV test is passed when the lower two-sigma confidence interval crosses 2/3.
Figure 2. Quantum volume vs time for our commercial systems. Quantinuum’s new world record quantum volume of 1,048,576 maintains our self-imposed goal of a 10-fold increase each year. In fact, in 2023 we achieved an overall increase in quantum volume of >100x.
Figure 3. Two-qubit randomized benchmarking data from H1-1 across the five gate zones (dashed lines) and averaged over all five gate zones (solid blue line). The survival probability decays as a function of sequence length, which can be related to the average fidelity of the two-qubit gates with standard randomized benchmarking theory. With this data, we can claim that not only are all zones consistent with 99.9%, but all zones are >99.9% outside of error bars.
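As a companion to Figure 3, here is a minimal sketch of the standard randomized-benchmarking analysis it refers to: fit the survival probability to an exponential decay P(m) = A·p^m + B and convert the decay parameter to an average infidelity via r = (1 − p)(d − 1)/d, with d = 4 for two qubits. The sequence lengths and decay parameter below are synthetic stand-ins, not the H1-1 data shown in the figure.

```python
# Standard two-qubit randomized benchmarking fit, run on synthetic data.
# Model: P(m) = A * p**m + B; average infidelity per Clifford r = (1 - p) * (d - 1) / d, d = 4.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    return A * p**m + B

lengths = np.array([4, 16, 64, 128, 256])          # RB sequence lengths (illustrative)
survival = decay(lengths, 0.5, 0.9982, 0.5)        # synthetic, noise-free survival data

(A, p, B), _ = curve_fit(decay, lengths, survival, p0=[0.5, 0.999, 0.5])
r = (1 - p) * (4 - 1) / 4                          # average infidelity per Clifford
print(f"estimated average fidelity ≈ {1 - r:.5f}")  # per-gate figures need a further conversion
```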
QCCD: the path to fault tolerance

Additionally, and notably, these benchmarks were achieved “inherently”, without error mitigation, thanks to the H-Series’ all-to-all connectivity and QCCD architecture. Full connectivity results in fewer errors when running large, complicated circuits. While other modalities depend on error mitigation techniques, such techniques are not scalable and offer only modest near-term value.

Lower physical error rates and high connectivity mean our quantum computers have a provably lower overhead for error-corrected computation.

Looking more deeply, experts look for high fidelities that hold across all operating zones and between any pair of qubits – and that is precisely what our H-Series delivers. Unlike competing systems, we do not suffer from a broad distribution of gate fidelities in which some pairs of qubits perform significantly worse than others. Quantinuum is the only quantum computing company with every qubit pair above 99.9% fidelity.

Alongside these demonstrations of scalability, fidelity, connectivity, and reliability, it is worth noting how these features impact what arguably matters most to users – time to solution. In the QCCD architecture, raw gate speed is decoupled from the time needed to reach a computational solution, thanks to a combination of:

  • a better signal-to-noise ratio than other modalities,
  • a drastic reduction in, or elimination of, the swap gates required (because we can move our ions through space), and
  • fewer trials required for an accurate result.

The net effect is that for increasingly complex circuits, a high-fidelity QCCD-type quantum computer reaches accurate results in less time than 2D-connected or lower-fidelity architectures.
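As a toy illustration of the swap-gate point above (the topology, distances, and gate counts here are hypothetical, not a benchmark of any specific device): on a nearest-neighbour device, interacting two distant qubits requires a chain of SWAPs, each typically costing about three entangling gates, whereas an architecture that physically transports ions lets any pair interact directly.

```python
# Toy model: two-qubit gates needed to interact qubits `dist` sites apart on a
# 1D nearest-neighbour device, versus direct interaction with all-to-all
# connectivity. Each SWAP is counted as ~3 entangling gates. Illustrative only.
def gates_nearest_neighbour(dist):
    swaps = max(dist - 1, 0)          # bring the qubits adjacent first
    return 1 + 3 * swaps              # one entangling gate plus the SWAP overhead

for dist in (1, 4, 16, 55):
    print(f"separation {dist:>2}: all-to-all = 1 gate, "
          f"nearest-neighbour ≈ {gates_nearest_neighbour(dist)} gates")
```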

“Getting to three 9’s in the QCCD architecture means that ~1000 entangling operations can be done before an error occurs. Our quantum computers are right at the edge of being able to do computations at the physical level that are beyond the reach of classical computers, which would occur somewhere between 3 nines and 4 nines. Some tasks become hard for classical computers before this regime (e.g. Google’s random circuit sampling problem) but this new regime allows for much less contrived problems to be solved. At that point, these machines become real tools for new discoveries – albeit they will still be limited in what they can probe, likely to be physics simulations or closely related problems,” said Dave Hayes, a Senior R&D manager at Quantinuum.

“Additionally, these fidelities put us, some would say comfortably, within the regime needed to build fault-tolerant machines. These fidelities allow us to start adding more qubits without needing to improve performance further, and to take advantage of quantum error correction to improve the computational power necessary for tackling truly large problems. This scaling problem gets easier with even better fidelities (which is why we’re not satisfied with 3 nines) but it is possible in principle.”

Quantinuum’s new records in fidelity and quantum volume, set on our commercial H1 device, are expected to be matched on H2 once upgrades are implemented, underscoring the value that we offer to users for whom stability, reliability and robust performance are pre-requisites. The quantum computing landscape is complex and changing, but we remain at the head of the pack in all key metrics. Our close relationship with our world-class applications teams means that co-designed devices for solving some of the world’s most intractable problems are a big step closer to reality.

Quantinuum is the world’s leading quantum computing company, and our world-class scientists and engineers are continually driving our technology forward while expanding the possibilities for our users. Their work on applications includes cybersecurity, quantum chemistry, quantum Monte Carlo integration, quantum topological data analysis, condensed matter physics, high energy physics, quantum machine learning, and natural language processing – and we are privileged to support them to bring new solutions to bear on some of the greatest challenges we face.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

December 9, 2024
Q2B 2024: The Roadmap to Quantum Value

At this year’s Q2B Silicon Valley conference from December 10th – 12th in Santa Clara, California, the Quantinuum team will be participating in plenary and case study sessions to showcase our quantum computing technologies. 

Schedule a meeting with us at Q2B

Meet our team at Booth #G9 to discover how Quantinuum is charting the path to universal, fully fault-tolerant quantum computing. 

Join our sessions: 

Tuesday, Dec 10, 10:00 - 10:20am PT

Plenary: Advancements in Fault-Tolerant Quantum Computation: Demonstrations and Results

There is industry-wide consensus on the need for fault-tolerant QPUs, but demonstrations of these abilities are less common. In this talk, Dr. Hayes will review Quantinuum’s long list of meaningful demonstrations in fault tolerance, including real-time error correction, a variety of codes from the surface code to exotic qLDPC codes, logical benchmarking, and beyond-break-even behavior on multiple codes and circuit families.

View the presentation

Wednesday, Dec 11, 4:30 – 4:50pm PT

Keynote: Quantum Tokens: Securing Digital Assets with Quantum Physics

Mitsui’s Deputy General Manager, Quantum Innovation Dept., Corporate Development Div., Koji Naniwada, and Quantinuum’s Head of Cybersecurity, Duncan Jones, will deliver a keynote presentation on a case study for quantum in cybersecurity. Together, our organizations demonstrated the first implementation of quantum tokens over a commercial QKD network. Quantum tokens enable three previously incompatible properties: unforgeability guaranteed by physics, fast settlement without centralized validation, and user privacy until redemption. We will present results from our successful Tokyo trial using NEC's QKD commercial hardware and discuss potential applications in financial services.

Details on the case study

Wednesday, Dec 11, 5:10 – 6:10pm PT

Quantinuum and Mitsui Sponsored Happy Hour

Join the Quantinuum and Mitsui teams in the expo hall for a networking happy hour. 

December 5, 2024
Quantum computing is accelerating

Particle accelerator projects like the Large Hadron Collider (LHC) don’t just smash particles – they also power the invention of some of the world’s most impactful technologies. A favorite example is the world wide web, which was developed for particle physics experiments at CERN.

Tech designed to unlock the mysteries of the universe has brutally exacting requirements – and it is this boundary pushing, plus billion-dollar budgets, that has led to so much innovation. 

For example, X-rays are used in accelerators to measure the chemical composition of the accelerator products and to monitor radiation. The understanding developed to create those technologies was then applied to help us build better CT scanners, reducing the X-ray dosage while improving the image quality.

Stories like this are common in accelerator physics, or High Energy Physics (HEP). Scientists and engineers working in HEP have been early adopters and/or key drivers of innovations in advanced cancer treatments (using proton beams), machine learning techniques, robots, new materials, cryogenics, data handling and analysis, and more. 

A key strand of HEP research aims to make accelerators simpler and cheaper, and one piece of infrastructure ripe for improvement is their computing environments.

CERN itself has said: “CERN is one of the most highly demanding computing environments in the research world... From software development, to data processing and storage, networks, support for the LHC and non-LHC experimental programme, automation and controls, as well as services for the accelerator complex and for the whole laboratory and its users, computing is at the heart of CERN’s infrastructure.” 

With annual data generated by accelerators in excess of exabytes (a billion gigabytes), tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the HEP community is interested in quantum computing, which offers real solutions to some of their hardest problems. 

As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable.”

The HEP community’s interest in quantum computing is growing. In recent years, their scientists have been looking carefully at how quantum computing could help them, publishing a number of papers discussing the challenges and requirements for quantum technology to make a dent (here’s one example, and here’s the arXiv version). 

In the past few months, what was previously theoretical is becoming a reality. Several groups published results using quantum machines to tackle something called “Lattice Gauge Theory”, which is a type of math used to describe a broad range of phenomena in HEP (and beyond). Two papers came from academic groups using quantum simulators, one using trapped ions and one using neutral atoms. Another group, including scientists from Google, tackled Lattice Gauge Theory using a superconducting quantum computer. Taken together, these papers indicate a growing interest in using quantum computing for High Energy Physics, beyond simple one-dimensional systems which are more easily accessible with classical methods such as tensor networks.

We have been working with DESY, one of the world’s leading accelerator centers, to help make quantum computing useful for their work. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center that operates, develops, and constructs particle accelerators, and is part of the worldwide computer network used to store and analyze the enormous flood of data that is produced by the LHC in Geneva.  

Our first publication from this partnership describes a quantum machine learning technique for untangling data from the LHC, finding that in some cases the quantum approach was indeed superior to the classical approach. More recently, we used Quantinuum System Model H1 to tackle Lattice Gauge Theory (LGT), as it’s a favorite contender for quantum advantage in HEP.

Lattice Gauge Theories are one approach to solving what are more broadly referred to as “quantum many-body problems”. Quantum many-body problems lie at the border of our knowledge in many different fields: from the electronic structure problem, which impacts chemistry and pharmaceuticals, and the quest to understand and engineer new material properties such as light-harvesting materials, to basic research such as high energy physics, which aims to understand the fundamental constituents of the universe, and condensed matter physics, where our understanding of things like high-temperature superconductivity is still incomplete.

The difficulty in solving problems like this – analytically or computationally – is that the problem complexity grows exponentially with the size of the system. For example, there are 36 possible configurations of two six-faced dice (1 and 1, 1 and 2, 1 and 3, etc.), while for ten dice there are more than sixty million configurations.
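The dice arithmetic spelled out, as a quick sanity check of the numbers above:

```python
# Configurations of n six-faced dice grow as 6**n.
for n_dice in (2, 10, 20):
    print(f"{n_dice} dice: {6**n_dice:,} configurations")
# 2 dice -> 36; 10 dice -> 60,466,176 ("more than sixty million"); 20 dice -> ~3.7e15
```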

Quantum computing may be very well-suited to tackling problems like this, due to a quantum processor’s similar information density scaling – with the addition of a single qubit to a QPU, the information the system contains doubles. Our 56-qubit System Model H2, for example, can hold quantum states that require 128*(2^56) bits worth of information to describe (with double-precision numbers) on a classical supercomputer, which is more information than the biggest supercomputer in the world can hold in memory.
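The memory estimate behind that claim, spelled out as a back-of-the-envelope calculation (assuming one double-precision complex amplitude, i.e. 128 bits, per basis state):

```python
# State-vector memory for a 56-qubit register at double precision.
n_qubits = 56
amplitudes = 2 ** n_qubits                 # number of complex amplitudes
bytes_needed = amplitudes * 16             # 128 bits = 16 bytes per amplitude
print(f"{amplitudes:.2e} amplitudes ≈ {bytes_needed / 1e18:.2f} exabytes")
# ≈ 1.15 exabytes, far beyond the main memory of any classical supercomputer
```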

The joint team made significant progress in approaching the Lattice Gauge Theory corresponding to Quantum Electrodynamics, the theory of light and matter. For the first time, they were able to study the full wavefunction of a two-dimensional confining system with gauge fields and dynamical matter fields on a quantum processor. They were also able to visualize the confining string and the string-breaking phenomenon at the level of the wavefunction, across a range of interaction strengths.

The team approached the problem starting with the definition of the Hamiltonian using the InQuanto software package, and utilized the reusable protocols of InQuanto to compute both projective measurements and expectation values. InQuanto allowed the easy integration of measurement reduction techniques and scalable error mitigation techniques. Moreover, the emulator and hardware experiments were orchestrated by the Nexus online platform.

In one section of the study, a circuit with 24 qubits and more than 250 two-qubit gates was reduced to a smaller width of 15 qubits thanks to our unique qubit-reuse and mid-circuit-measurement automatic compilation implemented in TKET.

This work paves the way towards using quantum computers to study lattice gauge theories in higher dimensions, with the goal of one day simulating the full three-dimensional Quantum Chromodynamics theory underlying the nuclear sector of the Standard Model of particle physics. Being able to simulate full 3D quantum chromodynamics will undoubtedly unlock many of Nature’s mysteries, from the Big Bang to the interior of neutron stars, and is likely to lead to applications we haven’t yet dreamed of. 
