Quantinuum’s H-Series hits 56 physical qubits that are all-to-all connected, and departs the era of classical simulation

In collaboration with JPMorgan Chase & Co., Quantinuum’s H2-1 achieved a massive uplift in an iconic demonstration

June 5, 2024

The first half of 2024 will go down as the period when we shed the last vestiges of the “wait and see” culture that has dominated the quantum computing industry. Thanks to a run of recent achievements, we have helped to lead the entire quantum computing industry into a new, post-classical era.

Today we are announcing the latest of these achievements: a major increase in the qubit count of our flagship System Model H2 quantum computer, from 32 to 56 qubits. We are also revealing results from work with our partner JPMorgan Chase & Co. that showcase a significant lift in performance.

But to understand the full importance of today’s announcements, it is worth recapping the succession of breakthroughs that confirm that we are entering a new era of quantum computing in which classical simulation will be infeasible.

A historic run

Between January and June 2024, Quantinuum’s pioneering teams published a succession of results that accelerate our path to universal fault-tolerant quantum computing. 

Our technical teams first presented a long-sought solution to the “wiring problem”, an engineering challenge that affects all types of quantum computers: most current designs would require an impractical number of wires connected to the quantum processor to reach large qubit numbers. Our solution allows us to scale to high qubit counts without a corresponding explosion in wiring, demonstrating that our QCCD architecture can scale.

Next, we became the first quantum computing company in the world to hit “three 9s” – 99.9% – two-qubit gate fidelity across all qubit pairs in a production device. This level of fidelity in two-qubit gate operations was long thought to herald the point at which error-corrected quantum computing could become a reality, and it has accelerated and intensified our focus on quantum error correction (QEC). Our scientists and engineers are working with our customers and partners to achieve multiple breakthroughs in QEC in the coming months, many of which will be incorporated into products such as the H-Series and our chemistry simulation platform, InQuanto™.

Following that, with our long-time partner Microsoft, we hit an error correction performance threshold that many believed was still years away. The System Model H2 became the first – and only – quantum computer in the world capable of creating and computing with highly reliable logical (error-corrected) qubits. In this demonstration, the H2-1, configured with 32 physical qubits, supported the creation of four highly reliable logical qubits operating “better than break-even”, with logical circuit error rates shown to be up to 800x lower than the corresponding physical circuit error rates. No other quantum computing company is close to matching this achievement (despite many feverish claims in the past 12 months).

Pushing to the limits of supercomputing … and beyond

The quantum computing industry is departing the era when quantum computers could be simulated classically. Today, we are making two milestone announcements. The first is that our H2-1 processor has been upgraded from 32 to 56 trapped-ion qubits, putting it beyond the reach of exact classical simulation while retaining all of its market-leading capabilities: fidelity, all-to-all qubit connectivity, mid-circuit measurement, qubit reuse, and feed-forward.

The second is that this upgraded processor is now capable of challenging the world’s most powerful supercomputers. The demonstration was achieved in partnership with our long-term collaborator JPMorgan Chase & Co. and researchers from Caltech and Argonne National Lab.

Our collaboration tackled a well-known benchmark task, Random Circuit Sampling (RCS), and measured the quality of the results with a suite of tests including the linear cross-entropy benchmark (XEB) – an approach first made famous by Google in 2019 in its bid to demonstrate “quantum supremacy”. An XEB score close to 0 means the output is noise-dominated and fails to exploit the potential of quantum computing; the closer the score is to 1, the more faithfully the device samples from the ideal circuit’s output distribution. The results on H2-1 are excellent, revealing, and worth exploring in a little detail. The complete data is available on GitHub.
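
To make the metric concrete, here is a minimal sketch of how a linear XEB score is computed from samples: the score is 2^n times the mean ideal probability of the sampled bitstrings, minus one. This is our own illustration, not the collaboration’s analysis code, and the toy “ideal” distribution below is a stand-in for a real circuit’s output statistics.

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark: 2^n * mean(ideal prob of samples) - 1.
    ~0 for noise-dominated output, ~1 for ideal sampling."""
    return (2 ** n_qubits) * ideal_probs[samples].mean() - 1

rng = np.random.default_rng(0)
n = 10                 # toy size; the real experiments used 53-56 qubits
dim = 2 ** n

# Random circuits produce Porter-Thomas-like output statistics; normalized
# exponential weights are a convenient stand-in for such a distribution.
ideal = rng.exponential(size=dim)
ideal /= ideal.sum()

good = rng.choice(dim, size=5000, p=ideal)   # sampling the ideal distribution
noisy = rng.integers(0, dim, size=5000)      # fully depolarized (uniform) output

print(f"ideal sampler XEB:  {linear_xeb(good, ideal, n):.3f}")   # close to 1
print(f"uniform noise XEB:  {linear_xeb(noisy, ideal, n):.3f}")  # close to 0
```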

Better qubits, better results

Our results show how far quantum hardware has come since Google’s initial demonstration. Google originally ran circuits on 53 superconducting qubits that were deep enough to severely frustrate high-fidelity classical simulation at the time, achieving an estimated XEB score of ~0.002. While Google showed that this small value was statistically inconsistent with zero, improvements in classical algorithms and hardware have steadily raised the XEB scores achievable classically, to the point that classical computers can now match Google’s score on the original circuits.

Figure 1. At N=56 qubits, the H2 quantum computer achieves over 100x higher fidelity on computationally hard circuits compared to earlier superconducting experiments. This means orders of magnitude fewer shots are required for high confidence in the fidelity, resulting in comparable total runtimes.

In contrast, we have been able to run circuits on all 56 qubits in H2-1 that are deep enough to challenge high-fidelity classical simulation while achieving an estimated XEB score of ~0.35. This >100x improvement implies the following: even for circuits large and complex enough to frustrate all known classical simulation methods, the H2 quantum computer produces results without a single error about 35% of the time. Unlike past XEB announcements, 35% represents a significant step towards the idealized 100% fidelity limit, at which the computational advantage of quantum computers is unambiguous.
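
One practical consequence, echoed in the Figure 1 caption: the number of shots needed to certify an XEB score above zero shrinks rapidly as fidelity rises. As a rough back-of-envelope sketch (ours, not the paper’s statistical analysis), the per-shot signal is the score F itself while the per-shot noise is of order one, so the required shot count scales roughly as 1/F²:

```python
# Back-of-envelope only: shots needed to distinguish XEB = F from F = 0
# at fixed confidence scale roughly as 1/F^2 (signal F, per-shot noise ~1).
F_2019 = 0.002   # approximate XEB of the 2019 superconducting experiment
F_h2 = 0.35      # approximate XEB of H2-1 at 56 qubits

reduction = (F_h2 / F_2019) ** 2
print(f"approximate shot-count reduction: ~{reduction:,.0f}x")  # ~30,000x
```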

This huge jump in quality is made possible by Quantinuum’s market-leading fidelity combined with our unique all-to-all connectivity. That flexible connectivity, a product of our QCCD architecture, lets us implement circuits with much more complex geometries than the 2D layouts native to superconducting quantum computers. As a result, our quantum circuits become difficult to simulate classically with significantly fewer operations (or gates). This has an enormous impact on how our computational power scales as we add more qubits: since noisy quantum computers can only run a limited number of gates before returning unusable results, needing fewer gates ultimately translates into solving complex tasks with consistent and dependable accuracy.
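
A toy calculation illustrates the connectivity point. On a 2D grid, a two-qubit gate between distant qubits must be routed through a chain of SWAPs, each costing extra two-qubit operations; with all-to-all connectivity the gate is applied directly. The sketch below uses a deliberately simplified routing model and a hypothetical 8×8 grid device – it is not a real compiler – just to show the scale of the overhead:

```python
import random

def manhattan(a, b, side):
    """Manhattan distance between qubits a and b on a side x side grid."""
    ax, ay = divmod(a, side)
    bx, by = divmod(b, side)
    return abs(ax - bx) + abs(ay - by)

side = 8                      # hypothetical 8 x 8 grid device, 64 qubits
n = side * side
random.seed(0)

# 1,000 two-qubit gates between uniformly random qubit pairs.
pairs = [tuple(random.sample(range(n), 2)) for _ in range(1000)]

# Simplified model: routing a gate over distance d and back costs roughly
# 2*(d - 1) SWAPs; each SWAP is typically three native CX gates.
swaps = sum(2 * (manhattan(a, b, side) - 1) for a, b in pairs)
print("all-to-all: 1000 native two-qubit gates, 0 SWAPs")
print(f"2D grid:    1000 gates + ~{swaps} SWAPs (~{3 * swaps} extra CX)")
```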

This is a vitally important moment for companies and governments watching this space and deciding when to invest in quantum: these results underscore both the performance capabilities and the rapid rate of improvement of our processors, especially the System Model H2, as a prime candidate for achieving near-term value.

So what of the comparison between the H2-1 results and a classical supercomputer? 

A direct comparison can be made between the time it took H2-1 to perform RCS and the time it took a classical supercomputer. However, classical simulations of RCS can be made faster by building a larger supercomputer (or by distributing the workload across many existing supercomputers). A more robust comparison considers the amount of energy that must be expended to perform RCS on either H2-1 or classical hardware, which ultimately determines the real cost of performing RCS. An analysis based on the most efficient known classical algorithm for RCS and the power consumption of leading supercomputers indicates that H2-1 can perform RCS at 56 qubits with an estimated 30,000x reduction in energy consumption. These early results should be very attractive to data center owners and supercomputing facilities looking to add quantum computers as “accelerators” for their users.
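
The shape of that comparison is simple: energy is power multiplied by runtime on each platform. The sketch below only illustrates the form of the calculation – every input is a placeholder we chose, not a figure from the analysis, which derived its ~30,000x estimate from the cost of the most efficient known classical RCS algorithm and published supercomputer power figures:

```python
# All four inputs are illustrative placeholders, NOT figures from the paper.
classical_power_w = 20e6            # ~20 MW, order of a leading supercomputer
classical_runtime_s = 30 * 86400    # hypothetical month-long simulation
quantum_power_w = 50e3              # hypothetical total draw of a trapped-ion system
quantum_runtime_s = 3 * 86400       # hypothetical days of sampling on hardware

energy_classical = classical_power_w * classical_runtime_s   # joules
energy_quantum = quantum_power_w * quantum_runtime_s         # joules
print(f"energy ratio with these placeholders: ~{energy_classical / energy_quantum:,.0f}x")
```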

Where we go next

Today’s milestone announcements are clear evidence that the H2-1 quantum processor can perform certain computational tasks with far greater efficiency than classical computers. They underpin our expectation that as our quantum computers scale beyond today’s 56 qubits to hundreds, thousands, and eventually millions of high-quality qubits, classical supercomputers will quickly fall behind. As scrutiny of the power consumption of classical computers grows – especially for highly intensive workloads such as simulating molecules and material structures, tasks widely expected to be amenable to a quantum speedup – Quantinuum’s quantum computers are likely to become the device of choice.

With this upgrade in our qubit count to 56, we will no longer offer a commercial “fully encompassing” emulator: a mathematically exact simulation of our H2-1 quantum processor is now impossible, as the state vector alone would exceed the memory of the world’s best supercomputers. With 56 qubits, the only way to get exact results is to run on the actual hardware, a trend the leaders in this field have already embraced.

More generally, this work demonstrates that connectivity, fidelity, and speed are all interconnected when measuring the power of a quantum computer. We expect that competitive edge to persist in the long run: as we move to running more algorithms at the logical level, connectivity and fidelity will continue to play a crucial role in performance.

“We are entirely focused on the path to universal fault tolerant quantum computers. This objective has not changed, but what has changed in the past few months is clear evidence of the advances that have been made possible due to the work and the investment that has been made over many, many years. These results show that whilst the full benefits of fault tolerant quantum computers have not changed in nature, they may be reachable earlier than was originally expected, and crucially, that along the way, there will be tangible benefits to our customers in their day-to-day operations as quantum computers start to perform in ways that are not classically simulatable. We have an exciting few months ahead of us as we unveil some of the applications that will start to matter in this context with our partners across a number of sectors.”
– Ilyas Khan, Chief Product Officer

Stay tuned for results in error correction, physics, chemistry and more on our new 56-qubit processor.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

December 9, 2024
Q2B 2024: The Roadmap to Quantum Value

At this year’s Q2B Silicon Valley conference, from December 10th to 12th in Santa Clara, California, the Quantinuum team will be participating in plenary and case study sessions to showcase our quantum computing technologies.

Schedule a meeting with us at Q2B

Meet our team at Booth #G9 to discover how Quantinuum is charting the path to universal, fully fault-tolerant quantum computing. 

Join our sessions: 

Tuesday, Dec 10, 10:00 - 10:20am PT

Plenary: Advancements in Fault-Tolerant Quantum Computation: Demonstrations and Results

There is industry-wide consensus on the need for fault-tolerant QPUs, but demonstrations of these abilities are less common. In this talk, Dr. Hayes will review Quantinuum’s long list of meaningful demonstrations in fault tolerance, including real-time error correction, a variety of codes from the surface code to exotic qLDPC codes, logical benchmarking, and beyond-break-even behavior on multiple codes and circuit families.

View the presentation

Wednesday, Dec 11, 4:30 – 4:50pm PT

Keynote: Quantum Tokens: Securing Digital Assets with Quantum Physics

Koji Naniwada, Deputy General Manager of Mitsui’s Quantum Innovation Department (Corporate Development Division), and Duncan Jones, Quantinuum’s Head of Cybersecurity, will deliver a keynote presentation on a case study for quantum in cybersecurity. Together, our organizations demonstrated the first implementation of quantum tokens over a commercial QKD network. Quantum tokens combine three previously incompatible properties: unforgeability guaranteed by physics, fast settlement without centralized validation, and user privacy until redemption. We will present results from our successful Tokyo trial using NEC’s commercial QKD hardware and discuss potential applications in financial services.

Details on the case study

Wednesday, Dec 11, 5:10 – 6:10pm PT

Quantinuum and Mitsui Sponsored Happy Hour

Join the Quantinuum and Mitsui teams in the expo hall for a networking happy hour. 

December 5, 2024
Quantum computing is accelerating

Particle accelerator projects like the Large Hadron Collider (LHC) don’t just smash particles – they also power the invention of some of the world’s most impactful technologies. A favorite example is the World Wide Web, which was developed for particle physics experiments at CERN.

Tech designed to unlock the mysteries of the universe has brutally exacting requirements – and it is this boundary pushing, plus billion-dollar budgets, that has led to so much innovation. 

For example, X-rays are used in accelerators to measure the chemical composition of the accelerator products and to monitor radiation. The understanding developed to create those technologies was then applied to help us build better CT scanners, reducing the X-ray dose while improving image quality.

Stories like this are common in accelerator physics, or High Energy Physics (HEP). Scientists and engineers working in HEP have been early adopters and/or key drivers of innovations in advanced cancer treatments (using proton beams), machine learning techniques, robots, new materials, cryogenics, data handling and analysis, and more. 

A key strand of HEP research aims to make accelerators simpler and cheaper, and one piece of infrastructure ripe for improvement is their computing environments.

CERN itself has said: “CERN is one of the most highly demanding computing environments in the research world... From software development, to data processing and storage, networks, support for the LHC and non-LHC experimental programme, automation and controls, as well as services for the accelerator complex and for the whole laboratory and its users, computing is at the heart of CERN’s infrastructure.” 

With annual data generated by accelerators in excess of exabytes (a billion gigabytes), tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the HEP community is interested in quantum computing, which offers real solutions to some of their hardest problems. 

As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable.”

The HEP community’s interest in quantum computing is growing. In recent years, their scientists have been looking carefully at how quantum computing could help them, publishing a number of papers discussing the challenges and requirements for quantum technology to make a dent (here’s one example, and here’s the arXiv version). 

In the past few months, what was previously theoretical has started becoming a reality. Several groups have published results using quantum machines to tackle something called “Lattice Gauge Theory”, a type of math used to describe a broad range of phenomena in HEP (and beyond). Two papers came from academic groups using quantum simulators, one based on trapped ions and one on neutral atoms. Another group, including scientists from Google, tackled Lattice Gauge Theory on a superconducting quantum computer. Taken together, these papers indicate growing interest in using quantum computing for High Energy Physics beyond simple one-dimensional systems, which are more easily accessible with classical methods such as tensor networks.

We have been working with DESY, one of the world’s leading accelerator centers, to help make quantum computing useful for their work. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center that operates, develops, and constructs particle accelerators, and is part of the worldwide computer network used to store and analyze the enormous flood of data that is produced by the LHC in Geneva.  

Our first publication from this partnership describes a quantum machine learning technique for untangling data from the LHC, finding that in some cases the quantum approach was indeed superior to the classical approach. More recently, we used Quantinuum System Model H1 to tackle Lattice Gauge Theory (LGT), as it’s a favorite contender for quantum advantage in HEP.

Lattice Gauge Theories are one approach to solving what are more broadly referred to as “quantum many-body problems”. Quantum many-body problems lie at the border of our knowledge in many different fields: from the electronic structure problem, which impacts chemistry and pharmaceuticals, and the quest to understand and engineer new material properties such as light-harvesting materials, to basic research such as high energy physics, which aims to understand the fundamental constituents of the universe, and condensed matter physics, where our understanding of phenomena like high-temperature superconductivity is still incomplete.

The difficulty in solving such problems – analytically or computationally – is that their complexity grows exponentially with the size of the system. For example, there are 36 possible configurations of two six-faced dice (1 and 1, 1 and 2, 1 and 3, and so on), while for ten dice there are more than sixty million configurations.

Quantum computing may be very well suited to tackling problems like this, because a quantum processor’s information capacity scales the same way: each qubit added to a QPU doubles the information the system can hold. Our 56-qubit System Model H2, for example, can hold quantum states that would require 128×2^56 bits to describe (with double-precision numbers) on a classical supercomputer – more information than the biggest supercomputer in the world can hold in memory.
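
Both scalings above are easy to check directly; the short computation below reproduces the dice counts and the memory footprint of a 56-qubit statevector (one double-precision complex amplitude, 128 bits, per basis state):

```python
# Two six-faced dice vs ten: configurations grow as 6^N.
print(f"two dice: {6**2} configurations")        # 36
print(f"ten dice: {6**10:,} configurations")     # 60,466,176

# A 56-qubit statevector holds 2^56 complex amplitudes; at double precision
# each amplitude takes 128 bits (two 64-bit floats).
n_qubits = 56
total_bits = 128 * 2**n_qubits
print(f"statevector size: {total_bits / 8 / 1e15:,.0f} PB")  # ~1,153 PB, ~1 exabyte
```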

The joint team made significant progress on the Lattice Gauge Theory corresponding to Quantum Electrodynamics, the theory of light and matter. For the first time, they were able to study the full wavefunction of a two-dimensional confining system with gauge fields and dynamical matter fields on a quantum processor. They were also able to visualize the confining string and the string-breaking phenomenon at the level of the wavefunction, across a range of interaction strengths.

The team approached the problem by defining the Hamiltonian in the InQuanto software package, and used InQuanto’s reusable protocols to compute both projective measurements and expectation values. InQuanto allowed easy integration of measurement-reduction and scalable error-mitigation techniques, and the emulator and hardware experiments were orchestrated by the Nexus online platform.

In one section of the study, a circuit with 24 qubits and more than 250 two-qubit gates was reduced to a width of just 15 qubits thanks to the automatic qubit-reuse and mid-circuit-measurement compilation implemented in TKET.
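
That width reduction was produced automatically by TKET’s compilation. As a minimal hand-written sketch of the underlying pattern (a toy circuit of ours, not the study’s), a qubit that has been measured mid-circuit can be reset and reused in place of a fresh one:

```python
from pytket.circuit import Circuit, OpType

# Toy illustration of qubit reuse: 2 physical qubits stand in for what would
# otherwise be 3 distinct wires, via mid-circuit measurement and reset.
c = Circuit(2, 3)
c.H(0)
c.CX(0, 1)
c.Measure(1, 0)                  # mid-circuit measurement of qubit 1
c.add_gate(OpType.Reset, [1])    # reset returns qubit 1 to |0> for reuse...
c.CX(0, 1)                       # ...now playing the role of a third wire
c.Measure(1, 1)
c.Measure(0, 2)
```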

This work paves the way towards using quantum computers to study lattice gauge theories in higher dimensions, with the goal of one day simulating the full three-dimensional Quantum Chromodynamics theory underlying the nuclear sector of the Standard Model of particle physics. Being able to simulate full 3D quantum chromodynamics will undoubtedly unlock many of Nature’s mysteries, from the Big Bang to the interior of neutron stars, and is likely to lead to applications we haven’t yet dreamed of. 
