Quantum Volume reaches 5 digits for the first time

5 perspectives on what it means for quantum computing

February 23, 2023

Quantinuum’s H-Series team has hit the ground running in 2023 with a new performance milestone. The H1-1 trapped-ion quantum computer has achieved a Quantum Volume (QV) of 32,768 (2^15), the highest in the industry to date.

The team previously increased the QV to 8,192 (2^13) on the System Model H1 in September, less than six months ago. The next goal was a QV of 16,384 (2^14). Continuous improvements to the H1-1’s controls and subsystems not only carried the system to 2^14 as planned, but also allowed it to go one major step further and reach a QV of 2^15.

The Quantum Volume test is a full-system benchmark that produces a single-number measure of a quantum computer’s general capability. The benchmark takes into account qubit number, fidelity, connectivity, and other quantities important in building useful devices.[1] While other measures such as gate fidelity and qubit count are significant and worth tracking, neither is as comprehensive as Quantum Volume, which better represents the operational ability of a quantum computer.
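To make the benchmark concrete, here is a minimal, illustrative sketch of the circuit family behind the Quantum Volume test: square circuits of random two-qubit gates whose ideal “heavy” outputs (the bitstrings more likely than the median) a device must reproduce often enough. This is not Quantinuum’s production test or analysis code (that is linked later in this post); the helper names and the NumPy/SciPy implementation below are our own simplification.

```python
# Illustrative sketch of a Quantum Volume model circuit: n layers of a random
# qubit permutation followed by Haar-random two-qubit unitaries on pairs, with
# the "heavy" outputs defined as bitstrings above the median ideal probability.
import numpy as np
from scipy.stats import unitary_group

def apply_two_qubit_gate(state, gate, q0, q1, n):
    """Apply a 4x4 unitary 'gate' to qubits (q0, q1) of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, [q0, q1], [0, 1])   # bring target qubits to the front
    shape = psi.shape
    psi = gate @ psi.reshape(4, -1)            # act on the 4-dim target subspace
    psi = np.moveaxis(psi.reshape(shape), [0, 1], [q0, q1])
    return psi.reshape(-1)

def heavy_outputs(n, rng):
    """Heavy-output set of one randomly drawn QV-style circuit on n qubits."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                             # start in |0...0>
    for _ in range(n):                         # depth equals width in QV circuits
        perm = rng.permutation(n)
        for i in range(0, n - 1, 2):           # pair up the permuted qubits
            gate = unitary_group.rvs(4)        # Haar-random two-qubit unitary
            state = apply_two_qubit_gate(state, gate, perm[i], perm[i + 1], n)
    probs = np.abs(state) ** 2
    return {format(int(b), f"0{n}b") for b in np.flatnonzero(probs > np.median(probs))}

rng = np.random.default_rng(7)
print(sorted(heavy_outputs(4, rng)))
# A device passes QV 2**n if, across many such circuits, it produces these heavy
# bitstrings more than two-thirds of the time with statistical confidence.
```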

Dr. Brian Neyenhuis, Director of Commercial Operations, credits reductions in the phase noise of the computer’s lasers as one key factor in the increase.

"We've had enough qubits for a while, but we've been continually pushing on reducing the error in our quantum operations, specifically the two-qubit gate error, to allow us to do these Quantum Volume measurements,” he said. 

The Quantinuum team improved memory error and elements of the calibration process as well. 

“It was a lot of little things that got us to the point where our two-qubit gate error and our memory error are both low enough that we can pass these Quantum Volume circuit tests,” he said. 

The work of increasing Quantum Volume means improving all the subsystems and subcomponents of the machine individually and simultaneously, while ensuring all the systems continue to work well together. Such a complex task takes a high degree of orchestration across the Quantinuum team, with the benefits of the work passed on to H-Series users. 

To illustrate what this 5-digit Quantum Volume milestone means for the H-Series, here are 5 perspectives from Quantinuum teams and H-Series users.

Perspective #1: How a higher QV impacts algorithms

Dr. Henrik Dreyer is Managing Director and Scientific Lead at Quantinuum’s office in Munich, Germany. In the context of his work, an improvement in Quantum Volume is important as it relates to gate fidelity. 

“As application developers, the signal-to-noise ratio is what we're interested in,” Henrik said. “If the signal is small, I might run the circuits 10 times and only get one good shot. To recover the signal, I have to do a lot more shots and throw most of them away. Every shot takes time."

“The signal-to-noise ratio is sensitive to the gate fidelity. If you increase the gate fidelity by a little bit, the runtime of a given algorithm may go down drastically,” he said. “For a typical circuit, as the plot shows, even a relatively modest 0.16 percentage point improvement in fidelity could mean that it runs in less than half the time.”

To demonstrate this point, the Quantinuum team has been benchmarking System Model H1 performance on circuits relevant for near-term applications. The graph below shows repeated benchmarking of the runtime of these circuits before and after the recent improvement in gate fidelity. The result of this moderate improvement in fidelity is a roughly 3x reduction in runtime. The runtimes calculated below are based on the number of shots required to obtain accurate results from the benchmarking circuit – the example uses 430 arbitrary-angle two-qubit gates and an accuracy of 3%.
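A back-of-the-envelope model shows why a small fidelity gain compounds so strongly. Assuming, as a simplification rather than Quantinuum’s actual benchmarking methodology, that the usable signal is attenuated by the overall circuit fidelity f^G for G two-qubit gates of per-gate fidelity f, and that the shots needed at fixed accuracy scale as the inverse square of that signal, a 0.16 percentage-point gain translates into a several-fold runtime reduction. The baseline fidelity below is inferred for illustration only.

```python
# Toy model (not Quantinuum's benchmarking code): how shot count at fixed
# accuracy scales with two-qubit gate fidelity when the signal is attenuated
# by the circuit fidelity f**G and shots scale as 1 / (circuit fidelity)**2.
G = 430                                 # arbitrary-angle two-qubit gates in the circuit
f_before, f_after = 0.99635, 0.99795    # illustrative 0.16 percentage-point improvement

def relative_shots(f, gates=G):
    circuit_fidelity = f ** gates
    return 1.0 / circuit_fidelity ** 2

speedup = relative_shots(f_before) / relative_shots(f_after)
print(f"estimated runtime reduction: {speedup:.1f}x")   # same order as the measured ~3x
```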

Perspective #2: Advancing quantum error correction

Dr. Natalie Brown and Dr. Ciaran Ryan-Anderson both work on quantum error correction at Quantinuum. They see the QV advance as an overall boost to this work.

“Hitting a Quantum Volume number like this means that you have low error rates, a lot of qubits, and very long circuits,” Natalie said. “And all three of those are wonderful things for quantum error correction. A higher Quantum Volume most certainly means we will be able to run quantum error correction better. Error correction is a critical ingredient to large-scale quantum computing. The earlier we can start exploring error correction on today’s small-scale hardware, the faster we’ll be able to demonstrate it at large-scale.”

Ciaran said that H1-1's low error rates allow scientists to make error correction better and start to explore decoding options.

“If you can have really low error rates, you can apply a lot of quantum operations, known as gates,” Ciaran said. "This makes quantum error correction easier because we can suppress the noise even further and potentially use fewer resources to do it, compared to other devices.”
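Ciaran’s point can be illustrated with the standard textbook scaling for error-correcting codes (a generic model, not a Quantinuum result): the logical error rate falls roughly as (p/p_th)^((d+1)/2) for code distance d, so lowering the physical error rate p both suppresses logical errors faster and shrinks the distance, and therefore the qubit overhead, needed to hit a target. The threshold p_th and prefactor in the sketch below are hypothetical placeholders.

```python
# Back-of-the-envelope illustration (standard scaling model, not a Quantinuum
# result): logical error rate ~ A * (p / p_th) ** ((d + 1) // 2) for a
# distance-d code, so lower physical error rates p suppress noise much faster.
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Rough logical error per round for physical error p and code distance d."""
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (5e-3, 2e-3):                      # hypothetical physical error rates
    rates = {d: f"{logical_error_rate(p, d):.1e}" for d in (3, 5, 7)}
    print(f"p = {p}: {rates}")
```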

Perspective #3: Meeting a high benchmark

“This accomplishment shows that gate improvements are getting translated to full-system circuits,” said Dr. Charlie Baldwin, a research scientist at Quantinuum. 

Charlie specializes in quantum computing performance benchmarks, conducting research with the Quantum Economic Development Consortium (QED-C).

“Other benchmarking tests use easier circuits or incorporate other options like post-processing data. This can make it more difficult to determine what part improved,” he said. “With Quantum Volume, it’s clear that the performance improvements are from the hardware, which are the hardest and most significant improvements to make.” 

“Quantum Volume is a well-established test. You really can’t cheat it,” said Charlie.

Perspective #4: Implications for quantum applications

Dr. Ross Duncan, Head of Quantum Software, sees Quantum Volume measurements as a good way to show overall progress in the process of building a quantum computer.

“Quantum Volume has merit, compared to any other measure, because it gives a clear answer,” he said. 

“This latest increase reveals the extent of combined improvements in the hardware in recent months and means researchers and developers can expect to run deeper circuits with greater success.” 

Perspective #5: H-Series users

Quantinuum’s business model is unique in that the H-Series systems are continuously upgraded through their product lifecycle. For users, this means they get immediate access to the latest breakthroughs in performance. The reported improvements were not made on an internal testbed, but rather implemented on the H1-1 system, which is commercially available and used extensively by users around the world.

“As soon as the improvements were implemented, users were benefiting from them,” said Dr. Jenni Strabley, Sr. Director of Offering Management. “We take our Quantum Volume measurement intermixed with customers’ jobs, so we know that the improvements we’re seeing are also being seen by our customers.”

Jenni went on to say, “Continuously delivering increasingly better performance shows our commitment to our customers’ success with these early small-scale quantum computers as well as our commitment to accuracy and transparency. That’s how we accelerate quantum computing.”

Supporting data from Quantinuum’s 2^15 QV milestone

This latest QV milestone demonstrates how the Quantinuum team continues to boost the performance of the System Model H1, making improvements to the two-qubit gate fidelity while maintaining high single-qubit fidelity, high SPAM fidelity, and low cross-talk.

The average single-qubit gate fidelity for these milestones was 99.9955(8)%, the average two-qubit gate fidelity was 99.795(7)% with fully connected qubits, and state preparation and measurement fidelity was 99.69(4)%.

For both tests, the Quantinuum team ran 100 circuits with 200 shots each, using standard QV optimization techniques to yield an average of 219.02 arbitrary-angle two-qubit gates per circuit on the 2^14 test, and 244.26 arbitrary-angle two-qubit gates per circuit on the 2^15 test.

The Quantinuum H1-1 successfully passed the Quantum Volume 16,384 benchmark, outputting heavy outcomes 69.88% of the time, and passed the 32,768 benchmark, outputting heavy outcomes 69.075% of the time. The heavy output frequency is a simple measure of how well the measured outputs from the quantum computer match the results from an ideal simulation. Both results are above the two-thirds passing threshold with high confidence. More details on the Quantum Volume test can be found in the reference below.[1]
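As a simplified illustration of the pass criterion (the full statistical treatment, including the confidence intervals actually used, is described in the referenced paper and the analysis code linked below), the benchmark passes when the heavy-output frequency is above two-thirds with high confidence. A naive two-sigma binomial check over the total shot count already clears the bar for both reported runs:

```python
# Simplified pass check, not the published analysis: lower two-sigma binomial
# bound on the heavy-output frequency over all shots must exceed 2/3.
import math

def passes_qv(heavy_fraction, n_circuits=100, shots_per_circuit=200, sigmas=2.0):
    """Naive binomial check of the heavy-output rate against the 2/3 threshold."""
    n_total = n_circuits * shots_per_circuit
    sigma = math.sqrt(heavy_fraction * (1 - heavy_fraction) / n_total)
    return heavy_fraction - sigmas * sigma > 2 / 3

print(passes_qv(0.6988))    # QV 16,384 run -> True under this simplified check
print(passes_qv(0.69075))   # QV 32,768 run -> True under this simplified check
```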

Figure: Heavy output frequency for H1-1 at 2^15 (QV 32,768)
Figure: Heavy output frequency for H1-1 at 2^14 (QV 16,384)

Quantum Volume data and analysis code can be accessed on Quantinuum’s GitHub repository for quantum volume data. Contemporary benchmarking data can be accessed at Quantinuum’s GitHub repository for hardware specifications.

[1] Re-examining the quantum volume test: Ideal distributions, compiler optimizations, confidence intervals, and scalable resource estimations (quantum-journal.org)

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

November 4, 2024
Establishing Trust

For a novel technology to be successful, it must prove that it is both useful and works as described.

Checking that our computers “work as described” is what experts call benchmarking and verification. We are proud to be leaders in this field, with the most benchmarked quantum processors in the world and our own team of experts pushing the field forward. We also work with national laboratories in various countries to develop new benchmarking techniques and standards.

Currently, a lot of verification (i.e. checking that you got the right answer) is done by classical computers – most quantum processors can still be simulated by a classical computer. As we move towards quantum processors that are hard (or impossible) to simulate, this introduces a problem: how can we keep checking that our technology is working correctly without simulating it?

We recently partnered with the UK’s Quantum Software Lab to develop a novel and scalable verification and benchmarking protocol that will help us as we make the transition to quantum processors that cannot be simulated.

This new protocol does not require classical simulation or the transfer of a qubit between two parties. The team’s “on-chip” verification protocol eliminates the need for a physically separated verifier and makes no assumptions about the processor’s noise. To top it all off, this new protocol is qubit-efficient.

The team’s protocol is application-agnostic, benefiting all users. Further, the protocol is optimized for our QCCD hardware, meaning that we have a path towards verified quantum advantage – as we compute more things that cannot be classically simulated, we will be able to check that what we are doing is right.

Running the protocol on Quantinuum System Model H1, the team performed the largest verified Measurement Based Quantum Computing (MBQC) circuit to date. This was enabled by System Model H1’s low cross-talk gate zones, mid-circuit measurement and reset, and long coherence times. By verifying computations significantly larger than any verified before, we reaffirm Quantinuum’s systems as best-in-class.

October 31, 2024
We’re working on bringing the power of quantum computing – and quantum machine learning – to particle physics

Particle accelerators like the LHC take serious computing power. Often on the bleeding edge of computing technology, accelerator projects sometimes even drive innovations in computing. In fact, while there is some controversy over exactly where the World Wide Web was created, it is often attributed to Tim Berners-Lee at CERN, who developed it to meet the demand for automated information-sharing between scientists in universities and institutes around the world.

With accelerators generating exabytes (billions of gigabytes) of data annually, tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the High Energy Physics community is interested in quantum computing, which offers real solutions to some of their hardest problems. Furthermore, the HEP community is well-positioned to support the early stages of technological development: with budgets in the tens of billions per year and tens of thousands of scientists and engineers working on accelerator and computational physics, this is a ripe industry for quantum computing to tap.

As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable”

The authors go on to state: “Within the existing data reconstruction and analysis paradigm, access to algorithms that exhibit quantum speed-ups would revolutionize the simulation of large-scale quantum systems and the processing of data from complex experimental set-ups. This would enable a new generation of precision measurements to probe deeper into the nature of the universe. Existing measurements may contain the signatures of underlying quantum correlations or other sources of new physics that are inaccessible to classical analysis techniques. Quantum algorithms that leverage these properties could potentially extract more information from a given dataset than classical algorithms.”

Our scientists have been working with a team at DESY, one of the world’s leading accelerator centers, to bring the power of quantum computing to particle physics. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center for fundamental science located in Hamburg and Zeuthen, where the Center for Quantum Technologies and Applications (CQTA) is based.  DESY operates, develops, and constructs particle accelerators used to investigate the structure, dynamics and function of matter, and conducts a broad spectrum of interdisciplinary scientific research. DESY employs about 3,000 staff members from more than 60 nations, and is part of the worldwide computer network to store and analyze the enormous flood of data that is produced by the LHC in Geneva.

In a recent paper, our scientists collaborated with scientists from DESY, the Leiden Institute of Advanced Computer Science (LIACS), and Northeastern University to explore using a generative quantum machine learning model, called a “quantum Boltzmann machine,” to untangle data from CERN’s LHC.

The goal was to learn probability distributions relevant to high energy physics better than the corresponding classical models can. The data specifically contains “particle jet events,” which record the sprays of subatomic particles generated during collider experiments.

In some cases the quantum Boltzmann machine was indeed better than a classical Boltzmann machine. The team analyzed when and why this happens, building a better understanding of how to apply these new quantum tools in this research setting. The team also studied the effect of encoding the data into a quantum state, noting that it can have a decisive effect on training performance. Especially enticing is that the quantum Boltzmann machine is efficiently trainable, which our scientists showed in a recent paper published in Nature Communications Physics.
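For readers unfamiliar with the model class, here is a tiny, hypothetical sketch of the idea behind a quantum Boltzmann machine: the model distribution is the diagonal of a Gibbs state e^(-H(θ))/Tr e^(-H(θ)) of a parameterized Hamiltonian, and training adjusts θ so that this distribution matches the data. The Hamiltonian, parameter values, and encoding below are placeholders and are not taken from the DESY collaboration’s paper.

```python
# Hypothetical mini-example of a quantum Boltzmann machine distribution:
# p(x) = <x| exp(-H(theta)) |x> / Tr exp(-H(theta)) for a tiny transverse-field
# Ising Hamiltonian on 3 qubits, built explicitly with NumPy/SciPy.
import numpy as np
from functools import reduce
from scipy.linalg import expm

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(op, site, n):
    """Embed a single-qubit operator on the given site of an n-qubit register."""
    return reduce(np.kron, [op if i == site else I for i in range(n)])

def qbm_probs(biases, couplings, gamma, n=3):
    """Diagonal of the Gibbs state: the distribution the QBM is trained to shape."""
    H = sum(b * op_on(Z, i, n) for i, b in enumerate(biases))
    H += sum(w * op_on(Z, i, n) @ op_on(Z, j, n) for (i, j), w in couplings.items())
    H += gamma * sum(op_on(X, i, n) for i in range(n))   # quantum (off-diagonal) term
    rho = expm(-H)
    rho /= np.trace(rho)
    return np.real(np.diag(rho))                         # p(x) over the 2**n bitstrings

probs = qbm_probs(biases=[0.3, -0.2, 0.1],
                  couplings={(0, 1): 0.5, (1, 2): -0.4},
                  gamma=0.7)
print(probs, probs.sum())   # a valid probability distribution over 8 outcomes
```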

October 28, 2024
SC24: The International Conference for High Performance Computing, Networking, Storage, and Analysis

Find the Quantinuum team at this year’s SC24 conference from November 17th – 22nd in Atlanta, Georgia. Meet our team at Booth #4351 to discover how Quantinuum is bridging the gap between quantum computing and high-performance compute with leading industry partners.

Schedule time to meet with us

The Quantinuum team will be participating in various events, panels, and poster sessions to showcase our quantum computing technologies. Join us at the sessions below:

Monday, Nov 18, 8:00 - 8:25pm, EST

Panel: KAUST booth 1031

Nash Palaniswamy, Quantinuum’s CCO, will join a panel discussion with quantum vendors and KAUST partners to discuss advancements in quantum technology.

Monday, Nov 18, 9:00 - 11:59pm, EST

Beowulf Bash: World of Coca-Cola

This year, we are proudly sponsoring the Beowulf Bash, an event organized to bring the HPC community together for a night of unique entertainment. Join us on Monday, November 18th, at 9:00pm at the World of Coca-Cola.

Wednesday, Nov 20, 3:30 – 5:00pm, EST

Panel: Educating for a Hybrid Future: Bridging the Gap between High-Performance and Quantum Computing

Vincent Anandraj, Quantinuum’s Director of Global Ecosystem and Strategic Alliances, will moderate this panel which brings together experts from leading supercomputing centers and the quantum computing industry, including PSC, Leibniz Supercomputing Centre, IQM Quantum Computers, NVIDIA, and National Research Foundation.

Thursday, Nov 21, 11:00 – 11:30am, EST 

Presentation: Realizing Quantum Kernel Models at Scale with Matrix Product State Simulation

Pablo Andres-Martinez, Research Scientist at Quantinuum, will present research done in collaboration with HSBC, where the team applied quantum methods to fraud detection.
