Quantinuum H-Series quantum computer accelerates through 3 more performance records for quantum volume

2¹⁷, 2¹⁸, and 2¹⁹

June 30, 2023

In the last six months, Quantinuum H-Series hardware has demonstrated explosive performance improvement. Quantinuum's System Model H1-1, Powered by Honeywell, has gone from a quantum volume (QV) of 2¹⁵ = 32,768, announced in February 2023, to 2¹⁹ = 524,288 today, with all of the details and data released on our GitHub repository for full transparency. At a quantum volume of 524,288, H1-1 is 1,000x higher than the next best reported quantum volume.

Figure 1: H-Series quantum volume improvement trajectory
Figure 2: Heavy output probability for the quantum volume data on H1-1 at (left) 2¹⁷, (center) 2¹⁸, and (right) 2¹⁹

We set a big goal back in 2020 when we launched our first quantum computer, H0. H0 launched with six qubits and a quantum volume of 2⁶ = 64, and at that time we made the bold and audacious commitment to increase the quantum volume of our commercial machines 10x per year for five years, equating to a quantum volume of 2²³ = 8,388,608 by the end of 2025. In an industry that is often accused of being over-hyped, a commitment like this was easy to forget. But we did not forget. Diligently, our scientists and engineers achieved world record after world record in a tireless and determined pursuit to systematically improve the overall performance of our quantum computers.

As Figure 1 shows, from 2020 to early 2023 we steadily increased the quantum volume, demonstrating that growing the qubit count while reducing errors translates directly into more computational power. Within 2023 alone we have announced multiple quantum volume improvements. In February we announced that H1-1 had leapfrogged 2¹⁴ and achieved a quantum volume of 2¹⁵. In May 2023, we launched H2-1 with 32 qubits at a quantum volume of 2¹⁶. Now we are thrilled to announce the sequential improvements of 2¹⁷, 2¹⁸, and 2¹⁹, all on H1-1.

Importantly, none of these results were "hero results": no special calibrations were made just to make the system look better. Our quantum volume data is taken on our commercial systems, interwoven with customer jobs, so what we measure is what our customers experience. Instead of improving at the 10x per year we committed to back in 2020, the pace of improvement over the past six months has been 30x, putting us at least one year ahead of our five-year commitment. While these demonstrations were made using H1-1, the similarities in the designs of H1-2 (now upgraded to 20 qubits) and H2-1, our recently released second-generation system, make it straightforward to transfer the improvements from one machine to another and achieve the same results.

In this young and rapidly evolving industry, there are and will be disagreements about which benchmarks are best to use. Quantum volume, developed by IBM, is undeniably rigorous: it can be measured on any gate-based machine, it has been peer-reviewed, and it comes with well-defined assumptions and procedures for making the measurements. Improvements in QV require consistent reductions in errors, making it likely that QV improvements translate to better performance no matter the application. In fact, realizing the exponential increase in power that quantum computers promise requires continuing to drive these error rates down. The average two-qubit gate error across these three new QV demonstrations was 0.13%, the best in the industry. We measure many benchmarks, but it is for these reasons that we have adopted quantum volume as our primary system-wide benchmark for reporting our performance.
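
For readers who want to see what the test involves, here is a minimal numpy sketch of the quantum volume protocol's logic: build random square circuits, compute the ideal "heavy" output set, then check the measured heavy-output probability against the 2/3 threshold. This is an illustration under simplifying assumptions, not Quantinuum's measurement code: the circuits are random pairings of qubits with Haar-random SU(4) blocks, and a noiseless sampler stands in for real hardware counts.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(dim):
    # Haar-random unitary via QR decomposition of a complex Gaussian matrix
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_two_qubit(state, gate, q0, q1, m):
    # Apply a 4x4 gate to qubits q0, q1 of an m-qubit statevector
    psi = np.moveaxis(state.reshape([2] * m), (q0, q1), (0, 1))
    psi = (gate @ psi.reshape(4, -1)).reshape([2, 2] + [2] * (m - 2))
    return np.moveaxis(psi, (0, 1), (q0, q1)).reshape(-1)

def qv_output_probs(m):
    # One QV circuit: m layers, each a random pairing of the qubits with
    # an independent Haar-random SU(4) block on every pair
    state = np.zeros(2 ** m, dtype=complex)
    state[0] = 1.0
    for _ in range(m):
        perm = rng.permutation(m)
        for i in range(0, m - 1, 2):
            state = apply_two_qubit(state, haar_unitary(4), perm[i], perm[i + 1], m)
    return np.abs(state) ** 2

m, n_circuits, shots = 4, 40, 100
hops = []
for _ in range(n_circuits):
    probs = qv_output_probs(m)
    heavy = np.where(probs > np.median(probs))[0]   # the "heavy" outputs
    # Noiseless stand-in for the device: sample the ideal distribution.
    samples = rng.choice(2 ** m, size=shots, p=probs / probs.sum())
    hops.append(np.isin(samples, heavy).mean())

hop, sigma = np.mean(hops), np.std(hops) / np.sqrt(n_circuits)
print(f"mean heavy-output probability = {hop:.3f}")
print(f"passes QV 2^{m} test: {hop - 2 * sigma > 2 / 3}")   # >2/3 at ~2 sigma
```

On real hardware, noise pushes the heavy-output probability down from its ideal asymptote of roughly 0.85 toward 0.5 (uniformly random output), which is why passing at 2¹⁹ demands consistently low error rates.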

Putting aside the argument over which benchmark is better, year-over-year improvements in a rigorous benchmark do not happen by accident. They happen only because the dedicated, talented scientists and engineers who work on H-Series hardware deeply understand its error model and how to reduce those errors to improve overall performance. Equally important, they have the domain expertise to dream up the improvements and then implement them. These validated error models become the bedrock of future systems' design, instilling confidence that those systems will have well-understood error models of their own, and that their performance can likewise be systematically improved until ultimate performance goals are achieved. Taking nothing away from those scientists and engineers, having perfect, identical qubits and employing our quantum charge-coupled device (QCCD) architecture does give us an advantage that other architectures and modalities do not have.

What should potential users of H-Series quantum computers take away from this write-up (and what do current users already know)?

  1. Quantinuum is committed to systematically improving the core performance of our quantum computing hardware. The better the fundamental performance, the lower the overhead will be when doing error mitigation, error detection, and ultimately error correction. This provides confidence in our ability to deliver fault-tolerant compute capabilities.
  2. Progress on your research, use case, or application can be accelerated by getting access to H-Series technology, because our quantum computers can run circuits that other technologies cannot. "It actually works!" exclaim excited first-time users.
  3. Quantinuum intends to continue to be the quantum computing company that quietly over-delivers, even on big goals.

1. https://github.com/CQCL/quantinuum-hardware-quantum-volume

2. https://quantum-journal.org/papers/q-2022-05-09-707/

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

Blog
November 15, 2024
A step forward for non-Abelian quantum computing

Our team is making progress on the path towards “non-Abelian” quantum computing, which promises both fault tolerance and significant resource savings.

Computing with non-Abelian anyons, a type of quasiparticle, is sought after because it offers an enticing alternative to some of the biggest challenges in mainstream quantum computing. Estimates vary, but some scientists have calculated that the trickiest parts, like T gates and magic state distillation, can consume up to 90% of a computer's resources when running something such as Shor's algorithm. The non-Abelian approach to quantum computing could mitigate this issue.

In a new paper written in collaboration with Harvard and Caltech, our team comes one step closer to realizing fault-tolerant non-Abelian quantum computing. The paper is a follow-up to our recent work published in Nature, where we demonstrated control of non-Abelian anyons. That demonstration marks a key step toward non-Abelian computing, and we are the only company to have achieved it. We are also the only company offering commercially available mid-circuit measurement and feed-forward capabilities, which will be vital as we advance our research in this area.

In this paper, our team prepared the ground state of the "Z3" toric code, meaning this special state of matter was prepared in a qutrit (3-state) Hilbert space. Until now, topological order had only been prepared in qubit (2-state) Hilbert spaces. This allowed the team to explore the effect of defects in the lattice (for the experts, these were the "parafermion" defect as well as the "charge-conjugation" defect). They then entangled two pairs of charge-conjugation defects, making a Bell pair.

All of these accomplishments are critical stepping stones towards the non-Abelian anyons of the "S3" toric code, the non-Abelian approach that promises the huge resource savings discussed above because, unlike most quantum error correction codes, it provides a universal gate set. The high-fidelity preparation our team accomplished in this paper suggests that we are very close to achieving a universal topological gate set, which would be an incredible "first" in the quantum computing community.

This work is another feather in our cap in quantum error correction (QEC) research, a field in which we are leaders. We recently demonstrated a significant reduction in circuit error rates in collaboration with Microsoft, we performed high-fidelity, fault-tolerant teleportation of logical qubits, and we independently demonstrated the first implementation of the quantum Fourier transform with error correction. We have surpassed the "break-even" point multiple times, most recently by entangling four logical qubits in a non-local code. This latest work in non-Abelian QEC is yet another crucial milestone for the community that we have rigorously passed before anyone else.

This world-class work is enabled by the native flexibility of our Quantum Charge Coupled Device (QCCD) architecture and its best-in-class fidelity. Our world-leading hardware, combined with our team of over 350 PhD scientists, means that we have the capacity to efficiently investigate a wide variety of error-correcting codes and fault-tolerant methods, while supporting our partners to do the same. Fault tolerance is one of the most critical challenges our industry faces, and we are proud to be leading the way towards large-scale, fault-tolerant quantum computing.

Blog
November 14, 2024
Making fault-tolerance a reality: Introducing our QEC decoder toolkit

We are thrilled to announce a groundbreaking addition to our technology suite: the Quantum Error Correction (QEC) decoder toolkit. This tool empowers users to decode syndromes and implement real-time corrections, an essential step towards achieving fault-tolerant quantum computing. As the only company offering this crucial capability to our users, we are paving the way for the future of quantum technology.

We are dedicated to realizing universal fault-tolerant quantum computing by the end of this decade. A key component of this mission is equipping our customers with essential QEC workflows, making advanced quantum computing more accessible than ever before.

Our QEC decoding toolkit is enabled by our real-time hybrid compute capability, which executes WebAssembly (Wasm) in our stack in both hardware and emulator environments. This enables the use of libraries (such as linear algebra and graph libraries) and complex data structures (such as vectors and graphs).

Our real-time hybrid compute capability opens a new frontier in classical-quantum hybrid computing. The release of the QEC decoder toolkit marks a maturing from just running quantum circuits to running full quantum algorithms that interact in depth with classical resources in real time, so that each platform (quantum, classical) is focused where it performs best.

QEC decoding is one of the most exciting, and necessary, applications of this hybrid compute capability. Until now, error correction had to be done with lookup tables: a list specifying the correction for each syndrome. This is not scalable, because the number of syndromes grows exponentially with the distance (roughly, the "error-correcting power") of the code. Codes hefty enough to run, say, Shor's algorithm would need a lookup table too massive to search or even store properly.
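
For concreteness, here is a minimal sketch, not the H-Series toolkit API, of lookup-table decoding for the 3-qubit bit-flip repetition code, followed by a rough illustration of the scaling problem (the stabilizer counts used below are the standard ones for a rotated surface code).

```python
# Stabilizers Z0Z1 and Z1Z2 give a 2-bit syndrome; each syndrome maps to
# the single-qubit X correction most likely to have caused it.
LOOKUP = {
    (0, 0): None,   # no error detected
    (1, 0): 0,      # flip on qubit 0
    (1, 1): 1,      # flip on qubit 1
    (0, 1): 2,      # flip on qubit 2
}

def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(bits):
    bits = list(bits)
    flip = LOOKUP[syndrome(bits)]
    if flip is not None:
        bits[flip] ^= 1
    return bits

assert decode([0, 1, 0]) == [0, 0, 0]   # single flip corrected

# Why this can't scale: a distance-d surface code has d^2 - 1 stabilizers,
# so the table needs about 2^(d^2 - 1) entries per measurement round.
for d in (3, 5, 7, 11):
    print(f"d={d}: about 2^{d * d - 1} = {2 ** (d * d - 1):.2e} syndromes per round")
```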

For universal fault-tolerant quantum computing to become a reality, we need to decode error syndromes algorithmically. During algorithmic decoding, the syndrome is sent to a classical computer which solves (for example) a graph problem to determine the correction to make.
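
To make "solves a graph problem" concrete, the following toy decoder handles one round of a distance-5 bit-flip repetition code by minimum-weight matching of syndrome "defects", using networkx. It is a sketch written for this post, not the toolkit's decoder: real decoders must also handle measurement errors, repeated rounds, and two-dimensional codes.

```python
import itertools
import networkx as nx

N = 5  # distance-5 repetition code: 5 data qubits, 4 parity checks

def mwpm_decode(syndrome):
    """Given 4 parity-check bits (syndrome[i] = parity of data qubits i, i+1),
    return the set of data qubits to flip."""
    defects = [i for i, s in enumerate(syndrome) if s]
    g = nx.Graph()
    # Pair real defects: cost = number of data-qubit flips between them.
    for a, b in itertools.combinations(defects, 2):
        g.add_edge(("d", a), ("d", b), weight=-(b - a))
    # Each defect also gets a virtual partner at its nearest code boundary...
    for a in defects:
        g.add_edge(("d", a), ("b", a), weight=-min(a + 1, N - 1 - a))
    # ...and virtual nodes can pair with each other at no cost.
    for a, b in itertools.combinations(defects, 2):
        g.add_edge(("b", a), ("b", b), weight=0)
    # Max-weight matching on negated costs = minimum-weight perfect matching.
    matching = nx.max_weight_matching(g, maxcardinality=True)
    flips = set()
    for u, v in matching:
        if u[0] == "b" and v[0] == "b":
            continue                           # two virtual nodes: no correction
        if u[0] == "d" and v[0] == "d":
            a, b = sorted((u[1], v[1]))
            flips ^= set(range(a + 1, b + 1))  # error chain between the defects
        else:                                  # defect matched to its boundary
            a = u[1] if u[0] == "d" else v[1]
            flips ^= set(range(0, a + 1)) if a + 1 <= N - 1 - a else set(range(a + 1, N))
    return flips

# A single flip on data qubit 1 fires checks 0 and 1; the decoder finds it.
print(mwpm_decode([1, 1, 0, 0]))   # -> {1}
```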

Algorithmic decoding is only half of the puzzle, though; the other key ingredient is the ability to decode syndromes and correct errors in real time. For universal, fully fault-tolerant computing, real-time decoding is necessary: one cannot simply push all corrections to the end of the computation, because the errors will propagate and overwhelm it. Errors must be corrected as the computation proceeds.

In real-time algorithmic decoding, the syndrome is sent to a classical computer while the qubits are held in stasis, and the computed correction is applied before the computation proceeds. Alternatively, the correction can be computed in parallel while the computation continues, then retrieved when needed. This flexibility in implementation allows for maximum freedom in algorithmic design.
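
In plain Python, the two patterns look roughly like this. This is a hypothetical control-flow sketch, not the H-Series stack: `decode` stands in for whatever classical routine the user supplies.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def decode(syndrome):
    """Stand-in for the classical decoding routine (e.g., a matching solver)."""
    time.sleep(0.01)                      # pretend the graph problem takes time
    return [i for i, s in enumerate(syndrome) if s]

syndrome = [0, 1, 0, 1]

# Pattern 1: hold the qubits in stasis until the correction is ready.
correction = decode(syndrome)             # blocking call, applied before proceeding

# Pattern 2: decode in parallel with the ongoing computation.
with ThreadPoolExecutor() as pool:
    future = pool.submit(decode, syndrome)
    # ... the quantum computation continues here ...
    correction = future.result()          # retrieved only once it is needed
```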

Our real-time co-compute capability, combined with our industry-leading coherence times (up to 10,000x longer than competitors'), is what allows us to be the first to release this capability to our customers. Our long coherence times also enable our users to benefit from more complex decoders that offer superior results.

Our QEC toolkit is fully flexible and will work with any QEC code, allowing our customers to build their own decoders and explore the results. It also enables users to better understand what fault-tolerant computing on actual hardware is like, and to improve on ideas developed via theory and simulation. This means building better decoders for the real world.

Our toolkit includes three use cases, along with the relevant source code needed to test and compile the Wasm binaries. These use cases are:

- Repeat Until Success: Conditionally adding quantum operations to a circuit based on equality comparisons with an in-memory Wasm variable (a minimal classical sketch of this pattern follows this list).

- Repetition Code: a [[3,1,2]] code that encodes 1 logical qubit into 3 physical qubits, with a code distance of 2.

- Surface Code: a [[9,1,3]] code that encodes 1 logical qubit into 9 physical qubits, with a code distance of 3.
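
As flagged in the first item, here is a minimal sketch of the repeat-until-success pattern using a toy one-qubit simulator in plain Python. The gate, the measurement routine, and the `target` variable are stand-ins for the circuit operations and the in-memory Wasm variable of the real use case; the point is the control flow of measure, compare, branch.

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def measure(state):
    """Projective Z measurement: returns the outcome and the collapsed state."""
    p1 = abs(state[1]) ** 2
    outcome = int(rng.random() < p1)
    post = np.zeros(2)
    post[outcome] = 1.0
    return outcome, post

target = 1        # stand-in for the in-memory classical (Wasm) variable
attempts = 0
while True:       # repeat until the measured value equals the target
    attempts += 1
    state = H @ np.array([1.0, 0.0])           # prepare |+>: outcome is 50/50
    outcome, state = measure(state)
    if outcome == target:                      # equality comparison drives the branch
        break
print(f"succeeded after {attempts} attempt(s)")
```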

This is just the beginning of delivering on our promise of universal, fault-tolerant quantum computing by the end of the decade. We are proud to be the only company offering advanced capabilities like this to our customers, and to be leading the way towards practical QEC.

Blog
November 4, 2024
Establishing Trust

For a novel technology to be successful, it must prove both that it is useful and that it works as described.

Checking that our computers "work as described" is what experts call benchmarking and verification. We are proud to be leaders in this field, with the most benchmarked quantum processors in the world. We work with national laboratories in various countries to develop new benchmarking techniques and standards, and our own team of experts leads the field in benchmarking and verification.

Currently, a lot of verification (i.e., checking that you got the right answer) is done by classical computers: most quantum processors can still be simulated classically. As we move towards quantum processors that are hard (or impossible) to simulate, a problem arises: how can we keep checking that our technology is working correctly without simulating it?
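
A minimal sketch of what classical verification looks like today, and why it breaks down: simulate the ideal output distribution, then compare it against the device's measured counts (here via total variation distance, with a hypothetical Bell-state example and made-up counts). The catch is that the ideal probability vector has 2ⁿ entries, which is exactly what becomes intractable as n grows.

```python
import numpy as np

def tvd(counts, ideal_probs, shots):
    """Total variation distance between empirical and ideal distributions."""
    emp = np.zeros_like(ideal_probs)
    for bitstring, c in counts.items():
        emp[int(bitstring, 2)] = c / shots
    return 0.5 * np.abs(emp - ideal_probs).sum()

# Ideal 2-qubit Bell state: uniform over 00 and 11.
ideal = np.array([0.5, 0.0, 0.0, 0.5])
measured = {"00": 498, "11": 502}          # hypothetical device counts
print(f"TVD = {tvd(measured, ideal, shots=1000):.4f}")   # near 0: "as described"
```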

We recently partnered with the UK’s Quantum Software Lab to develop a novel and scalable verification and benchmarking protocol that will help us as we make the transition to quantum processors that cannot be simulated.

This new protocol requires neither classical simulation nor the transfer of a qubit between two parties. The team's "on-chip" verification protocol eliminates the need for a physically separated verifier and makes no assumptions about the processor's noise. To top it all off, the new protocol is qubit-efficient.

The team's protocol is application-agnostic, benefiting all users. Further, the protocol is optimized for our QCCD hardware, meaning that we have a path towards verified quantum advantage: as we compute more things that cannot be classically simulated, we will be able to check that what we are doing is right.

Running the protocol on Quantinuum System Model H1, the team performed the largest verified measurement-based quantum computing (MBQC) circuit to date. This was enabled by System Model H1's low-crosstalk gate zones, mid-circuit measurement and reset, and long coherence times. By verifying computations significantly larger than any verified before, the team reaffirmed Quantinuum's systems as best-in-class.
