In a meaningful advance in an important area of industrial and real-world relevance, Quantinuum researchers have demonstrated a quantum algorithm capable of solving complex combinatorial optimization problems while making the most of available quantum resources.
Results on the new H2 quantum computer demonstrated a remarkable ability to solve combinatorial optimization problems with as few quantum resources as those employed by just one layer of the quantum approximate optimization algorithm (QAOA), the traditional workhorse of quantum heuristic algorithms.
Optimization problems are common in industry in contexts such as route planning, scheduling, cost optimization and logistics. However, as the number of variables increases and optimization problems grow larger and more complex, finding satisfactory solutions using classical algorithms becomes increasingly difficult.
Recent research suggests that certain quantum algorithms might be capable of solving combinatorial optimization problems better than classical algorithms. The realization of such quantum algorithms can therefore potentially increase the efficiency of industrial processes.
However, the effectiveness of these algorithms on near-term quantum devices and even on future generations of more capable quantum computers presents a technical challenge: quantum resources will need to be reduced as much as possible in order to protect the quantum algorithm from the unavoidable effects of quantum noise.
Sebastian Leontica and Dr. David Amaro, a senior research scientist at Quantinuum, explain their advances in a new paper, “Exploring the neighborhood of 1-layer QAOA with Instantaneous Quantum Polynomial circuits”, published on arXiv. This is one of several papers published at the launch of Quantinuum’s H2 that highlight the unparalleled power of the newest generation of the H-Series, Powered by Honeywell.
“We should strive to use as few quantum resources as possible no matter how good a quantum computer we are operating on, which means using the smallest possible number of qubits that fit within the problem size and a circuit that is as shallow as possible,” Dr. Amaro said. “Our algorithm uses the fewest possible resources and still achieves good performance.”
The researchers use a parameterized instantaneous quantum polynomial (IQP) circuit of the same depth as 1-layer QAOA to incorporate corrections that would otherwise require multiple layers. Another differentiating feature of the algorithm is that the parameters in the IQP circuit can be trained efficiently on a classical computer, avoiding some of the training issues of other algorithms such as QAOA. Critically, the circuit takes full advantage of features available on Quantinuum’s devices, including parameterized two-qubit gates, all-to-all connectivity, and high-fidelity operations.
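As a rough illustration of the circuit family described above (not the authors’ exact construction or training procedure), the sketch below builds a fully connected, depth-1 IQP-style circuit in Qiskit: a layer of Hadamards, a parameterized ZZ rotation on every qubit pair, single-qubit Z rotations, and a final Hadamard layer before measurement. The angle values are placeholders standing in for parameters that would come from classical training or a 1-layer QAOA warm start.

```python
# Minimal sketch of a fully connected, depth-1 IQP-style circuit (illustrative only).
from itertools import combinations
import numpy as np
from qiskit import QuantumCircuit

def iqp_circuit(n, pair_angles, single_angles):
    """Build an IQP-style ansatz: H layer, diagonal ZZ/Z rotations, H layer, measure."""
    qc = QuantumCircuit(n)
    qc.h(range(n))                                # prepare |+>^n
    for (i, j) in combinations(range(n), 2):      # up to n(n-1)/2 two-qubit gates
        qc.rzz(2 * pair_angles[(i, j)], i, j)
    for i in range(n):
        qc.rz(2 * single_angles[i], i)
    qc.h(range(n))                                # rotate back before sampling bit strings
    qc.measure_all()
    return qc

n = 6
rng = np.random.default_rng(0)
pair_angles = {(i, j): rng.normal() for (i, j) in combinations(range(n), 2)}
single_angles = rng.normal(size=n)
print(iqp_circuit(n, pair_angles, single_angles))
```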
“Our numerical simulations and experiments on the new H2 quantum computer at small scale indicate that this heuristic algorithm, compared to 1-layer QAOA, is expected to amplify the probability of sampling good or even optimal solutions of large optimization problems,” Dr. Amaro said. “We now want to understand how the solution quality and runtime of our algorithm compares to the best classical algorithms.”
This algorithm will be useful for current quantum computers as well as larger machines farther along the Quantinuum hardware roadmap.
The goal of this project was to provide a quantum heuristic algorithm for combinatorial optimization that returns better solutions for optimization problems and uses fewer quantum resources than state-of-the-art quantum heuristics. The researchers used a fully connected parameterized IQP circuit, warm-started from 1-layer QAOA. For a problem with n binary variables, the circuit contained up to n(n-1)/2 two-qubit gates and the researchers employed only 2^(0.32n) shots.
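For a feel of what those resource counts mean in practice, here is a quick back-of-the-envelope check using the scaling constants quoted in this article; the specific problem sizes are chosen only for illustration.

```python
# Resource counts quoted above, evaluated for a few problem sizes.
for n in (20, 30, 32):
    two_qubit_gates = n * (n - 1) // 2      # fully connected IQP ansatz
    shots = 2 ** (0.32 * n)                 # shot budget quoted above
    print(f"n={n}: up to {two_qubit_gates} two-qubit gates, ~{shots:.0f} shots")
```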
The algorithm showed improved performance on the Sherrington-Kirkpatrick (SK) optimization problem compared to 1-layer QAOA. Numerical simulations showed that the average runtime to find the optimal solution scales as 2^(0.31n), compared to 2^(0.5n) for 1-layer QAOA.
Experimental results on our new H2 quantum computer and emulator confirmed that the new optimization algorithm outperforms 1-layer QAOA and reliably solves complex optimization problems. The optimal solution was found for 136 out of 312 instances, four of which were for the maximum size of 32 qubits. A 30-qubit instance was solved optimally on the H2 device, which means, remarkably, that at least one of the 776 shots measured after performing 432 two-qubit gates corresponds to the unique optimal solution in the huge set of 2^30 > 10^9 candidate solutions.
These results indicate that the algorithm, in combination with H2 hardware, is capable of solving hard optimization problems using minimal quantum resources in the presence of real hardware noise.
Quantinuum researchers expect that these promising results at small scale will encourage the further study of new quantum heuristic algorithms at the relevant scale for real-world optimization problems, which requires a better understanding of their performance under realistic conditions.
Numerical simulations of 256 random SK instances for each problem size from 4 to 29 qubits. Graph A shows the probability of sampling the optimal solution from the IQP circuit, which is 2^(-0.31n) on average. Graph B shows the enhancement factor compared to 1-layer QAOA, which is 2^(0.23n) on average. These results indicate that Quantinuum’s algorithm has a significantly better runtime than 1-layer QAOA.
Our team is making progress on the path towards “non-Abelian” quantum computing, which promises both fault tolerance and significant resource savings.
Computing with non-Abelian anyons, which are a type of quasiparticle, is sought after because it offers an enticing alternative to some of the biggest challenges in mainstream quantum computing. Estimates vary, but some scientists have calculated that the trickiest parts, such as T gates and magic state distillation, can take up to 90% of the computer’s resources when running something like Shor’s algorithm. The non-Abelian approach to quantum computing could mitigate this issue.
In a new paper in collaboration with Harvard and Caltech, our team is one step closer to realizing fault-tolerant non-Abelian quantum computing. This paper is a follow-up to our recent work published in Nature, where we demonstrated control of non-Abelian anyons. This marks a key step toward non-Abelian computing, and we are the only company that has achieved it. Additionally, we are the only company offering commercially available mid-circuit measurement and feed-forward capabilities, which will be vital as we advance our research in this area.
In this paper, our team prepared the ground state of the “Z3” toric code, meaning this special state of matter was prepared in a qutrit (3-state) Hilbert space. Until now, topological order had only been prepared in qubit (2-state) Hilbert spaces. This allowed the team to explore the effect of defects in the lattice (for the experts, these were the “parafermion” defect and the “charge-conjugation” defect). They then entangled two pairs of charge-conjugation defects, making a Bell pair.
All these accomplishments are critical stepping stones towards the non-Abelian anyons of the “S3” toric code, the non-Abelian approach that promises the huge resource savings discussed above because it (unlike most quantum error correction codes) provides a universal gate set. The high-fidelity preparation our team accomplished in this paper suggests that we are very close to achieving a universal topological gate set, which will be an incredible “first” in the quantum computing community.
This work is another feather in our cap in terms of quantum error correction (QEC) research, a field in which we are leaders. We recently demonstrated a significant reduction in circuit error rates in collaboration with Microsoft, we performed high-fidelity, fault-tolerant teleportation of logical qubits, and we independently demonstrated the first implementation of the Quantum Fourier Transform with error correction. We have surpassed the “break-even” point multiple times, most recently by entangling 4 logical qubits in a non-local code. This latest work in non-Abelian QEC is yet another crucial community milestone that we have rigorously passed before anyone else.
This world-class work is enabled by the native flexibility of our Quantum Charge Coupled Device (QCCD) architecture and its best-in-class fidelity. Our world-leading hardware combined with our team of over 350 PhD scientists means that we have the capacity to efficiently investigate a large variety of error correcting codes and fault-tolerant methods, while supporting our partners to do the same. Fault tolerance is one of the most critical challenges our industry faces, and we are proud to be leading the way towards large scale, fault-tolerant quantum computing.
We are thrilled to announce a groundbreaking addition to our technology suite: the Quantum Error Correction (QEC) decoder toolkit. This toolkit empowers users to decode syndromes and implement real-time corrections, an essential step towards achieving fault-tolerant quantum computing. As the only company offering this crucial capability to our users, we are paving the way for the future of quantum technology.
We are dedicated to realizing universal fault-tolerant quantum computing by the end of this decade. A key component of this mission is equipping our customers with essential QEC workflows, making advanced quantum computing more accessible than ever before.
Our QEC decoding toolkit is enabled by our real-time hybrid compute capability, which executes WebAssembly (Wasm) in our stack in both hardware and emulator environments. This enables the use of libraries (such as linear algebra and graph libraries) and complex data structures (such as vectors and graphs).
Our real-time hybrid compute capability opens a new frontier in classical-quantum hybrid computing. The release of the QEC decoder toolkit marks a step beyond simply running quantum circuits to running full quantum algorithms that interact deeply with classical resources in real time, so that each platform (quantum or classical) is used where it performs best.
QEC decoding is one of the most exciting – and necessary – applications of hybrid computing capacity. Before now, error correction needed to be done with lookup tables: a list specifying the correction for each syndrome. This is not scalable: the number of syndromes grows exponentially with the distance (which is like the “error correcting power”) of the code. This means that codes hefty enough to run, say, Shor’s algorithm would need a lookup table too massive to search or even store properly.
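To make that scaling problem concrete: a distance-d bit-flip repetition code has d-1 parity checks, so a lookup table needs an entry for each of the 2^(d-1) possible syndromes, while denser codes such as the surface code, with roughly d^2-1 checks, blow up far faster. The short calculation below simply illustrates that growth.

```python
# How a syndrome lookup table grows with code distance (bit-flip repetition code
# has d-1 checks; a distance-d surface code has roughly d*d - 1 checks).
for d in (3, 5, 11, 25):
    rep_entries = 2 ** (d - 1)
    print(f"d={d}: repetition table {rep_entries:,} entries, "
          f"surface table ~2^{d * d - 1} entries")
```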
For universal fault-tolerant quantum computing to become a reality, we need to decode error syndromes algorithmically. During algorithmic decoding, the syndrome is sent to a classical computer which solves (for example) a graph problem to determine the correction to make.
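As a toy illustration of algorithmic decoding (not Quantinuum’s decoder), the sketch below takes a parity-check matrix and a measured syndrome and searches for the lowest-weight error consistent with it. Real decoders replace the brute-force search with an efficient graph algorithm such as minimum-weight perfect matching, but the contract is the same: syndrome in, correction out.

```python
# Toy algorithmic decoder: find the lowest-weight error that reproduces the syndrome.
from itertools import combinations
import numpy as np

def decode(H, syndrome):
    """Brute-force minimum-weight decoding over GF(2); fine for toy codes only."""
    n = H.shape[1]
    for weight in range(n + 1):
        for flips in combinations(range(n), weight):
            error = np.zeros(n, dtype=int)
            error[list(flips)] = 1
            if np.array_equal(H @ error % 2, syndrome):
                return error                      # correction to apply
    raise ValueError("no error matches the syndrome")

# 3-qubit bit-flip repetition code: parity checks on qubits (0,1) and (1,2).
H = np.array([[1, 1, 0],
              [0, 1, 1]])
print(decode(H, np.array([1, 0])))  # -> [1 0 0], i.e. flip qubit 0
```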
Algorithmic decoding is only half of the puzzle though – the other key ingredient is being able to decode syndromes and correct errors in real time. For universal, fully fault-tolerant computing, real-time decoding is necessary: one can’t just push all corrections to the end of the computation because the errors will propagate and overwhelm the computation. Errors must be corrected as the computation proceeds.
In real-time algorithmic decoding, the syndrome is sent to a classical computer while the qubits are held in stasis, and the computed correction is applied before the computation proceeds. Alternatively, the correction can be computed in parallel while the computation proceeds, and retrieved when needed. This flexibility in implementation allows for maximum freedom in algorithmic design.
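The two scheduling options described above can be mocked up purely classically. In the blocking version the program waits for the decoder before continuing; in the parallel version the decoding runs on a separate thread while independent work proceeds, and the correction is collected just before it is needed. This is only an illustration of the control flow, not Quantinuum’s stack.

```python
# Illustrative control flow for real-time decoding: blocking vs. parallel.
from concurrent.futures import ThreadPoolExecutor
import time

def run_decoder(syndrome):
    time.sleep(0.01)                     # stand-in for the classical decoding work
    return list(syndrome)                # stand-in correction

syndrome = [1, 0]

# Option 1: hold the qubits and block until the correction is ready.
correction = run_decoder(syndrome)
# ... apply `correction`, then continue the quantum computation ...

# Option 2: decode in parallel with independent work, collect the result when needed.
with ThreadPoolExecutor() as pool:
    future = pool.submit(run_decoder, syndrome)
    # ... schedule operations that do not depend on the correction ...
    correction = future.result()         # retrieve just before it is required
```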
Our real-time co-compute capability combined with our industry-leading coherence times (up to 10,000x longer than competitors) is what allows us to be the first to release this capacity to our customers. Our long coherence times also enable our users to benefit from more complex decoders that offer superior results.
Our QEC toolkit is fully flexible and will work with any QEC code - allowing our customers to build their own decoders and explore the results. It also enables users to better understand what fault-tolerant computing on actual hardware is like and improve on ideas developed via theory and simulation. This means building better decoders for the real world.
Our toolkit ships with three use cases, along with the source code needed to test and compile the Wasm binaries. These use cases are:
- Repeat Until Success: Conditionally adding quantum operations to a circuit based on equality comparisons with an in-memory Wasm variable.
- Repetition Code: a [[3,1,2]] code that encodes 1 logical qubit into 3 physical qubits, with a code distance of 2 (a classical mock-up of one round of this code follows the list).
- Surface Code: a [[9,1,3]] code that encodes 1 logical qubit into 9 physical qubits, with a code distance of 3.
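As promised above, here is a purely classical mock-up of one round of the repetition-code use case: encode a logical bit across three "qubits", inject a single bit-flip, extract the two parity-check syndromes, decode them with a majority vote, and apply the correction. On hardware the measurements and correction happen mid-circuit, with the decoding delegated to a Wasm function; every name below is illustrative.

```python
# Classical mock-up of one error-correction round of the 3-bit repetition code.
import random

def extract_syndrome(bits):
    """Two parity checks: qubit 0 vs qubit 1, and qubit 1 vs qubit 2."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correction_from_syndrome(syndrome):
    """Majority-vote decoding: each non-trivial syndrome points at one flipped bit."""
    table = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    return table[syndrome]

logical = 1
bits = [logical] * 3                      # encode the logical bit
bits[random.randrange(3)] ^= 1            # inject a single bit-flip error
flip = correction_from_syndrome(extract_syndrome(bits))
if flip is not None:
    bits[flip] ^= 1                       # apply the correction
assert bits == [logical] * 3
print("recovered logical bit:", bits[0])
```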
This is just the beginning of our promise to deliver universal, fault-tolerant quantum computing by the end of the decade. We are proud to be the only company offering advanced capabilities like this to our customers, and to be leading the way towards practical QEC.
For a novel technology to be successful, it must prove both that it is useful and that it works as described.
Checking that our computers “work as described” is called benchmarking and verification by the experts. We are proud to be leaders in this field, with the most benchmarked quantum processors in the world. We also work with National Laboratories in various countries to develop new benchmarking techniques and standards. Additionally, we have our own team of experts leading the field in benchmarking and verification.
Currently, a lot of verification (i.e. checking that you got the right answer) is done by classical computers – most quantum processors can still be simulated by a classical computer. As we move towards quantum processors that are hard (or impossible) to simulate, this introduces a problem: how can we keep checking that our technology is working correctly without simulating it?
We recently partnered with the UK’s Quantum Software Lab to develop a novel and scalable verification and benchmarking protocol that will help us as we make the transition to quantum processors that cannot be simulated.
This new protocol does not require classical simulation, or the transfer of a qubit between two parties. The team’s “on-chip” verification protocol eliminates the need for a physically separated verifier and makes no assumptions about the processor’s noise. To top it all off, this new protocol is qubit-efficient.
The team’s protocol is application-agnostic, benefiting all users. Further, the protocol is optimized for our QCCD hardware, meaning that we have a path towards verified quantum advantage: as we compute more things that cannot be classically simulated, we will be able to check that what we are doing is right.
Running the protocol on Quantinuum System Model H1, the team performed the largest verified Measurement-Based Quantum Computing (MBQC) circuit to date. This was enabled by System Model H1’s low cross-talk gate zones, mid-circuit measurement and reset, and long coherence times. By performing and verifying a computation significantly larger than any verified before, we reaffirm Quantinuum’s systems as best-in-class.