The research team behind Quantinuum's state-of-the-art quantum computational chemistry platform, InQuanto, has demonstrated a new method that makes more efficient use of today's "noisy" quantum computers for simulating chemical systems.
In a new paper, “Variational Phase Estimation with Variational Fast Forwarding”, published on the arXiv, a team led by Nathan Fitzpatrick and co-authors Maria-Andreea Filip and David Muñoz Ramo, explored different methods and the trade-offs required to achieve results on near-term quantum hardware. The paper also assesses the hardware requirements for the proposed method.
Starting with the recently published Variational Quantum Phase Estimation (VQPE) algorithm, commonly used to calculate molecular ground-state and excited-state energies, the team combined it with variational fast-forwarding (VFF) to reduce the quantum circuit depth required to achieve good results.
The demonstration made use of a Krylov subspace diagonalization algorithm, a low-cost alternative to the traditional quantum phase estimation algorithm for estimating both the ground- and excited-state energies of a quantum many-body system. The Krylov method uses time evolution to generate the subspace used in the algorithm, and implementing that time evolution on hardware can be very expensive in terms of gate depth. The new method is less expensive, making the circuit depth required to achieve good results manageable.
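Schematically, the Krylov approach builds a small subspace from time-evolved copies of a reference state and then solves a small generalized eigenvalue problem on a classical computer. In standard notation (a textbook presentation of the idea, not notation taken from the paper itself):

$$
|\psi_k\rangle = e^{-i\hat{H}k\Delta t}\,|\psi_0\rangle, \qquad
H_{jk} = \langle\psi_j|\hat{H}|\psi_k\rangle, \qquad
S_{jk} = \langle\psi_j|\psi_k\rangle, \qquad
H\mathbf{c} = E\,S\,\mathbf{c}
$$

The quantum computer estimates the matrix elements $H_{jk}$ and $S_{jk}$; the eigenvalues $E$ of the small generalized problem then approximate the ground- and excited-state energies.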
The team decreased the circuit depth by using VFF, a hybrid classical-quantum algorithm that provides an approximation to time evolution, allowing VQPE to be applied with linear cost in the number of time-evolved states. Introducing VFF allows the time-evolved states to be expressed at a lower, fixed depth, so the quantum computing resources required to run the algorithm are drastically reduced.
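As a purely numerical illustration of why fast-forwarding helps (a toy NumPy/SciPy sketch of ours, not the hardware implementation from the paper): if a variational circuit W approximately diagonalizes the Hamiltonian, H ≈ W D W†, then e^{-iHt} ≈ W e^{-iDt} W†, so every evolution time costs the same fixed circuit depth, and the Krylov matrices can be built from states of constant cost.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-qubit "molecular" Hamiltonian: a random 4x4 Hermitian matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Stand-in for VFF: here we diagonalize exactly; on hardware, W would be a
# fixed-depth circuit found variationally, and D a set of phases.
D, W = np.linalg.eigh(H)

def evolve(psi, t):
    # e^{-iHt}|psi> ~ W e^{-iDt} W^dag |psi>: same circuit depth for any t.
    return W @ (np.exp(-1j * D * t) * (W.conj().T @ psi))

# Build the VQPE (Krylov) matrices from a few time-evolved states.
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)
states = [evolve(psi0, 0.5 * k) for k in range(4)]
S = np.array([[u.conj() @ v for v in states] for u in states])
Hk = np.array([[u.conj() @ (H @ v) for v in states] for u in states])

# Solve the generalized eigenvalue problem H c = E S c classically.
energies = eigh(Hk, S, eigvals_only=True)
print("Krylov ground-state estimate:", energies[0])
print("Exact ground-state energy:  ", D[0])
```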
This new approach resulted in a circuit with a depth of 57 gates for the H2 molecule, of which 24 are CNOTs. This is a significant improvement on the original Trotterized time-evolution implementation, particularly as the depth of this circuit remains constant for any number of steps, whereas the original Trotterized circuit required 34 CNOTs per step, with a large number of steps needed for high accuracy.
The techniques demonstrated in this paper will be of interest to quantum chemists seeking near-term results in fields such as excited-state quantum chemistry and strongly correlated materials.
The trade-off in this use of VFF is that the time evolution, and hence the results, are approximate. Improving this approximation will be an area for future research.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Our team is making progress on the path towards “non-Abelian” quantum computing, which promises both fault tolerance and significant resource savings.
Computing with non-Abelian anyons, a type of quasiparticle, is sought after because it offers an enticing way around some of the biggest challenges in mainstream quantum computing. Estimates vary, but some scientists have calculated that the trickiest parts, like T gates and magic state distillation, can consume up to 90% of the computer's resources when running something such as Shor's algorithm. The non-Abelian approach to quantum computing could mitigate this issue.
In a new paper in collaboration with Harvard and Caltech, our team comes one step closer to realizing fault-tolerant non-Abelian quantum computing. This paper is a follow-up to our recent work published in Nature, where we demonstrated control of non-Abelian anyons. That result marked a key step toward non-Abelian computing, and we are the only company that has achieved it. Additionally, we are the only company offering commercially available mid-circuit measurement and feed-forward capabilities, which will be vital as we advance our research in this area.
In this paper, our team prepared the ground state of the "Z3" toric code, meaning this special state of matter was prepared in a qutrit (3-state) Hilbert space. Until now, topological order had only been prepared in qubit (2-state) Hilbert spaces. This allowed the team to explore the effect of defects in the lattice (for the experts, these were the "parafermion" defect and the "charge-conjugation" defect). They then entangled two pairs of charge-conjugation defects, making a Bell pair.
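For readers who want a concrete feel for the qutrit setting, here is a small numerical sketch (illustrative only, not the experiment itself): the Z3 toric code is built from generalized "clock and shift" Pauli operators whose commutation picks up a third root of unity rather than a minus sign.

```python
import numpy as np

# Generalized Pauli ("clock and shift") operators on a single qutrit.
omega = np.exp(2j * np.pi / 3)     # third root of unity
X = np.roll(np.eye(3), 1, axis=0)  # shift: X|j> = |j+1 mod 3>
Z = np.diag([1, omega, omega**2])  # clock: Z|j> = omega^j |j>

# The qubit relation ZX = -XZ becomes ZX = omega * XZ in the Z3 setting;
# Z3 toric-code stabilizers are tensor products of these operators.
assert np.allclose(Z @ X, omega * (X @ Z))
print("X^3 = I:", np.allclose(np.linalg.matrix_power(X, 3), np.eye(3)))
print("Z^3 = I:", np.allclose(np.linalg.matrix_power(Z, 3), np.eye(3)))
```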
All these accomplishments are critical stepping stones towards the non-Abelian anyons of the "S3" toric code, the non-Abelian approach that promises the huge resource savings discussed above because it (unlike most quantum error correction codes) provides a universal gate set. The high-fidelity preparation our team accomplished in this paper suggests that we are very close to achieving a universal topological gate set, which would be an incredible "first" in the quantum computing community.
This work is another feather in our cap in quantum error correction (QEC) research, a field in which we are leaders. We recently demonstrated a significant reduction in circuit error rates in collaboration with Microsoft, we performed high-fidelity, fault-tolerant teleportation of logical qubits, and we independently demonstrated the first implementation of the Quantum Fourier Transform with error correction. We've surpassed the "break-even" point multiple times, most recently by entangling 4 logical qubits in a non-local code. This latest work in non-Abelian QEC is yet another crucial milestone for the community that we have rigorously passed before anyone else.
This world-class work is enabled by the native flexibility of our Quantum Charge Coupled Device (QCCD) architecture and its best-in-class fidelity. Our world-leading hardware combined with our team of over 350 PhD scientists means that we have the capacity to efficiently investigate a large variety of error correcting codes and fault-tolerant methods, while supporting our partners to do the same. Fault tolerance is one of the most critical challenges our industry faces, and we are proud to be leading the way towards large scale, fault-tolerant quantum computing.
We are thrilled to announce a groundbreaking addition to our technology suite: the Quantum Error Correction (QEC) decoder toolkit. This toolkit empowers users to decode syndromes and implement real-time corrections, an essential step towards achieving fault-tolerant quantum computing. As the only company offering this crucial capability to our users, we are paving the way for the future of quantum technology.
We are dedicated to realizing universal fault-tolerant quantum computing by the end of this decade. A key component of this mission is equipping our customers with essential QEC workflows, making advanced quantum computing more accessible than ever before.
Our QEC decoding toolkit is enabled by our real-time hybrid compute capability, which executes WebAssembly (Wasm) in our stack in both hardware and emulator environments. This enables the use of libraries (like linear algebra and graph libraries) and complex data structures (like vectors and graphs).
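As a sketch of how this looks with pytket's Wasm interface (the file name decoder.wasm, the register sizes, and the exported function name decode are hypothetical placeholders for a user-supplied module):

```python
from pytket.circuit import Circuit
from pytket.wasm import WasmFileHandler

# Load a compiled Wasm module; "decoder.wasm" and its exported "decode"
# function are placeholders for a user-supplied decoder.
wasm = WasmFileHandler("decoder.wasm")

circ = Circuit(3)
syndrome = circ.add_c_register("syndrome", 2)      # measured stabilizer bits
correction = circ.add_c_register("correction", 2)  # decoder output

# ... stabilizer measurements writing into `syndrome` would go here ...

# Call into Wasm mid-circuit: syndrome register in, correction register out.
circ.add_wasm_to_reg("decode", wasm, [syndrome], [correction])
```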
Our real-time hybrid compute capability opens a new frontier in classical-quantum hybrid computing. The release of the QEC decoder toolkit marks a maturation from simply running quantum circuits to running full quantum algorithms that interact deeply with classical resources in real time, so that each platform (quantum, classical) can be focused where it performs best.
QEC decoding is one of the most exciting – and necessary – applications of this hybrid compute capability. Until now, error correction had to be done with lookup tables: a list specifying the correction for each syndrome. This is not scalable: the number of syndromes grows exponentially with the distance (roughly, the "error-correcting power") of the code. This means that codes hefty enough to run, say, Shor's algorithm would need a lookup table too massive to search, or even store, properly.
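To see the problem concretely, here is a toy illustration (ours, not from the toolkit): a 3-qubit repetition code needs only a 4-entry table, but the table size doubles with every added syndrome bit.

```python
# Lookup-table decoding for the 3-qubit repetition code: 2 syndrome bits
# (from stabilizers Z0Z1 and Z1Z2), so only 4 entries. Each syndrome maps
# to the qubit index to flip (or None for "no error detected").
lookup = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip qubit 0
    (1, 1): 1,     # flip qubit 1
    (0, 1): 2,     # flip qubit 2
}

# The table has 2**s entries for s syndrome bits; s grows with code distance
# and block size, so the table quickly becomes impossible to store or search.
for s in (2, 24, 48):
    print(f"{s} syndrome bits -> {2**s:,} table entries")
```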
For universal fault-tolerant quantum computing to become a reality, we need to decode error syndromes algorithmically. During algorithmic decoding, the syndrome is sent to a classical computer which solves (for example) a graph problem to determine the correction to make.
Algorithmic decoding is only half of the puzzle though – the other key ingredient is being able to decode syndromes and correct errors in real time. For universal, fully fault-tolerant computing, real-time decoding is necessary: one can’t just push all corrections to the end of the computation because the errors will propagate and overwhelm the computation. Errors must be corrected as the computation proceeds.
In real-time algorithmic decoding, the syndrome is sent to a classical computer while the qubits are held in stasis, and the computed correction is applied before the computation proceeds. Alternatively, the correction can be computed in parallel while the computation proceeds, and retrieved when needed. This flexibility in implementation allows for maximum freedom in algorithmic design.
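Here is a minimal, self-contained simulation of that loop for the bit-flip repetition code, with plain Python standing in for both the quantum hardware and the Wasm decoder (illustrative only, not the toolkit's actual interface):

```python
import random

def measure_syndrome(qubits):
    # Stabilizers Z0Z1 and Z1Z2 of the 3-qubit repetition code.
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def decode(syndrome):
    # Algorithmic decoder: map the syndrome to the qubit to flip.
    return {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome)

qubits = [0, 0, 0]                       # logical |0> encoded as 000
for step in range(5):                    # computation proceeds in rounds
    if random.random() < 0.2:            # noise: an occasional bit flip
        qubits[random.randrange(3)] ^= 1
    syndrome = measure_syndrome(qubits)  # qubits "held in stasis" here
    fix = decode(syndrome)               # classical co-compute
    if fix is not None:
        qubits[fix] ^= 1                 # apply correction before proceeding
    assert qubits == [0, 0, 0]           # logical state preserved each round
print("5 rounds decoded in real time; logical state intact")
```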
Our real-time co-compute capability, combined with our industry-leading coherence times (up to 10,000x longer than competitors'), is what allows us to be the first to release this capability to our customers. Our long coherence times also enable our users to benefit from more complex decoders that offer superior results.
Our QEC toolkit is fully flexible and will work with any QEC code, allowing our customers to build their own decoders and explore the results. It also enables users to better understand what fault-tolerant computing on actual hardware is like, and to improve on ideas developed via theory and simulation. This means building better decoders for the real world.
Our toolkit ships with three use cases, along with the source code needed to test and compile the Wasm binaries. These use cases are:
- Repeat Until Success: Conditionally adding quantum operations to a circuit based on equality comparisons with an in-memory Wasm variable (see the sketch after this list).
- Repetition Code: a [[3,1,2]] code that encodes 1 logical qubit into 3 physical qubits, with a code distance of 2.
- Surface Code: a [[9,1,3]] code that encodes 1 logical qubit into 9 physical qubits, with a code distance of 3.
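For the first use case, the conditional branching looks roughly like the following pytket sketch (the register name flag and the condition value are hypothetical; in the toolkit, the register would be populated by a Wasm call):

```python
from pytket.circuit import Circuit
from pytket.circuit.logic_exp import reg_eq

circ = Circuit(1)
flag = circ.add_c_register("flag", 2)  # in practice, written by a Wasm call

# Apply an X gate only when the classical register equals 1: the
# repeat-until-success pattern branches on an in-memory value like this.
circ.X(0, condition=reg_eq(flag, 1))
```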
This is just the first step in delivering on our promise of universal, fault-tolerant quantum computing by the end of the decade. We are proud to be the only company offering advanced capabilities like this to our customers, and to be leading the way towards practical QEC.
For a novel technology to be successful, it must prove both that it is useful and that it works as described.
Checking that our computers "work as described" is what experts call benchmarking and verification. We are proud to be leaders in this field, with the most benchmarked quantum processors in the world and our own team of experts leading the field. We also work with national laboratories in various countries to develop new benchmarking techniques and standards.
Currently, a lot of verification (i.e. checking that you got the right answer) is done by classical computers – most quantum processors can still be simulated by a classical computer. As we move towards quantum processors that are hard (or impossible) to simulate, this introduces a problem: how can we keep checking that our technology is working correctly without simulating it?
We recently partnered with the UK’s Quantum Software Lab to develop a novel and scalable verification and benchmarking protocol that will help us as we make the transition to quantum processors that cannot be simulated.
This new protocol requires neither classical simulation nor the transfer of qubits between two parties. The team's "on-chip" verification protocol eliminates the need for a physically separated verifier and makes no assumptions about the processor's noise. To top it all off, the protocol is qubit-efficient.
The team's protocol is application-agnostic, benefiting all users. Further, the protocol is optimized for our QCCD hardware, meaning that we have a path towards verified quantum advantage: as we compute more things that cannot be classically simulated, we will be able to check that what we are doing is right.
Running the protocol on Quantinuum System Model H1, the team performed the largest verified Measurement Based Quantum Computing (MBQC) computation to date. This was enabled by the H1's low cross-talk gate zones, mid-circuit measurement and reset, and long coherence times. By verifying computations significantly larger than any verified before, we reaffirm Quantinuum's systems as best-in-class.