In a series of recent technical papers, Quantinuum researchers demonstrated the world-leading capabilities of the latest H-Series quantum computers, and the features and tools that make these accessible to our global customers and users.
Our teams used the H-Series quantum computers to directly measure and control non-abelian topological states of matter [1] for the first time, explore new ways to solve combinatorial optimization problems more efficiently [2], simulate molecular systems using logical qubits with error detection [3], probe critical states of matter [4], as well as exhaustively benchmark our very latest system [5].
Part of what makes such rapid technical and scientific progress possible is the effort our teams continually make to develop and improve workflow tools, helping our users to achieve successful results. In this blog post, we will explore the capabilities of three new tools in some detail, discuss their significance, and highlight their impact in recent quantum computing research.
“Leakage” is a quantum error process in which a qubit ends up in a state outside the computational subspace, and it can significantly impact quantum computations. To address this issue, Quantinuum has developed a leakage detection gadget in pyTKET, the Python module for interfacing with TKET, our quantum computing toolkit and optimizing compiler. The gadget, presented at the 2022 IEEE International Conference on Quantum Computing and Engineering (QCE) [6], acts as an error detection technique: it flags and excludes results affected by leakage, minimizing its impact on computations. It is also a valuable tool for measuring single-qubit and two-qubit spontaneous emission rates. H-Series users can access this open-source gadget through pyTKET, and an example notebook is available in the pyTKET GitHub repository.
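The post-selection step at the heart of the gadget can be sketched in plain Python. This is an illustration of the discard logic only, not the actual pyTKET interface: the shot format and the convention that the last bit is the ancilla's leakage flag are assumptions for the example.

```python
# Illustrative post-selection on a leakage-detection flag bit.
# Each shot is a tuple of classical bits; by convention here, the
# last bit is the detection ancilla: 1 means leakage was flagged.

def discard_leaky_shots(shots):
    """Keep only shots whose leakage flag (last bit) is 0,
    returning the remaining data bits."""
    return [shot[:-1] for shot in shots if shot[-1] == 0]

shots = [
    (0, 1, 0),  # clean shot, kept
    (1, 1, 1),  # leakage detected, discarded
    (1, 0, 0),  # clean shot, kept
]

clean = discard_leaky_shots(shots)
print(clean)  # [(0, 1), (1, 0)]
```

The discard rate itself is useful data: tracked per gate, it is what allows the gadget to double as a probe of spontaneous emission rates.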
The MCMR package, built as a pyTKET compiler pass, is designed to reduce the number of qubits required for executing many types of quantum algorithms, expanding the scope of what is possible on the current-generation H-Series quantum computers.
As an example, in a recent paper [4], Quantinuum researchers applied this tool to simulate the transverse-field Ising model, using only 20 qubits to simulate a much larger 128-site system (there is more detail below on this work). The package ingests a raw circuit and outputs an optimized circuit that requires fewer quantum resources, by measuring qubits early in the circuit, resetting them, and reusing them elsewhere. A scientific paper [7] and an earlier blog post on MCMR highlight its benefits and applications. H-Series customers can download this package via the Quantinuum user portal.
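The principle behind qubit reuse can be sketched with a toy scheduler. This is an illustration of the idea, not the MCMR package's actual compilation algorithm: once a qubit is measured and reset, its physical slot returns to a free pool, so the hardware only needs as many physical qubits as are simultaneously "live".

```python
# Toy model of qubit reuse via mid-circuit measurement and reset:
# the physical qubit count needed is the peak number of logical
# qubits that are live at the same time.

def physical_qubits_needed(intervals):
    """intervals: list of (start, end) timesteps during which each
    logical qubit is live (it is measured and reset at `end`).
    Returns the peak number of simultaneously live qubits."""
    events = []
    for start, end in intervals:
        events.append((start, +1))  # qubit becomes live
        events.append((end, -1))    # qubit measured, reset, freed
    # Sorting puts a reset before a new allocation at the same step,
    # so a freshly freed slot can be reused immediately.
    events.sort()
    live = peak = 0
    for _, delta in events:
        live += delta
        peak = max(peak, live)
    return peak

# A staircase circuit on 6 logical qubits, each measured two
# timesteps after it first becomes active:
intervals = [(t, t + 2) for t in range(6)]
print(physical_qubits_needed(intervals))  # 2
```

In a sequential circuit of this shape, the physical qubit count is set by the circuit's "width in time" rather than the total number of logical sites, which is why 20 physical qubits can stand in for a 128-site system.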
To enable efficient use of Quantinuum’s second-generation processor, the System Model H2, Quantinuum has released the H2-1 emulator, giving users greater flexibility with noise-informed state-vector emulation. The emulator uses NVIDIA's cuQuantum SDK to accelerate quantum simulation workflows, operating near the limit of full state-vector emulation on conventional classical hardware. The emulator is a faithful representation of the QPU it emulates. This is accomplished not only by using realistic noise models and noise parameters, but also by sharing the same software stack between the QPU and the emulator, up to the point where a job is routed either to the QPU or to the classical processors. Most notably, the emulator and the QPU use the same compiler, allowing subtle and time-dependent errors to be represented appropriately. The H2-1 emulator was initially released as a beta product alongside the System Model H2 quantum computer at launch. It runs on a GPU backend within an upgraded global framework that now offers features such as job chunking, incremental resource distribution, mid-execution job cancellation, and partial result return. Detailed information about the emulator can be found in the H2 emulator product datasheet on the Quantinuum website. H-Series customers with an H2 subscription can access the H2-1 emulator via an API or the Microsoft Azure platform.
Quantinuum's new enabling tools have already demonstrated their efficacy and value in recent quantum computing research, playing a vital role in advancing the field and achieving groundbreaking results. Let's expand on some notable recent examples.
All of the works presented here benefited from access to our H-Series emulators; two significant demonstrations were the “Creation of Non-Abelian Topological Order and Anyons on a Trapped-Ion Processor” [1] and “Exploring the neighborhood of 1-layer QAOA with Instantaneous Quantum Polynomial circuits” [2]. These demonstrations involved extensive testing, debugging, and experiment design, for which the versatility of the H2-1 emulator proved invaluable, providing initial performance benchmarks in a realistic noisy environment. Researchers relied on the emulator's results to gauge algorithmic performance and make the necessary adjustments, accelerating their progress.
The MCMR package was used extensively in benchmarking the System Model H2 quantum computer’s world-leading capabilities [5]. Two application-level benchmarks performed in that work, approximating the solution to a MaxCut problem using the quantum approximate optimization algorithm (QAOA) and accurately simulating quantum dynamics using a holographic quantum dynamics (HoloQUADS) algorithm, involved problem instances that would have been too large to encode on H2's 32 qubits without the MCMR package. Further illustrating the value of these tools, the HoloQUADS benchmark includes a "bond qubit" that is particularly susceptible to leakage errors. The leakage detection gadget was applied to this bond qubit at the end of the circuit, and any shots with a detected leakage error were discarded. The gadget was also used to measure the rates of leakage error per single-qubit and per two-qubit gate, two component-level benchmarks.
In another scientific work [4], the MCMR compilation tool proved instrumental in simulating a transverse-field Ising model on 128 sites using only 20 qubits. With the MCMR package, and by leveraging a state-of-the-art classical tensor-network ansatz expressed as a quantum circuit, the Quantinuum team was able to represent the highly entangled ground state of the critical Ising model. The team showed that with H1-1's 20 qubits, the properties of this state could be measured on a 128-site system with very high fidelity, enabling a quantitatively accurate extraction of some of the model's critical properties.
At Quantinuum, we are devoted to producing a quantum hardware, middleware, and software stack that leads the world on the most important benchmarks and includes features and tools that bring breakthrough benefit to our growing base of users. On today's NISQ hardware, "benefit" usually means getting the most performance out of the machines at hand, continually pushing what is considered possible. In this blog we described two examples: error detection and discard using the leakage detection gadget, and an automated method for circuit optimization through qubit reuse. "Benefit" can also take other forms, such as productivity. Our emulator brings many benefits to our users, but the one that resonates most is productivity: being a faithful representation of our QPU performance, the emulator is an accessible tool users have at their disposal to develop and test new, innovative algorithms. The tools and features Quantinuum releases are driven by user feedback; whether you are new to H-Series or a seasoned user, please reach out and let us know how we can help bring benefit to your research and use case.
Footnotes:
[1] Mohsin Iqbal et al., Creation of Non-Abelian Topological Order and Anyons on a Trapped-Ion Processor (2023), arXiv:2305.03766 [quant-ph]
[2] Sebastian Leontica and David Amaro, Exploring the neighborhood of 1-layer QAOA with Instantaneous Quantum Polynomial circuits (2022), arXiv:2210.05526 [quant-ph]
[3] Kentaro Yamamoto, Samuel Duffield, Yuta Kikuchi, and David Muñoz Ramo, Demonstrating Bayesian Quantum Phase Estimation with Quantum Error Detection (2023), arXiv:2306.16608 [quant-ph]
[4] Reza Haghshenas et al., Probing critical states of matter on a digital quantum computer (2023), arXiv:2305.01650 [quant-ph]
[5] S. A. Moses et al., A Race Track Trapped-Ion Quantum Processor (2023), arXiv:2305.03828 [quant-ph]
[6] K. Mayer, Mitigating qubit leakage errors in quantum circuits with gadgets and post-selection, 2022 IEEE International Conference on Quantum Computing and Engineering (QCE), Broomfield, CO, USA, (2022), pp. 809-809, doi: 10.1109/QCE53715.2022.00126.
[7] Matthew DeCross, Eli Chertkov, Megan Kohagen, and Michael Foss-Feig, Qubit-reuse compilation with mid-circuit measurement and reset (2022), arXiv:2210.08039 [quant-ph]
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
For a novel technology to be successful, it must prove that it is both useful and works as described.
Checking that our computers “work as described” is called benchmarking and verification by experts. We are proud to be leaders in this field, with the most benchmarked quantum processors in the world and our own team of experts leading the field in benchmarking and verification. We also work with national laboratories in various countries to develop new benchmarking techniques and standards.
Currently, a lot of verification (i.e. checking that you got the right answer) is done by classical computers – most quantum processors can still be simulated by a classical computer. As we move towards quantum processors that are hard (or impossible) to simulate, this introduces a problem: how can we keep checking that our technology is working correctly without simulating it?
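This classical-simulation approach to verification can be sketched concretely: simulate the circuit's ideal output distribution classically, then compare it against the distribution actually measured on hardware, for instance via total variation distance. The circuit, counts, and threshold below are all illustrative; the sketch works only while the circuit remains small enough to simulate, which is exactly the limitation discussed above.

```python
from collections import Counter

# Toy verification by classical simulation: compare measured counts
# from a quantum processor against the ideal output distribution
# (computed classically), using total variation distance (TVD).

def total_variation_distance(counts, ideal, shots):
    """TVD between an empirical distribution (counts/shots) and an
    ideal probability distribution over bitstrings."""
    outcomes = set(counts) | set(ideal)
    return 0.5 * sum(abs(counts.get(o, 0) / shots - ideal.get(o, 0.0))
                     for o in outcomes)

# Ideal Z-basis distribution of a Bell state:
ideal = {"00": 0.5, "11": 0.5}

# Hypothetical measured counts from 1000 shots on noisy hardware:
counts = Counter({"00": 480, "11": 470, "01": 30, "10": 20})

tvd = total_variation_distance(counts, ideal, shots=1000)
print(round(tvd, 3))  # 0.05
```

A small TVD says the hardware reproduced the expected distribution; the protocol described next is designed for the regime where the `ideal` dictionary can no longer be computed at all.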
We recently partnered with the UK’s Quantum Software Lab to develop a novel and scalable verification and benchmarking protocol that will help us as we make the transition to quantum processors that cannot be simulated.
This new protocol does not require classical simulation, or the transfer of a qubit between two parties. The team’s “on-chip” verification protocol eliminates the need for a physically separated verifier and makes no assumptions about the processor’s noise. To top it all off, this new protocol is qubit-efficient.
The team’s protocol is application-agnostic, benefiting all users. Further, the protocol is optimized to our QCCD hardware, meaning that we have a path towards verified quantum advantage – as we compute more things that cannot be classically simulated, we will be able to check that what we are doing is right.
Running the protocol on Quantinuum System Model H1, the team performed the largest verified measurement-based quantum computing (MBQC) computation to date, enabled by System Model H1’s low-crosstalk gate zones, mid-circuit measurement and reset, and long coherence times. By verifying computations significantly larger than any verified before, we reaffirm Quantinuum's systems as best-in-class.
Particle accelerators like the LHC take serious computing power. Often on the bleeding-edge of computing technology, accelerator projects sometimes even drive innovations in computing. In fact, while there is some controversy over exactly where the world wide web was created, it is often attributed to Tim Berners-Lee at CERN, who developed it to meet the demand for automated information-sharing between scientists in universities and institutes around the world.
With annual data generated by accelerators in excess of exabytes (a billion gigabytes), tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the High Energy Physics (HEP) community is interested in quantum computing, which offers real solutions to some of its hardest problems. Furthermore, the HEP community is well positioned to support the early stages of technological development: with budgets in the tens of billions of dollars per year and tens of thousands of scientists and engineers working on accelerator and computational physics, this is a ripe industry for quantum computing to tap.
As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable”
The authors go on to state: “Within the existing data reconstruction and analysis paradigm, access to algorithms that exhibit quantum speed-ups would revolutionize the simulation of large-scale quantum systems and the processing of data from complex experimental set-ups. This would enable a new generation of precision measurements to probe deeper into the nature of the universe. Existing measurements may contain the signatures of underlying quantum correlations or other sources of new physics that are inaccessible to classical analysis techniques. Quantum algorithms that leverage these properties could potentially extract more information from a given dataset than classical algorithms.”
Our scientists have been working with a team at DESY, one of the world’s leading accelerator centers, to bring the power of quantum computing to particle physics. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center for fundamental science located in Hamburg and Zeuthen, where the Center for Quantum Technologies and Applications (CQTA) is based. DESY operates, develops, and constructs particle accelerators used to investigate the structure, dynamics and function of matter, and conducts a broad spectrum of interdisciplinary scientific research. DESY employs about 3,000 staff members from more than 60 nations, and is part of the worldwide computer network to store and analyze the enormous flood of data that is produced by the LHC in Geneva.
In a recent paper, our scientists collaborated with scientists from DESY, the Leiden Institute of Advanced Computer Science (LIACS), and Northeastern University to explore using a generative quantum machine learning model, called a “quantum Boltzmann machine”, to untangle data from CERN’s LHC.
The goal was to learn probability distributions relevant to high energy physics better than the corresponding classical models can. The data specifically contains “particle jet events”, records of the sprays of subatomic particles generated during collider experiments.
In some cases the quantum Boltzmann machine did indeed outperform its classical counterpart. The team analyzed when and why this happens, building a better understanding of how to apply these new quantum tools in this research setting. The team also studied the effect of encoding the data into a quantum state, noting that it can have a decisive effect on training performance. Especially enticing is that the quantum Boltzmann machine is efficiently trainable, which our scientists showed in a recent paper published in Nature Communications Physics.
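For intuition, the classical counterpart can be sketched end to end. The model below is a tiny fully visible classical Boltzmann machine trained by exact maximum likelihood; it is a stand-in for the paper's quantum model (which replaces the classical energy function with a quantum Hamiltonian), and the two-spin "dataset" is an invented toy, not jet data.

```python
import itertools
import math

# Fully visible classical Boltzmann machine over n spins in {-1, +1},
# p(s) proportional to exp(-E(s)), trained by exact gradient ascent on
# the log-likelihood (tractable only because n is tiny).

def energy(s, w, b):
    n = len(s)
    pair = sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    field = sum(b[i] * s[i] for i in range(n))
    return -pair - field

def model_distribution(w, b, n):
    states = list(itertools.product([-1, 1], repeat=n))
    weights = [math.exp(-energy(s, w, b)) for s in states]
    z = sum(weights)  # partition function
    return {s: wt / z for s, wt in zip(states, weights)}

def train(data, n, steps=500, lr=0.1):
    w = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for _ in range(steps):
        p = model_distribution(w, b, n)
        for i in range(n):
            # Gradient = (data statistics) - (model statistics).
            for j in range(i + 1, n):
                data_corr = sum(s[i] * s[j] for s in data) / len(data)
                model_corr = sum(pr * s[i] * s[j] for s, pr in p.items())
                w[i][j] += lr * (data_corr - model_corr)
            data_mean = sum(s[i] for s in data) / len(data)
            model_mean = sum(pr * s[i] for s, pr in p.items())
            b[i] += lr * (data_mean - model_mean)
    return w, b

# Toy target: two perfectly correlated spins (a stand-in for
# correlations in real physics data).
data = [(1, 1), (-1, -1)] * 10
w, b = train(data, n=2)
p = model_distribution(w, b, 2)
print(p[(1, 1)] + p[(-1, -1)])  # most probability mass on the correlated states
```

The expensive step is computing the model statistics, which requires the partition function; the efficient-trainability result mentioned above matters precisely because this step is what becomes intractable as models grow.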
Find the Quantinuum team at this year’s SC24 conference from November 17th – 22nd in Atlanta, Georgia. Meet our team at Booth #4351 to discover how Quantinuum is bridging the gap between quantum computing and high-performance compute with leading industry partners.
The Quantinuum team will be participating in various events, panels, and poster sessions to showcase our quantum computing technologies. Join us at the sessions below:
Panel: KAUST booth 1031
Nash Palaniswamy, Quantinuum’s CCO, will join a panel discussion with quantum vendors and KAUST partners to discuss advancements in quantum technology.
Beowulf Bash: World of Coca-Cola
This year, we are proudly sponsoring the Beowulf Bash, an event organized to bring the HPC community together for a night of unique entertainment! Join us on Monday, November 18th, at 9:00pm at the World of Coca-Cola.
Panel: Educating for a Hybrid Future: Bridging the Gap between High-Performance and Quantum Computing
Vincent Anandraj, Quantinuum’s Director of Global Ecosystem and Strategic Alliances, will moderate this panel which brings together experts from leading supercomputing centers and the quantum computing industry, including PSC, Leibniz Supercomputing Centre, IQM Quantum Computers, NVIDIA, and National Research Foundation.
Presentation: Realizing Quantum Kernel Models at Scale with Matrix Product State Simulation
Pablo Andres-Martinez, Research Scientist at Quantinuum, will present research done in collaboration with HSBC, where the team applied quantum methods to fraud detection.