Quantinuum President and COO Tony Uttley announced three major accomplishments during his keynote address at the IEEE Quantum Week event in Colorado last week.
The three milestones, representing meaningful acceleration for the quantum computing ecosystem, are: (i) new arbitrary angle gate capabilities on the H-Series hardware, (ii) another quantum volume (QV) record for the System Model H1 hardware, and (iii) over 500,000 downloads of Quantinuum’s open-source TKET, a world-leading quantum software development kit (SDK).
The announcements were made during Uttley’s keynote address titled, “A Measured Approach to Quantum Computing.”
These advancements are the latest examples of the company’s continued demonstration of its leadership in the quantum computing community.
“Quantinuum is accelerating quantum computing’s impact to the world,” Uttley said. “We are making significant progress with both our hardware and software, in addition to building a community of developers who are using our TKET SDK.”
This latest quantum volume measurement of 8192 is particularly noteworthy and is the second time this year Quantinuum has published a new QV record on its trapped-ion quantum computing platform, the System Model H1, Powered by Honeywell.
A key to achieving this latest record is the new capability of directly implementing arbitrary angle two-qubit gates. For many quantum circuits, this new way of doing a two-qubit gate allows for more efficient circuit construction and leads to higher fidelity results.
Dr. Brian Neyenhuis, Director of Commercial Operations at Quantinuum, said, “This new capability allows for several user advantages. In many cases, this includes shorter interactions with the qubits, which lowers the error rate. This allows our customers to run long computations with less noise.”
These arbitrary angle gates build on the overall design strength of the trapped-ion architecture of the H1, Neyenhuis said.
“With the quantum charge-coupled device (QCCD) architecture, interactions between qubits are very simple and can be limited to a small number of qubits, which means we can precisely control the interaction and don’t have to worry about additional crosstalk,” he said.
This new gate design represents a third method for Quantinuum to improve the efficiency of the H1 generation, said Dr. Jenni Strabley, Senior Director of Offering Management at Quantinuum.
“Quantinuum’s goal is to accelerate quantum computing. We know we have to make the hardware better and we have to make the algorithms smarter, and we’re doing that,” she said. “Now we can also implement the algorithms more efficiently on our H1 with this new gate design.”
Currently, researchers can apply single-qubit gates (rotations on a single qubit) or a fully entangling two-qubit gate, and any quantum operation can be built out of just those building blocks.
With arbitrary angle gates, instead of just having a two-qubit gate that's fully entangling, scientists can use a two-qubit gate that is partially entangling.
“There are many algorithms where you want to evolve the quantum state of the system one tiny step at a time. Previously, if you wanted a tiny bit of entanglement for some small time step, you had to entangle it all the way, rotate it a little bit, and then unentangle it almost all the way back,” Neyenhuis said. “Now we can just add this tiny little bit of entanglement natively and then go to the next step of the algorithm.”
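To make the difference concrete, here is a minimal pytket sketch (our illustration; the angle is arbitrary, and pytket expresses angles in half-turns) contrasting the standard two-CX construction of a small ZZ rotation with the native arbitrary angle ZZPhase gate. Both circuits implement exactly the same unitary:

```python
from pytket import Circuit
from pytket.circuit import OpType

theta = 0.05  # a small rotation angle, in half-turns (pytket convention)

# Standard construction: a small ZZ rotation built from two fully
# entangling CX gates sandwiching a single-qubit Rz rotation.
decomposed = Circuit(2)
decomposed.CX(0, 1)
decomposed.Rz(theta, 1)
decomposed.CX(0, 1)

# Native construction: one partially entangling arbitrary-angle gate.
native = Circuit(2)
native.ZZPhase(theta, 0, 1)

print(decomposed.n_gates_of_type(OpType.CX))       # 2 fully entangling gates
print(native.n_gates_of_type(OpType.ZZPhase))      # 1 partially entangling gate
```

On hardware limited to fully entangling two-qubit gates, every such small step pays for two maximally entangling operations; the native gate pays only for the small rotation itself.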
There are other algorithms where the arbitrary angle two-qubit gate is the natural building block, according to Neyenhuis. One example is the quantum Fourier transform. Using arbitrary angle two-qubit gates cuts the number of two-qubit gates (and the overall error) in half, drastically improving the fidelity of the circuit. Researchers can use the new gate design to run harder problems, ones that would have been overwhelmed by accumulated errors in previous experiments.
“By going to an arbitrary angle gate, in addition to cutting the number of two-qubit gates in half, the error we get per gate is lower because it scales with the amplitude of that gate,” Neyenhuis said.
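As a worked example of the halving (our derivation, for illustration): the controlled-phase gate CP(θ), the workhorse of the quantum Fourier transform, normally costs two fully entangling CX gates, but up to a global phase it reduces to single-qubit rotations around a single ZZ rotation:

$$\mathrm{CP}(\theta) \;=\; e^{i\theta/4}\,\bigl(R_z(\tfrac{\theta}{2}) \otimes R_z(\tfrac{\theta}{2})\bigr)\, e^{\,i\theta\, Z\otimes Z/4}, \qquad R_z(\alpha) = e^{-i\alpha Z/2}.$$

One arbitrary angle two-qubit gate per controlled phase, instead of two CX gates, is exactly the factor-of-two savings quoted above.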
This is a powerful new capability, particularly for noisy intermediate-scale quantum (NISQ) algorithms. In another demonstration, the Quantinuum team used arbitrary angle two-qubit gates to study non-equilibrium phase transitions; the technical details are available on the arXiv.
“For the algorithms that we are going to want to run in this NISQ regime that we're in right now, this is a more efficient way to run your algorithm,” Neyenhuis said. “There are lots of different circuits you would want to run where this arbitrary angle gate gives you a fairly significant increase in the fidelity of your overall circuit. This capability also allows for a speed up in the circuit execution by removing unneeded gates, which ultimately reduces the time of executing a job on our machines.”
Researchers working with machine learning algorithms, variational algorithms, and time evolution algorithms would see the most benefit from these new gates. This advancement is particularly relevant for simulating the dynamics of other quantum systems.
“This just gave us a big win in fidelity because we can run the sort of interaction you're after natively, rather than constructing it out of some other Lego blocks,” Neyenhuis said.
Quantum volume tests require running random circuits. At each slice of the quantum volume circuit, the qubits are randomly paired up and a complex two-qubit operation, a random SU(4) gate, is applied to each pair. This SU(4) gate can be constructed more efficiently using the arbitrary angle two-qubit gate, lowering the error at each step of the algorithm.
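In outline, the circuit generation looks like the following sketch (ours, using pytket and SciPy; Quantinuum's production pipeline is more sophisticated): pair the qubits at random in each slice and apply a Haar-random two-qubit unitary to each pair.

```python
import numpy as np
from scipy.stats import unitary_group
from pytket.circuit import Circuit, Unitary2qBox

def qv_style_circuit(n_qubits: int, rng: np.random.Generator) -> Circuit:
    """Square (depth == width) quantum-volume-style random circuit."""
    circ = Circuit(n_qubits)
    for _ in range(n_qubits):                  # one slice per qubit of width
        perm = rng.permutation(n_qubits)       # random pairing of qubits
        for i in range(0, n_qubits - 1, 2):
            # Haar-random 4x4 unitary (SU(4) up to a global phase)
            u = unitary_group.rvs(4, random_state=rng)
            circ.add_unitary2qbox(Unitary2qBox(u), int(perm[i]), int(perm[i + 1]))
    return circ

circ = qv_style_circuit(5, np.random.default_rng(7))
```

The compiler then decomposes each SU(4) block into the machine's native gate set; with arbitrary angle two-qubit gates available, each block can be built from entangling operations whose error scales with the rotation amplitude, lowering the error per slice.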
The H1-1’s quantum volume of 8192 is due in part to the implementation of arbitrary angle gates and the continued reduction in error rates. Quantinuum’s last quantum volume increase was in April when the System Model H1-2 doubled its performance to become the first commercial quantum computer to pass Quantum Volume 4096.
This new increase is the seventh time in two years that Quantinuum’s H-Series hardware has set an industry record for measured quantum volume as it continues to achieve its goal of 10X annual improvement.
Quantum volume, a benchmark introduced by IBM in 2019, is a way to measure the performance of a quantum computer using randomized circuits, and is a frequently used metric across the industry.
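In outline, the protocol works as follows: for each candidate width m, run random square circuits (depth equal to width) and estimate the probability h_m of observing "heavy" outputs, the bitstrings whose ideal probability exceeds the median of the ideal output distribution. The quantum volume is set by the largest m at which that probability clears the 2/3 threshold with high confidence:

$$\log_2 \mathrm{QV} = \max\{\, m : h_m > \tfrac{2}{3} \ \text{with high confidence} \,\},$$

so a quantum volume of 8192 = 2^13 corresponds to passing the test on 13-qubit, depth-13 random circuits.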
Quantinuum has also achieved another milestone: over 500,000 downloads of TKET.
TKET is an advanced software development kit for writing and running programs on gate-based quantum computers. TKET enables developers to optimize their quantum algorithms, reducing the computational resources required, which is important in the NISQ era.
TKET is open source and accessible through the PyTKET Python package. The SDK also integrates with major quantum software platforms including Qiskit, Cirq and Q#. TKET has been available as an open-source toolkit for almost a year.
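For developers who want a feel for the SDK, here is a minimal self-contained example (the circuit and pass choice are ours, for illustration): build a small circuit with some removable redundancy, run one of TKET's built-in optimisation passes, and compare gate counts.

```python
# pip install pytket
from pytket import Circuit
from pytket.passes import FullPeepholeOptimise
from pytket.predicates import CompilationUnit

# A toy circuit: the two back-to-back CX gates cancel each other.
circ = Circuit(3)
circ.H(0).CX(0, 1).CX(0, 1).Rz(0.25, 1).CX(1, 2)

cu = CompilationUnit(circ)
FullPeepholeOptimise().apply(cu)

print("before:", circ.n_gates, "gates")        # 5
print("after: ", cu.circuit.n_gates, "gates")  # fewer after optimisation
```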
This universal availability and TKET’s portability across many quantum processors are critical for building a community of developers who can write quantum algorithms. The download count includes many companies and academic institutions, each of which can account for multiple users.
Quantinuum CEO Ilyas Khan said, “Whilst we do not have the exact number of users of TKET, it is clear that we are growing towards a million people around the world who have taken advantage of a critical tool that integrates across multiple platforms and makes those platforms perform better. We continue to be thrilled by the way that TKET helps democratize as well as accelerate innovation in quantum computing.”
Arbitrary angle two-qubit gates and other recent Quantinuum advances are all built into TKET.
“TKET is an evolving platform and continues to take advantage of these new hardware capabilities,” said Dr. Ross Duncan, Quantinuum’s Head of Quantum Software. “We’re excited to put these new capabilities into the hands of the rapidly increasing number of TKET users around the world.”
The average single-qubit gate fidelity for this milestone was 99.9959(5)%, the average two-qubit gate fidelity was 99.71(3)% with fully connected qubits, and state preparation and measurement fidelity was 99.72(1)%. The Quantinuum team ran 220 circuits with 90 shots each, using standard QV optimization techniques to yield an average of 175.2 arbitrary angle two-qubit gates per circuit.
The System Model H1-1 successfully passed the quantum volume 8192 benchmark, outputting heavy outcomes 69.33% of the time, with a 95% confidence interval lower bound of 68.38%, which is above the 2/3 threshold.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
At this year’s Q2B Silicon Valley conference from December 10th – 12th in Santa Clara, California, the Quantinuum team will be participating in plenary and case study sessions to showcase our quantum computing technologies.
Schedule a meeting with us at Q2B
Meet our team at Booth #G9 to discover how Quantinuum is charting the path to universal, fully fault-tolerant quantum computing.
Join our sessions:
Plenary: Advancements in Fault-Tolerant Quantum Computation: Demonstrations and Results
There is industry-wide consensus on the need for fault-tolerant QPUs, but demonstrations of these abilities are less common. In this talk, Dr. Hayes will review Quantinuum’s long list of meaningful demonstrations in fault tolerance, including real-time error correction, a variety of codes ranging from the surface code to exotic qLDPC codes, logical benchmarking, and beyond-break-even behavior on multiple codes and circuit families.
Keynote: Quantum Tokens: Securing Digital Assets with Quantum Physics
Mitsui’s Deputy General Manager, Quantum Innovation Dept., Corporate Development Div., Koji Naniwada, and Quantinuum’s Head of Cybersecurity, Duncan Jones will deliver a keynote presentation on a case study for quantum in cybersecurity. Together, our organizations demonstrated the first implementation of quantum tokens over a commercial QKD network. Quantum tokens enable three previously incompatible properties: unforgeability guaranteed by physics, fast settlement without centralized validation, and user privacy until redemption. We present results from our successful Tokyo trial using NEC's QKD commercial hardware and discuss potential applications in financial services.
Quantinuum and Mitsui Sponsored Happy Hour
Join the Quantinuum and Mitsui teams in the expo hall for a networking happy hour.
Particle accelerator projects like the Large Hadron Collider (LHC) don’t just smash particles; they also power the invention of some of the world’s most impactful technologies. A favorite example is the World Wide Web, which was developed for particle physics experiments at CERN.
Tech designed to unlock the mysteries of the universe has brutally exacting requirements, and it is this boundary-pushing, plus billion-dollar budgets, that has led to so much innovation.
For example, X-rays are used in accelerators to measure the chemical composition of the accelerator products and to monitor radiation. The understanding developed to create those technologies was then applied to building better CT scanners, reducing the X-ray dosage while improving image quality.
Stories like this are common in accelerator physics, or High Energy Physics (HEP). Scientists and engineers working in HEP have been early adopters and/or key drivers of innovations in advanced cancer treatments (using proton beams), machine learning techniques, robots, new materials, cryogenics, data handling and analysis, and more.
A key strand of HEP research aims to make accelerators simpler and cheaper, and one piece of infrastructure ripe for improvement is their computing environments.
CERN itself has said: “CERN is one of the most highly demanding computing environments in the research world... From software development, to data processing and storage, networks, support for the LHC and non-LHC experimental programme, automation and controls, as well as services for the accelerator complex and for the whole laboratory and its users, computing is at the heart of CERN’s infrastructure.”
With annual data generated by accelerators in excess of exabytes (a billion gigabytes), tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the HEP community is interested in quantum computing, which offers real solutions to some of their hardest problems.
As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable.”
The HEP community’s interest in quantum computing is growing. In recent years, their scientists have been looking carefully at how quantum computing could help them, publishing a number of papers discussing the challenges and requirements for quantum technology to make a dent (here’s one example, and here’s the arXiv version).
In the past few months, what was previously theoretical is becoming a reality. Several groups published results using quantum machines to tackle something called “Lattice Gauge Theory”, which is a type of math used to describe a broad range of phenomena in HEP (and beyond). Two papers came from academic groups using quantum simulators, one using trapped ions and one using neutral atoms. Another group, including scientists from Google, tackled Lattice Gauge Theory using a superconducting quantum computer. Taken together, these papers indicate a growing interest in using quantum computing for High Energy Physics, beyond simple one-dimensional systems which are more easily accessible with classical methods such as tensor networks.
We have been working with DESY, one of the world’s leading accelerator centers, to help make quantum computing useful for their work. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center that operates, develops, and constructs particle accelerators, and is part of the worldwide computer network used to store and analyze the enormous flood of data that is produced by the LHC in Geneva.
Our first publication from this partnership describes a quantum machine learning technique for untangling data from the LHC, finding that in some cases the quantum approach was indeed superior to the classical one. More recently, we used the Quantinuum System Model H1 to tackle Lattice Gauge Theory (LGT), a favorite contender for quantum advantage in HEP.
Lattice Gauge Theories are one approach to solving what are more broadly referred to as “quantum many-body problems”. Quantum many-body problems lie at the border of our knowledge in many different fields: the electronic structure problem, which impacts chemistry and pharmaceuticals; the quest to understand and engineer new material properties, such as light-harvesting materials; basic research such as high energy physics, which aims to understand the fundamental constituents of the universe; and condensed matter physics, where our understanding of phenomena like high-temperature superconductivity is still incomplete.
The difficulty in solving problems like this, analytically or computationally, is that the problem complexity grows exponentially with the size of the system. For example, there are 36 possible configurations of two six-faced dice (1 and 1, 1 and 2, 1 and 3, and so on), while for ten dice there are more than sixty million configurations.
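The counting behind that claim is simple exponential growth: n six-faced dice have $6^n$ configurations,

$$6^2 = 36, \qquad 6^{10} = 60{,}466{,}176,$$

and each additional die multiplies the count by six. A quantum many-body system behaves the same way, with the system size in the exponent.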
Quantum computing may be very well-suited to tackling problems like this, because a quantum processor's information capacity scales the same way: with the addition of a single qubit to a QPU, the information the system can hold doubles. Our 56-qubit System Model H2, for example, can hold quantum states that would require 128×2^56 bits of information to describe (with double-precision numbers) on a classical supercomputer, which is more information than the biggest supercomputer in the world can hold in memory.
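That figure is straightforward to check (our back-of-envelope arithmetic): a 56-qubit state is described by $2^{56}$ complex amplitudes, and each amplitude stored as two 64-bit doubles occupies 128 bits, so

$$2^{56} \times 128\ \text{bits} = 2^{63}\ \text{bits} = 2^{60}\ \text{bytes} \approx 1.15\ \text{exabytes}.$$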
The joint team made significant progress in approaching the Lattice Gauge Theory corresponding to Quantum Electrodynamics, the theory of light and matter. For the first time, they were able to study the full wavefunction of a two-dimensional confining system with gauge fields and dynamical matter fields on a quantum processor. They were also able to visualize the confining string and the string-breaking phenomenon at the level of the wavefunction, across a range of interaction strengths.
The team approached the problem starting with the definition of the Hamiltonian using the InQuanto software package, and utilized the reusable protocols of InQuanto to compute both projective measurements and expectation values. InQuanto allowed the easy integration of measurement reduction techniques and scalable error mitigation techniques. Moreover, the emulator and hardware experiments were orchestrated by the Nexus online platform.
In one section of the study, a circuit with 24 qubits and more than 250 two-qubit gates was reduced to a width of 15 qubits thanks to the unique qubit-reuse and mid-circuit measurement automatic compilation implemented in TKET.
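The underlying idea is easy to see in a toy pytket sketch (ours; the compilation in the study was performed automatically by TKET and is considerably more involved): once a qubit has been measured for the last time, it can be reset and stand in for a fresh qubit, trading circuit width for depth.

```python
from pytket import Circuit

# Without reuse: three qubits, two of which are measured once each.
wide = Circuit(3, 2)
wide.H(0)
wide.CX(0, 1).Measure(1, 0)
wide.CX(0, 2).Measure(2, 1)

# With reuse: after its mid-circuit measurement, qubit 1 is reset to |0>
# and plays the role of the third qubit, shrinking the width from 3 to 2.
narrow = Circuit(2, 2)
narrow.H(0)
narrow.CX(0, 1).Measure(1, 0)
narrow.Reset(1)
narrow.CX(0, 1).Measure(1, 1)
```

Both circuits produce the same measurement statistics; the saving matters most on hardware where qubits are scarcer than circuit depth.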
This work paves the way towards using quantum computers to study lattice gauge theories in higher dimensions, with the goal of one day simulating the full three-dimensional Quantum Chromodynamics theory underlying the nuclear sector of the Standard Model of particle physics. Being able to simulate full 3D quantum chromodynamics will undoubtedly unlock many of Nature’s mysteries, from the Big Bang to the interior of neutron stars, and is likely to lead to applications we haven’t yet dreamed of.