Blog

Discover how we are pushing the boundaries in the world of quantum computing

technical
May 9, 2023
Quantinuum Researchers Demonstrate a New Optimization Algorithm That Delivers Solutions on the H2 Quantum Computer

In a meaningful advance in an important area of industrial and real-world relevance, Quantinuum researchers have demonstrated a quantum algorithm capable of solving complex combinatorial optimization problems while making the most of available quantum resources. 

Results on the new H2 quantum computer demonstrated a remarkable ability to solve combinatorial optimization problems using no more quantum resources than a single layer of the quantum approximate optimization algorithm (QAOA), the traditional workhorse of quantum heuristic algorithms.

Optimization problems are common in industry in contexts such as route planning, scheduling, cost optimization and logistics. However, as the number of variables increases and optimization problems grow larger and more complex, finding satisfactory solutions using classical algorithms becomes increasingly difficult. 

Recent research suggests that certain quantum algorithms might be capable of solving combinatorial optimization problems better than classical algorithms. The realization of such quantum algorithms can therefore potentially increase the efficiency of industrial processes. 

However, the effectiveness of these algorithms on near-term quantum devices and even on future generations of more capable quantum computers presents a technical challenge: quantum resources will need to be reduced as much as possible in order to protect the quantum algorithm from the unavoidable effects of quantum noise.

Sebastian Leontica and Dr. David Amaro, a senior research scientist at Quantinuum, explain their advances in a new paper, “Exploring the neighborhood of 1-layer QAOA with Instantaneous Quantum Polynomial circuits,” published on arXiv. This is one of several papers published at the launch of Quantinuum’s H2 that highlight the unparalleled power of the newest generation of the H-Series, Powered by Honeywell.

“We should strive to use as few quantum resources as possible no matter how good a quantum computer we are operating on, which means using the smallest possible number of qubits that fit within the problem size and a circuit that is as shallow as possible,” Dr. Amaro said. “Our algorithm uses the fewest possible resources and still achieves good performance.”

The researchers use a parameterized instantaneous quantum polynomial (IQP) circuit of the same depth as the 1-layer QAOA to incorporate corrections that would otherwise require multiple layers. Another differentiating feature of the algorithm is that the parameters in the IQP circuit can be efficiently trained on a classical computer, avoiding some training issues of other algorithms like QAOA. Critically, the circuit takes full advantage of features available on Quantinuum’s devices, including parameterized two-qubit gates, all-to-all connectivity, and high-fidelity operations.
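The structure of an IQP circuit is simple enough to simulate directly for a few qubits: a layer of Hadamards, a diagonal layer of single- and two-qubit Z phases (the trainable parameters), and a final layer of Hadamards. The sketch below is a generic illustration of this circuit family, not the authors' trained ansatz or its parameters.

```python
import numpy as np

def iqp_probabilities(n, theta_z, theta_zz):
    """Output distribution of an n-qubit IQP circuit acting on |0...0>:
    H on every qubit, then the diagonal unitary
    exp(i * (sum_j theta_z[j] Z_j + sum_{j<k} theta_zz[j,k] Z_j Z_k)),
    then H on every qubit again."""
    dim = 2 ** n
    # Z eigenvalue (+1/-1) of each qubit in each computational basis state
    s = np.array([[1 - 2 * ((idx >> j) & 1) for j in range(n)] for idx in range(dim)])
    phase = s @ theta_z
    for j in range(n):
        for k in range(j + 1, n):
            phase = phase + theta_zz[j, k] * s[:, j] * s[:, k]
    # n-qubit Hadamard transform
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)
    amps = Hn @ (np.exp(1j * phase) * Hn[:, 0])  # apply the three layers to |0...0>
    return np.abs(amps) ** 2

# With all angles zero the two Hadamard layers cancel and
# all probability stays on |000>.
probs = iqp_probabilities(3, np.zeros(3), np.zeros((3, 3)))
```

Training then amounts to adjusting `theta_z` and `theta_zz` so that low-energy bitstrings of the optimization problem carry high probability, which (per the paper) can be done efficiently on a classical computer.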

“Our numerical simulations and experiments on the new H2 quantum computer at small scale indicate that this heuristic algorithm, compared to 1-layer QAOA, is expected to amplify the probability of sampling good or even optimal solutions of large optimization problems,” Dr. Amaro said. “We now want to understand how the solution quality and runtime of our algorithm compares to the best classical algorithms.”

This algorithm will be useful for current quantum computers as well as larger machines farther along the Quantinuum hardware roadmap. 

How the Experiment Worked

The goal of this project was to provide a quantum heuristic algorithm for combinatorial optimization that returns better solutions for optimization problems and uses fewer quantum resources than state-of-the-art quantum heuristics. The researchers used a fully connected parameterized IQP circuit, warm-started from 1-layer QAOA. For a problem with n binary variables, the circuit contained up to n(n-1)/2 two-qubit gates, and the researchers employed only 2^0.32n shots.

The algorithm showed improved performance on the Sherrington-Kirkpatrick (SK) optimization problem compared to the 1-layer QAOA. Numerical simulations showed that the time to find the optimal solution scales on average as 2^0.31n, compared to 2^0.5n for 1-layer QAOA.

Experimental results on our new H2 quantum computer and emulator confirmed that the new optimization algorithm outperforms 1-layer QAOA and reliably solves complex optimization problems. The optimal solution was found for 136 out of 312 instances, four of which were for the maximum size of 32 qubits. A 30-qubit instance was solved optimally on the H2 device, which means, remarkably, that at least one of the 776 shots measured after performing 432 two-qubit gates corresponds to the unique optimal solution in the huge set of 2^30 > 10^9 candidate solutions.

These results indicate that the algorithm, in combination with H2 hardware, is capable of solving hard optimization problems using minimal quantum resources in the presence of real hardware noise.

Quantinuum researchers expect that these promising results at small scale will encourage the further study of new quantum heuristic algorithms at the relevant scale for real-world optimization problems, which requires a better understanding of their performance under realistic conditions.

Speedup of IQP over QAOA

Numerical simulations of 256 SK random instances for each problem size from 4 to 29 qubits. Graph A shows the probability of sampling the optimal solution in the IQP circuit, for which the average is 2^-0.31n. Graph B shows the enhancement factor compared to 1-layer QAOA, for which the average is 2^0.23n. These results indicate that Quantinuum’s algorithm has a significantly better runtime than 1-layer QAOA.

technical
April 9, 2023
How Quantinuum researchers used quantum tensor networks to measure the properties of quantum particles at a phase transition

When thinking about changes in phases of matter, the first images that come to mind might be ice melting or water boiling. The critical point in these processes is located at the boundary between the two phases – the transition from solid to liquid or from liquid to gas. 

Phase transitions like these get right to the heart of how large material systems behave and are at the frontier of research in condensed matter physics for their ability to provide insights into emergent phenomena like magnetism and topological order. In classical systems, phase transitions are generally driven by thermal fluctuations and occur at finite temperature. By contrast, quantum systems can exhibit phase transitions even at zero temperature; the residual fluctuations that control such phase transitions are due to entanglement and are entirely quantum in origin.

Quantinuum researchers recently used the H1-1 quantum computer to computationally model a group of highly correlated quantum particles at a quantum critical point — on the border of a transition between a paramagnetic state (a state of magnetism characterized by a weak attraction) to a ferromagnetic one (characterized by a strong attraction).

Simulating such a transition on a classical computer is possible using tensor network methods, though it is difficult. However, generalizations of such physics to more complicated systems can pose serious problems to classical tensor network techniques, even when deployed on the most powerful supercomputers.  On a quantum computer, on the other hand, such generalizations will likely only require modest increases in the number and quality of available qubits.

In a technical paper submitted to the arXiv, “Probing critical states of matter on a digital quantum computer,” the Quantinuum team demonstrated how the powerful components and high fidelity of the H-Series digital quantum computers could be harnessed to tackle a 128-site condensed matter physics problem, combining a quantum tensor network method with qubit reuse to make highly productive use of the 20-qubit H1-1 quantum computer.

Reza Haghshenas, Senior Advanced Physicist and the lead author of the paper, said, “This is the kind of problem that appeals to condensed-matter physicists working with quantum computers, who are looking forward to revealing exotic aspects of strongly correlated systems that are still unknown to the classical realm. Digital quantum computers have the potential to become a versatile tool for working scientists, particularly in fields like condensed matter and particle physics, and may open entirely new directions in fundamental research.”

The role of quantum tensor networks
Abstract representation of the 128-site MERA used in this work

Tensor networks are mathematical frameworks whose structure enables them to represent and manipulate quantum states in an efficient manner. Originally associated with the mathematics of quantum mechanics, tensor network methods now crop up in many places, from machine learning to natural language processing, or indeed any model with a large number of interacting, high-dimensional mathematical objects. 

The Quantinuum team described using a tensor network method, the multi-scale entanglement renormalization ansatz (MERA), to produce accurate estimates for the decay of ferromagnetic correlations and the ground state energy of the system. MERA is particularly well-suited to studying scale-invariant quantum states, such as ground states at continuous quantum phase transitions, where each layer in the mathematical model captures entanglement at a different scale of distance.

“By calculating the critical state properties with MERA on a digital quantum computer like the H-Series, we have shown that research teams can program the connectivity and system interactions into the problem,” said Dave Hayes, Lead of the U.S. quantum theory team at Quantinuum and one of the paper’s authors. “So, it can, in principle, go out and simulate any system that you can dream of.”

Simulating a highly entangled quantum spin model

In this experiment, the researchers wanted to accurately calculate the ground state of the quantum system in its critical state. This quantum system is composed of many tiny quantum magnets interacting with one another and pointing in different directions, known as a quantum spin model. In the paramagnetic phase, tiny, individual magnets in the material are randomly oriented, and only correlated with each other over small length-scales. In the ferromagnetic phase, these individual atomic magnetic moments align spontaneously over macroscopic length scales due to strong magnetic interactions. 

In the computational model, the quantum magnets were initially arranged in one dimension, along a line. To describe the critical point in this quantum magnetism problem, particles in the line needed to be entangled with one another in a complex way, making this a very challenging problem for a classical computer to solve, especially in high-dimensional and non-equilibrium systems.

“That's as hard as it gets for these systems,” Dave explained. “So that's where we want to look for quantum advantage – because we want the problem to be as hard as possible on the classical computer, and then have a quantum computer solve it.”

To improve the results, the team used two error mitigation techniques: symmetry-based error heralding, which is made possible by the MERA structure, and zero-noise extrapolation, a method originally developed by researchers at IBM. The first involved enforcing local symmetry in the model so that errors affecting the symmetry of the state could be detected. The second, zero-noise extrapolation, involves deliberately adding noise to the qubits to measure its impact, and then using those results to extrapolate to the result that would be expected with less noise than was present in the experiment.
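The extrapolation step is easy to sketch: measure the same observable at several deliberately amplified noise levels, fit the trend, and evaluate the fit at zero noise. The numbers below are synthetic and purely illustrative of the fitting step, not data from the experiment.

```python
import numpy as np

def zero_noise_extrapolate(scale_factors, expectation_values, degree=1):
    """Fit a polynomial to expectation values measured at amplified
    noise scales and evaluate it at scale 0 (the zero-noise limit)."""
    coeffs = np.polyfit(scale_factors, expectation_values, degree)
    return np.polyval(coeffs, 0.0)

# Toy data: an observable whose noisy value decays linearly with the
# noise scale; the true (noiseless) value is 1.0.
scales = [1.0, 1.5, 2.0]
values = [1.0 - 0.08 * s for s in scales]  # 0.92, 0.88, 0.84
estimate = zero_noise_extrapolate(scales, values)
```

In practice the noise amplification is done on hardware (for example by stretching gates or inserting extra gate pairs), and the choice of fit model matters; a linear fit is the simplest case.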

Future applications

The Quantinuum team describes this sort of problem as a stepping-stone, which allows the researchers to explore quantum tensor network methods on today’s devices and compare them either to simulations or analytical results produced using classical computers. It is a chance to learn how to tackle a problem really well before quantum computers scale up in the future and begin to offer solutions that are not possible to achieve on classical computers. 

“Potentially, our biggest applications over the next couple of years will include studying solid-state systems, physics systems, many-body systems, and modeling them,” said Jenni Strabley, Senior Director of Offering Management at Quantinuum.

The team now looks forward to future work, exploring more complex MERA generalizations to compute the states of 2D and 3D many-body and condensed matter systems on a digital quantum computer – quantum states that are much more difficult to calculate classically. 

The H-Series allows researchers to simulate a much broader range of systems than analog devices as well as to incorporate quantum error mitigation strategies, as demonstrated in the experiment. Plus, Quantinuum’s System Model H2 quantum computer, which was launched earlier this year, should scale this type of simulation beyond what is possible using classical computers.

events
March 31, 2023
An exclusive at the RSA Conference, Cyber secrets: What best-in-class cyber experts know about quantum

Quantum computing is set to have a huge impact on cybersecurity. Much of the discussion today is around the threat that it could pose to the foundation of security systems, but there is also enormous potential for quantum computing to transform the security of communications and data.

Quantinuum is hosting an exclusive RSA Conference session titled “Cyber Secrets: What best-in-class cyber experts know about quantum” on Tuesday, April 25th, from 7:30-8:15am PST in the Moscone Center South Hall at Booth #2100. 

During the session, you can expect to learn more about:

  • The impact of quantum computing on cybersecurity
  • Separating hype from reality with quantum security technologies
  • What leading organizations are doing to build resilience with quantum technologies


The session features expert panelists direct from the worlds of quantum computing and cybersecurity:

  • Kaniah Konkoly-Thege, Chief Legal Counsel and SVP Government Relations for Quantinuum
  • Jeff Miller, Chief Information Officer for Quantinuum
  • Matt Bohne, Vice President & Chief Product Security Officer for Honeywell Corporation
  • Todd Moore, Head of the Data Security Products Portfolio for Thales
  • John Davis, Vice President, Public Sector for Palo Alto Networks


Breakfast and refreshments will be provided for all attendees. Come early and stick around after to talk with our panelists. We hope to see you at the session and invite you to Quantinuum’s Booth #5176 in the North Hall.

partnership
March 23, 2023
KPMG and Microsoft join Quantinuum in simplifying quantum algorithm development via the cloud

In 1952, facing opposition from scientists who disbelieved her thesis that computer programming could be made more useful by using English words, the mathematician and computer scientist Grace Hopper published her first paper on compilers and wrote a precursor to the modern compiler, the A-0, while working at Remington Rand.

Over subsequent decades, the principles of compilers, whose task it is to translate between high level programming languages and machine code, took shape and new methods were introduced to support their optimization. One such innovation was the intermediate representation (IR), which was introduced to manage the complexity of the compilation process, enabling compilers to represent the program without loss of information, and to be broken up into modular phases and components.

This developmental path spawned the modern computer industry, with languages that work across hardware systems, middleware, firmware, operating systems, and software applications. It has also supported the emergence of the huge numbers of small businesses and professionals who make a living collaborating to solve problems using code that depends on compilers to control the underlying computing hardware.

Now, a similar story is unfolding in quantum computing. There are efforts around the world to make it simpler for engineers and developers across many sectors to take advantage of quantum computers by translating between high level coding languages and tools, and quantum circuits — the combinations of gates that run on quantum computers to generate solutions. Many of these efforts focus on hybrid quantum-classical workflows, which allow a problem to be solved by taking advantage of the strengths of different modes of computation, accessing central processing units (CPUs), graphical processing units (GPUs) and quantum processing units (QPUs) as needed.

Microsoft is a significant contributor to this burgeoning quantum ecosystem, providing access to multiple quantum computing systems through Azure Quantum, and is a founding member of the QIR Alliance, a cross-industry effort to make quantum computing source code portable across different hardware systems and modalities and to make quantum computing more useful to engineers and developers. QIR offers an interoperable specification for quantum programs, including a hardware profile designed for Quantinuum’s H-Series quantum computers, and has the capacity to support cross-compiling quantum and classical workflows, encouraging hybrid use-cases.

As one of the largest integrated quantum computing companies in the world, Quantinuum was excited to become a QIR steering member alongside partners including Nvidia, Oak Ridge National Laboratory, Quantum Circuits Inc., and Rigetti Computing. Quantinuum supports multiple open-source eco-system tools including its own family of open-source software development kits and compilers, such as TKET for general purpose quantum computation and lambeq for quantum natural language processing.

Rapid progress with KPMG and Microsoft

As founding members of the QIR Alliance, Quantinuum and Microsoft Azure Quantum recently worked alongside KPMG on a project that paired Microsoft’s Q#, a stand-alone language offering a high level of abstraction, with Quantinuum’s System Model H1, Powered by Honeywell. Q# has been designed for the specific needs of quantum computing, enabling developers to seamlessly blend classical and quantum operations and significantly simplifying the design of hybrid algorithms.

KPMG’s quantum team wanted to translate an existing algorithm into Q# and to take advantage of the unique and differentiating capabilities of Quantinuum’s H-Series, particularly qubit reuse, mid-circuit measurement, and all-to-all connectivity. System Model H1 is a first-generation trapped-ion quantum computer built on the quantum charge-coupled device (QCCD) architecture. KPMG accessed the H1-1 QPU with 20 fully connected qubits. H1-1 recently achieved a Quantum Volume of 32,768, a new industry high-water mark for computational power as measured by that benchmark.

Q# and QIR offered an abstraction from hardware specific instructions, allowing the KPMG team, led by Michael Egan, to make best use of the H-Series and take advantage of runtime support for measurement-conditioned program flow control, and classical calculations within runtime.

Nathan Rhodes of the KPMG team wrote a tutorial about the project to demonstrate how an algorithm writer would use the KPMG code step-by-step as well as the particular features of QIR, Q# and the H-Series. It is the first time that code from a third party will be available for end users on Microsoft’s Azure portal.

Microsoft recently announced the roll-out of integrated quantum computing on Azure Quantum, an important milestone in Microsoft’s Hybrid Quantum Computing Architecture, which provides tighter integration between quantum and classical processing. 

Fabrice Frachon, Principal PM Lead, Azure Quantum, described this new Azure Quantum capability as a key milestone to unlock a new generation of hybrid algorithms on the path to scaled quantum computing.

The demonstration

The team ran an algorithm designed to solve an estimation problem, a promising use case for quantum computing, with potential applications in fields including traffic flow, network optimization, and energy generation, storage, and distribution, as well as other infrastructure challenges. The iterative phase estimation algorithm [1] was compiled into quantum circuits from code written in a Q# environment with the QIR toolset, producing a circuit with approximately 500 gates, including 111 two-qubit gates, running across three qubits with one qubit reused three times, and achieving a fidelity of 0.92. This is possible because of the high gate fidelity and the low SPAM error, which enables qubit reuse.
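The iterative scheme can be sketched classically: each round uses a single ancilla, applies a controlled power of the unitary to pick up one binary digit of the phase (least significant first), and rotates away the digits already measured before the ancilla is read out and reused. The sketch below assumes the phase is an exact m-bit binary fraction, so every measurement is deterministic; it illustrates the algorithm's logic, not the KPMG circuit itself.

```python
import math

def iterative_phase_estimation(phi, m):
    """Recover an m-bit phase phi via the single-ancilla iterative scheme.
    At step j the ancilla picks up phase 2*pi*(2**(j-1))*phi from the
    controlled unitary; a feedback rotation cancels the already-known
    lower-order bits before the X-basis measurement."""
    bits = [0] * m
    tail = 0.0  # binary fraction 0.b_{j+1}...b_m built from measured bits
    for j in range(m, 0, -1):
        acc = ((2 ** (j - 1)) * phi) % 1.0       # = 0.b_j b_{j+1} ... b_m
        remaining = acc - tail / 2.0             # after the feedback rotation
        p1 = math.sin(math.pi * remaining) ** 2  # probability of measuring 1
        bit = 1 if p1 > 0.5 else 0
        bits[j - 1] = bit
        tail = (bit + tail) / 2.0                # prepend the new bit
    return sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits))

estimate = iterative_phase_estimation(0.625, 3)  # recovers 0.625 exactly
```

Because each bit is extracted with the same ancilla and a classically conditioned rotation, the quantum requirements map directly onto the H-Series capabilities listed below: mid-circuit measurement, qubit reuse, a bound loop, and classical computation within the runtime.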

The results compare favorably with the more standard Quantum Phase Estimation version described in “Quantum computation and quantum information,” by Michael A. Nielsen and Isaac Chuang.

Quantinuum’s H1 had five capabilities that were crucial to this project:

  1. Qubit reuse
  2. Mid-circuit measurement
  3. Bound loop (a restriction on how many times the system will do the iterative circuit)
  4. Classical computation
  5. Nested functions

The project emphasized the importance of companies experimenting with quantum computing now, so they can identify possible IT issues early on, understand the development environment, and see how quantum computing integrates with current workflows and processes.

As the global quantum ecosystem continues to advance, collaborative efforts like QIR will play a crucial role in bringing together industrial partners seeking novel solutions to challenging problems, talented developers, engineers, and researchers, and quantum hardware and software companies, which will continue to add deep scientific and engineering knowledge and expertise.

  1. Phys. Rev. A 76, 030306(R) (2007), “Arbitrary accuracy iterative quantum phase estimation algorithm using a single ancillary qubit: A two-qubit benchmark” (aps.org)
technical
February 23, 2023
Quantum Volume reaches 5 digits for the first time: 5 perspectives on what it means for quantum computing

Quantinuum’s H-Series team has hit the ground running in 2023, achieving a new performance milestone. The H1-1 trapped-ion quantum computer has achieved a Quantum Volume (QV) of 32,768 (2^15), the highest in the industry to date.

The team previously increased the QV to 8,192 (2^13) on the System Model H1 in September, less than six months ago. The next goal was a QV of 16,384 (2^14). Continuous improvements to the H1-1's controls and subsystems advanced the system enough to reach 2^14 as expected, and then to go one major step further and reach a QV of 2^15.

The Quantum Volume test is a full-system benchmark that produces a single-number measure of a quantum computer’s general capability. The benchmark takes into account qubit number, fidelity, connectivity, and other quantities important in building useful devices [1]. While other measures such as gate fidelity and qubit count are significant and worth tracking, neither is as comprehensive as Quantum Volume, which better represents the operational ability of a quantum computer.

Dr. Brian Neyenhuis, Director of Commercial Operations, credits reductions in the phase noise of the computer’s lasers as one key factor in the increase.

"We've had enough qubits for a while, but we've been continually pushing on reducing the error in our quantum operations, specifically the two-qubit gate error, to allow us to do these Quantum Volume measurements,” he said. 

The Quantinuum team improved memory error and elements of the calibration process as well. 

“It was a lot of little things that got us to the point where our two-qubit gate error and our memory error are both low enough that we can pass these Quantum Volume circuit tests,” he said. 

The work of increasing Quantum Volume means improving all the subsystems and subcomponents of the machine individually and simultaneously, while ensuring all the systems continue to work well together. Such a complex task takes a high degree of orchestration across the Quantinuum team, with the benefits of the work passed on to H-Series users. 

To illustrate what this 5-digit Quantum Volume milestone means for the H-Series, here are 5 perspectives that reflect Quantinuum teams and H-Series users.

Perspective #1: How a higher QV impacts algorithms

Dr. Henrik Dreyer is Managing Director and Scientific Lead at Quantinuum’s office in Munich, Germany. In the context of his work, an improvement in Quantum Volume is important as it relates to gate fidelity. 

“As application developers, the signal-to-noise ratio is what we're interested in,” Henrik said. “If the signal is small, I might run the circuits 10 times and only get one good shot. To recover the signal, I have to do a lot more shots and throw most of them away. Every shot takes time."

“The signal-to-noise ratio is sensitive to the gate fidelity. If you increase the gate fidelity by a little bit, the runtime of a given algorithm may go down drastically,” he said. “For a typical circuit, as the plot shows, even a relatively modest 0.16 percentage point improvement in fidelity, could mean that it runs in less than half the time.”

To demonstrate this point, the Quantinuum team has been benchmarking the System Model H1 performance on circuits relevant for near-term applications. The graph below shows repeated benchmarking of the runtime of these circuits before and after the recent improvement in gate fidelity. The result of this moderate change in fidelity is a 3x change in runtime. The runtimes calculated below are based on the number of shots required to obtain accurate results from the benchmarking circuit – the example uses 430 arbitrary-angle two-qubit gates and an accuracy of 3%.
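The compounding Henrik describes can be checked with back-of-the-envelope arithmetic. If a circuit's overall fidelity is roughly the per-gate fidelity raised to the number of two-qubit gates, and the shot count needed for a fixed accuracy scales inversely with the square of that circuit fidelity, then a 0.16 percentage point gate improvement compounds dramatically over 430 gates. The specific fidelity values below are hypothetical, chosen only to illustrate the scaling:

```python
gates = 430                         # arbitrary-angle two-qubit gates in the benchmark circuit
f_before, f_after = 0.9968, 0.9984  # hypothetical gate fidelities, 0.16 pp apart

F_before = f_before ** gates        # rough circuit-level fidelity before the improvement
F_after = f_after ** gates          # ...and after

# Shots (and hence runtime) needed scale roughly as 1/F^2:
runtime_ratio = (F_after / F_before) ** 2  # a factor of roughly 3-4 under these assumptions
```

The exact factor depends on the noise model and the accuracy target, but the qualitative point stands: small per-gate gains translate into large runtime reductions on deep circuits.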

Perspective #2: Advancing quantum error correction

Dr. Natalie Brown and Dr. Ciaran Ryan-Anderson both work on quantum error correction at Quantinuum. They see the QV advance as an overall boost to this work.

“Hitting a Quantum Volume number like this means that you have low error rates, a lot of qubits, and very long circuits,” Natalie said. “And all three of those are wonderful things for quantum error correction. A higher Quantum Volume most certainly means we will be able to run quantum error correction better. Error correction is a critical ingredient to large-scale quantum computing. The earlier we can start exploring error correction on today’s small-scale hardware, the faster we’ll be able to demonstrate it at large-scale.”

Ciaran said that H1-1's low error rates allow scientists to make error correction better and start to explore decoding options.

“If you can have really low error rates, you can apply a lot of quantum operations, known as gates,” Ciaran said. "This makes quantum error correction easier because we can suppress the noise even further and potentially use fewer resources to do it, compared to other devices.”

Perspective #3: Meeting a high benchmark

“This accomplishment shows that gate improvements are getting translated to full-system circuits,” said Dr. Charlie Baldwin, a research scientist at Quantinuum. 

Charlie specializes in quantum computing performance benchmarks, conducting research with the Quantum Economic Development Consortium (QED-C).

“Other benchmarking tests use easier circuits or incorporate other options like post-processing data. This can make it more difficult to determine what part improved,” he said. “With Quantum Volume, it’s clear that the performance improvements are from the hardware, which are the hardest and most significant improvements to make.” 

“Quantum Volume is a well-established test. You really can’t cheat it,” said Charlie.

Perspective #4: Implications for quantum applications

Dr. Ross Duncan, Head of Quantum Software, sees Quantum Volume measurements as a good way to show overall progress in the process of building a quantum computer.

“Quantum Volume has merit, compared to any other measure, because it gives a clear answer,” he said. 

“This latest increase reveals the extent of combined improvements in the hardware in recent months and means researchers and developers can expect to run deeper circuits with greater success.” 

Perspective #5: H-Series users

Quantinuum’s business model is unique in that the H-Series systems are continuously upgraded through their product lifecycle. For users, this means they continually and immediately get access to the latest breakthroughs in performance. The reported improvements were not done on an internal testbed, but rather implemented on the H1-1 system which is commercially available and used extensively by users around the world.

“As soon as the improvements were implemented, users were benefiting from them,” said Dr. Jenni Strabley, Sr. Director of Offering Management. “We take our Quantum Volume measurement intermixed with customers’ jobs, so we know that the improvements we’re seeing are also being seen by our customers.”

Jenni went on to say, “Continuously delivering increasingly better performance shows our commitment to our customers’ success with these early small-scale quantum computers as well as our commitment to accuracy and transparency. That’s how we accelerate quantum computing.”

Supporting data from Quantinuum’s 2^15 QV milestone

This latest QV milestone demonstrates how the Quantinuum team continues to boost the performance of the System Model H1, making improvements to the two-qubit gate fidelity while maintaining high single-qubit fidelity, high SPAM fidelity, and low cross-talk.

The average single-qubit gate fidelity for these milestones was 99.9955(8)%, the average two-qubit gate fidelity was 99.795(7)% with fully connected qubits, and state preparation and measurement fidelity was 99.69(4)%.

For both tests, the Quantinuum team ran 100 circuits with 200 shots each, using standard QV optimization techniques to yield an average of 219.02 arbitrary-angle two-qubit gates per circuit on the 2^14 test, and 244.26 arbitrary-angle two-qubit gates per circuit on the 2^15 test.

The Quantinuum H1-1 successfully passed the quantum volume 16,384 benchmark, outputting heavy outcomes 69.88% of the time, and passed the 32,768 benchmark, outputting heavy outcomes 69.075% of the time. The heavy output frequency is a simple measure of how well the measured outputs from the quantum computer match the results from an ideal simulation. Both results are above the two-thirds passing threshold with high confidence. More details on the Quantum Volume test can be found here.
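The heavy-output score itself is straightforward to compute: classically simulate each QV circuit to find its ideal output distribution, call the bitstrings with above-median ideal probability "heavy," and measure the fraction of hardware shots that land on them. A toy example with made-up numbers:

```python
import numpy as np

def heavy_output_frequency(ideal_probs, counts):
    """Fraction of measured shots that land on 'heavy' outputs, i.e.
    bitstrings whose ideal probability exceeds the median ideal probability."""
    median = np.median(list(ideal_probs.values()))
    heavy = {s for s, p in ideal_probs.items() if p > median}
    total = sum(counts.values())
    return sum(c for s, c in counts.items() if s in heavy) / total

# Made-up 2-qubit illustration: '00' and '01' are the heavy outputs here.
ideal = {"00": 0.50, "01": 0.30, "10": 0.15, "11": 0.05}
measured = {"00": 60, "01": 20, "10": 15, "11": 5}
hof = heavy_output_frequency(ideal, measured)  # 0.8, above the 2/3 threshold
```

Passing the QV test at a given size requires this frequency to exceed two-thirds, with statistical confidence, across the whole ensemble of random circuits.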

Heavy output frequency for H1-1 at 2^15 (QV 32,768)
Heavy output frequency for H1-1 at 2^14 (QV 16,384)

Quantum Volume data and analysis code can be accessed on Quantinuum’s GitHub repository for quantum volume data. Contemporary benchmarking data can be accessed at Quantinuum’s GitHub repository for hardware specifications.

1. Re-examining the quantum volume test: Ideal distributions, compiler optimizations, confidence intervals, and scalable resource estimations (quantum-journal.org)

technical
February 14, 2023
Trust and verify: Quantinuum hardware team posts performance data on GitHub

If you’re a software developer, the best way to show your work is to post your code on GitHub. The site serves as a host for code repositories and a tool for software version control. It’s a straightforward and popular way for developers to share code, collaborate and spread the word about new languages and technical projects. Community members can download code, contribute to open source software projects, or develop their own projects. 

Quantinuum has used this open platform to make it easier for developers and everyone in the quantum ecosystem to understand the performance of the company’s H-Series quantum computers. The team posts characterization data on System Model H1 performance to GitHub, along with benchmarking data on Quantum Volume.

The Quantinuum team prioritizes transparency and published the data behind the System Model H1 data sheets in a publicly available place to back up performance claims with data. Anyone who is curious about how the hardware team achieved 32,768 quantum volume in February can review the quantum volume data on GitHub. This repository contains the raw data along with the analysis code.

Charlie Baldwin, a lead physicist at Quantinuum, said the GitHub postings make it easy to understand how the hardware team measures errors.

“Algorithm developers and anyone interested in quantum computing also can use the data to verify our stated error rates,” he said. “Both the single- and two-qubit error rates are among the lowest, if not the lowest, available on a commercial system.”

The publicly available data from Quantinuum’s H-Series, Powered by Honeywell, is the most comprehensive set shared by a quantum computing company, as it includes circuits, raw data, gate counts, and error rates. Quantinuum shares this data for users who need to understand exactly what a quantum computer’s performance metrics represent when they are analyzing or publishing their results. Posting the verification data behind any performance metric is a best practice that helps quantum hardware providers promote transparency about the performance of their hardware.

The team also has posted data sheets for the System Model H1 and for the System Model H1 Emulator on the company website. The System Model H1 is a generation of quantum computers based on ions trapped in a single linear geometry. Currently the Quantinuum H1-1 and H1-2 are available to customers. Many Fortune 500 companies use the System Model H1 for quantum research and development.