KPMG and Microsoft join Quantinuum in simplifying quantum algorithm development via the cloud

The QIR Alliance, an international effort to enhance platform interoperability and support the work of quantum computing developers, has announced a milestone in the industry-wide effort to accelerate adoption

March 23, 2023

In 1952, facing opposition from scientists who disbelieved her thesis that computer programming could be made more useful by using English words, the mathematician and computer scientist Grace Hopper published her first paper on compilers and wrote a precursor to the modern compiler, the A-0, while working at Remington Rand.

Over subsequent decades, the principles of compilers, whose task is to translate between high-level programming languages and machine code, took shape, and new methods were introduced to support their optimization. One such innovation was the intermediate representation (IR), which manages the complexity of compilation by allowing a compiler to represent a program without loss of information and to be broken up into modular phases and components.

This developmental path spawned the modern computer industry, with languages that work across hardware systems, middleware, firmware, operating systems, and software applications. It has also supported the emergence of the huge numbers of small businesses and professionals who make a living collaborating to solve problems using code that depends on compilers to control the underlying computing hardware.

Now, a similar story is unfolding in quantum computing. There are efforts around the world to make it simpler for engineers and developers across many sectors to take advantage of quantum computers by translating between high-level coding languages and tools, and quantum circuits — the combinations of gates that run on quantum computers to generate solutions. Many of these efforts focus on hybrid quantum-classical workflows, which allow a problem to be solved by taking advantage of the strengths of different modes of computation, accessing central processing units (CPUs), graphics processing units (GPUs) and quantum processing units (QPUs) as needed.

Microsoft is a significant contributor to this burgeoning quantum ecosystem: it provides access to multiple quantum computing systems through Azure Quantum, and it is a founding member of the QIR Alliance, a cross-industry effort to make quantum computing source code portable across different hardware systems and modalities and to make quantum computing more useful to engineers and developers. QIR, the Quantum Intermediate Representation, offers an interoperable specification for quantum programs, including a hardware profile designed for Quantinuum’s H-Series quantum computers, and it can support cross-compiling quantum and classical workflows, encouraging hybrid use-cases.

As one of the largest integrated quantum computing companies in the world, Quantinuum was excited to become a QIR steering member alongside partners including Nvidia, Oak Ridge National Laboratory, Quantum Circuits Inc., and Rigetti Computing. Quantinuum supports multiple open-source ecosystem tools, including its own family of open-source software development kits and compilers, such as TKET for general-purpose quantum computation and lambeq for quantum natural language processing.

Rapid progress with KPMG and Microsoft

As founding members of the QIR Alliance, Quantinuum and Microsoft Azure Quantum recently worked alongside KPMG on a project that brought together Microsoft’s Q#, a stand-alone quantum programming language, and Quantinuum’s System Model H1, Powered by Honeywell. Q# is designed for the specific needs of quantum computing and provides a high level of abstraction, enabling developers to seamlessly blend classical and quantum operations and significantly simplifying the design of hybrid algorithms.

KPMG’s quantum team wanted to translate an existing algorithm into Q# and to take advantage of the unique and differentiating capabilities of Quantinuum’s H-Series, particularly qubit reuse, mid-circuit measurement and all-to-all connectivity. System Model H1 is the first-generation trapped-ion quantum computer built using the quantum charge-coupled device (QCCD) architecture. KPMG accessed the H1-1 QPU with 20 fully connected qubits. H1-1 recently achieved a Quantum Volume of 32,768, a new industry high-water mark for this measure of computational power.
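To make those capabilities concrete, here is a minimal sketch of mid-circuit measurement and qubit reuse written with Quantinuum’s open-source pytket (the Python interface to TKET). The circuit is purely illustrative and is not the KPMG algorithm.

```python
# Illustrative only (not the KPMG algorithm): qubit 0 is measured
# mid-circuit, reset, and then reused as if it were a fresh qubit,
# shrinking the number of physical qubits the job needs.
from pytket import Circuit
from pytket.circuit import OpType

circ = Circuit(2, 2)              # 2 qubits, 2 classical bits
circ.H(0)
circ.CX(0, 1)
circ.Measure(0, 0)                # mid-circuit measurement of qubit 0
circ.add_gate(OpType.Reset, [0])  # reset qubit 0 so it can be reused
circ.H(0)                         # qubit 0 now stands in for a third qubit
circ.CX(0, 1)
circ.Measure(0, 1)
```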

Q# and QIR offered an abstraction from hardware-specific instructions, allowing the KPMG team, led by Michael Egan, to make best use of the H-Series and take advantage of runtime support for measurement-conditioned program flow control and classical calculations at runtime.
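For instance, a gate can be conditioned on the outcome of an earlier mid-circuit measurement. The snippet below is a hedged sketch using pytket’s conditional-gate syntax; in Q#, the same flow control is written as an ordinary if statement on a measurement result.

```python
# Measurement-conditioned flow control, sketched in pytket: the X gate
# on qubit 1 is applied only when the mid-circuit measurement of
# qubit 0 returned 1.
from pytket import Circuit

circ = Circuit(2, 1)
circ.H(0)
circ.Measure(0, 0)                                # mid-circuit measurement
circ.X(1, condition_bits=[0], condition_value=1)  # classically conditioned gate
```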

Nathan Rhodes of the KPMG team wrote a step-by-step tutorial about the project to demonstrate how an algorithm writer would use the KPMG code, as well as the particular features of QIR, Q# and the H-Series. This is the first time that code from a third party will be available to end users on Microsoft’s Azure portal.

Microsoft recently announced the roll-out of integrated quantum computing on Azure Quantum, an important milestone in Microsoft’s Hybrid Quantum Computing Architecture, which provides tighter integration between quantum and classical processing. 

Fabrice Frachon, Principal PM Lead, Azure Quantum, described this new Azure Quantum capability as a key milestone to unlock a new generation of hybrid algorithms on the path to scaled quantum computing.

The demonstration

The team ran an algorithm designed to solve an estimation problem, a promising use case for quantum computing with potential applications in fields including traffic flow, network optimization, and energy generation, storage, and distribution, as well as other infrastructure challenges. The iterative phase estimation algorithm [1] was compiled into quantum circuits from code written in a Q# environment with the QIR toolset, producing a circuit with approximately 500 gates, including 111 two-qubit gates, running across three qubits with one reused three times, and achieving a fidelity of 0.92. This is possible because of the H-Series’ high gate fidelity and low state preparation and measurement (SPAM) error, which together enable qubit reuse.

The results compare favorably with the more standard quantum phase estimation version described in “Quantum Computation and Quantum Information” by Michael A. Nielsen and Isaac L. Chuang.

Quantinuum’s H1 has five capabilities that were crucial to this project, all of which appear in the sketch following this list:

  1. Qubit reuse
  2. Mid-circuit measurement
  3. Bound loop (a limit on how many times the system will repeat the iterative circuit)
  4. Classical computation
  5. Nested functions
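As a concrete illustration of how these capabilities fit together, below is a hedged sketch of single-ancilla iterative phase estimation written with Quantinuum’s open-source pytket. It assumes, for simplicity, that the unitary is a one-qubit phase gate with an exactly three-bit phase of 1/8; the actual KPMG implementation was written in Q# and differs in detail.

```python
# Hedged sketch of single-ancilla iterative phase estimation (IPE).
# Assumed for illustration (not from the article): U|1> = exp(2*pi*i*phi)|1>
# with phi = 1/8, so three iterations recover the phase exactly.
from pytket import Circuit
from pytket.circuit import OpType

PHI = 1.0 / 8.0   # phase to estimate, as a fraction of a full turn
N_BITS = 3        # the bound on the loop: number of iterations

circ = Circuit(2, N_BITS)  # qubit 0: ancilla, qubit 1: eigenstate register
circ.X(1)                  # prepare the eigenstate |1>

for k in range(N_BITS):               # bound loop (least significant bit first)
    power = 2 ** (N_BITS - 1 - k)
    circ.add_gate(OpType.Reset, [0])  # qubit reuse: recycle the one ancilla
    circ.H(0)
    # controlled-U^(2^m); pytket angles are in half-turns
    circ.add_gate(OpType.CU1, [2 * PHI * power], [0, 1])
    # classical computation: undo the phase already determined, conditioned
    # on earlier mid-circuit measurement outcomes
    for j in range(k):
        circ.add_gate(OpType.U1, [-1.0 / 2 ** (k - j)], [0],
                      condition_bits=[j], condition_value=1)
    circ.H(0)
    circ.Measure(0, k)                # mid-circuit measurement of bit k
```

Reading the classical bits as the binary fraction 0.b2b1b0 recovers the phase: with the assumed value, the rounds deterministically return b0 = 1, b1 = 0, b2 = 0, i.e. 0.001 in binary, which is 1/8.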

The project emphasized the importance of companies experimenting with quantum computing now, so they can identify possible IT issues early on, understand the development environment, and learn how quantum computing integrates with current workflows and processes.

As the global quantum ecosystem continues to advance, collaborative efforts like the QIR Alliance will play a crucial role in bringing together industrial partners seeking novel solutions to challenging problems; talented developers, engineers, and researchers; and quantum hardware and software companies, which will continue to contribute deep scientific and engineering knowledge and expertise.

  1. “Arbitrary accuracy iterative quantum phase estimation algorithm using a single ancillary qubit: A two-qubit benchmark,” Phys. Rev. A 76, 030306(R) (2007)

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

Blog
December 9, 2024
Q2B 2024: The Roadmap to Quantum Value

At this year’s Q2B Silicon Valley conference, December 10th–12th in Santa Clara, California, the Quantinuum team will be participating in plenary and case study sessions to showcase our quantum computing technologies.

Schedule a meeting with us at Q2B

Meet our team at Booth #G9 to discover how Quantinuum is charting the path to universal, fully fault-tolerant quantum computing. 

Join our sessions: 

Tuesday, Dec 10, 10:00 - 10:20am PT

Plenary: Advancements in Fault-Tolerant Quantum Computation: Demonstrations and Results

There is industry-wide consensus on the need for fault-tolerant QPUs, but demonstrations of these abilities are less common. In this talk, Dr. Hayes will review Quantinuum’s long list of meaningful demonstrations in fault tolerance, including real-time error correction, a variety of codes from the surface code to exotic qLDPC codes, logical benchmarking, and beyond-break-even behavior on multiple codes and circuit families.

View the presentation

Wednesday, Dec 11, 4:30 – 4:50pm PT

Keynote: Quantum Tokens: Securing Digital Assets with Quantum Physics

Mitsui’s Deputy General Manager, Quantum Innovation Dept., Corporate Development Div., Koji Naniwada, and Quantinuum’s Head of Cybersecurity, Duncan Jones, will deliver a keynote presentation on a case study for quantum in cybersecurity. Together, our organizations demonstrated the first implementation of quantum tokens over a commercial QKD network. Quantum tokens enable three previously incompatible properties: unforgeability guaranteed by physics, fast settlement without centralized validation, and user privacy until redemption. We present results from our successful Tokyo trial using NEC’s commercial QKD hardware and discuss potential applications in financial services.

Details on the case study

Wednesday, Dec 11, 5:10 – 6:10pm PT

Quantinuum and Mitsui Sponsored Happy Hour

Join the Quantinuum and Mitsui teams in the expo hall for a networking happy hour. 

Blog
December 5, 2024
Quantum computing is accelerating

Particle accelerator projects like the Large Hadron Collider (LHC) don’t just smash particles; they also power the invention of some of the world’s most impactful technologies. A favorite example is the World Wide Web, which was developed for particle physics experiments at CERN.

Tech designed to unlock the mysteries of the universe has brutally exacting requirements, and it is this boundary-pushing, plus billion-dollar budgets, that has led to so much innovation.

For example, X-rays are used in accelerators to measure the chemical composition of the accelerator products and to monitor radiation. The understanding developed to create those technologies was then applied to help us build better CT scanners, reducing the X-ray dosage while improving the image quality.

Stories like this are common in accelerator physics, or High Energy Physics (HEP). Scientists and engineers working in HEP have been early adopters and/or key drivers of innovations in advanced cancer treatments (using proton beams), machine learning techniques, robots, new materials, cryogenics, data handling and analysis, and more. 

One key strand of HEP research aims to make accelerators simpler and cheaper, and their computing environments are a prime piece of infrastructure that could be improved.

CERN itself has said: “CERN is one of the most highly demanding computing environments in the research world... From software development, to data processing and storage, networks, support for the LHC and non-LHC experimental programme, automation and controls, as well as services for the accelerator complex and for the whole laboratory and its users, computing is at the heart of CERN’s infrastructure.” 

With accelerators annually generating data in excess of an exabyte (a billion gigabytes), tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the HEP community is interested in quantum computing, which offers real solutions to some of their hardest problems.

As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable.”

The HEP community’s interest in quantum computing is growing. In recent years, their scientists have been looking carefully at how quantum computing could help them, publishing a number of papers discussing the challenges and requirements for quantum technology to make a dent (here’s one example, and here’s the arXiv version). 

In the past few months, what was previously theoretical has begun to become reality. Several groups have published results using quantum machines to tackle Lattice Gauge Theory, a mathematical framework used to describe a broad range of phenomena in HEP (and beyond). Two papers came from academic groups using quantum simulators, one based on trapped ions and one on neutral atoms. Another group, including scientists from Google, tackled Lattice Gauge Theory using a superconducting quantum computer. Taken together, these papers indicate a growing interest in using quantum computing for High Energy Physics beyond simple one-dimensional systems, which are more easily accessible with classical methods such as tensor networks.

We have been working with DESY, one of the world’s leading accelerator centers, to help make quantum computing useful for their work. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center that operates, develops, and constructs particle accelerators, and is part of the worldwide computer network used to store and analyze the enormous flood of data that is produced by the LHC in Geneva.  

Our first publication from this partnership describes a quantum machine learning technique for untangling data from the LHC, finding that in some cases the quantum approach was indeed superior to the classical approach. More recently, we used Quantinuum System Model H1 to tackle Lattice Gauge Theory (LGT), as it’s a favorite contender for quantum advantage in HEP.

Lattice Gauge Theories are one approach to solving what are more broadly referred to as “quantum many-body problems”. Quantum many-body problems lie at the border of our knowledge in many different fields: the electronic structure problem, which impacts chemistry and pharmaceuticals; the quest to understand and engineer new material properties, such as light-harvesting materials; basic research such as high energy physics, which aims to understand the fundamental constituents of the universe; and condensed matter physics, where our understanding of phenomena like high-temperature superconductivity is still incomplete.

The difficulty in solving problems like this, analytically or computationally, is that the problem complexity grows exponentially with the size of the system. For example, there are 36 possible configurations of two six-faced dice (1 and 1, 1 and 2, 1 and 3, and so on), while for ten dice there are more than sixty million configurations.

Quantum computing may be very well suited to tackling problems like this, because a quantum processor’s information density scales the same way: each qubit added to a QPU doubles the information the system can contain. Our 56-qubit System Model H2, for example, can hold quantum states that would require 128*(2^56) bits to describe (with double-precision numbers) on a classical supercomputer, which is more information than the biggest supercomputer in the world can hold in memory.
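Both scaling claims are easy to check with a few lines of arithmetic; the snippet below simply reproduces the numbers quoted above.

```python
# Checking the scaling numbers quoted above.
dice = 6 ** 10                # configurations of ten six-faced dice
print(dice)                   # 60466176: "more than sixty million"

amplitudes = 2 ** 56          # complex amplitudes in a 56-qubit state
bits_per_amplitude = 128      # one double-precision complex number
total_bits = amplitudes * bits_per_amplitude
print(total_bits / 8 / 1e18)  # ~1.15 exabytes of classical memory
```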

The joint team made significant progress in approaching the Lattice Gauge Theory corresponding to Quantum Electrodynamics, the theory of light and matter. For the first time, they were able to study the full wavefunction of a two-dimensional confining system with gauge fields and dynamical matter fields on a quantum processor. They were also able to visualize the confining string and the string-breaking phenomenon at the level of the wavefunction, across a range of interaction strengths.

The team began by defining the Hamiltonian in the InQuanto software package, and used InQuanto’s reusable protocols to compute both projective measurements and expectation values. InQuanto also allowed the easy integration of measurement reduction techniques and scalable error mitigation techniques. Moreover, the emulator and hardware experiments were orchestrated by the Nexus online platform.

In one section of the study, a circuit with 24 qubits and more than 250 two-qubit gates was reduced to a smaller width of 15 qubits thanks to the unique automatic qubit-reuse and mid-circuit measurement compilation implemented in TKET.

This work paves the way towards using quantum computers to study lattice gauge theories in higher dimensions, with the goal of one day simulating the full three-dimensional Quantum Chromodynamics theory underlying the nuclear sector of the Standard Model of particle physics. Being able to simulate full 3D quantum chromodynamics will undoubtedly unlock many of Nature’s mysteries, from the Big Bang to the interior of neutron stars, and is likely to lead to applications we haven’t yet dreamed of. 
