KPMG and Microsoft join Quantinuum in simplifying quantum algorithm development via the cloud

The QIR Alliance, an international effort to improve platform interoperability and advance the work of quantum computing developers, has announced a milestone in the industry-wide push to accelerate adoption.

March 23, 2023

In 1952, facing opposition from scientists who doubted her thesis that computer programming could be made more useful by using English words, the mathematician and computer scientist Grace Hopper published her first paper on compilers and wrote a precursor to the modern compiler, the A-0, while working at Remington Rand.

Over subsequent decades, the principles of compilers, whose task is to translate between high-level programming languages and machine code, took shape, and new methods were introduced to support their optimization. One such innovation was the intermediate representation (IR), which manages the complexity of compilation by allowing a compiler to represent a program without loss of information and to be broken up into modular phases and components.

This developmental path spawned the modern computer industry, with languages that work across hardware systems, middleware, firmware, operating systems, and software applications. It has also supported the emergence of the huge numbers of small businesses and professionals who make a living collaborating to solve problems using code that depends on compilers to control the underlying computing hardware.

Now, a similar story is unfolding in quantum computing. There are efforts around the world to make it simpler for engineers and developers across many sectors to take advantage of quantum computers by translating between high-level coding languages and tools, and quantum circuits — the combinations of gates that run on quantum computers to generate solutions. Many of these efforts focus on hybrid quantum-classical workflows, which solve a problem by drawing on the strengths of different modes of computation, accessing central processing units (CPUs), graphics processing units (GPUs) and quantum processing units (QPUs) as needed.

Microsoft is a significant contributor to this burgeoning quantum ecosystem, providing access to multiple quantum computing systems through Azure Quantum, and is a founding member of the QIR Alliance, a cross-industry effort to make quantum computing source code portable across different hardware systems and modalities, and to make quantum computing more useful to engineers and developers. QIR offers an interoperable specification for quantum programs, including a hardware profile designed for Quantinuum’s H-Series quantum computers, and can support cross-compiling quantum and classical workflows, encouraging hybrid use cases.

As one of the largest integrated quantum computing companies in the world, Quantinuum was excited to become a QIR steering member alongside partners including Nvidia, Oak Ridge National Laboratory, Quantum Circuits Inc., and Rigetti Computing. Quantinuum supports multiple open-source ecosystem tools, including its own family of open-source software development kits and compilers, such as TKET for general-purpose quantum computation and lambeq for quantum natural language processing.

Rapid progress with KPMG and Microsoft

Quantinuum, a founding member of the QIR Alliance, recently worked with Microsoft Azure Quantum and KPMG on a project involving Microsoft’s Q#, a stand-alone language offering a high level of abstraction, and Quantinuum’s System Model H1, Powered by Honeywell. Q# has been designed for the specific needs of quantum computing, enabling developers to seamlessly blend classical and quantum operations and significantly simplifying the design of hybrid algorithms.

KPMG’s quantum team wanted to translate an existing algorithm into Q# and to take advantage of the unique and differentiating capabilities of Quantinuum’s H-Series, particularly qubit reuse, mid-circuit measurement and all-to-all connectivity. System Model H1 is the first-generation trapped-ion quantum computer built on the quantum charge-coupled device (QCCD) architecture. KPMG accessed the H1-1 QPU with 20 fully connected qubits. H1-1 recently achieved a Quantum Volume of 32,768, a new industry high-water mark for computational power as measured by that benchmark.

Q# and QIR offered an abstraction from hardware-specific instructions, allowing the KPMG team, led by Michael Egan, to make best use of the H-Series and take advantage of runtime support for measurement-conditioned program flow control and classical calculations at runtime.

Nathan Rhodes of the KPMG team wrote a step-by-step tutorial about the project, demonstrating how an algorithm writer would use the KPMG code as well as the particular features of QIR, Q# and the H-Series. It is the first time that code from a third party will be available to end users on Microsoft’s Azure portal.

Microsoft recently announced the roll-out of integrated quantum computing on Azure Quantum, an important milestone in Microsoft’s Hybrid Quantum Computing Architecture, which provides tighter integration between quantum and classical processing. 

Fabrice Frachon, Principal PM Lead, Azure Quantum, described this new Azure Quantum capability as a key milestone to unlock a new generation of hybrid algorithms on the path to scaled quantum computing.

The demonstration

The team ran an algorithm designed to solve an estimation problem, a promising use case for quantum computing with potential application in fields including traffic flow, network optimization, energy generation, storage and distribution, and other infrastructure challenges. The iterative phase estimation algorithm [1] was compiled into quantum circuits from code written in a Q# environment with the QIR toolset, producing a circuit with approximately 500 gates, including 111 two-qubit gates, running across three qubits with one reused three times, and achieving a fidelity of 0.92. This is possible because of the high gate fidelity and the low state preparation and measurement (SPAM) error, which enables qubit reuse.

The results compare favorably with the more standard Quantum Phase Estimation version described in “Quantum computation and quantum information,” by Michael A. Nielsen and Isaac Chuang.
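To make the pattern concrete, here is a minimal sketch (not the KPMG or Quantinuum project code) of an iterative phase estimation loop for a single-qubit phase gate, written in Python with Quantinuum’s open-source pytket (TKET) package. It uses one ancilla that is measured and reset each round, with classically conditioned feedback rotations, the same qubit-reuse and mid-circuit-measurement pattern described above; the phase value, register sizes and gate choices are illustrative assumptions only.

```python
# Minimal iterative phase estimation sketch (illustrative only; assumes pytket).
# U = U1(phi) acts on qubit 1; one ancilla (qubit 0) is measured and reset each
# round, and previously measured bits condition the feedback rotations.
from pytket import Circuit

n_bits = 3                          # number of binary digits of the phase to extract
phi_half_turns = 0.75               # hidden eigenphase: 3/8 of a turn (pytket angles are in units of pi)

circ = Circuit(2, n_bits)           # qubit 0: ancilla, qubit 1: eigenstate register
circ.X(1)                           # |1> is the eigenstate of U1 that acquires the phase

for j in range(n_bits, 0, -1):      # extract bits from least (j = n) to most (j = 1) significant
    circ.Reset(0)                   # qubit reuse: the same ancilla every round
    circ.H(0)
    circ.CU1(phi_half_turns * 2 ** (j - 1), 0, 1)   # controlled-U^(2^(j-1)) kicks phase onto the ancilla
    for l in range(j + 1, n_bits + 1):
        # Feedback: subtract the contribution of already-measured, less significant
        # bits, applied only when the corresponding classical bit is 1.
        circ.U1(-1.0 / 2 ** (l - j), 0,
                condition_bits=[l - 1], condition_value=1)
    circ.H(0)
    circ.Measure(0, j - 1)          # mid-circuit measurement: bit j of the phase
```

Before execution on H-Series hardware, such a circuit would still be compiled for the target backend; the sketch only shows the structure that bounded loops, mid-circuit measurement, qubit reuse and classical conditioning make possible.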

Quantinuum’s H1 had five capabilities that were crucial to this project:

  1. Qubit reuse
  2. Mid-circuit measurement
  3. Bound loop (a restriction on how many times the system will do the iterative circuit)
  4. Classical computation
  5. Nested functions

The project emphasized the importance of companies experimenting with quantum computing now, so they can identify possible IT issues early, understand the development environment, and learn how quantum computing integrates with current workflows and processes.

As the global quantum ecosystem continues to advance, collaborative efforts like the QIR Alliance will play a crucial role in bringing together industrial partners seeking novel solutions to challenging problems; talented developers, engineers, and researchers; and quantum hardware and software companies, which will continue to contribute deep scientific and engineering knowledge and expertise.

[1] “Arbitrary accuracy iterative quantum phase estimation algorithm using a single ancillary qubit: A two-qubit benchmark,” Phys. Rev. A 76, 030306(R) (2007).

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

Blog
May 27, 2025
Teleporting to new heights

Quantinuum has once again raised the bar—setting a record in teleportation, and advancing our leadership in the race toward universal fault-tolerant quantum computing.

Last year, we published a paper in Science demonstrating the first-ever fault-tolerant teleportation of a logical qubit. At the time, we outlined how crucial teleportation is to realizing large-scale fault-tolerant quantum computers. Given the degree of system performance and the range of capabilities required to run the protocol (multiple qubits, high-fidelity state preparation, entangling operations, mid-circuit measurement, and so on), teleportation is recognized as an excellent measure of system maturity.
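As a concrete reference point, here is a minimal sketch of textbook single-qubit teleportation on physical qubits (not the logical, Steane-code protocol reported in the paper), written in Python with Quantinuum’s open-source pytket package. It shows the ingredients listed above: state preparation, entangling operations, mid-circuit measurement and classically conditioned corrections; the prepared state and angle are arbitrary illustrative choices.

```python
# Textbook single-qubit teleportation sketch (physical qubits; assumes pytket).
from pytket import Circuit

circ = Circuit(3, 2)     # q0: state to teleport, q1 & q2: Bell pair, 2 classical bits

circ.Ry(0.3, 0)          # prepare an arbitrary state on q0 (angle in half-turns)

circ.H(1)                # create a Bell pair between q1 and q2
circ.CX(1, 2)

circ.CX(0, 1)            # Bell-basis measurement of q0 and q1 ...
circ.H(0)
circ.Measure(0, 0)       # ... via mid-circuit measurements
circ.Measure(1, 1)

# Classically conditioned corrections complete the teleportation onto q2.
circ.X(2, condition_bits=[1], condition_value=1)
circ.Z(2, condition_bits=[0], condition_value=1)
```

The logical experiments described here run this kind of protocol on encoded qubits, which is why conditional logic, real-time decoding and high-fidelity transversal gates become essential.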

Today we’re building on last year’s breakthrough, having recently achieved a record logical teleportation fidelity of 99.82% – up from 97.5% in last year’s result. What’s more, our logical qubit teleportation fidelity now exceeds our physical qubit teleportation fidelity, passing the break-even point that establishes our H2 system as the gold standard for complex quantum operations.

Figure 1: Fidelity of two-bit state teleportation for physical-qubit and logical-qubit experiments using the d=3 color code (Steane code). The same QASM programs that were run in March 2024 on Quantinuum's H2-1 device were rerun on the same device in 2025. Thanks to the improvements made to H2-1 from 2024 to 2025, physical error rates have been reduced, leading to increased fidelity for both the physical- and logical-level teleportation experiments. The results imply a logical error rate 2.3 times smaller than the physical error rate, with the two statistically well separated, indicating that the logical error rate is below break-even for teleportation.

This progress reflects the strength and flexibility of our Quantum Charge Coupled Device (QCCD) architecture. The native high fidelity of our QCCD architecture enables us to perform highly complex demonstrations like this one, which nobody else has yet matched. Further, our ability to perform conditional logic and real-time decoding was crucial for implementing the Steane error correction code used in this work, and our all-to-all connectivity was essential for performing the high-fidelity transversal gates that drove the protocol.

Teleportation schemes like this allow us to “trade space for time,” meaning that we can do quantum error correction more quickly, reducing our time to solution. Additionally, teleportation enables long-range communication during logical computation, which translates to higher connectivity in logical algorithms, improving computational power.

This demonstration underscores our ongoing commitment to reducing logical error rates, which is critical for realizing the promise of quantum computing. Quantinuum continues to lead in quantum hardware performance, algorithms, and error correction—and we’ll extend our leadership come the launch of our next generation system, Helios, in just a matter of months.

Blog
May 22, 2025
λambeq Gen II: A Quantum-Enhanced Interpretable and Scalable Text-based NLP Software Package
By Bob Coecke and Dimitri Kartsaklis
Introduction

Today we announce the next generation of λambeq, Quantinuum’s quantum natural language processing (QNLP) package.

Incorporating recent developments in both quantum NLP and quantum hardware, λambeq Gen II allows users not only to model the semantics of natural language (in terms of vectors and tensors), but also to convert linguistic structures and meaning directly into quantum circuits for real quantum hardware.

Five years ago, our team reported the first realization of quantum natural language processing on quantum hardware. In that work, the team showed that there is a direct correspondence between the meanings of words and quantum states, and between grammatical structures and quantum entanglement. As that article put it: “Language is effectively quantum native”.

Our team ran an NLP task on quantum hardware and provided the data and code via a GitHub repository, attracting the interest of a then-nascent quantum NLP community that has since grown around successive releases of λambeq. We released λambeq itself 18 months later, supported by a research paper on the arXiv.

λambeq: an open-source Python library that turns sentences into quantum circuits and feeds them to quantum computers using variational quantum circuit (VQC) methodologies. Initial release: October 2021 (arXiv:2110.04236).
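As a quick illustration of what “turning sentences into quantum circuits” looks like in code, here is a minimal sketch using the public lambeq API from earlier releases; class names and defaults may differ in λambeq Gen II, and the sentence, qubit counts and ansatz choice are illustrative assumptions.

```python
# Sentence -> string diagram -> parameterised quantum circuit (assumes lambeq).
from lambeq import AtomicType, BobcatParser, IQPAnsatz

parser = BobcatParser()                      # downloads a pretrained parser model on first use
diagram = parser.sentence2diagram("Alice prefers quantum software")

# Map the diagram to a circuit: one qubit per noun and sentence wire, one IQP layer.
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1}, n_layers=1)
circuit = ansatz(diagram)

circuit.draw()                               # inspect the circuit; it can then be trained and run with VQC methods
```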

From that moment onwards, anyone could play around with QNLP on the then freely available quantum hardware. Our λambeq software has been downloaded over 50,000 times, and the user community is supported by an active Discord page, where practitioners can interact with each other and with our development team.  

The QNLP Back-Story

In order to demonstrate that QNLP was possible, even on the hardware available in 2021, we focused exclusively on small, noisy quantum computers. Our motivation was to produce exploratory findings, looking for a potential quantum advantage for natural language processing on quantum hardware. We published our original scientific work in 2016, detailing a quadratic speedup over classical computers in certain circumstances, and we are convinced that there is far more potential than that paper indicated.

That first realization of QNLP marked a shift away from brute-force machine learning, which has since taken the world by storm in the shape of large language models (LLMs) built on the “transformer” architecture.

Instead of the transformer approach, we decoded linguistic structure using a compositional theory of meaning. With deep roots in computational linguistics, our approach was inspired by research into compositional linguistic algorithms and their resemblance to quantum primitives such as quantum teleportation. As we continued our work, it became clear that our approach reduced training requirements by relying on a natural relationship between linguistic structure and quantum structure, offering near-term QNLP in practice.

Embedding recent progress in λambeq Gen II

We haven’t sat still, and neither have the teams working in the field of quantum hardware. Quantinuum’s stack now performs at a level we only dreamed of in 2020. While we look forward to continued progress on the hardware front, we are getting ahead of these future developments by shifting the focus in our algorithms and software packages, to ensure we and λambeq’s users are ready to chase far more ambitious goals!

We moved away from the compositional theory of meaning that was the focus of our early experiments, called DisCoCat, to a new mathematical foundation called DisCoCirc. This enabled us to explore the relationship between text generation and text circuits, concluding that “text circuits are generative for text”.

Formally speaking, DisCoCirc embraces substantially more of the compositional structure present in language than DisCoCat does, and that pays off in many ways:

  • Firstly, the new theoretical backbone enables one to compose the structure of sentences into text structure, so we can now deal with large texts.
  • Secondly, the compositional structure of language is represented in a compressed manner that, in fact, makes the formalism language-neutral, as reported in this blog post.
  • Thirdly, the augmented compositional linguistic structure, together with the requirement of learnability, makes a quantum model canonical, and we now have solid theoretical evidence for genuinely enhanced performance on quantum hardware, as shown in this arXiv paper.
  • Fourthly, the problems associated with trainability of quantum machine learning models vanish, thanks to compositional generalization, which was the subject of this paper.
  • Lastly, and certainly not least, we reported on achieving compositional interpretability and explored the many ways it supports explainable AI (XAI), which we also discussed extensively in this blog post.

Today, our users can benefit from these recent developments with the release of λambeq Gen II. Our open-source tools have always benefited from the attention and feedback we receive from our users. Please give it a try; we look forward to hearing your feedback on λambeq Gen II.

Enjoy!

Blog
May 21, 2025
Unlocking Scalable Chemistry Simulations for Quantum-Supercomputing

We're announcing the world’s first scalable, error-corrected, end-to-end computational chemistry workflow. With this, we are entering the future of computational chemistry.

Quantum computers are uniquely equipped to perform the complex computations that describe chemical reactions – computations that are so complex they are impossible even with the world’s most powerful supercomputers.

However, realizing this potential is a herculean task: one must first build a large-scale, universal, fully fault-tolerant quantum computer – something nobody in our industry has done yet. We are the farthest along that path, as our roadmap and our robust body of research show. At the moment, we have the world’s most powerful quantum processors and are moving quickly towards universal fault tolerance. Our commitment to building the best quantum computers is proven again and again in our world-leading results.

While we do the work to build the world’s best quantum computers, we aren’t waiting to develop their applications. We have teams working right now on making sure that we hit the ground running with each new hardware generation. In fact, our team has just taken a huge leap forward for computational chemistry using our System Model H2.

In our latest paper, we have announced the first-ever demonstration of a scalable, end-to-end workflow for simulating chemical systems with quantum error correction (QEC). This milestone shows that quantum computing will play an essential role, in tandem with HPC and AI, in unlocking new frontiers in scientific discovery.

In the paper, we showcase the first practical combination of quantum phase estimation (QPE) with logical qubits for molecular energy calculations – an essential step toward fault-tolerant quantum simulations. It builds on our previous work implementing quantum error detection with QPE and marks a critical step toward achieving quantum advantage in chemistry.  
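To show the core idea in miniature, here is a purely classical Python sketch of how QPE turns an eigenphase into an energy estimate. It is not InQuanto or the paper’s workflow: the 2x2 Hamiltonian is a toy stand-in rather than molecular data, and the “measurement” is the ideal n-bit QPE readout computed with NumPy.

```python
# Toy illustration of QPE-based energy estimation (classical simulation with NumPy).
import numpy as np

H = np.array([[-1.1, 0.4],
              [ 0.4, 0.3]])                   # toy Hermitian "Hamiltonian" (not molecular data)
t = 0.5                                       # evolution time, chosen so |E * t| < pi (no aliasing)

E_exact = np.linalg.eigvalsh(H)[0]            # exact ground-state energy, for comparison

# QPE on U = exp(-i*H*t) estimates phi in U|psi> = exp(2*pi*i*phi)|psi>.
# With n ancilla bits, the readout is the best n-bit approximation of phi.
n = 8
phi = (-E_exact * t / (2 * np.pi)) % 1.0      # exact eigenphase for the ground state
phi_readout = round(phi * 2 ** n) / 2 ** n    # ideal n-bit QPE measurement result

# Map the measured phase back to an energy, undoing the wrap into [0, 1).
phase = 2 * np.pi * phi_readout
if phase > np.pi:
    phase -= 2 * np.pi
E_qpe = -phase / t

print(f"exact ground-state energy : {E_exact:.6f}")
print(f"QPE-estimated energy      : {E_qpe:.6f}")   # agrees to within ~2*pi / (2**n * t)
```

In the hardware demonstration, the controlled time evolution and phase readout run on the quantum computer (with logical qubits); this sketch only mirrors the final classical step of converting a measured phase into an energy.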

By demonstrating this end-to-end workflow on our H2 quantum computer using our state-of-the-art chemistry platform InQuanto™, we are proving that quantum error-corrected chemistry simulations are not only feasible, but also scalable and, crucially, implementable in our quantum computing stack.

This work sets key benchmarks on the path to fully fault-tolerant quantum simulations. Building such capabilities into an industrial workflow will be a milestone for quantum computing, and the demonstration reported here represents a new high-water mark as we continue to lead the global industry in pushing towards universal fault-tolerant computers capable of widespread scientific and commercial advantage.  

As we look ahead, this workflow will serve as the foundation for future quantum-HPC integration, enabling chemistry simulations that are impossible today.

Showcasing Quantinuum’s Full-Stack Advantage

Today’s achievement wouldn’t be possible without Quantinuum’s full-stack approach. Our vertical integration, from hardware to software to applications, ensures that the layers work together seamlessly.

Our H2 quantum computer, based on the scalable QCCD architecture with its unique combination of high-fidelity operations, all-to-all connectivity, mid-circuit measurements and conditional logic, enabled us to run more complex quantum computing simulations than previously possible. The work also leveraged Quantinuum’s real-time QEC decoding capability and benefited from the error-correction advantages the QCCD architecture provides.

We will make this workflow available to customers via InQuanto, our quantum chemistry platform, allowing users to easily replicate and build upon this work. The integration of high-quality quantum computing hardware with sophisticated software creates a robust environment for iterating and accelerating breakthroughs in fields like chemistry and materials science.

A Collaborative Future: The Role of AI and Supercomputing

Achieving quantum advantage in chemistry will require more than just quantum hardware; it will require a synergistic approach that combines quantum computing workflows such as the one demonstrated here with classical supercomputing and AI. Our strategic partnerships with leading supercomputing providers – with Quantinuum selected as a founding collaborator for NVIDIA’s Accelerated Quantum Research Center – as well as our commitment to exploring generative quantum AI, place us in a unique position to maximize the benefit of quantum computing and to supercharge quantum advantage through the integration of classical supercomputing and AI.

Conclusion

Quantum computing holds immense potential for transforming industries across the globe. Our work today experimentally demonstrates the first complete and scalable quantum chemistry simulation, showing that the long-awaited quantum advantage in simulating chemical systems is not only possible, but within reach. With the development of new error correction techniques and the continued advancement of our quantum hardware and software we are paving the way for a future where quantum simulations can address challenges that are impossible today. Quantinuum’s ongoing collaborations with HPC providers and its exploration of AI-driven quantum techniques position our company to capitalize on this trifecta of computing power and achieve meaningful breakthroughs in quantum chemistry and beyond.

We encourage you to explore this breakthrough further by reading our latest research on arXiv and trying out the Python code for yourself.
