By chemists for chemists

Introducing InQuanto™ 2.0

December 13, 2022

When we launched InQuanto™, our computational chemistry platform for quantum computing, we explained that its origins lay at least as much with our industrial partners as they did with us. We revealed that its development was the culmination of many important scientific collaborations with some of the world’s leading industrial names in energy, automotive, pharmaceuticals, industrial materials, and other sectors.

Today, we announce the next version of our state-of-the-art platform. Just as before, it is important to us that InQuanto 2.0, while more versatile, more extensible, and more accessible to those who have not yet explored the use of quantum computers, is the product of precisely the same spirit of collaboration.

In close collaboration with our industrial partners, we have designed, developed, and discovered methods using InQuanto for exploring the application of near-term quantum technology to material and molecular problems that remain challenging or intractable for even the most powerful classical computers.

What’s inside InQuanto 2.0?

InQuanto continues to be built around the latest quantum algorithms, advanced subroutines, and chemistry-specific noise-mitigation techniques. In the new version, we have added features that enhance efficiency, such as new protocol classes that can speed up state-vector calculations by an order of magnitude, and integral operator classes that exploit symmetries to reduce memory requirements.

We have introduced new tools for developing custom ansätze, new embedding techniques, and novel hybrid methods to improve efficiency and precision, some of which have only recently been described in the scientific literature. These rapid advances are supported by new ways for computational chemists to build InQuanto into their workflows, whether by improving visualization and interoperability with other chemistry packages, or by running it in the cloud, as shown in a recent demonstration with Amazon Braket.

The most exciting progress, of course, is reflected in the diverse work of our partners. We know that some of the work being done today will be reflected in future methods and techniques incorporated into InQuanto, fulfilling the ever more advanced needs of our partners tomorrow.

Please book a demonstration of InQuanto 2.0 today.

InQuanto 2.0 brings together a range of new features that continue to make it the right choice for computational chemists working with quantum computers:

Efficiency

  • Workflow improvements in protocol classes for more efficient small test calculations — up to 10x speed-ups in some state vector calculations
  • Symmetry-exploiting integral operator classes for efficient handling of the two-electron integrals of a chemistry Hamiltonian, using ~50% less memory
  • Optimized computables for n-particle reduced density matrices

Algorithms

  • A wide range of restructured ansätze supporting multi-reference calculations, enabling new types of variational quantum algorithms — with improved tools for custom ansatz development
  • Generalised variational quantum solvers for imaginary- and real-time evolution simulations
  • A new Fragment Molecular Orbital (FMO) embedding method
  • A new QRDM-NEVPT2 method to measure 4-particle reduced density matrices and add corrections to the VQE energy

User Experience

  • FCIDUMP read/write for improved integration with other quantum chemistry packages (see the sketch after this list)
  • Unit cell visualization extensions, and support for Trotterization at the operator level
  • Improved resource cost estimation on H-Series quantum computers, Powered by Honeywell
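
To make the FCIDUMP interchange concrete, here is a minimal sketch using PySCF, a widely used open-source chemistry package; it illustrates the file format only and is not InQuanto's own API. The molecule and filename are arbitrary examples.

```python
from pyscf import gto, scf
from pyscf.tools import fcidump

# A small test molecule and a mean-field calculation to generate integrals.
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")
mf = scf.RHF(mol).run()

# Write one- and two-electron integrals to an FCIDUMP file, which any
# FCIDUMP-aware package can then load for post-Hartree-Fock methods.
fcidump.from_scf(mf, "h2.fcidump")

# Read it back (available in recent PySCF versions); the dict holds
# entries such as the core energy and the integral arrays.
data = fcidump.read("h2.fcidump")
print(sorted(data.keys()))
```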

February 25, 2025
Unlocking Quantum Advantage with Complement Sampling

BY HARRY BUHRMAN

Quantum computing continues to push the boundaries of what is computationally possible. A new study by Marcello Benedetti, Harry Buhrman, and Jordi Weggemans introduces Complement Sampling, a problem that highlights a dramatic separation between quantum and classical sample complexity. This work provides a robust demonstration of quantum advantage in a way that is not only provable but also feasible on near-term quantum devices.

The Complement Sampling Problem

Imagine a universe of N = 2^n elements, from which a subset S of size K is drawn uniformly at random. The challenge is to sample from the complement S̄ (the elements not in S) without explicitly knowing S, given access only to samples from S. Classically, solving this problem requires roughly K samples, as the best a classical algorithm can do is guess at random after observing only some of the elements of S.

To better understand this, consider a small example. Suppose N = 8, meaning our universe consists of the numbers {0,1,2,3,4,5,6,7}. If a subset S of size K = 4 is drawn at random—say {1,3,5,7}—the goal is to sample from the complement S̄, which consists of {0,2,4,6}. A classical algorithm would need to collect and verify enough samples from S before it could infer what S̄ might be. However, a quantum algorithm can use a single superposition state over S (a quantum sample) to instantly generate a sample from S̄, eliminating the need for iterative searching.

Why This Matters: Quantum Advantage in Sample Complexity

Quantum advantage is often discussed in terms of computational speedups, such as those achieved by Shor’s algorithm for factoring large numbers. However, quantum resources provide advantages beyond time efficiency—they also affect how data is accessed, stored, and processed.

Complement Sampling fits into the category of sample complexity problems, where the goal is to minimize the number of samples needed to solve a problem. The authors prove that their quantum approach not only outperforms classical methods but does so in a way that is:

  • Provable: It provides rigorous lower bounds on classical sample complexity, demonstrating an exponential separation.
  • Verifiable: The correctness of the output of the sampler can be efficiently checked classically.
  • NISQable: The quantum circuit required is shallow and feasible for Noisy Intermediate-Scale Quantum (NISQ) devices.

How the Quantum Algorithm Works

At its core, the quantum approach to Complement Sampling relies on the ability to perform a perfect swap between a subset S and its complement S̄. The method draws inspiration from a construction by Aaronson, Atia, and Susskind, which links state distinguishability to state swapping. The quantum algorithm:

  1. Uses a unitary transformation that maps the quantum sample |S⟩ to |S̄⟩ with high probability.
  2. For K = N/2, the algorithm works perfectly, outputting an element of S̄ with probability 1.
  3. For other values of K, a probabilistic zero-error approach is used, ensuring correctness at the cost of a reduced success probability.

This is made possible by quantum interference and superposition, allowing a quantum computer to manipulate distributions in ways that classical systems fundamentally cannot.
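
To make the K = N/2 case tangible, here is a minimal numpy sketch. It uses the reflection about the uniform superposition, i.e. the Grover diffusion operator, which is one unitary that performs exactly this swap; the paper's actual circuit construction may differ, so read this as an illustration of the principle rather than the authors' implementation.

```python
import numpy as np

# Universe of N = 2^n elements; hidden subset S of size K = N/2.
n = 3
N = 2**n
rng = np.random.default_rng(7)
S = rng.choice(N, size=N // 2, replace=False)
S_bar = np.setdiff1d(np.arange(N), S)

# The quantum sample |S>: a uniform superposition over the elements of S.
psi = np.zeros(N)
psi[S] = 1 / np.sqrt(len(S))

# Reflection about the uniform superposition |u> (Grover diffusion).
# For K = N/2 it maps |S> exactly to |S_bar> without any knowledge of S:
# D|S> = sqrt(2)|u> - |S> = (|S> + |S_bar>) - |S> = |S_bar>.
u = np.full(N, 1 / np.sqrt(N))
D = 2 * np.outer(u, u) - np.eye(N)
phi = D @ psi

# Measuring phi in the computational basis yields only complement elements.
support = np.nonzero(phi**2 > 1e-12)[0]
assert set(support) == set(S_bar)
print("S =", sorted(S), " measured support =", sorted(support))
```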

Classical Hardness and Cryptographic Implications

A crucial aspect of this work is its robustness. The authors prove that even for subsets generated using strong pseudorandom permutations, the problem remains hard for classical algorithms. This means that classical computers cannot efficiently solve Complement Sampling even with structured input distributions—an important consideration for real-world applications.

This robustness suggests potential applications in cryptography, where generating samples from complements could be useful in privacy-preserving protocols and quantum-secure verification methods.

Towards an Experimental Demonstration

Unlike some quantum advantage demonstrations that are difficult to verify classically (such as the random circuit sampling experiment), Complement Sampling is designed to be verifiable. The authors propose an interactive quantum versus classical game:

  1. A referee provides a quantum player with quantum samples from S.
  2. The player must return a sample from S̄.
  3. A classical player, given the same number of classical samples, attempts to do the same.

While the classical player must resort to random guessing, the quantum player can leverage the swap algorithm to succeed with near certainty. Running such an experiment on NISQ hardware could serve as a practical demonstration of quantum advantage in a sample complexity setting.
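
As a rough illustration of that gap, the Monte Carlo sketch below scores a classical player who, after seeing a handful of samples from S, guesses uniformly among the elements it has not observed. The parameters (n, the number of samples T, the guessing strategy) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, T = 10, 10_000, 8   # T classical samples per round (assumed)
N, K = 2**n, 2**n // 2

wins = 0
for _ in range(trials):
    S = rng.choice(N, size=K, replace=False)
    S_set = set(S.tolist())
    seen = set(rng.choice(S, size=T, replace=True).tolist())
    # Best classical strategy: guess uniformly among unseen elements.
    unseen = [x for x in range(N) if x not in seen]
    guess = unseen[rng.integers(len(unseen))]
    wins += guess not in S_set

# Success is roughly (N - K) / (N - |seen|), i.e. barely above 1/2,
# whereas a quantum player holding one copy of |S> succeeds with
# probability 1 when K = N/2.
print(f"classical success ≈ {wins / trials:.3f}")
```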

Future Directions

This research raises exciting new questions:

  • Can Complement Sampling be extended to more general probability distributions?
  • Are there cryptographic protocols that can directly leverage this advantage?
  • How well does the quantum algorithm perform in real-world noisy conditions?

With its blend of theoretical depth and experimental feasibility, Complement Sampling provides a compelling new frontier for demonstrating the power of quantum computing.

Conclusion

Complement Sampling represents one of the cleanest demonstrations of quantum advantage in a practical, verifiable, and NISQ-friendly setting. By leveraging quantum information processing in ways that classical computers fundamentally cannot, this work strengthens the case for near-term quantum technologies and their impact on computational complexity, cryptography, and beyond.

For those interested in the full details, the paper provides rigorous proofs, circuit designs, and further insights into the nature of quantum sample complexity. As quantum computing continues to evolve, Complement Sampling may serve as a cornerstone for future experimental demonstrations of quantum supremacy.

We have commenced work on the experiment – watch this space!

January 22, 2025
Quantum Computers Will Make AI Better
Today’s LLMs are often impressive by past standards – but they are far from perfect

Quietly and determinedly, we have been working on Generative Quantum AI since 2019. Our early focus on building natively quantum systems for machine learning has benefitted from, and been accelerated by, access to the world’s most powerful quantum computers, machines that cannot be classically simulated.

Our work additionally benefits from close proximity to our Helios generation quantum computer, built in Colorado, USA. Helios is 1 trillion times more powerful than our H2 System, which is already significantly more advanced than all other quantum computers available.

While tools like ChatGPT have already made a profound impact on society, a critical limitation to their broader industrial and enterprise use has become clear. Classical large language models (LLMs) are computational behemoths, prohibitively huge and expensive to train, and prone to errors that damage their credibility.

Training models like ChatGPT requires processing vast datasets with billions, even trillions, of parameters. This demands immense computational power, often spread across thousands of GPUs or specialized hardware accelerators. The environmental cost is staggering—simply training GPT-3, for instance, consumed nearly 1,300 megawatt-hours of electricity, equivalent to the annual energy use of 130 average U.S. homes.

This doesn’t account for the ongoing operational costs of running these models, which remain high with every query. 

Despite these challenges, the push to develop ever-larger models shows no signs of slowing down.

Enter quantum computing. Quantum technology offers a more sustainable, efficient, and high-performance solution—one that will fundamentally reshape AI, dramatically lowering costs and increasing scalability, while overcoming the limitations of today's classical systems. 

Quantum Natural Language Processing: A New Frontier

At Quantinuum we have been maniacally focused on “rebuilding” machine learning (ML) techniques for Natural Language Processing (NLP) using quantum computers. 

Our research team has worked on translating key innovations in natural language processing — such as word embeddings, recurrent neural networks, and transformers — into the quantum realm. The ultimate goal is not merely to port existing classical techniques onto quantum computers but to reimagine these methods in ways that take full advantage of the unique features of quantum computers.

We have a deep bench working on this. Our Head of AI, Dr. Steve Clark, previously spent 14 years as a faculty member at Oxford and Cambridge, and over 4 years as a Senior Staff Research Scientist at DeepMind in London. He works closely with Dr. Konstantinos Meichanetzidis, who is our Head of Scientific Product Development and who has been working for years at the intersection of quantum many-body physics, quantum computing, theoretical computer science, and artificial intelligence.

A critical element of the team’s approach to this project is avoiding the temptation to simply “copy-paste”, i.e., taking the math from a classical version and directly implementing it on a quantum computer.

This is motivated by the fact that quantum systems are fundamentally different from classical systems: their ability to leverage quantum phenomena like entanglement and interference ultimately changes the rules of computation. By ensuring these new models are properly mapped onto the quantum architecture, we are best poised to benefit from quantum computing’s unique advantages. 

These advantages are not so far in the future as we once imagined – partially driven by our accelerating pace of development in hardware and quantum error correction.

Making computers “talk”: a short history

The ultimate problem of making a computer understand a human language isn’t unlike trying to learn a new language yourself – you must hear/read/speak lots of examples, memorize lots of rules and their exceptions, memorize words and their meanings, and so on. However, it’s more complicated than that when the “brain” is a computer. Computers naturally speak their native languages very well, where everything from machine code to Python has a meaningful structure and set of rules. 

In contrast, “natural” (human) language is very different from the strict compliance of computer languages: things like idioms confound any sense of structure, humor and poetry play with semantics in creative ways, and the language itself is always evolving. Still, people have been considering this problem since the 1950s (Turing’s original “test” of intelligence involves the automated interpretation and generation of natural language).

Up until the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction of machine learning algorithms for language processing. 

Initial ML approaches were largely “statistical”: by analyzing large amounts of text data, one can identify patterns and probabilities. There were notable successes in translation (like translating French into English), and the birth of the web led to more innovations in learning from and handling big data.

What many consider “modern” NLP was born in the late 2000s, when expanded compute power and larger datasets enabled practical use of neural networks. Being mathematical models, neural networks are “built” out of the tools of mathematics, specifically linear algebra and calculus.

Building a neural network, then, means finding ways to manipulate language using the tools of linear algebra and calculus. This means representing words and sentences as vectors and matrices, developing tools to manipulate them, and so on. This is precisely the path that researchers in classical NLP have been following for the past 15 years, and the path that our team is now speedrunning in the quantum case.

Quantum Word Embeddings: A Complex Twist

The first major breakthrough in neural NLP came roughly a decade ago, when vector representations of words were developed using the frameworks known as Word2Vec and GloVe (Global Vectors for Word Representation). In a recent paper, our team, including Carys Harvey and Douglas Brown, demonstrated how to do this in quantum NLP models – with a crucial twist. Instead of embedding words as real-valued vectors (as in the classical case), the team built the models to work with complex-valued vectors.

In quantum mechanics, the state of a physical system is represented by a vector residing in a complex vector space called a Hilbert space. By embedding words as complex vectors, we are able to map language into parameterized quantum circuits, and ultimately onto the qubits in our processor. This is a major advance that was largely underappreciated by the AI community but is now rapidly gaining interest.

Using complex-valued word embeddings for QNLP means that, from the bottom up, we are working with something fundamentally different. This different “geometry” may provide an advantage in any number of areas: natural language has a rich probabilistic and hierarchical structure that may very well benefit from the richer representation of complex numbers.
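
As a toy illustration of the idea (not the construction from the paper), the sketch below embeds a tiny vocabulary as complex vectors and turns each coordinate’s magnitude and phase into a pair of rotation angles that could parameterize a quantum circuit. The vocabulary, dimension, and encoding are all assumptions made for the example.

```python
import numpy as np

# Hypothetical three-word vocabulary and embedding dimension.
vocab = {"quantum": 0, "classical": 1, "computer": 2}
dim = 4
rng = np.random.default_rng(42)

# A complex-valued embedding table: independent real and imaginary parts.
E = rng.normal(size=(len(vocab), dim)) + 1j * rng.normal(size=(len(vocab), dim))

def to_angles(word: str) -> np.ndarray:
    """Map a complex embedding to (magnitude, phase) pairs, which could
    drive e.g. one Ry and one Rz rotation per coordinate."""
    v = E[vocab[word]]
    return np.stack([np.abs(v), np.angle(v)], axis=1)

print(to_angles("quantum"))   # shape (4, 2): four coordinates, two angles each
```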

The Quantum Recurrent Neural Network (RNN)

Another breakthrough comes from the development of quantum recurrent neural networks (RNNs). RNNs are commonly used in classical NLP to handle tasks such as text classification and language modeling. 

Our team, including Dr. Wenduan Xu, Douglas Brown, and Dr. Gabriel Matos, implemented a quantum version of the RNN using parameterized quantum circuits (PQCs). PQCs allow for hybrid quantum-classical computation, where quantum circuits process information and classical computers optimize the parameters controlling the quantum system.
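
Below is a minimal pytket sketch of this kind of parameterized circuit: a generic hardware-efficient “cell” of single-qubit rotations plus entangling gates, with the hybrid loop indicated only in comments. It is a sketch of the general PQC pattern, not the team’s actual quantum RNN ansatz.

```python
from pytket import Circuit

def pqc_cell(params, n_qubits=4):
    """One parameterized 'cell': Ry rotations followed by a ring of CX
    entangling gates. The layout is an illustrative assumption."""
    c = Circuit(n_qubits)
    for q in range(n_qubits):
        c.Ry(params[q], q)            # pytket angles are in half-turns
    for q in range(n_qubits):
        c.CX(q, (q + 1) % n_qubits)
    return c

# Hybrid loop (schematic): a classical optimizer proposes params, the
# circuit runs on hardware or a simulator, and measured expectation
# values feed a classical loss such as cross-entropy.
cell = pqc_cell([0.1, 0.2, 0.3, 0.4])
print(cell.get_commands())
```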

In a recent experiment, the team used their quantum RNN to perform a standard NLP task: classifying movie reviews from Rotten Tomatoes as positive or negative. Remarkably, the quantum RNN performed as well as classical RNNs, GRUs, and LSTMs, using only four qubits. This result is notable for two reasons: it shows that quantum models can achieve competitive performance using a much smaller vector space, and it demonstrates the potential for significant energy savings in the future of AI.

In a similar experiment, our team partnered with Amgen to use PQCs for peptide classification, which is a standard task in computational biology. Working on the Quantinuum System Model H1, the joint team performed sequence classification (used in the design of therapeutic proteins), and they found competitive performance with classical baselines of a similar scale. This work was our first proof-of-concept application of near-term quantum computing to a task critical to the design of therapeutic proteins, and helped us to elucidate the route toward larger-scale applications in this and related fields, in line with our hardware development roadmap.

Quantum Transformers - The Next Big Leap

Transformers, the architecture behind models like GPT-3, have revolutionized NLP by enabling massive parallelism and state-of-the-art performance in tasks such as language modeling and translation. However, transformers are designed to take advantage of the parallelism provided by GPUs, something quantum computers do not yet do in the same way.

In response, our team, including Nikhil Khatri and Dr. Gabriel Matos, introduced “Quixer”, a quantum transformer model tailored specifically for quantum architectures. 

By using quantum algorithmic primitives, Quixer is optimized for quantum hardware, making it highly qubit efficient. In a recent study, the team applied Quixer to a realistic language modeling task and achieved results competitive with classical transformer models trained on the same data. 

This is an incredible milestone in and of itself.

This paper also marks the first quantum machine learning model applied to language on a realistic rather than toy dataset. 

This is a truly exciting advance for anyone interested in the union of quantum computing and artificial intelligence, and it risks being lost in the increasing ‘noise’ from a quantum computing sector in which organizations trying to raise capital highlight somewhat trivial, often duplicative advances.

Quantum Tensor Networks: A Scalable Approach

Carys Harvey and Richie Yeung from Quantinuum in the UK worked with a broader team that explored the use of quantum tensor networks for NLP. Tensor networks are mathematical structures that efficiently represent high-dimensional data, and they have found applications in everything from quantum physics to image recognition. In the context of NLP, tensor networks can be used to perform tasks like sequence classification, where the goal is to classify sequences of words or symbols based on their meaning.

The team performed experiments on our System Model H1, finding comparable performance to classical baselines. This marked the first time a scalable NLP model was run on quantum hardware – a remarkable advance. 

The tree-like structure of quantum tensor models lends itself incredibly well to specific features inherent to our architecture, such as mid-circuit measurement and qubit reuse, allowing us to squeeze big problems onto a few qubits.
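
The pytket sketch below shows that mechanism in miniature: one qubit is measured and reset between “tokens”, so a sequence of arbitrary length is processed on a fixed two-qubit register. The circuit structure and angles are illustrative assumptions, not the published model.

```python
from pytket import Circuit
from pytket.circuit import OpType

angles = [0.1, 0.7, 0.3, 0.5]        # placeholder per-token parameters
c = Circuit(2, len(angles))           # 2 qubits, one classical bit per token
for t, theta in enumerate(angles):
    c.Ry(theta, 0)                    # load token t onto qubit 0
    c.CX(0, 1)                        # entangle with the 'carry' qubit
    c.Measure(0, t)                   # mid-circuit measurement
    c.add_gate(OpType.Reset, [0])     # reset qubit 0 for reuse on the next token
print(c.n_qubits, "qubits handle", len(angles), "tokens")
```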

Since quantum theory is inherently described by tensor networks, this is another example of how fundamentally different quantum machine learning approaches can look – again, there is a sort of “intuitive” mapping of the tensor networks used to describe the NLP problem onto the tensor networks used to describe the operation of our quantum processors.

What we’ve learned so far

While it is still very early days, we have good indications that running AI on quantum hardware will be more energy efficient. 

We recently published a result in “random circuit sampling”, a task used to compare quantum to classical computers. We beat the classical supercomputer in time to solution as well as energy use – our quantum computer used 30,000x less energy to complete the task than Frontier, the classical supercomputer we compared against.

We may see, as our quantum AI models grow in power and size, that there is a similar scaling in energy use: it’s generally more efficient to use ~100 qubits than it is to use ~10^18 classical bits.

Another major insight so far is that quantum models tend to require significantly fewer parameters to train than their classical counterparts. In classical machine learning, particularly in large neural networks, the number of parameters can grow into the billions, leading to massive computational demands. 

Quantum models, by contrast, leverage the unique properties of quantum mechanics to achieve comparable performance with a much smaller number of parameters. This could drastically reduce the energy and computational resources required to run these models.

The Path Ahead

As quantum computing hardware continues to improve, quantum AI models may increasingly complement or even replace classical systems. By leveraging quantum superposition, entanglement, and interference, these models offer the potential for significant reductions in both computational cost and energy consumption. With fewer parameters required, quantum models could make AI more sustainable, tackling one of the biggest challenges facing the industry today.

The work being done by Quantinuum reflects the start of the next chapter in AI, one that is transformative. As quantum computing matures, its integration with AI has the potential to unlock entirely new approaches that are not only more efficient and performant but can also handle the full complexities of natural language. The fact that Quantinuum’s quantum computers are the most advanced in the world, and cannot be simulated classically, gives us a unique glimpse into the future.

The future of AI now looks very much to be quantum, and Quantinuum’s Gen QAI system will usher in the era in which our work will have meaningful societal impact.
