Quantinuum is excited to announce the release of InQuanto™ v4.0, the latest version of our advanced quantum computational chemistry software. This update introduces new features and significant performance improvements, designed to help both industry and academic researchers accelerate their computational chemistry work.
If you're new to InQuanto or want to learn more about how to use it, we encourage you to explore our documentation.
InQuanto v4.0 is being released alongside Quantinuum Nexus, our cloud-based platform for quantum software. Users with Nexus access can leverage the `inquanto-nexus` extension to, for example, take advantage of multiple available backends and seamless cloud storage.
In addition, InQuanto v4.0 introduces enhancements that allow users to run larger chemical simulations on quantum computers. Systems can be easily imported from classical codes using the widely supported FCIDUMP file format. These fermionic representations are then efficiently mapped to qubit representations, benefiting from performance improvements in InQuanto operators. For systems too large for quantum hardware experiments, users can now utilize the new `inquanto-cutensornet` extension to run simulations via tensor networks.
These updates enable users to compile and execute larger quantum circuits with greater ease, while accessing powerful compute resources through Nexus.
InQuanto v4.0 is fully integrated with Quantinuum Nexus via the `inquanto-nexus` extension. This integration allows users to easily run experiments across a range of quantum backends, from simulators to hardware, and access results stored in Nexus cloud storage.
Results can be annotated for better searchability and seamlessly shared with others. Nexus also offers the Nexus Lab, which provides a preconfigured Jupyter environment for compiling circuits and executing jobs. The Lab is set up with InQuanto v4.0 and a full suite of related software, enabling users to get started quickly.
The `inquanto.mappings` submodule has received a significant performance enhancement in InQuanto v4.0. By integrating a set of operator classes written in C++, the team has increased the performance of the module past that of other open-source packages’ equivalent methods.
Like any Python package, InQuanto can benefit from delegating computationally heavy tasks to compiled languages such as C++. This approach has been applied to the qubit encoding functions of the `inquanto.mappings` submodule, in which fermionic operators are mapped to their qubit operator equivalents. One such qubit encoding scheme is the Jordan-Wigner (JW) transformation. Using JW encoding as a benchmark task, the integration of C++ operator classes in InQuanto v4.0 yields an execution-time speed-up of 2.5x over the equivalent methods of open-source competitors (Figure 1).
This is a substantial performance increase that all users will benefit from. InQuanto users still interact with the familiar Python classes such as `FermionOperator` and `QubitOperator` in v4.0; when the `mappings` module is called, the Python operator objects are converted to their C++ equivalents before the qubit encoding procedure and converted back afterwards (Figure 2). With full integration of the C++ operator classes in the future, we can remove this conversion step and push the performance of the `mappings` module further: tests, once again using the JW mapping scheme, show a 40x execution-time speed-up compared to open-source competitors (Figure 1).
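To make this concrete, here is a minimal sketch of the accelerated encoding path, reusing the `get_system` convenience loader shown later in this post. The `operator_map` entry point is our assumption for the mapping call; consult the `inquanto.mappings` documentation for the exact API.

```python
from inquanto.express import get_system
from inquanto.mappings import QubitMappingJordanWigner

# Load the bundled H2/STO-3G example system from inquanto.express.
fermion_hamiltonian, space, state = get_system("h2_sto3g.h5")

# Jordan-Wigner encode the fermionic Hamiltonian. In v4.0 the Python
# operator objects are converted to the C++ classes internally, encoded,
# and converted back, so this call benefits from the compiled implementation.
# NOTE: operator_map is assumed here as the mapping entry point.
qubit_hamiltonian = QubitMappingJordanWigner().operator_map(fermion_hamiltonian)
print(qubit_hamiltonian)
```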
Efficient classical pre-processing implementations such as this are a crucial step on the path to quantum advantage. As the number of physical qubits available on quantum computers increases, so will the size and complexity of the physical systems that can be simulated. To support this hardware upscaling, computational bottlenecks including those associated with the classical manipulation of operator objects must be alleviated. Aside from keeping pace with hardware advancements, it is important to enlarge the tractable system size in situations that do not involve quantum circuit execution, such as tensor network circuit simulation and resource estimation.
Users with access to GPU capabilities can now take advantage of tensor networks to accelerate simulations in InQuanto v4.0. This is made possible by the `inquanto-cutensornet` extension, which interfaces InQuanto with the NVIDIA® cuTensorNet library. The extension leverages `pytket-cutensornet`, which converts `pytket` circuits into tensor networks for evaluation with cuTensorNet. This extension increases the size limit of circuits that can be simulated for chemistry applications. Future work will seek to integrate this functionality with our Nexus platform, allowing InQuanto users to employ the extension without requiring access to their own local GPU resources.
Here we demonstrate the use of the `CuTensorNetProtocol` passed to a VQE experiment. For brevity, we use the `get_system` method of `inquanto.express` to quickly define the system, in this case H2 in the STO-3G basis set.
```python
from inquanto.algorithms import AlgorithmVQE
from inquanto.ansatzes import FermionSpaceAnsatzUCCD
from inquanto.computables import ExpectationValue, ExpectationValueDerivative
from inquanto.express import get_system
from inquanto.extensions.cutensornet import CuTensorNetProtocol
from inquanto.mappings import QubitMappingJordanWigner
from inquanto.minimizers import MinimizerScipy

# Load the example H2/STO-3G system shipped with inquanto.express.
fermion_hamiltonian, space, state = get_system("h2_sto3g.h5")

# Map the fermionic Hamiltonian to a qubit Hamiltonian.
qubit_hamiltonian = fermion_hamiltonian.qubit_encode()

# Build a UCCD ansatz over the same fermion space and reference state.
ansatz = FermionSpaceAnsatzUCCD(space, state, QubitMappingJordanWigner())

# Objective function and its gradient with respect to the ansatz parameters.
expectation_value = ExpectationValue(ansatz, qubit_hamiltonian)
gradient_expression = ExpectationValueDerivative(
    ansatz, qubit_hamiltonian, ansatz.free_symbols_ordered()
)

# Evaluate both computables as tensor networks via cuTensorNet on the GPU.
protocol_tn = CuTensorNetProtocol()
vqe_tn = (
    AlgorithmVQE(
        objective_expression=expectation_value,
        gradient_expression=gradient_expression,
        minimizer=MinimizerScipy(),
        initial_parameters=ansatz.state_symbols.construct_zeros(),
    )
    .build(protocol_objective=protocol_tn, protocol_gradient=protocol_tn)
    .run()
)
print(vqe_tn.generate_report()["final_value"])
# -1.136846575472054
```
The inherently modular design of InQuanto allows for the seamless integration of new extensions and functionality. For instance, a user can take existing code that evaluates circuits with `SparseStatevectorProtocol` and simply swap in `CuTensorNetProtocol` to enable GPU acceleration through `inquanto-cutensornet`. The extension is also compatible with shot-based simulation via the `CuTensorNetShotsBackend` provided by `pytket-cutensornet`.
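As a minimal sketch of that swap (constructor arguments are omitted and may differ in practice):

```python
from inquanto.protocols import SparseStatevectorProtocol
from inquanto.extensions.cutensornet import CuTensorNetProtocol

# Exact statevector evaluation on CPU:
protocol = SparseStatevectorProtocol()

# Drop-in replacement: GPU-accelerated tensor-network evaluation.
protocol = CuTensorNetProtocol()

# Shot-based tensor-network simulation is also available through
# pytket-cutensornet's CuTensorNetShotsBackend (import path assumed):
# from pytket.extensions.cutensornet.backends import CuTensorNetShotsBackend
```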
“Hybrid quantum-classical supercomputing is accelerating quantum computational chemistry research,” said Tim Costa, Senior Director at NVIDIA®. “With Quantinuum’s InQuanto v4.0 platform and NVIDIA’s cuQuantum SDK, InQuanto users now have access to unique tensor-network-based methods, enabling large-scale and high-precision quantum chemistry simulations.”
As demonstrated by our `inquanto-pyscf` extension, we want InQuanto to interface easily with classical codes. In InQuanto v4.0, we have streamlined integration with other classical codes such as Gaussian and Psi4. All that is required is an FCIDUMP file, a common output format for classical codes that encodes all the one- and two-electron integrals required to set up a CI Hamiltonian. Users can bring their system in from a classical code by passing an FCIDUMP file to the `FCIDumpRestricted` class and calling the `to_ChemistryRestrictedIntegralOperator` method or its unrestricted counterpart, depending on how they wish to treat spin. The resulting InQuanto operator object can then be used in their workflow as usual.
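A minimal sketch of this route is below. The `FCIDumpRestricted` class and the conversion method are named above; the module path and the `load` entry point are our assumptions, so check the InQuanto API documentation for the exact calls.

```python
# Sketch only: the import path and load() are assumptions; the class and
# the to_ChemistryRestrictedIntegralOperator method are named in the text.
from inquanto.express import FCIDumpRestricted

# Parse an FCIDUMP file produced by a classical code (e.g. Gaussian, Psi4).
fcidump = FCIDumpRestricted.load("molecule.fcidump")  # assumed loader

# Build an InQuanto integral operator with a restricted spin treatment.
hamiltonian = fcidump.to_ChemistryRestrictedIntegralOperator()

# The result behaves like any other InQuanto Hamiltonian object,
# e.g. it can be qubit-encoded for circuit work.
qubit_hamiltonian = hamiltonian.qubit_encode()
```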
Users can experiment with TKET's latest circuit compilation tools in a straightforward manner with InQuanto v4.0. Circuit compilation now occurs only within the `inquanto.protocols` module, which allows users to define which optimization passes to run before and/or after the backend-specific defaults, all in one line of code. Circuit compilation is a crucial step in all InQuanto workflows, so this structural change lets us cleanly integrate new functionality through extensions such as `inquanto-nexus` and `inquanto-cutensornet`. Looking beyond InQuanto v4.0, this change is also a positive step towards bringing quantum error correction to InQuanto.
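The pass objects themselves are ordinary TKET constructs. As a sketch, custom passes can be composed like this; how a given protocol accepts them is described in the `inquanto.protocols` documentation rather than shown here:

```python
# Compose TKET optimization passes to run around the backend defaults.
from pytket.passes import FullPeepholeOptimise, RemoveRedundancies, SequencePass

custom_pass = SequencePass([FullPeepholeOptimise(), RemoveRedundancies()])
```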
InQuanto v4.0 pushes the size limit of the chemical systems that users can simulate on quantum computers. Users can import larger, carefully constructed systems from classical codes and encode them into optimized quantum circuits. They can then evaluate these circuits on quantum backends with `inquanto-nexus` or execute them as tensor networks using `inquanto-cutensornet`. We look forward to seeing how our users leverage InQuanto v4.0 to demonstrate the increasing power of quantum computational chemistry. If you are curious about InQuanto and want to read further, our initial release blog post is very informative, or you can visit the InQuanto website.
If you are interested in trying InQuanto, please request access or a demo at inquanto@quantinuum.com.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
The most common question in the public discourse around quantum computers has been, “When will they be useful?” We have an answer.
Very recently in Nature we announced a successful demonstration of a quantum computer generating certifiable randomness, a critical underpinning of our modern digital infrastructure. We explained how we will be taking a product to market this year, based on that advance – one that could only be achieved because we have the world’s most powerful quantum computer.
Today, we have made another huge leap in a different domain, providing fresh evidence that our quantum computers are the best in the world. In this case, we have shown that our quantum computers can be a useful tool for advancing scientific discovery.
Our latest paper shows how our quantum computer rivals the best classical approaches in expanding our understanding of magnetism. This provides an entry point that could lead directly to innovations in fields from biochemistry, to defense, to new materials. These are tangible and meaningful advances that will deliver real world impact.
To achieve this, we partnered with researchers from Caltech, Fermioniq, EPFL, and the Technical University of Munich. The team used Quantinuum’s System Model H2 to simulate quantum magnetism at a scale and level of accuracy that pushes the boundaries of what we know to be possible.
As the authors of the paper state:
“We believe the quantum data provided by System Model H2 should be regarded as complementary to classical numerical methods, and is arguably the most convincing standard to which they should be compared.”
Our computer simulated the quantum Ising model, a model of quantum magnetism that describes a set of magnets (physicists call them ‘spins’) on a lattice, each of which can point up or down and prefers to point the same way as its neighbors. The model is inherently “quantum” because the spins can move between up and down configurations by a process known as “quantum tunneling”.
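For reference, a textbook form of this transverse-field Ising Hamiltonian (conventions vary, and the paper's exact variant may differ) is

$$ H = -J \sum_{\langle i,j \rangle} Z_i Z_j - h \sum_i X_i, $$

where $J$ is the coupling that makes neighboring spins align and $h$ is the transverse field driving quantum tunneling between the up and down configurations.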
Researchers have struggled to simulate the dynamics of the Ising model at larger scales due to the enormous computational cost of doing so. Nobel laureate physicist Richard Feynman, who is widely considered to be the progenitor of quantum computing, once said, “it is impossible to represent the results of quantum mechanics with a classical universal device.” When attempting to simulate quantum systems at comparable scales on classical computers, the computational demands can quickly become overwhelming. It is the inherent ‘quantumness’ of these problems that makes them so hard classically, and conversely, so well-suited for quantum computing.
These inherently quantum problems also lie at the heart of many complex and useful material properties. The quantum Ising model is an entry point to confront some of the deepest mysteries in the study of interacting quantum magnets. While rooted in fundamental physics, its relevance extends to wide-ranging commercial and defense applications, including medical test equipment, quantum sensors, and the study of exotic states of matter like superconductivity.
Instead of tailored demonstrations that claim ‘quantum advantage’ in contrived scenarios, our breakthroughs announced this week prove that we can tackle complex, meaningful scientific questions that are difficult for classical methods to address. In the work described in this paper, we have shown that quantum computing could become the gold standard for materials simulations. These developments are critical steps toward realizing the potential of quantum computers.
With only 56 qubits in our commercially available System Model H2, the most powerful quantum system in the world today, we are already testing the limits of classical methods, and in some cases exceeding them. Later this year, we will introduce our massively more powerful 96-qubit Helios system, breaking through the boundaries of what until recently was deemed possible.
The marriage of AI and quantum computing is going to have a widespread and meaningful impact in many aspects of our lives, combining the strengths of both fields to tackle complex problems.
Quantum and AI are the ideal partners. At Quantinuum, we are developing tools to accelerate AI with quantum computers, and quantum computers with AI. According to recent independent analysis, our quantum computers are the world’s most powerful, enabling state-of-the-art approaches like Generative Quantum AI (Gen QAI), where we train classical AI models with data generated from a quantum computer.
We harness AI methods to accelerate the development and performance of our full quantum computing stack as opposed to simply theorizing from the sidelines. A paper in Nature Machine Intelligence reveals the results of a recent collaboration between Quantinuum and Google DeepMind to tackle the hard problem of quantum compilation.
The work shows a classical AI model supporting quantum computing by demonstrating its potential for quantum circuit optimization. An AI approach like this could lead to more effective control at the hardware level, to a richer suite of middleware tools for quantum circuit compilation, error mitigation, and correction, and even to novel high-level quantum software primitives and quantum algorithms.
The joint Quantinuum-Google DeepMind team of researchers tackled one of quantum computing’s most pressing challenges: minimizing the number of highly expensive but essential T-gates required for universal quantum computation. This matters especially in the fault-tolerant regime, which is becoming increasingly relevant as quantum error correction protocols are explored on rapidly developing quantum hardware. The team adapted AlphaTensor, Google DeepMind’s reinforcement learning AI system for algorithm discovery, originally introduced to improve the efficiency of linear algebra computations. The result is AlphaTensor-Quantum, which takes a quantum circuit as input and returns a new circuit with exactly the same functionality but fewer T-gates!
AlphaTensor-Quantum outperformed current state-of-the-art optimization methods and matched the best human-designed solutions across a thoroughly curated set of circuits, chosen for their prevalence in applications from quantum arithmetic to quantum chemistry. This breakthrough shows the potential for AI to automate the process of finding the most efficient quantum circuit. It is the first time such an AI model has been applied to the problem of T-count reduction at this scale.
The symbiotic relationship between quantum and AI works both ways. When AI and quantum computing work together, quantum computers could dramatically accelerate machine learning algorithms, whether by the development and application of natively quantum algorithms, or by offering quantum-generated training data that can be used to train a classical AI model.
Our recent announcement about Generative Quantum AI (Gen QAI) spells out our commitment to unlocking the value of the data generated by our H2 quantum computer. This value arises from the world-leading fidelity and computational power of System Model H2, which make it impossible to simulate exactly on any classical computer; the data it generates, which we can use to train AI, is therefore inaccessible by any other means. Quantinuum’s Chief Scientist for Algorithms and Innovation, Prof. Harry Buhrman, has likened access to the first truly quantum-generated training data to the invention of the modern microscope in the seventeenth century, which revealed an entirely new world of tiny organisms thriving unseen within a single drop of water.
Recently, we announced a wide-ranging partnership with NVIDIA. It charts a course to commercial-scale applications arising from the combination of high-performance classical computers, powerful AI systems, and quantum computers, breaching the boundaries of what previously could and could not be done. Our President and CEO, Dr. Raj Hazra, spoke to CNBC recently about our partnership. Watch the video here.
As we prepare for the next stage of quantum processor development, with the launch of our Helios system in 2025, we’re excited to see how AI can help write more efficient code for quantum computers – and how our quantum processors, the most powerful in the world, can provide a backend for AI computations.
As in any truly symbiotic relationship, the addition of AI to quantum computing equally benefits both sides of the equation.
To read more about Quantinuum and Google DeepMind’s collaboration, please read the scientific paper here.
Few things are more important to the smooth functioning of our digital economies than trustworthy security. From finance to healthcare, from government to defense, quantum computers provide a means of building trust in a secure future.
Quantinuum and its partners JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas used quantum computers to solve a known industry challenge: generating the “random seeds” that are essential for the cryptography behind all types of secure communication. As our partner and collaborator JPMorganChase explains in this blog post, true randomness is a scarce and valuable commodity.
This year, Quantinuum will introduce a new product based on this development, one that has long been anticipated but until now was thought to be some years away from reality.
It represents a major milestone for quantum computing that will reshape commercial technology and cybersecurity: Solving a critical industry challenge by successfully generating certifiable randomness.
Building on the extraordinary computational capabilities of Quantinuum’s H2 System – the highest-performing quantum computer in the world – our team has implemented a groundbreaking approach that is ready-made for industrial adoption. Nature today reported the results of a proof of concept carried out by Quantinuum together with JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas. It lays out a new quantum path to enhanced security that can provide early benefits for applications in cryptography, fairness, and privacy.
By harnessing the powerful properties of quantum mechanics, we’ve shown how to generate the truly random seeds critical to secure electronic communication, establishing a practical use-case that was unattainable before the fidelity and scalability of the H2 quantum computer made it reliable. So reliable, in fact, that it is now possible to turn this into a commercial product.
Quantinuum will integrate quantum-generated certifiable randomness into our commercial portfolio later this year. Alongside Generative Quantum AI and our upcoming Helios system – capable of tackling problems a trillion times more computationally complex than H2 – Quantinuum is further cementing its leadership in the rapidly-advancing quantum computing industry.
Cryptographic security, a bedrock of the modern economy, relies on two essential ingredients: standardized algorithms and reliable sources of randomness, the stronger the better. Non-deterministic physical processes, such as those governed by quantum mechanics, are ideal sources of randomness, offering near-total unpredictability and therefore the highest cryptographic protection. Google, when it originally announced it had achieved quantum supremacy, speculated on the possibility of using the random circuit sampling (RCS) protocol for the commercial production of certifiable random numbers. RCS has been used ever since to demonstrate the performance of quantum computers, including a milestone achievement in June 2024, when Quantinuum and JPMorganChase demonstrated the first quantum computer to defy classical simulation. More recently, RCS was used again by Google for the launch of its Willow processor.
In today’s announcement, our joint team used the world’s highest-performing quantum and classical computers to generate certified randomness via RCS. The work was based on advanced research by Shih-Han Hung and Scott Aaronson of the University of Texas at Austin, who are co-authors on today’s paper.
Following a string of major advances in 2024, including solving the scaling challenge, breaking new records for reliability in partnership with Microsoft, and unveiling a hardware roadmap, today’s announcement proves that quantum technology is capable of creating tangible business value beyond what is available with classical supercomputers alone.
What follows is intended as a non-technical explainer of the results in today’s Nature paper.
For security-sensitive applications, classical random number generation is unsuitable because it is not fundamentally random and there is a risk it can be “cracked”. The holy grail is randomness whose source is truly unpredictable, and nature provides just such a source: quantum mechanics. Randomness is built into the bones of quantum mechanics, where determinism is thrown out the door and outcomes can be true coin flips.
At Quantinuum, we have a strong track record in developing methods for generating certifiable randomness using a quantum computer. In 2021, we introduced Quantum Origin to the market as a quantum-generated source of entropy for hardening classically generated encryption keys, using well-known quantum technologies in ways that had not previously been possible.
In their theory paper, “Certified Randomness from Quantum Supremacy”, Hung and Aaronson ask: is it possible to repurpose RCS and use it to build an application that moves beyond existing quantum technologies and takes advantage of the power of a quantum computer running quantum circuits?
This was the inspiration for the collaboration team, led by JPMorganChase and Quantinuum, to draw up plans to execute the proposal using real-world technology. Here’s how it worked: a classical computer sent challenge circuits to Quantinuum’s H2 quantum computer over the internet; H2 returned samples faster than any classical spoofing strategy plausibly could; and the samples were then verified against classical supercomputer simulations, with the verified output distilled into certified random bits.
This confirmed that Quantinuum’s quantum computer cannot be matched by classical computers, and that it can be used reliably to produce a certifiably random seed, without the need to build your own device or even to trust the device you are accessing.
The use of randomness in critical cybersecurity environments will gravitate towards quantum resources as the security demands of end users grow in the face of ongoing cyber threats.
The era of quantum utility offers the promise of radical new approaches to solving substantial and hard problems for businesses and governments.
Quantinuum’s H2 has now demonstrated practical value for cybersecurity vendors and customers alike, where deterministic sources of encryption entropy may in time be overtaken by nature’s own source of randomness.
In 2025, we will launch our Helios device, capable of supporting at least 50 high-fidelity logical qubits, further extending our lead in the quantum computing sector. We thus continue our track record of disclosing our objectives and then meeting or surpassing them. This commitment is essential: it builds faith and conviction among our partners and collaborators that empirical results such as those reported today can lead to successful commercial applications.
Helios, already in its late testing phase ahead of commercial availability later this year, brings higher fidelity, greater scale, and greater reliability. It promises to bring a wider set of hybrid quantum-supercomputing opportunities to our customers, making quantum computing more valuable and more accessible than ever before.
And in 2025 we look forward to adding yet another product, building out our cybersecurity portfolio with a quantum source of certifiably random seeds for a wide range of customers who require this foundational element to protect their businesses and organizations.