It turns out that the lack of explainability in machine learning (ML) models, such as ChatGPT or Claude, comes from the way these systems are built. Their underlying architecture, a neural network, lacks coherent structure. While neural networks can be trained to solve certain tasks effectively, the way they do so is largely (from a practical standpoint, almost wholly) inaccessible. This absence of interpretability in modern ML is an increasingly serious concern in sensitive areas where accountability is required, such as finance, healthcare, and the pharmaceutical sector. The “interpretability problem in AI” is therefore a pressing concern for large swathes of the corporate and enterprise sector, for regulators and lawmakers, and for the general public.
These concerns have given birth to the field of eXplainable AI, or XAI, which attempts to solve the interpretability problem through so-called ‘post-hoc’ techniques (where one takes a trained AI model and aims to give explanations for either its overall behavior or individual outputs). This approach, while still evolving, has its own issues due to the approximate nature and fundamental limitations of post-hoc techniques.
The second approach to the interpretability problem is to employ new ML models that are, by design, inherently interpretable from the start. Such an interpretable AI model comes with explicit structure which is meaningful to us “from the outside”. Realizing this in the tech we use every day means completely redesigning how machines learn - creating a new paradigm in AI. As Sean Tull, one of the authors of the paper, stated: “In the best case, such intrinsically interpretable models would no longer even require XAI methods, serving instead as their own explanation, and one of a deeper kind.”
At Quantinuum, we’re continuing work to develop new paradigms in AI while also working to sharpen theoretical and foundational tools that allow us all to assess the interpretability of a given model. In our recent paper, we present a new theoretical framework for both defining AI models and analyzing their interpretability. With this framework, we show how advantageous it is for an AI model to have explicit and meaningful compositional structure.
The idea of composition is explored in a rigorous way using a mathematical approach called “category theory”, which is a language that describes processes and their composition. The category theory approach to interpretability can be accomplished via a graphical calculus which was also developed in part by Quantinuum scientists, and which is finding use cases in everything from gravity to quantum computing.
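To make the idea of composition concrete, sequential and parallel composition of processes can be sketched in a few lines of Python. This is a toy illustration of compositional structure, not the categorical framework of the paper; all names here are our own:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Process:
    """A toy 'process': a named wrapper around a function."""
    name: str
    fn: Callable

    def __rshift__(self, other):
        # Sequential composition: run self, then feed its output to other
        return Process(f"{self.name} ; {other.name}",
                       lambda x: other.fn(self.fn(x)))

    def __matmul__(self, other):
        # Parallel composition: run the two processes side by side on a pair
        return Process(f"{self.name} (x) {other.name}",
                       lambda pair: (self.fn(pair[0]), other.fn(pair[1])))

double = Process("double", lambda x: 2 * x)
inc = Process("inc", lambda x: x + 1)

pipeline = double >> inc          # sequential: inc(double(x))
print(pipeline.fn(3))             # 7
print((double @ inc).fn((3, 3)))  # (6, 4)
```

String diagrams are, in essence, a pictorial notation for exactly these two operations: wires carry data, boxes are processes, and plugging boxes together is composition.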
A fundamental problem in the field of XAI has been that many terms have not been rigorously defined, making it difficult to study - let alone discuss - interpretability in AI. Our paper presents the first known theoretical framework for assessing the compositional interpretability of AI models. With our team’s work, we now have a precise, mathematical definition of interpretability that allows us to have these critical conversations.
After developing the framework, our team used it to analyze the full spectrum of ML approaches. We started with Transformers (the “T” in ChatGPT), which are not interpretable – pointing to a serious issue in some of the world’s most widely used ML tools. This is in contrast with (sparse) linear models and decision trees, which we found are indeed inherently interpretable, as they are usually described.
Our team was also able to make precise the sense in which other ML models are ‘compositionally interpretable’. These include models already studied by our own scientists, such as DisCo NLP models, causal models, and conceptual space models.
Many of the models discussed in this paper are classical, but the use of category theory and string diagrams makes these tools equally well suited to analyzing quantum machine learning models. In addition to helping the broader field accurately assess the interpretability of various ML models, the work in this paper will help us develop systems that are interpretable by design.
This work is part of our broader AI strategy, which includes using AI to improve quantum computing, using quantum computers to improve AI, and – in this case - using the tools of category theory and compositionality to help us better understand AI.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Chemistry plays a central role in the modern global economy, as it has for centuries. From Antoine Lavoisier to Alessandro Volta, Marie Curie to Venkatraman Ramakrishnan, pioneering chemists drove progress in fields such as combustion, electrochemistry, and biochemistry. They contributed to our mastery of critical 21st century materials such as biodegradable plastics, semiconductors, and life-saving pharmaceuticals.
Advances in high-performance computing (HPC) and AI have brought fundamental and industrial science ever more within the scope of methods like data science and predictive analysis. In modern chemistry, it has become routine for research to be aided by computational models run in silico. Yet, due to their intrinsically quantum mechanical nature, “strongly correlated” chemical systems – those involving strongly interacting electrons or highly interdependent molecular behaviors – prove extremely hard to accurately simulate using classical computers alone. Quantum computers running quantum algorithms are designed to meet this need. Strongly correlated systems turn up in potential applications such as smart materials, high-temperature superconductors, next-generation electronic devices, batteries and fuel cells, revealing the economic potential of extending our understanding of these systems, and the motivation to apply quantum computing to computational chemistry.
For senior business and research leaders driving value creation and scientific discovery, a critical question is: how will the introduction of quantum computers affect the trajectory of computational approaches to fundamental and industrial science?
This is the exciting context for our announcement of InQuanto v4.0, the latest iteration of our computational chemistry platform for quantum computers. Developed over many years in close partnership with computational chemists and materials scientists, InQuanto has become an essential tool for teams using the most advanced methods for simulating molecular and material systems. InQuanto v4.0 is packed with powerful updates, including the capability to incorporate NVIDIA’s tensor network methods for large-scale classical simulations supported by graphical processing units (GPUs).
When researching chemistry on quantum computers, we use classical HPC to perform tasks such as benchmarking, and for classical pre- and post-processing with computational chemistry methods such as density functional theory. This powerful hybrid quantum-classical combination with InQuanto accelerated our work with partners such as BMW Group, Airbus, and Honeywell. Global businesses and national governments alike are gearing up for the use of such hybrid “quantum supercomputers” to become standard practice.
In a recent technical blog post, we explored the rapid development and deployment of InQuanto for research and enterprise users, offering insights for combining quantum and high-performance classical methods with only a few lines of code. Here, we provide a higher-level overview of the value InQuanto brings to fundamental and industrial research teams.
InQuanto v4.0 is the most powerful version to date of our advanced quantum computational chemistry platform. It supports our users in applying quantum and classical computing methods to problems in chemistry and, increasingly, adjacent fields such as condensed matter physics.
Like previous versions of InQuanto, this one offers state-of-the-art algorithms, methods, and error handling techniques out of the box. Quantum error correction and detection have enabled rapid progress in quantum computing, such as groundbreaking demonstrations in partnership with Microsoft, in April and September 2024, of highly reliable “logical qubits”. Qubits are the core information-carrying components of a quantum computer; by forming an ensemble of physical qubits into a logical qubit, errors can be detected and corrected, allowing more complex problems to be tackled while still producing accurate results. InQuanto continues to offer leading-edge quantum error detection protocols as standard and supports users exploring the potential of algorithms for fault-tolerant machines.
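The intuition for why an ensemble of noisy components can behave more reliably than any single one is captured by the simplest classical analogue, a three-bit repetition code. This is a toy sketch only; the quantum codes behind logical qubits are far more subtle, since quantum states cannot simply be copied:

```python
import random

def encode(bit):
    # Repetition code: one logical bit stored as three physical copies
    return [bit, bit, bit]

def noisy_channel(bits, p_flip, rng):
    # Each physical bit flips independently with probability p_flip
    return [b ^ 1 if rng.random() < p_flip else b for b in bits]

def decode(bits):
    # Majority vote recovers the logical bit if at most one copy flipped
    return 1 if sum(bits) >= 2 else 0

rng = random.Random(0)
p, trials = 0.05, 100_000

# Error rate of a single unprotected bit vs. the majority-voted ensemble
raw_errors = sum(rng.random() < p for _ in range(trials)) / trials
logical_errors = sum(
    decode(noisy_channel(encode(0), p, rng)) != 0 for _ in range(trials)
) / trials
print(raw_errors, logical_errors)
```

With p = 0.05 the logical error rate drops to roughly 3p², about 0.007, because two simultaneous flips are needed to fool the majority vote.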
InQuanto v4.0 also marks the significant step of introducing native support for tensor networks using GPUs to accelerate simulations. In 2022, Quantinuum and NVIDIA teamed up on one of the quantum computing industry’s earliest quantum-classical collaborations. InQuanto v4.0 introduces classical tensor network methods via an interface with NVIDIA's cuQuantum SDK. Interfacing with cuQuantum enables the simulation of many quantum circuits via the use of GPUs for applications in chemistry that were previously inaccessible, particularly those with larger numbers of qubits.
“Hybrid quantum-classical supercomputing is accelerating quantum computational chemistry research. With Quantinuum’s InQuanto v4.0 platform and NVIDIA’s cuQuantum SDK, InQuanto users now have access to unique tensor-network-based methods, enabling large-scale and high-precision quantum chemistry simulations” - Tim Costa, Senior Director of HPC and Quantum Computing at NVIDIA
We are also responding to our users’ needs for more robust, enterprise-grade management of applications and data, by incorporating InQuanto into Quantinuum Nexus. This integration makes it far easier and more efficient to build hybrid workflows, decode and store data, and use powerful analytical methods to accelerate scientific and technical progress in critical fields in natural science.
Adding further capabilities, we recently announced our integration of InQuanto with Microsoft’s Azure Quantum Elements (AQE), allowing users to seamlessly combine AQE’s state-of-the-art HPC and AI methods with the enhanced quantum capabilities of InQuanto in a single workflow. The first end-to-end workflow using HPC, AI and quantum computing was demonstrated by Microsoft using AQE and Quantinuum Systems hardware, achieving chemical accuracy and demonstrating the advantage of logical qubits compared to physical qubits in modeling a catalytic reaction.
In the coming years, we expect to see scientific and economic progress using the powerful combination of quantum computing, HPC, and artificial intelligence. Each of these computing paradigms contributes to our ability to solve important problems. Together, their combined impact is far greater than the sum of their parts, and we recognize that these have the potential to drive valuable computational innovation in industrial use-cases that really matter, such as in energy generation, transmission and storage, and in chemical processes essential to agriculture, transport, and medicine.
Building on our recent hardware roadmap announcement, which supports scientific quantum advantage and a commercial tipping point in 2029, we are demonstrating the value of owning and building out the full quantum computing stack with a unified goal of accelerating quantum computing, integrating with HPC and AI resources where it shows promise, and using the power of the “quantum supercomputer” to make a positive difference in fundamental and industrial chemistry and related domains.
In close collaboration with our customers, we are driving towards systems capable of supporting quantum advantage and unlocking tangible and significant business value.
To access InQuanto today, including Quantinuum Systems and third-party hardware and emulators, visit: https://www.quantinuum.com/products-solutions/inquanto
To get started with Quantinuum Nexus, which meets all your quantum computing needs across Quantinuum Systems and third-party backends, visit: https://www.quantinuum.com/products-solutions/nexus
To find out more and access Quantinuum Systems, visit: https://www.quantinuum.com/products-solutions/quantinuum-systems
Quantinuum is excited to announce the release of InQuanto™ v4.0, the latest version of our advanced quantum computational chemistry software. This update introduces new features and significant performance improvements, designed to help both industry and academic researchers accelerate their computational chemistry work.
If you're new to InQuanto or want to learn more about how to use it, we encourage you to explore our documentation.
InQuanto v4.0 is being released alongside Quantinuum Nexus, our cloud-based platform for quantum software. Users with Nexus access can leverage the `inquanto-nexus` extension to, for example, take advantage of multiple available backends and seamless cloud storage.
In addition, InQuanto v4.0 introduces enhancements that allow users to run larger chemical simulations on quantum computers. Systems can be easily imported from classical codes using the widely supported FCIDUMP file format. These fermionic representations are then efficiently mapped to qubit representations, benefiting from performance improvements in InQuanto operators. For systems too large for quantum hardware experiments, users can now utilize the new `inquanto-cutensornet` extension to run simulations via tensor networks.
These updates enable users to compile and execute larger quantum circuits with greater ease, while accessing powerful compute resources through Nexus.
InQuanto v4.0 is fully integrated with Quantinuum Nexus via the `inquanto-nexus` extension. This integration allows users to easily run experiments across a range of quantum backends, from simulators to hardware, and access results stored in Nexus cloud storage.
Results can be annotated for better searchability and seamlessly shared with others. Nexus also offers the Nexus Lab, which provides a preconfigured Jupyter environment for compiling circuits and executing jobs. The Lab is set up with InQuanto v4.0 and a full suite of related software, enabling users to get started quickly.
The `inquanto.mappings` submodule has received a significant performance enhancement in InQuanto v4.0. By integrating a set of operator classes written in C++, the team has increased the performance of the module past that of other open-source packages’ equivalent methods.
Like any other Python package, InQuanto can benefit from delegating computationally intensive tasks to compiled languages such as C++. This approach has been applied to the qubit encoding functions of the `inquanto.mappings` submodule, in which fermionic operators are mapped to their qubit operator equivalents. One such qubit encoding scheme is the Jordan-Wigner (JW) transformation. Using JW encoding as a benchmark, the integration of C++ operator classes in InQuanto v4.0 has yielded an execution-time speed-up of two and a half times relative to open-source competitors (Figure 1).
This is a substantial increase in performance that all users will benefit from. InQuanto users will still interact with the familiar Python classes such as `FermionOperator` and `QubitOperator` in v4.0. However, when the `mappings` module is called, the Python operator objects are converted to C++ equivalents and vice versa before and after the qubit encoding procedure (Figure 2). With future total integration of C++ operator classes, we can remove the conversion step and push the performance of the `mappings` module further. Tests, once again using the JW mappings scheme, show a 40 times execution time speed-up as compared to open-source competitors (Figure 1).
Efficient classical pre-processing implementations such as this are a crucial step on the path to quantum advantage. As the number of physical qubits available on quantum computers increases, so will the size and complexity of the physical systems that can be simulated. To support this hardware upscaling, computational bottlenecks including those associated with the classical manipulation of operator objects must be alleviated. Aside from keeping pace with hardware advancements, it is important to enlarge the tractable system size in situations that do not involve quantum circuit execution, such as tensor network circuit simulation and resource estimation.
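For readers unfamiliar with the Jordan-Wigner scheme discussed above, here is a minimal pure-Python sketch of the mapping (illustrative only, and far simpler than InQuanto's optimized C++ implementation): the annihilation operator on mode j becomes a pair of Pauli strings with Zs on all lower-indexed modes, which preserve the fermionic anticommutation relations.

```python
def jordan_wigner_annihilation(j, n_modes):
    """Map the fermionic annihilation operator a_j (0-indexed mode j out of
    n_modes) to a sum of Pauli strings: a_j -> Z_0 ... Z_{j-1} (X_j + i Y_j)/2.
    Each term is returned as a (coefficient, Pauli string) pair."""
    z_string = "Z" * j              # parity string on lower-indexed modes
    pad = "I" * (n_modes - j - 1)   # identities on higher-indexed modes
    return [
        (0.5, z_string + "X" + pad),
        (0.5j, z_string + "Y" + pad),
    ]

# a_1 on two modes: 0.5 * ZX + 0.5i * ZY
print(jordan_wigner_annihilation(1, 2))
```

The cost of building these strings grows with the number of modes and the number of Hamiltonian terms, which is why moving this bookkeeping into compiled code pays off at scale.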
Users with access to GPU capabilities can now take advantage of tensor networks to accelerate simulations in InQuanto v4.0. This is made possible by the `inquanto-cutensornet` extension, which interfaces InQuanto with the NVIDIA® cuTensorNet library. The `inquanto-cutensornet` extension leverages the `pytket-cutensornet` library, which facilitates the conversion of `pytket` circuits into tensor networks to be evaluated using the NVIDIA® cuTensorNet library. This extension increases the size limit of circuits that can be simulated for chemistry applications. Future work will seek to integrate this functionality with our Nexus platform, allowing InQuanto users to employ the extension without requiring access to their own local GPU resources.
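The principle behind simulating circuits as tensor networks can be shown with NumPy alone: gates become small tensors and simulation becomes a sequence of contractions, here building a two-qubit Bell state. This is a toy example; cuTensorNet applies the same idea at scale with optimized contraction paths on GPUs:

```python
import numpy as np

# Gates as tensors: 1-qubit gates are 2x2, 2-qubit gates are 2x2x2x2
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
CNOT = np.array(
    [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float
).reshape(2, 2, 2, 2)  # index order: (out0, out1, in0, in1)

q0 = np.array([1.0, 0.0])  # |0>
q1 = np.array([1.0, 0.0])  # |0>

# Apply H to qubit 0, then contract the CNOT tensor with the product state
psi = np.einsum("a,b->ab", H @ q0, q1)
bell = np.einsum("abcd,cd->ab", CNOT, psi)

print(bell)  # amplitudes 1/sqrt(2) on |00> and |11>
```

Because the contraction order can be chosen cleverly, tensor networks often avoid ever materializing the full 2^n statevector, which is what pushes the size limit of simulable circuits.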
Here we demonstrate the use of the `CuTensorNetProtocol` passed to a VQE experiment. For the sake of brevity, we use the `get_system` method of `inquanto.express` to swiftly define the system, in this case H2 in the STO-3G basis set.
```python
from inquanto.algorithms import AlgorithmVQE
from inquanto.ansatzes import FermionSpaceAnsatzUCCD
from inquanto.computables import ExpectationValue, ExpectationValueDerivative
from inquanto.express import get_system
from inquanto.mappings import QubitMappingJordanWigner
from inquanto.minimizers import MinimizerScipy
from inquanto.extensions.cutensornet import CuTensorNetProtocol

# Load the H2/STO-3G system shipped with inquanto.express
fermion_hamiltonian, space, state = get_system("h2_sto3g.h5")
qubit_hamiltonian = fermion_hamiltonian.qubit_encode()

ansatz = FermionSpaceAnsatzUCCD(space, state, QubitMappingJordanWigner())

expectation_value = ExpectationValue(ansatz, qubit_hamiltonian)
gradient_expression = ExpectationValueDerivative(
    ansatz, qubit_hamiltonian, ansatz.free_symbols_ordered()
)

# Evaluate both the objective and its gradient via tensor network contraction
protocol_tn = CuTensorNetProtocol()
vqe_tn = (
    AlgorithmVQE(
        objective_expression=expectation_value,
        gradient_expression=gradient_expression,
        minimizer=MinimizerScipy(),
        initial_parameters=ansatz.state_symbols.construct_zeros(),
    )
    .build(protocol_objective=protocol_tn, protocol_gradient=protocol_tn)
    .run()
)

print(vqe_tn.generate_report()["final_value"])
# -1.136846575472054
```
The inherently modular design of InQuanto allows for the seamless integration of new extensions and functionality. For instance, a user can enable GPU acceleration in existing code simply by replacing `SparseStatevectorProtocol` with the `CuTensorNetProtocol` provided by `inquanto-cutensornet`. It is worth noting that the extension is also compatible with shot-based simulation via the `CuTensorNetShotsBackend` provided by `pytket-cutensornet`.
“Hybrid quantum-classical supercomputing is accelerating quantum computational chemistry research,” said Tim Costa, Senior Director at NVIDIA®. “With Quantinuum’s InQuanto v4.0 platform and NVIDIA’s cuQuantum SDK, InQuanto users now have access to unique tensor-network-based methods, enabling large-scale and high-precision quantum chemistry simulations.”
As demonstrated by our `inquanto-pyscf` extension, we want InQuanto to interface easily with classical codes. In InQuanto v4.0, we have simplified integration with other classical codes such as Gaussian and Psi4. All that is required is an FCIDUMP file, a common output format for classical codes, which encodes all the one- and two-electron integrals required to set up a CI Hamiltonian. Users can bring their system in from a classical code by passing an FCIDUMP file to the `FCIDumpRestricted` class and calling the `to_ChemistryRestrictedIntegralOperator` method or its unrestricted counterpart, depending on how they wish to treat spin. The resulting InQuanto operator object can then be used within their workflow as usual.
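For readers unfamiliar with the format, an FCIDUMP file is plain text: a Fortran namelist header followed by records of the form `value i j k l`, where the pattern of zero indices distinguishes two-electron integrals, one-electron integrals, and the core energy. Here is a minimal illustrative parser, with made-up integral values; this is a sketch of the format, not InQuanto's `FCIDumpRestricted`:

```python
SAMPLE = """\
&FCI NORB=2,NELEC=2,MS2=0,
 ORBSYM=1,1,
 ISYM=1,
&END
  0.6744  1  1  1  1
  0.6634  2  2  2  2
 -1.2524  1  1  0  0
 -0.4759  2  2  0  0
  0.7137  0  0  0  0
"""

def parse_fcidump(text):
    # Skip the namelist header, then read "value i j k l" records
    lines = iter(text.splitlines())
    for line in lines:
        if "&END" in line or line.strip() == "/":
            break
    core, one_e, two_e = 0.0, {}, {}
    for line in lines:
        if not line.strip():
            continue
        v, i, j, k, l = line.split()
        v, idx = float(v), (int(i), int(j), int(k), int(l))
        if idx == (0, 0, 0, 0):
            core = v                # core / nuclear repulsion energy
        elif idx[2:] == (0, 0):
            one_e[idx[:2]] = v      # one-electron integral h_ij
        else:
            two_e[idx] = v          # two-electron integral (ij|kl)
    return core, one_e, two_e

core, one_e, two_e = parse_fcidump(SAMPLE)
print(core, one_e[(1, 1)], two_e[(2, 2, 2, 2)])
```

Because nearly every classical quantum chemistry package can emit this format, it makes a convenient lingua franca for handing systems over to InQuanto.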
Users can experiment with TKET’s latest circuit compilation tools in a straightforward manner with InQuanto v4.0. Circuit compilation now only occurs within the `inquanto.protocols` module. This allows users to define which optimization passes to run before and/or after the backend specific defaults, all in one line of code. Circuit compilation is a crucial step in all InQuanto workflows. As such, this structural change allows us to cleanly integrate new functionality through extensions such as `inquanto-nexus` and `inquanto-cutensornet`. Looking forward, beyond InQuanto v4.0, this change is a positive step towards bringing quantum error correction to InQuanto.
InQuanto v4.0 extends the size of the chemical systems that users can simulate on quantum computers. Users can import larger, carefully constructed systems from classical codes and encode them to optimized quantum circuits. They can then evaluate these circuits on quantum backends with `inquanto-nexus` or execute them as tensor networks using `inquanto-cutensornet`. We look forward to seeing how our users leverage InQuanto v4.0 to demonstrate the increasing power of quantum computational chemistry. If you are curious about InQuanto and want to read further, our initial release blog post is very informative, or you can visit the InQuanto website.
If you are interested in trying InQuanto, please request access or a demo at inquanto@quantinuum.com
In July, we proudly introduced the Beta version of Quantinuum Nexus, our comprehensive quantum computing platform. Designed to provide an exceptional experience for managing, storing, and executing quantum workflows, Nexus offers unparalleled integration with Quantinuum’s software and hardware.
Before July, Nexus was primarily available to our internal researchers and software developers, who leveraged it to drive groundbreaking work leading to several notable publications.
Following our initial announcement, we invited external users to experience Nexus for the first time.
We selected quantum computing researchers and developers from both industry and academia to help accelerate their work and advance scientific discovery. Participants included teams from diverse sectors such as automotive and energy technology, as well as research groups from universities and national laboratories worldwide. We also welcomed scientists and software developers from other quantum computing companies to explore areas ranging from physical system simulation to the foundations of quantum mechanics.
The feedback and results from our trial users have been exceptional. But don’t just take our word for it—read on to hear directly from some of them:
At Unitary Fund, we leveraged Nexus to study a foundational question about quantum mechanics. The quantum platform allowed us to scale experimental violations of Local Friendliness to a more significant regime than had been previously tested. Using Nexus, we encoded Extended Wigner’s Friend Scenarios (EWFS) into quantum circuits, running them on state-of-the-art simulators and quantum processors. Nexus enabled us to scale the complexity of these circuits efficiently, helping us validate LF violations at larger and larger scales. The platform's reliability and advanced capabilities were crucial to extending our results, from simulating smaller systems to experimentally demonstrating LF violations on quantum hardware. Nexus has empowered us to deepen our research and contribute to foundational quantum science.
Read the publication here: Towards violations of Local Friendliness with quantum computers.
At Phasecraft we are designing algorithms for near-term quantum devices, identifying the most impactful experiments to run on the best available hardware. We recently implemented a series of circuits to simulate the time dynamics of a materials model with a novel layout, exploiting the all-to-all connectivity of Quantinuum’s H-Series. Nexus integrated easily with our software stack, allowing us to deploy our circuits and collect data, with impressive results. We first tested that our in-house software could interface with Nexus smoothly using the syntax checker, as well as the suite of functionality available through the Nexus API. We then tested our circuits on the H1 emulator, and it was straightforward to switch from the emulator to the hardware when we were ready. Overall, we found Nexus a straightforward interface, especially when compared with alternative quantum hardware access models.
Quantum Software Lab, University of Edinburgh
In this project, we performed the largest verified measurement-based quantum computation to date, up to the size of 52 vertices, which was made possible by the Nexus system. The protocol requires complex operations intermingling classical and quantum information. In particular, Nexus allows us to demonstrate our protocol that requires complex decisions for every measurement shot on every node in the graph: circuit branching, mid-circuit measurement and reset, and incorporating fresh randomness. Such requirements are difficult to deliver on most quantum computer frameworks as they are far from conventional gate-based BQP computations; however, Nexus can!
Read the publication here: On-Chip Verified Quantum Computation with an Ion-Trap Quantum Processing Unit
We are thrilled to announce that after these successes, Nexus is coming out of beta access for full launch. We can’t wait to offer Nexus to our customers to enable ground-breaking scientific work, powered by Quantinuum.
Register your interest in gaining access to the best full-stack quantum computing platform, today!