Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
At this year’s Q2B Silicon Valley conference from December 10th – 12th in Santa Clara, California, the Quantinuum team will be participating in plenary and case study sessions to showcase our quantum computing technologies.
Schedule a meeting with us at Q2B
Meet our team at Booth #G9 to discover how Quantinuum is charting the path to universal, fully fault-tolerant quantum computing.
Join our sessions:
Plenary: Advancements in Fault-Tolerant Quantum Computation: Demonstrations and Results
There is industry-wide consensus on the need for fault-tolerant QPUs, but demonstrations of fault tolerance remain relatively rare. In this talk, Dr. Hayes will review Quantinuum’s long list of meaningful demonstrations of fault tolerance, including real-time error correction, a variety of codes from the surface code to exotic qLDPC codes, logical benchmarking, and beyond-break-even behavior on multiple codes and circuit families.
Keynote: Quantum Tokens: Securing Digital Assets with Quantum Physics
Mitsui’s Deputy General Manager, Quantum Innovation Dept., Corporate Development Div., Koji Naniwada, and Quantinuum’s Head of Cybersecurity, Duncan Jones, will deliver a keynote presentation on a case study for quantum in cybersecurity. Together, our organizations demonstrated the first implementation of quantum tokens over a commercial QKD network. Quantum tokens enable three previously incompatible properties: unforgeability guaranteed by physics, fast settlement without centralized validation, and user privacy until redemption. We present results from our successful Tokyo trial using NEC’s commercial QKD hardware and discuss potential applications in financial services.
Quantinuum and Mitsui Sponsored Happy Hour
Join the Quantinuum and Mitsui teams in the expo hall for a networking happy hour.
Particle accelerator projects like the Large Hadron Collider (LHC) don’t just smash particles - they also power the invention of some of the world’s most impactful technologies. A favorite example is the world wide web, which was developed for particle physics experiments at CERN.
Tech designed to unlock the mysteries of the universe has brutally exacting requirements – and it is this boundary pushing, plus billion-dollar budgets, that has led to so much innovation.
For example, X-rays are used in accelerators to measure the chemical composition of the accelerator products and to monitor radiation. The understanding developed to create those technologies was then applied to help us build better CT scanners, reducing the X-ray dosage while improving the image quality.
Stories like this are common in accelerator physics, or High Energy Physics (HEP). Scientists and engineers working in HEP have been early adopters and/or key drivers of innovations in advanced cancer treatments (using proton beams), machine learning techniques, robots, new materials, cryogenics, data handling and analysis, and more.
A key strand of HEP research aims to make accelerators simpler and cheaper, and one piece of infrastructure ripe for improvement is their computing environments.
CERN itself has said: “CERN is one of the most highly demanding computing environments in the research world... From software development, to data processing and storage, networks, support for the LHC and non-LHC experimental programme, automation and controls, as well as services for the accelerator complex and for the whole laboratory and its users, computing is at the heart of CERN’s infrastructure.”
With annual data generated by accelerators in excess of exabytes (a billion gigabytes), tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the HEP community is interested in quantum computing, which offers real solutions to some of their hardest problems.
As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable.”
The HEP community’s interest in quantum computing is growing. In recent years, their scientists have been looking carefully at how quantum computing could help them, publishing a number of papers discussing the challenges and requirements for quantum technology to make a dent (here’s one example, and here’s the arXiv version).
In the past few months, what was previously theoretical is becoming a reality. Several groups published results using quantum machines to tackle something called “Lattice Gauge Theory”, which is a type of math used to describe a broad range of phenomena in HEP (and beyond). Two papers came from academic groups using quantum simulators, one using trapped ions and one using neutral atoms. Another group, including scientists from Google, tackled Lattice Gauge Theory using a superconducting quantum computer. Taken together, these papers indicate a growing interest in using quantum computing for High Energy Physics, beyond simple one-dimensional systems which are more easily accessible with classical methods such as tensor networks.
We have been working with DESY, one of the world’s leading accelerator centers, to help make quantum computing useful for their work. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center that operates, develops, and constructs particle accelerators, and is part of the worldwide computer network used to store and analyze the enormous flood of data that is produced by the LHC in Geneva.
Our first publication from this partnership describes a quantum machine learning technique for untangling data from the LHC, finding that in some cases the quantum approach was indeed superior to the classical approach. More recently, we used Quantinuum System Model H1 to tackle Lattice Gauge Theory (LGT), as it’s a favorite contender for quantum advantage in HEP.
Lattice Gauge Theories are one approach to solving what are more broadly referred to as “quantum many-body problems”. Quantum many-body problems lie at the border of our knowledge in many different fields: from the electronic structure problem, which impacts chemistry and pharmaceuticals, and the quest to understand and engineer new material properties such as light-harvesting materials, to basic research such as high energy physics, which aims to understand the fundamental constituents of the universe, and condensed matter physics, where our understanding of phenomena like high-temperature superconductivity is still incomplete.
The difficulty in solving problems like this – analytically or computationally – is that the problem complexity grows exponentially with the size of the system. For example, there are 36 possible configurations of two six-faced dice (1 and 1, 1 and 2, 1 and 3, and so on), while for ten dice there are more than sixty million configurations.
Quantum computing may be very well-suited to tackling problems like this, due to a quantum processor’s similar information density scaling: with the addition of a single qubit to a QPU, the information the system contains doubles. Our 56-qubit System Model H2, for example, can hold quantum states that would require 128*(2^56) bits to describe on a classical supercomputer (one double-precision complex amplitude per basis state), which is more information than the biggest supercomputer in the world can hold in memory.
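For a quick sense of these numbers, the back-of-envelope arithmetic below (a plain Python sketch, not taken from the paper) reproduces the dice counting and the statevector memory estimate:

```python
# Exponential growth of configurations for classical dice
for n in (2, 10):
    print(f"{n} six-faced dice: {6**n:,} configurations")

# Classical memory needed to store an n-qubit statevector:
# 2**n basis states, each with a 128-bit (double-precision complex) amplitude
n_qubits = 56
total_bits = 128 * 2**n_qubits
print(f"{n_qubits} qubits: {total_bits / 8 / 1e15:,.0f} petabytes of amplitudes")
```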
The joint team made significant progress in approaching the Lattice Gauge Theory corresponding to Quantum Electrodynamics, the theory of light and matter. For the first time, they were able to study the full wavefunction of a two-dimensional confining system with gauge fields and dynamical matter fields on a quantum processor. They were also able to visualize the confining string and the string-breaking phenomenon at the level of the wavefunction, across a range of interaction strengths.
The team approached the problem starting with the definition of the Hamiltonian using the InQuanto software package, and utilized the reusable protocols of InQuanto to compute both projective measurements and expectation values. InQuanto allowed the easy integration of measurement reduction techniques and scalable error mitigation techniques. Moreover, the emulator and hardware experiments were orchestrated by the Nexus online platform.
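To make the shape of this workflow concrete, the sketch below evaluates the expectation value of a toy Hamiltonian built from Pauli terms using plain NumPy. This is only a conceptual stand-in, not the InQuanto API: on hardware, InQuanto estimates such expectation values from shot-based projective measurements, with measurement-reduction and error-mitigation techniques applied, and Nexus handles the job orchestration.

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli operators
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def pauli_string(ops):
    """Kronecker product of single-qubit operators, e.g. [Z, Z, I] -> ZZI."""
    return reduce(np.kron, ops)

# A toy 3-qubit Hamiltonian: a ZZ coupling plus an XX term
H = -1.0 * pauli_string([Z, Z, I]) - 0.5 * pauli_string([I, X, X])

# Uniform superposition over all 3-qubit basis states
psi = np.ones(2**3) / np.sqrt(2**3)

energy = float(np.real(psi.conj() @ H @ psi))
print(f"<psi|H|psi> = {energy:.4f}")   # -0.5 for this toy example
```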
In one section of the study, a circuit with 24 qubits and more than 250 two-qubit gates was reduced to a smaller width of 15 qubits thanks to the automatic qubit re-use and mid-circuit measurement compilation implemented in TKET.
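The primitive behind this reduction is the ability to measure a qubit mid-circuit, reset it, and use it again for a different part of the computation. The minimal pytket sketch below illustrates that primitive by hand (the automatic re-use compilation itself is a TKET feature and is not reproduced here):

```python
from pytket.circuit import Circuit, OpType

circ = Circuit(2, 2)
circ.H(0)
circ.CX(0, 1)
circ.Measure(0, 0)                 # mid-circuit measurement of qubit 0
circ.add_gate(OpType.Reset, [0])   # reset it to |0>
circ.H(0)                          # ...and re-use it for new work
circ.CX(0, 1)
circ.Measure(0, 1)

print(circ.get_commands())
```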
This work paves the way towards using quantum computers to study lattice gauge theories in higher dimensions, with the goal of one day simulating the full three-dimensional Quantum Chromodynamics theory underlying the nuclear sector of the Standard Model of particle physics. Being able to simulate full 3D quantum chromodynamics will undoubtedly unlock many of Nature’s mysteries, from the Big Bang to the interior of neutron stars, and is likely to lead to applications we haven’t yet dreamed of.
Chemistry plays a central role in the modern global economy, as it has for centuries. From Antoine Lavoisier to Alessandro Volta, Marie Curie to Venkatraman Ramakrishnan, pioneering chemists drove progress in fields such as combustion, electrochemistry, and biochemistry. They contributed to our mastery of critical 21st century materials such as biodegradable plastics, semiconductors, and life-saving pharmaceuticals.
Advances in high-performance computing (HPC) and AI have brought fundamental and industrial science ever more within the scope of methods like data science and predictive analysis. In modern chemistry, it has become routine for research to be aided by computational models run in silico. Yet, due to their intrinsically quantum mechanical nature, “strongly correlated” chemical systems – those involving strongly interacting electrons or highly interdependent molecular behaviors – prove extremely hard to accurately simulate using classical computers alone. Quantum computers running quantum algorithms are designed to meet this need. Strongly correlated systems turn up in potential applications such as smart materials, high-temperature superconductors, next-generation electronic devices, batteries and fuel cells, revealing the economic potential of extending our understanding of these systems, and the motivation to apply quantum computing to computational chemistry.
For senior business and research leaders driving value creation and scientific discovery, a critical question is: how will the introduction of quantum computers affect the trajectory of computational approaches to fundamental and industrial science?
This is the exciting context for our announcement of InQuanto v4.0, the latest iteration of our computational chemistry platform for quantum computers. Developed over many years in close partnership with computational chemists and materials scientists, InQuanto has become an essential tool for teams using the most advanced methods for simulating molecular and material systems. InQuanto v4.0 is packed with powerful updates, including the capability to incorporate NVIDIA’s tensor network methods for large-scale classical simulations supported by graphical processing units (GPUs).
When researching chemistry on quantum computers, we use classical HPC for tasks such as benchmarking, and for classical pre- and post-processing with computational chemistry methods such as density functional theory. This powerful hybrid quantum-classical combination with InQuanto has accelerated our work with partners such as BMW Group, Airbus, and Honeywell. Global businesses and national governments alike are gearing up for the use of such hybrid “quantum supercomputers” to become standard practice.
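As a generic illustration of the classical side of such a workflow (not the specific partner projects mentioned above), the sketch below runs a density functional theory calculation with the open-source PySCF package; results like these typically serve as the classical reference or starting point that a quantum algorithm then refines:

```python
from pyscf import gto, dft

# A small molecular system defined classically
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")

# Density functional theory (B3LYP) as a classical pre-processing step
mf = dft.RKS(mol)
mf.xc = "b3lyp"
dft_energy = mf.kernel()
print(f"DFT (B3LYP) energy: {dft_energy:.6f} Hartree")
```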
In a recent technical blog post, we explored the rapid development and deployment of InQuanto for research and enterprise users, offering insights for combining quantum and high-performance classical methods with only a few lines of code. Here, we provide a higher-level overview of the value InQuanto brings to fundamental and industrial research teams.
InQuanto v4.0 is the most powerful version to date of our advanced quantum computational chemistry platform. It supports our users in applying quantum and classical computing methods to problems in chemistry and, increasingly, adjacent fields such as condensed matter physics.
Like previous versions of InQuanto, this one offers state-of-the-art algorithms, methods, and error handling techniques out of the box. Quantum error correction and detection have enabled rapid progress in quantum computing, such as groundbreaking demonstrations in partnership with Microsoft, in April and September 2024, of highly reliable “logical qubits”. Qubits are the core information-carrying components of a quantum computer; by encoding information redundantly across an ensemble of physical qubits, a logical qubit becomes more resistant to errors, allowing more complex problems to be tackled while still producing accurate results. InQuanto continues to offer leading-edge quantum error detection protocols as standard and supports users to explore the potential of algorithms for fault-tolerant machines.
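The redundancy idea can be illustrated with a deliberately simplified classical analogue (not a real quantum code): encode one logical bit into three physical bits, so that a single flip can be detected and corrected by majority vote. Quantum codes must additionally protect against phase errors without directly reading out the data, but the underlying principle of trading extra qubits for reliability is the same.

```python
import random

def encode(bit):
    """Encode one logical bit into three physical bits."""
    return [bit, bit, bit]

def noisy_channel(bits, p=0.1):
    """Flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    """Recover the logical bit by majority vote."""
    return int(sum(bits) >= 2)

logical = 1
received = noisy_channel(encode(logical))
print(received, "->", decode(received))
```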
InQuanto v4.0 also marks the significant step of introducing native support for tensor networks using GPUs to accelerate simulations. In 2022, Quantinuum and NVIDIA teamed up on one of the quantum computing industry’s earliest quantum-classical collaborations. InQuanto v4.0 introduces classical tensor network methods via an interface with NVIDIA's cuQuantum SDK. Interfacing with cuQuantum enables GPU-accelerated simulation of quantum circuits for chemistry applications that were previously inaccessible, particularly those with larger numbers of qubits.
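Conceptually, tensor network simulation treats gates and states as tensors and contracts them, rather than storing a full statevector. The toy sketch below contracts a two-qubit Bell-state circuit with NumPy's einsum; GPU libraries such as cuQuantum perform the same kind of contraction at far larger scale (the exact InQuanto/cuQuantum interface is not shown here):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=float).reshape(2, 2, 2, 2)  # CNOT as a rank-4 tensor
zero = np.array([1.0, 0.0])                              # |0> state

# Contract the network for CX.(H x I)|00>, i.e. the Bell-state circuit
bell = np.einsum("ijab,ac,c,b->ij", CX, H, zero, zero)
print(bell.reshape(4))   # ~[0.707, 0, 0, 0.707] = (|00> + |11>)/sqrt(2)
```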
“Hybrid quantum-classical supercomputing is accelerating quantum computational chemistry research. With Quantinuum’s InQuanto v4.0 platform and NVIDIA’s cuQuantum SDK, InQuanto users now have access to unique tensor-network-based methods, enabling large-scale and high-precision quantum chemistry simulations” - Tim Costa, Senior Director of HPC and Quantum Computing at NVIDIA
We are also responding to our users’ needs for more robust, enterprise-grade management of applications and data, by incorporating InQuanto into Quantinuum Nexus. This integration makes it far easier and more efficient to build hybrid workflows, decode and store data, and use powerful analytical methods to accelerate scientific and technical progress in critical fields in natural science.
Adding further capabilities, we recently announced our integration of InQuanto with Microsoft’s Azure Quantum Elements (AQE), allowing users to seamlessly combine AQE’s state-of-the-art HPC and AI methods with the enhanced quantum capabilities of InQuanto in a single workflow. The first end-to-end workflow using HPC, AI and quantum computing was demonstrated by Microsoft using AQE and Quantinuum Systems hardware, achieving chemical accuracy and demonstrating the advantage of logical qubits compared to physical qubits in modeling a catalytic reaction.
In the coming years, we expect to see scientific and economic progress using the powerful combination of quantum computing, HPC, and artificial intelligence. Each of these computing paradigms contributes to our ability to solve important problems. Together, their combined impact is far greater than the sum of their parts, and we recognize that these have the potential to drive valuable computational innovation in industrial use-cases that really matter, such as in energy generation, transmission and storage, and in chemical processes essential to agriculture, transport, and medicine.
Building on our recent hardware roadmap announcement, which supports scientific quantum advantage and a commercial tipping point in 2029, we are demonstrating the value of owning and building out the full quantum computing stack. Our unified goal is to accelerate quantum computing, integrate with HPC and AI resources where they show promise, and use the power of the “quantum supercomputer” to make a positive difference in fundamental and industrial chemistry and related domains.
In close collaboration with our customers, we are driving towards systems capable of supporting quantum advantage and unlocking tangible and significant business value.
To access InQuanto today, with support for Quantinuum Systems as well as third-party hardware and emulators, visit: https://www.quantinuum.com/products-solutions/inquanto
To get started with Quantinuum Nexus, which meets all your quantum computing needs across Quantinuum Systems and third-party backends, visit: https://www.quantinuum.com/products-solutions/nexus
To find out more and access Quantinuum Systems, visit: https://www.quantinuum.com/products-solutions/quantinuum-systems