Blog

Discover how we are pushing the boundaries in the world of quantum computing

November 14, 2024
Making fault-tolerance a reality: Introducing our QEC decoder toolkit

We are thrilled to announce a groundbreaking addition to our technology suite: the Quantum Error Correction (QEC) decoder toolkit. This tool empowers users to decode syndromes and implement real-time corrections, an essential step towards achieving fault-tolerant quantum computing. As the only company offering this crucial capability to our users, we are paving the way for the future of quantum technology.

We are dedicated to realizing universal fault-tolerant quantum computing by the end of this decade. A key component of this mission is equipping our customers with essential QEC workflows, making advanced quantum computing more accessible than ever before.

Our QEC decoding toolkit is enabled by our real-time hybrid compute capability, which executes WebAssembly (Wasm) in our stack in both hardware and emulator environments. This enables the use of libraries (like linear algebra and graph libraries) and complex data structures (like vectors and graphs).

Our real-time hybrid compute capability enables a new frontier in classical-quantum hybrid computing. Our release of the QEC decoder toolkit marks a maturation from simply running quantum circuits to running full quantum algorithms that interact deeply with classical resources in real time, so that each platform (quantum and classical) is focused where it performs best.

QEC decoding is one of the most exciting – and necessary – applications of hybrid computing capacity. Before now, error correction needed to be done with lookup tables: a list specifying the correction for each syndrome. This is not scalable: the number of syndromes grows exponentially with the distance (which is like the “error correcting power”) of the code. This means that codes hefty enough to run, say, Shor’s algorithm would need a lookup table too massive to search or even store properly.
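To make that scaling concrete, here is a rough back-of-the-envelope calculation. It assumes a rotated surface code, which has d² data qubits and d² − 1 stabilizer checks, so a single round of syndrome data is d² − 1 bits and a naive lookup table needs one entry per possible syndrome:

```python
# Rough size of a naive lookup-table decoder for a rotated surface code
# of distance d: one round of syndrome data is (d*d - 1) bits, so the
# table needs 2**(d*d - 1) entries.
for d in (3, 5, 7, 11):
    syndrome_bits = d * d - 1
    entries = 2 ** syndrome_bits
    print(f"d={d:2d}: {syndrome_bits:3d} syndrome bits -> {entries:.2e} table entries")
```

Already at distance 11 that is roughly 10³⁶ entries, far beyond anything that could be stored, let alone searched, and this ignores the multiple rounds of syndrome extraction that make matters worse.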

For universal fault-tolerant quantum computing to become a reality, we need to decode error syndromes algorithmically. During algorithmic decoding, the syndrome is sent to a classical computer which solves (for example) a graph problem to determine the correction to make.
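For example, matching-based decoders phrase decoding as a minimum-weight perfect matching problem on a graph built from the code’s checks. A minimal sketch using the open-source PyMatching library (our illustration; the toolkit lets users bring their own decoder):

```python
# Graph-based decoding of a 3-qubit repetition code with PyMatching.
# Rows of H are the parity checks Z0Z1 and Z1Z2; columns are data qubits.
import numpy as np
import pymatching

H = np.array([[1, 1, 0],
              [0, 1, 1]])
matching = pymatching.Matching(H)

syndrome = np.array([1, 0])        # check Z0Z1 fired, Z1Z2 stayed quiet
correction = matching.decode(syndrome)
print(correction)                  # [1 0 0]: flip qubit 0
```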

Algorithmic decoding is only half of the puzzle though – the other key ingredient is being able to decode syndromes and correct errors in real time. For universal, fully fault-tolerant computing, real-time decoding is necessary: one can’t just push all corrections to the end of the computation because the errors will propagate and overwhelm the computation. Errors must be corrected as the computation proceeds.

In real-time algorithmic decoding, the syndrome is sent to a classical computer while the qubits are held in stasis, and the computed correction is applied before the computation proceeds. Alternatively, the correction can be computed in parallel while the computation proceeds, then retrieved when needed. This flexibility in implementation allows for maximum freedom in algorithmic design.
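Schematically, the loop looks like the following toy simulation. Everything here is classical and the function names are ours, not the product API; in the real stack the decode step would run as a Wasm function between syndrome extraction and the conditional correction:

```python
import random

state = [0, 0, 0]                   # bit-flip error frame of a 3-qubit repetition code

def measure_syndrome():
    if random.random() < 0.3:       # inject at most one bit flip per round
        state[random.randrange(3)] ^= 1
    return (state[0] ^ state[1], state[1] ^ state[2])   # Z0Z1, Z1Z2 parities

def decode(syndrome):               # lookup decoder: which qubit to flip, if any
    return {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]

def apply_correction(q):
    state[q] ^= 1                   # conditional X before the computation proceeds

for _ in range(5):                  # five rounds of decode-and-correct
    flip = decode(measure_syndrome())
    if flip is not None:
        apply_correction(flip)
    assert state == [0, 0, 0]       # each error is removed before it can spread
```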

Our real-time co-compute capability combined with our industry-leading coherence times (up to 10,000x longer than competitors) is what allows us to be the first to release this capacity to our customers. Our long coherence times also enable our users to benefit from more complex decoders that offer superior results.

Our QEC toolkit is fully flexible and will work with any QEC code - allowing our customers to build their own decoders and explore the results. It also enables users to better understand what fault-tolerant computing on actual hardware is like and improve on ideas developed via theory and simulation. This means building better decoders for the real world.

Our toolkit includes three use cases, along with the source code needed to test and compile the Wasm binaries (a schematic sketch of the first use case follows the list). These use cases are:

- Repeat Until Success: Conditionally adding quantum operations to a circuit based on equality comparisons with an in-memory Wasm variable.

- Repetition Code: a [[3,1,2]] code that encodes 1 logical qubit into 3 physical qubits, with a code distance of 2.

- Surface Code: a [[9,1,3]] code that encodes 1 logical qubit into 9 physical qubits, with a code distance of 3.
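To give a feel for the first use case, here is a schematic Repeat-Until-Success loop, simulated classically. The random coin stands in for an ancilla measurement, and the comparison against a classical variable is what, in the toolkit, would be an equality check against an in-memory Wasm variable:

```python
import random

target = 1                          # classical variable held in (Wasm) memory
attempts = 0
while True:
    attempts += 1
    outcome = random.randint(0, 1)  # stand-in for a mid-circuit ancilla measurement
    if outcome == target:           # equality comparison drives the control flow
        break                       # success: the conditioned operations are applied
print(f"succeeded after {attempts} attempt(s)")
```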

This is just the beginning of our promise to deliver universal, fault-tolerant quantum computing by the end of the decade. We are proud to be the only company offering advanced capabilities like this to our customers, and to be leading the way towards practical QEC.  

November 4, 2024
Establishing Trust

For a novel technology to be successful, it must prove that it is both useful and works as described.

Checking that our computers “work as described” is called benchmarking and verification by experts. We are proud to be leaders in this field, with the most benchmarked quantum processors in the world and our own team of experts leading the field in benchmarking and verification. We also work with National Laboratories in various countries to develop new benchmarking techniques and standards.

Currently, a lot of verification (i.e. checking that you got the right answer) is done by classical computers – most quantum processors can still be simulated by a classical computer. As we move towards quantum processors that are hard (or impossible) to simulate, this introduces a problem: how can we keep checking that our technology is working correctly without simulating it?
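While it still can be done, verification by simulation is conceptually simple. Here is a toy sketch: exactly simulate a two-qubit Bell circuit, then compare the ideal distribution against device counts (the counts below are made up for illustration) using total variation distance:

```python
import numpy as np

# Ideal statevector simulation of a Bell circuit: H on qubit 0, then CNOT.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                     # |00>
state = CX @ np.kron(H, np.eye(2)) @ state
ideal = np.abs(state) ** 2                         # {00: 0.5, 11: 0.5}

counts = {"00": 498, "01": 3, "10": 4, "11": 495}  # hypothetical device data
shots = sum(counts.values())
observed = np.array([counts.get(b, 0) / shots for b in ("00", "01", "10", "11")])

tvd = 0.5 * np.abs(ideal - observed).sum()
print(f"total variation distance: {tvd:.4f}")      # small value: device matches theory
```

This works only while the ideal distribution is computable; once the circuit cannot be simulated, protocols like the one described below are needed.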

We recently partnered with the UK’s Quantum Software Lab to develop a novel and scalable verification and benchmarking protocol that will help us as we make the transition to quantum processors that cannot be simulated.

This new protocol requires neither classical simulation nor the transfer of a qubit between two parties. The team’s “on-chip” verification protocol eliminates the need for a physically separated verifier and makes no assumptions about the processor’s noise. To top it all off, this new protocol is qubit-efficient.

The team’s protocol is application-agnostic, benefiting all users. Further, the protocol is optimized to our QCCD hardware, meaning that we have a path towards verified quantum advantage – as we compute more things that cannot be classically simulated, we will be able to check that what we are doing is right.

Running the protocol on Quantinuum System Model H1, the team performed the largest verified Measurement Based Quantum Computing (MBQC) circuit to date – significantly larger than any computation verified before – enabled by System Model H1’s low cross-talk gate zones, mid-circuit measurement and reset, and long coherence times. This result reaffirms Quantinuum’s systems as best-in-class.

October 31, 2024
We’re working on bringing the power of quantum computing – and quantum machine learning – to particle physics

Particle accelerators like the LHC take serious computing power. Often on the bleeding edge of computing technology, accelerator projects sometimes even drive innovations in computing. In fact, while there is some controversy over exactly where the World Wide Web was created, it is often attributed to Tim Berners-Lee at CERN, who developed it to meet the demand for automated information-sharing between scientists in universities and institutes around the world.

With annual data generated by accelerators in excess of exabytes (a billion gigabytes), tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the High Energy Physics community is interested in quantum computing, which offers real solutions to some of their hardest problems. Furthermore, the HEP community is well-positioned to support the early stages of technological development: with budgets in the tens of billions of dollars per year and tens of thousands of scientists and engineers working on accelerator and computational physics, this is a ripe industry for quantum computing to tap.

As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable”

The authors go on to state: “Within the existing data reconstruction and analysis paradigm, access to algorithms that exhibit quantum speed-ups would revolutionize the simulation of large-scale quantum systems and the processing of data from complex experimental set-ups. This would enable a new generation of precision measurements to probe deeper into the nature of the universe. Existing measurements may contain the signatures of underlying quantum correlations or other sources of new physics that are inaccessible to classical analysis techniques. Quantum algorithms that leverage these properties could potentially extract more information from a given dataset than classical algorithms.”

Our scientists have been working with a team at DESY, one of the world’s leading accelerator centers, to bring the power of quantum computing to particle physics. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center for fundamental science located in Hamburg and Zeuthen, where the Center for Quantum Technologies and Applications (CQTA) is based.  DESY operates, develops, and constructs particle accelerators used to investigate the structure, dynamics and function of matter, and conducts a broad spectrum of interdisciplinary scientific research. DESY employs about 3,000 staff members from more than 60 nations, and is part of the worldwide computer network to store and analyze the enormous flood of data that is produced by the LHC in Geneva.

In a recent paper, our scientists collaborated with scientists from DESY, the Leiden Institute of Advanced Computer Science (LIACS), and Northeastern University to explore using a generative quantum machine learning model, called a “quantum Boltzmann machine”, to untangle data from CERN’s LHC.

The goal was to learn probability distributions relevant to high energy physics better than the corresponding classical models can. The data specifically contains “particle jet events”, records of the sprays of subatomic particles generated during collider experiments.

In some cases the quantum Boltzmann machine was indeed better than a classical Boltzmann machine. The team analyzed when and why this happens, building a better understanding of how to apply these new quantum tools in this research setting. The team also studied the effect of encoding the data into a quantum state, noting that it can have a decisive effect on training performance. Especially enticing is that the quantum Boltzmann machine is efficiently trainable, which our scientists showed in a recent paper published in Nature Communications Physics.
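For intuition about what a Boltzmann machine learns, here is a minimal, fully classical sketch: a fully visible Boltzmann machine over a handful of binary units, trained by exact gradient descent on the negative log-likelihood (only feasible because the state space is tiny). The quantum version replaces the energy function with a Hamiltonian whose terms need not commute; the dataset here is synthetic, not LHC data:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4
data = rng.integers(0, 2, size=(200, n))       # synthetic stand-in dataset

# All 2^n binary states, enumerated once so probabilities can be exact.
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

W = np.zeros((n, n))                           # symmetric couplings
b = np.zeros(n)                                # biases

def model_probs(W, b):
    # E(v) = -0.5 v^T W v - b.v ; p(v) is proportional to exp(-E(v))
    energies = -0.5 * np.einsum("si,ij,sj->s", states, W, states) - states @ b
    p = np.exp(-energies)
    return p / p.sum()

for _ in range(200):
    p = model_probs(W, b)
    model_corr = np.einsum("s,si,sj->ij", p, states, states)
    data_corr = data.T @ data / len(data)
    grad_W = model_corr - data_corr            # <v_i v_j>_model - <v_i v_j>_data
    grad_b = p @ states - data.mean(axis=0)
    W -= 0.1 * (grad_W + grad_W.T) / 2         # keep couplings symmetric
    b -= 0.1 * grad_b

print("trained; model distribution now approximates the data distribution")
```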

October 28, 2024
SC24: The International Conference for High Performance Computing, Networking, Storage, and Analysis

Find the Quantinuum team at this year’s SC24 conference from November 17th – 22nd in Atlanta, Georgia. Meet our team at Booth #4351 to discover how Quantinuum is bridging the gap between quantum computing and high-performance compute with leading industry partners.

Schedule time to meet with us

The Quantinuum team will be participating in various events, panels and poster sessions to showcase our quantum computing technologies. Join us at the below sessions: 

Monday, Nov 18, 8:00 - 8:25pm EST

Panel: KAUST booth 1031

Nash Palaniswamy, Quantinuum’s CCO, will join fellow quantum vendors and KAUST partners for the "Quantum First" panel to discuss advancements in quantum technology.

Monday, Nov 18, 9:00 - 11:59pm EST

Beowulf Bash: World of Coca-Cola

This year, we are proudly sponsoring the Beowulf Bash, a unique event organized to bring the HPC community together for a night of entertainment!

Tuesday, Nov 19, 2:40 - 3:00pm EST

Presentation: Accelerating Hybrid Quantum-Classical Computing with Microsoft & Quantinuum

Josh Savory, Director of Cloud & Hardware Offerings, and Simon McAdams, Chemistry Product Lead, will showcase Quantinuum and Microsoft's latest breakthroughs, including the creation of the most reliable logical qubits on record and a comprehensive and unique hybrid workflow designed to tackle real chemistry problems, seamlessly integrating cloud HPC, AI, and quantum computing.

Wednesday, Nov 20, 3:30 – 5:00pm EST

Panel: Educating for a Hybrid Future: Bridging the Gap between High-Performance and Quantum Computing

Vincent Anandraj, Quantinuum’s Director of Global Ecosystem and Strategic Alliances, will moderate this panel which brings together experts from leading supercomputing centers and the quantum computing industry, including PSC, Leibniz Supercomputing Centre, IQM Quantum Computers, NVIDIA, and National Research Foundation.

Thursday, Nov 21, 11:00 – 11:30am EST 

Presentation: Realizing Quantum Kernel Models at Scale with Matrix Product State Simulation

Pablo Andres-Martinez, Research Scientist at Quantinuum, will present research done in collaboration with HSBC, where the team applied quantum methods to fraud detection.

September 20, 2024
Quantinuum achieves moonshot years ahead of schedule, demonstrating fault-tolerant high-fidelity teleportation of a logical qubit

While it sounds like a gadget from Star Trek, teleportation is real – and it is happening at Quantinuum. In a new paper published in Science, our researchers moved a quantum state from one place to another without physically moving it through space - and they accomplished this feat with fault-tolerance and excellent fidelity. This is an important milestone for the whole quantum computing community and the latest example of Quantinuum achieving critical milestones years ahead of expectations. 

While it seems exotic, teleportation is a critical piece of technology needed for full scale fault-tolerant quantum computing, and it is used widely in algorithm and architecture design. In addition to being essential on its own, teleportation has historically been used to demonstrate a high level of system maturity. The protocol requires multiple qubits, high-fidelity state-preparation, single-qubit operations, entangling operations, mid-circuit measurement, and conditional operations, making it an excellent system-level benchmark.

Our team was motivated to do this work by the US Government’s Intelligence Advanced Research Projects Activity (IARPA), which set a challenge to perform high-fidelity teleportation with the goal of advancing the state of science in universal fault-tolerant quantum computing. IARPA further specified that the entanglement and teleportation protocols must also maintain fault-tolerance, a key property that keeps errors local and correctable.

These ambitious goals required developing highly complex systems, protocols, and other infrastructure to enable exquisite control and operation of quantum-mechanical hardware. We are proud to have accomplished these goals ahead of schedule, demonstrating the flexibility, performance, and power of Quantinuum’s Quantum Charge Coupled Device (QCCD) architecture.

Quantinuum’s demonstration marks the first time that an arbitrary quantum state has been teleported at the logical level (using a quantum error correcting code). This means that instead of teleporting the quantum state of a single physical qubit we have teleported the quantum information encoded in an entangled set of physical qubits, known as a logical qubit. In other words, the collective state of a bunch of qubits is teleported from one set of physical qubits to another set of physical qubits. This is, in a sense, a lot closer to what you see in Star Trek – they teleport the state of a big collection of atoms at once. Except for the small detail of coming up with a pile of matter with which to reconstruct a human body...
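For readers who want to see the bare protocol, here is a brute-force statevector check of physical-level teleportation in plain numpy. This is a pedagogical sketch only: the experiment in the paper runs the protocol on logical qubits, encoded in an error-correcting code, with real-time decoding in the loop:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def op(gate, q):
    """Lift a single-qubit gate to the 3-qubit space (qubit 0 = leftmost)."""
    mats = [I2, I2, I2]
    mats[q] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def cx(control, target):
    """3-qubit CNOT built from projectors on the control qubit."""
    P0, P1 = np.diag([1, 0]), np.diag([0, 1])
    m0, m1 = [I2, I2, I2], [I2, I2, I2]
    m0[control], m1[control], m1[target] = P0, P1, X
    return (np.kron(np.kron(m0[0], m0[1]), m0[2]) +
            np.kron(np.kron(m1[0], m1[1]), m1[2]))

psi = np.array([0.6, 0.8j])               # arbitrary state to teleport, on qubit 0
state = np.kron(psi, [1, 0, 0, 0])        # qubits 1 and 2 start in |00>
state = cx(1, 2) @ op(H, 1) @ state       # Bell pair on qubits 1 and 2
state = op(H, 0) @ cx(0, 1) @ state       # rotate qubits 0,1 into the Bell basis

for m0 in (0, 1):                         # enumerate all measurement outcomes
    for m1 in (0, 1):
        base = 4 * m0 + 2 * m1            # basis index of |m0 m1 0>
        block = state[base:base + 2]      # qubit-2 amplitudes for this outcome
        fix = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
        out = fix @ block
        out = out / np.linalg.norm(out)
        assert np.allclose(out, psi)      # qubit 2 now holds the input state
print("teleportation verified for all four outcomes")
```

Enumerating all four measurement outcomes shows that, after the conditional X and Z corrections, the target qubit always holds the input state, which is exactly why mid-circuit measurement and conditional operations are prerequisites for teleportation.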

This is also the first demonstration of a fully fault-tolerant version of the state teleportation circuit using real-time quantum error correction (QEC): decoding mid-circuit syndrome measurements and implementing corrections during the protocol. It is critical for computers to be able to catch and correct any errors that happen along the way, and this is not something other groups have managed to do in any robust sense. In addition, our team achieved the result with high fidelity (97.5%±0.2%), providing a powerful demonstration of the quality of our H2 quantum processor, Powered by Honeywell.

Our team also tried several variations of logical teleportation circuits, using both transversal gates and lattice surgery protocols, thanks to the flexibility of our QCCD architecture. This marks the first demonstration of lattice surgery performed on a QEC code.

Lattice surgery is a strategy for implementing logical gates that requires only 2D nearest-neighbor interactions, making it especially useful for architectures whose qubit locations are fixed, such as superconducting architectures. QCCD and other technologies that do not have fixed qubit positioning might employ this method, another method, or some mixture. We are fortunate that our QCCD architecture allows us to explore the use of different logical gating options so that we can optimize our choices for experimental realities.

While the teleportation demonstration is the big result, sometimes it is the behind-the-scenes technology advancements that make the big differences. The experiments in this paper were designed at the logical level using an internally developed logical-level programming language dubbed Simple Logical Representation (SLR). This is yet another marker of our system’s maturity – we are no longer programming at the physical level but have instead moved up one “layer of abstraction”. Someday, all quantum algorithms will need to be run at the logical level with rounds of quantum error correction. This is markedly different from most present experiments, which are run at the physical level without quantum error correction. It is also worth noting that these results were generated using the software stack available to any user of Quantinuum’s H-Series quantum computers, and these experiments were run alongside customer jobs – underlining that these results reflect commercial performance, not hero data from a bespoke system.

Ironically, a key element in this work is our ability to move our qubits through space the “normal” way - this capacity gives us all-to-all connectivity, which was essential for some of the QEC protocols used in the complex task of fault-tolerant logical teleportation. We recently demonstrated solutions to the sorting problem and wiring problem in a new 2D grid trap, which will be essential as we scale up our devices.

September 18, 2024
“Talking quantum circuits”
The central question that preoccupies our team has been:

“How can quantum structures and quantum computers contribute to the effectiveness of AI?”

In previous work we have made notable advances in answering this question, and this article is based on our most recent work in the new papers [arXiv:2406.17583, arXiv:2408.06061], and most notably the experiment in [arXiv:2409.08777].

This article is one of a series that we will be publishing alongside further advances – advances that are accelerated by access to the most powerful quantum computers available.

Large Language Models (LLMs) such as ChatGPT are having an impact on society across many walks of life. However, as users have become more familiar with this new technology, they have also become increasingly aware of deep-seated and systemic problems that come with AI systems built around LLMs.

The primary problem with LLMs is that nobody knows how they work – as inscrutable “black boxes” they aren’t “interpretable”, meaning we can’t reliably or efficiently control or predict their behavior. This is unacceptable in many situations. In addition, modern LLMs are incredibly expensive to build and run, costing serious – and potentially unsustainable – amounts of power to train and use. This is why more and more organizations, governments, and regulators are insisting on solutions.

But how can we find these solutions, when we don’t fully understand what we are dealing with now?¹

At Quantinuum, we have been working on natural language processing (NLP) using quantum computers for some time now. We are excited to have recently carried out experiments [arXiv:2409.08777] which demonstrate not only how it is possible to train a model for a quantum computer in a scalable manner, but also how to do this in a way that is interpretable for us. Moreover, we have promising theoretical indications of the usefulness of quantum computers for interpretable NLP [arXiv:2408.06061].

In order to better understand why this could be the case, one needs to understand the ways in which meanings compose together throughout a story or narrative. Our work towards capturing them in a new model of language, which we call DisCoCirc, is reported on extensively in this previous blog post from 2023.

In the new work referred to in this article, we embrace “compositional interpretability” as proposed in [arXiv:2406.17583] as a solution to the problems that plague current AI. In brief, compositional interpretability boils down to being able to assign a human-friendly meaning, such as natural language, to the components of a model, and then being able to understand how they fit together².

A problem currently inherent to quantum machine learning is training at scale. We avoid this by making use of “compositional generalization”: we train small, on classical computers, and then at test time evaluate much larger examples on a quantum computer. There now exist quantum computers which are impossible to simulate classically, and to train models for such computers, compositional generalization currently seems to provide the only credible path.

1. Text as circuits

DisCoCirc is a circuit-based model for natural language that turns arbitrary text into “text circuits” [arXiv:1904.03478, arXiv:2301.10595, arXiv:2311.17892]. Turning text into text circuits converts lines of text, which live in one dimension, into circuits that live in two dimensions: the entities of the text along one axis, and events in time along the other.

To see how that works, consider the following story. In the beginning there is Alex and Beau. Alex meets Beau. Later, Chris shows up, and Beau marries Chris. Alex then kicks Beau.

The content of this story can be represented as the following circuit:

Figure 1. A text circuit for a simple story, involving three actors Alex, Beau and Chris, who have a number of interactions with one another, making up a story – the circuit is to be read from top to bottom.
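As a purely illustrative data-structure view (our own toy encoding, not the DisCoCirc implementation), the story above can be held as one wire per entity plus a time-ordered list of gates for events:

```python
story_entities = ["Alex", "Beau", "Chris"]     # one wire per entity

# Each event is a "gate" on the wires of the entities it involves,
# listed in narrative order and read top to bottom like the circuit.
text_circuit = [
    ("meets",   ["Alex", "Beau"]),
    ("marries", ["Beau", "Chris"]),
    ("kicks",   ["Alex", "Beau"]),
]

for word, actors in text_circuit:
    wires = [story_entities.index(a) for a in actors]
    print(f"gate '{word}' acts on wires {wires}")
```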
2. From text circuits to quantum circuits

Such a text circuit represents how the ‘actors’ in it interact with each other, and how their states evolve as they do so. Initially, we know nothing about Alex and Beau. Once Alex meets Beau, we know something about their interaction; after Beau marries Chris and Alex then kicks Beau, we know quite a bit more about all three, and in particular how they relate to each other.

Let’s now take those circuits to be quantum circuits.

In the last section we will elaborate on why this could be a very good choice. For now, it suffices to say that we follow the current paradigm of using vectors for meanings, in exactly the same way that this works in LLMs. Moreover, if we then also want to faithfully represent the compositional structure in language³, we can rely on theorem 5.49 from our book Picturing Quantum Processes, which informally can be stated as follows:

If the manner in which meanings of words (represented by vectors) compose obeys linguistic structure, then those vectors compose in exactly the same way as quantum systems compose.⁴
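In symbols (our notation, for illustration only): the joint meaning of two actors lives in the tensor product of their spaces, and an event such as “meets” acts as a map on that joint space, just as a quantum gate acts on a joint quantum system:

```latex
% Meanings compose by the tensor product; an event acts on the joint space.
\[
  |\psi_{\mathrm{Alex}}\rangle \otimes |\psi_{\mathrm{Beau}}\rangle
  \;\in\; \mathcal{H}_{A} \otimes \mathcal{H}_{B},
  \qquad
  U_{\mathrm{meets}} :\,
  \mathcal{H}_{A} \otimes \mathcal{H}_{B} \to \mathcal{H}_{A} \otimes \mathcal{H}_{B}.
\]
```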

In short, a quantum implementation enables us to embrace compositional interpretability, as defined in our recent paper [arXiv:2406.17583].

3. Text circuits on our quantum computer

So, what have we done? And what does it mean?

We implemented a “question-answering” experiment on our Quantinuum quantum computers, for text circuits as described above. We know from our new paper [arXiv:2408.06061] that this is very hard to do on a classical computer: as the texts get bigger, the task quickly becomes unrealistic for any classical computer, however powerful it might be. This is worth emphasizing: the experiment we completed would scale exponentially using classical computers, to the point where the approach becomes intractable.

The experiment consisted of teaching (or training) the quantum computer to answer a question about a story, where both the story and question are presented as text circuits. To test our model, we created longer stories in the same style as those used in training and questioned these. In our experiment, our stories were about people moving around, and we questioned the quantum computer about who was moving in the same direction at the end of the stories. A harder alternative one could imagine would be a murder mystery story, asking the computer who the murderer was.

And remember: the training in our experiment consists of assigning quantum states and gates to the words that occur in the text.
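Concretely (a hedged sketch, with an ansatz of our own choosing rather than the parameterization used in the paper), “assigning gates to words” means each word carries a trainable parameter vector that fixes the angles of its gate:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["meets", "marries", "kicks"]
params = {w: rng.uniform(0, 2 * np.pi, size=3) for w in vocab}  # learned in training

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def word_gate(word):
    a, b, c = params[word]          # three angles per word (our toy ansatz)
    return rz(c) @ ry(b) @ rz(a)    # a single-wire slice of the word's gate

print(np.round(word_gate("meets"), 3))
```

Training adjusts these per-word parameters so the circuit answers the training questions correctly; at test time the same word parameters are reused in circuits for longer, unseen stories, which is what makes compositional generalization possible.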

Figure 2. The question-answering task for the language of text circuits as implementable on a quantum computer from [arXiv:2409.08777]. Above the dotted line is the text we consider. Below are upside-down text circuits which constitute the question we ask. The boxes with words are parameterized as quantum gates. The diagram on the left constitutes one possible answer to the question, and the one on the right the other. Can you figure out what the text is and what the questions are?
4. Compositional generalization

The major reason for our excitement is that the training of our circuits enjoys compositional generalization. That is, we can do the training on small-scale ordinary computers, and do the testing – asking the important questions – on quantum computers that can operate in ways not possible classically. Figure 3 shows how, despite only being trained on stories with up to 8 actors, the test accuracy remains high, even for much longer stories involving up to 30 actors.

Training large circuits directly in quantum machine learning leads to difficulties which in many cases undo the potential advantage. Critically, compositional generalization allows us to bypass these issues.

Figure 3. A simplified plot from [arXiv:2409.08777] showing that increasing the sizes of circuits when testing doesn’t affect the accuracy, after training small-scale on ordinary computers. The number of actors correlates with the text size. H1-1 is the name of the Quantinuum quantum computer that was used.
5. Real-world comparison: ChatGPT

We can compare the results of our experiment on a quantum computer to the performance of a classical LLM, ChatGPT (GPT-4), when asked the same questions.

What we are considering here is a story about a collection of characters that walk in a number of different directions, and sometimes follow each other. These are just some initial test examples, but it does show that this kind of reasoning is not particularly easy for LLMs.

[Screenshots in the original post show the story and instructions given to ChatGPT as input, followed by ChatGPT’s response.]

Can you see where ChatGPT went wrong?

ChatGPT’s score (in terms of accuracy) oscillated around 50% (equivalent to random guessing). Our text circuits consistently outperformed ChatGPT on these tasks. Future work in this area would involve looking at prompt engineering – for example how the phrasing of the instructions can affect the output, and therefore the overall score.

Of course, we note that ChatGPT and other LLMs will issue new versions that may or may not be marginally better at ‘question-answering’ tasks, and we also note that our own work may become far more effective as quantum computers rapidly become more powerful.

6. What’s next?

We have now turned our attention to work that will show that using vectors to represent meaning and requiring compositional interpretability for natural language takes us mathematically natively into the quantum formalism. This does not mean that there doesn't exist an efficient classical method for solving specific tasks, and it may be hard to prove traditional hardness results whenever there is some machine learning involved. This could be something we might have to come to terms with, just as in classical machine learning.

At Quantinuum we possess the most powerful quantum computers currently available. Our recently published roadmap will deliver more computationally powerful quantum computers in the short and medium term, as we extend our lead and push towards universal, fault-tolerant quantum computers by the end of the decade. We expect to show even better (and larger scale) results when implementing our work on those machines. In short, we foresee a period of rapid innovation as powerful quantum computers that cannot be classically simulated become more readily available. This will likely be disruptive, as more and more use cases, including ones that we might not be currently thinking about, come into play.

Interestingly, we are also pioneering the use of powerful quantum computers in a hybrid system that has been described as a ‘quantum supercomputer’, in which quantum computers, HPC, and AI work together in an integrated fashion. We look forward to using these systems to advance our work in language processing and to help solve the problems with LLMs that we highlighted at the start of this article.

¹ And where do we go next, when we don’t even understand what we are dealing with now? On previous occasions in the history of science and technology, when efficient models without a clear interpretation have been developed, such as the Babylonian lunar theory or Ptolemy’s model of epicycles, these initially highly successful technologies vanished, making way for something else.

² Note that our conception of compositionality is more general than the usual one adopted in linguistics, which is due to Frege. A discussion can be found in [arXiv:2110.05327].

³ For example, using pregroups here as linguistic structure, which are the cups and caps of PQP.

⁴ That is, using the tensor product of the corresponding vector spaces.