Blog

Discover how we are pushing the boundaries in the world of quantum computing

partnership
January 30, 2024
Keyfactor and Quantinuum Announce Integration to Help Organizations Further Post-Quantum Readiness

Keyfactor, the identity-first security solution for modern enterprises, and Quantinuum, the world’s largest integrated quantum computing company, have partnered to strengthen the root of trust, a critical component in reliable public key infrastructures (PKIs) and code signing.

This integration is an important first step in a journey to protect Keyfactor’s users against multiple present-day and future cybersecurity risks, including the growing threat to encrypted communications posed by the potential misuse of rapidly advancing quantum computing technology.

Certainty About Key Quality

With bad actors on the rise, organizations face increasingly sophisticated attacks. In the future, misuse of quantum computing will be another threat that may compromise data. More than ever, data and communications rely on robust systems and processes for their protection and accuracy. Digital certificates and PKI remain strong options for protecting machine-to-machine communications against attack.

Regardless of whether post-quantum or classical PKI algorithms are in use, the first step in the production of strong certificates is the generation of good-quality entropy, the random data used for the private keys. Traditionally, this has relied on noise derived from sources such as network and memory latency, as well as hardware assistance where the underlying system is able to provide it. Unfortunately, these approaches cannot guarantee the quality of the entropy, which leaves the strength of certificates against sophisticated attacks in doubt.
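
To make that dependency concrete, here is a minimal Python sketch of how key material derives directly from an entropy source. The `read_entropy` helper is hypothetical, a placeholder for whatever entropy feed the system uses (the OS CSPRNG stands in here); this is an illustration of the general pattern, not Keyfactor's or Quantinuum's implementation.

```python
# A minimal sketch of the dependency between entropy quality and key quality.
# `read_entropy` is a hypothetical placeholder for an external, verified
# entropy source; here we fall back to the OS CSPRNG (os.urandom), which is
# exactly the kind of source whose quality cannot be independently proven.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def read_entropy(n_bytes: int) -> bytes:
    """Stand-in for an external entropy feed."""
    return os.urandom(n_bytes)  # replace with a verified quantum entropy source

# An Ed25519 private key is nothing more than 32 bytes of entropy:
# if those bytes are predictable, so is the key.
seed = read_entropy(32)
private_key = Ed25519PrivateKey.from_private_bytes(seed)
public_key = private_key.public_key()
```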

Verified quantum entropy sources solve this problem, using the laws of quantum physics to prove a near-perfect level of randomness in the entropy produced. With a high-quality entropy source, users can be confident that the keys they are using reflect the same level of quality and were not compromised during generation.

The Groundwork for Quantum Safety

To ensure high-quality keys, Keyfactor now offers a PKI platform that integrates with Quantum Origin, the world’s only verified source of quantum entropy.

Using verified quantum entropy assures the quality of keys used to provide the root of trust, both now for classical cryptography and in the future as post-quantum cryptographic algorithms also become more widely deployed.

“Quantum-readiness hinges on an organization’s knowledge of its cryptography and ability to defend itself against advanced threats. In this new era of cybersecurity, leaders are feeling a heightened sense of urgency to implement solutions that will secure digital interactions and communications before quantum computing becomes a reality,” said Joe Tong, Senior Vice President of Global Channel Sales, Keyfactor. “Keyfactor’s partnership with Quantinuum, together with our existing collection of post-quantum algorithm implementations, will be able to provide customers with trust-based solutions that are hardened both with quantum technology and the latest post-quantum cryptographic research. Together with Quantinuum, we are building strong cybersecurity foundations for the future, one step at a time.”

“The security and integrity of digital communications and transactions depends on the strength of digital certificates. By integrating Quantum Origin, Keyfactor’s customers can now leverage the world’s only source of verified quantum entropy to strengthen certificate generation,” said Duncan Jones, Head of Cybersecurity at Quantinuum.

technical
January 19, 2024
Differentiation of Optical Circuits

Quantum computing is a young, dynamic field – so young that the community is still exploring multiple different “architectures” for quantum computers. The computer “architecture” can roughly be described as what the computer is made out of – in other words, is it made out of superconductors or semiconductors? Are the qubits made from ions, superconducting “squids”, atoms, or even particles of light? We call these different physical realizations the “architecture” or “modality”.

Exploring the pros and cons of all the different modalities is an important part of current quantum computing research. Although our hardware is trapped-ion based, Quantinuum is committed to the wider community, and we often work in partnership with researchers exploring alternative modalities. This work allows us both to develop quantum technologies outside our own architecture and to improve our hardware-agnostic software.

Recently, our Oxford team has made big strides in our understanding of “photonic”, or light-based, quantum computing. First, they developed a string-diagram formalism for describing linear and nonlinear optics. Then, they applied their formalism to solve outstanding problems in the field. 

The graphical approach made some problems much easier to solve than they would have been with standard linear-algebra techniques, in part because the circuits being described have a two-dimensional structure, just like the string diagrams themselves. By creating a diagrammatic representation of the circuits, the researchers can more easily compute quantities such as how the circuit changes when a parameter is adjusted.

In their most recent paper, the team figured out how to take the derivative of (or “differentiate”) linear optical circuits, which means they can now determine how a circuit will change when certain parameters are adjusted. Differentiation is central to a whole class of algorithms, including optimization algorithms and any algorithm making use of “gradient descent”, a key component of many machine learning and AI techniques, which makes the team’s results incredibly useful. This work will form the basis for an upcoming software platform for photonic quantum computing.
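
As a rough illustration of what differentiating an optical circuit means (a toy finite-difference check, not the team's string-diagram method), consider a single beamsplitter with a tunable angle acting on one photon:

```python
# Toy illustration: differentiating a one-parameter linear optical circuit.
# A beamsplitter with angle theta acts on two modes; a single photon enters
# mode 0, and we differentiate the probability of detecting it in mode 0
# with respect to theta.
import numpy as np

def beamsplitter(theta: float) -> np.ndarray:
    """2x2 unitary acting on the two optical modes."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def detection_prob(theta: float) -> float:
    """Probability that a photon injected into mode 0 exits in mode 0."""
    amp = beamsplitter(theta) @ np.array([1.0, 0.0])
    return abs(amp[0]) ** 2          # = cos(theta)**2

theta = 0.7
analytic = -np.sin(2 * theta)        # d/dtheta of cos(theta)**2
eps = 1e-6
numeric = (detection_prob(theta + eps) - detection_prob(theta - eps)) / (2 * eps)
print(analytic, numeric)             # the two values agree closely
```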

In addition, this graphical approach to describing optical circuits is particularly advantageous for reasoning about multiple particles and composite quantum systems, as one must when studying fault tolerance in quantum computing. While graphical languages are fairly new in the photonics sphere, they already seem to offer an insightful new perspective. The current results open the door to “variational” approaches, which are used to solve problems such as combinatorial graph problems or problems in quantum chemistry.

technical
January 8, 2024
Protecting Expressive Circuits with a Quantum Error Detection Code

Detecting and correcting errors has become a critical area of development in quantum computing, a key that will unlock results that put quantum computers in a different league from their classical counterparts.

Researchers are working on ways to handle errors so that the hardware we will have in the coming months will be capable of performing useful tasks that are intractable for any classical computer — in other words, to achieve “quantum advantage”. 

The full monty, known as “large-scale fault-tolerant quantum error correction”, remains an open challenge in the quantum computing landscape, placing incredibly demanding constraints on the hardware. A promising start is to implement error detection instead of full error correction. In this approach, the system regularly checks for errors and, if one is detected, throws out the computation and restarts.
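
A toy Monte Carlo sketch of this detect-and-discard pattern shows the trade-off: post-selecting on "no error flag" lowers the residual error rate of the retained shots at the cost of throughput. The error rate and detection efficiency below are assumed for illustration, not figures from the paper.

```python
# Toy Monte Carlo of detect-and-discard error handling (not the Iceberg code
# itself). Each shot suffers an error with probability P_ERR; a detected error
# (caught with assumed efficiency P_DETECT) raises a flag and the shot is
# discarded. Post-selection lowers the error rate of the shots we keep.
import random

P_ERR, P_DETECT, SHOTS = 0.05, 0.9, 100_000
kept, kept_with_error = 0, 0
for _ in range(SHOTS):
    error = random.random() < P_ERR
    flagged = error and random.random() < P_DETECT
    if flagged:
        continue                      # discard and restart the computation
    kept += 1
    kept_with_error += error
print(f"kept {kept / SHOTS:.1%} of shots; "
      f"residual error rate {kept_with_error / kept:.3%}")  # ~0.5% vs 5% raw
```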

The team at Quantinuum realized that just such a code, nicknamed the “Iceberg code”, could offer real potential for early fault tolerance if optimized to take advantage of the industry-leading components in Quantinuum’s trapped-ion quantum computers. Quantinuum’s H-Series hardware boasts mobile qubits, mid-circuit measurement, and the ability to program circuits with arbitrary-angle gates, making it ripe for new algorithm implementation and development. The team’s results, published today in Nature Physics as “Protecting expressive circuits with a quantum error detection code”, detail a code so efficient that it was able to protect much deeper and more expressive circuits than had previously been realized with quantum error correction, while making extremely efficient use of the very high-fidelity qubits and gates available in Quantinuum’s quantum charge-coupled device (QCCD) architecture.

“Our work sets the bar for what more advanced fully fault-tolerant codes need to beat on hardware,” said David Amaro, an author on the paper.

A key advantage of the Iceberg code is how efficiently it squeezes the maximum number of logical qubits out of a given set of physical qubits – it makes k logical qubits out of only k+2 physical qubits. Every logical gate is implemented by a single two-qubit physical gate, making for a fast, clean, and expressive implementation. Beyond this, the code needs only 2 more ancilla qubits for syndrome measurement, for a very small overhead of just 4 physical qubits in total. Using the original 12-qubit configuration of Quantinuum’s H1-2 computer (since increased to 20), the team could therefore realize 8 logical qubits.
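
The qubit accounting from the figures above can be captured in a couple of lines. The 20-qubit case below is extrapolated from the same formula, not a result reported in the paper.

```python
# Qubit accounting for the Iceberg code, from the counts stated above:
# k logical qubits live in k + 2 data qubits, plus 2 ancillas for syndrome
# measurement, for k + 4 physical qubits in total.
def iceberg_logical_qubits(n_physical: int) -> int:
    """Largest k such that (k + 2) data + 2 ancilla qubits fit in n_physical."""
    return n_physical - 4

assert iceberg_logical_qubits(12) == 8   # H1-2's original 12 qubits -> 8 logical
assert iceberg_logical_qubits(20) == 16  # extrapolation to the 20-qubit machine
```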

With these 8 logical qubits, the team implemented much deeper and more expressive circuits than had previously been demonstrated with quantum error correction codes. 

The team’s work is the first experimental demonstration that sophisticated quantum error detection techniques are useful to successfully protect very expressive circuits on a real quantum computer. In contrast, previous demonstrations of fully fault-tolerant codes on hardware showed protection only of basic logical gates or “primitives” (the building blocks of full algorithms). 

The Iceberg code is useful to practitioners today: it can protect near-term algorithms like the quantum approximate optimization algorithm and the variational quantum eigensolver, which are currently put to work in domains including chemical simulation, quantum machine learning, and financial optimization. In fact, a team at Quantinuum used it to protect the quantum phase estimation algorithm, a critical piece of many other quantum algorithms, deploying it in a state-of-the-art simulation of a real-world hydrogen molecule using logically-encoded qubits – a feat not yet possible on any other quantum computing hardware.

Looking forward, the team plans to push the code as far as possible to determine whether it is sufficient to protect quantum circuits capable of a quantum advantage. This will require setting up a “minimal” quantum advantage experiment, careful engineering and benchmarking of every aspect of the code, and the use of Quantinuum’s best-in-class high-fidelity gates. In parallel, the team will work to understand if and how the Iceberg code can help minimize the resource overhead of some of the most promising fully fault-tolerant codes.

technical
January 8, 2024
Sequence Processing with Quantum Tensor Networks

For the first time, Quantinuum researchers have run scalable quantum natural language processing (QNLP) models, able to parse and process real-world data, on a quantum computer. In a recent paper, the researchers define machine learning models for the task of classifying sequences – which can be anything from sentences in natural language, like movie reviews, to bioinformatic strings, like DNA sequences. Classifying sequences of symbols – letters, words, or longer fragments of text – is an obviously useful computational task, and has led to some of the decade’s biggest changes; we now see this technology in use in everything from chatbots to legal cases.  

Current classical models, which are based on neural networks, primarily look at the statistical distributions of where words sit with respect to each other; they don’t really consider the structure of language a priori (they could, but they don’t). In contrast, Quantinuum’s new quantum models, which are based on tensor networks, are scaffolded by syntactic information, making them “syntax-aware”. Considering structure and syntax from the beginning lets scientists create models with far fewer parameters that require fewer gate operations to run, while remaining interpretable thanks to the meaningful structure baked in from the start. Interpretability is one of the most pressing challenges in artificial intelligence (AI): if we don’t know why an algorithm has given an answer, we can’t trust it in critical applications, for instance in making medical decisions or in scenarios where human lives are at stake.

Both neural and tensor networks can capture complex correlations in large datasets, but they do so in fundamentally different ways. Moreover, since quantum theory is inherently described by tensor networks, using them to build quantum natural language processing models lets researchers investigate what quantum processors can bring to natural language processing specifically, and to artificial intelligence in general.

Thanks to best-in-class features like mid-circuit measurement and qubit reuse on Quantinuum’s H2-1 quantum processor, the researchers were able to fit much larger circuits than one might naively expect. For example, they ran a circuit that would normally require 64 qubits on only 11. Combined with the reduced number of gates required, these models are entirely feasible on current quantum hardware.
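
One way to see why such compression is possible: in a sequential tensor-network model, only a small “bond” register needs to stay live as the sequence is processed, so the same qubits can be measured and reused token after token. The following classical toy sketch of that sequential contraction is illustrative only, not the paper's model; the bond dimension and readout vector are assumptions.

```python
# Toy matrix-product-style sequence classifier (not the paper's model).
# Each token applies a small tensor to a running "bond" vector; because the
# contraction is sequential, only the bond register must be kept live -- the
# classical analogue of mid-circuit measurement and qubit reuse letting a
# long-sequence circuit run on a handful of qubits.
import numpy as np

rng = np.random.default_rng(0)
BOND = 4                                  # assumed bond dimension
VOCAB = {w: rng.normal(size=(BOND, BOND)) for w in ["the", "movie", "was", "great"]}
readout = rng.normal(size=BOND)           # assumed classification readout vector

def score(sentence: list[str]) -> float:
    state = np.zeros(BOND)
    state[0] = 1.0                        # fixed initial bond state
    for word in sentence:                 # one small tensor per token, in turn
        state = VOCAB[word] @ state
        state /= np.linalg.norm(state)    # keep the contraction numerically stable
    return float(readout @ state)         # sign/magnitude -> class score

print(score(["the", "movie", "was", "great"]))
```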

This paper shows that we can run, train, and deploy QNLP models on present-day quantum computers. Compared to neural-network-based classifiers, the quantum model does just as well on this task in terms of prediction accuracy. What’s more, this work encourages further exploration of quantum language models, as sampling from quantum circuits of the kind used in this work could require polynomially fewer resources than simulating them classically.