
Hardware Quantum Credits (HQCs) are how Quantinuum meters access to its quantum computers and emulators. The equation for calculating a circuit's HQC cost can be found on the Quantinuum documentation website.

The ZX-calculus is a rigorous graphical language for reasoning about linear maps between qubits, which are represented as string diagrams called ZX-diagrams. A ZX-diagram consists of a set of generators called spiders that represent specific tensors. These are connected together to form a tensor network similar to Penrose graphical notation. Due to the symmetries of the spiders and the properties of the underlying category, topologically deforming a ZX-diagram (i.e. moving the generators without changing their connections) does not affect the linear map it represents. In addition to the equalities between ZX-diagrams that are generated by topological deformations, the calculus also has a set of graphical rewrite rules for transforming diagrams into one another. The ZX-calculus is universal in the sense that any linear map between qubits can be represented as a diagram, and different sets of graphical rewrite rules are complete for different families of linear maps. ZX-diagrams can be seen as a generalization of quantum circuit notation, and they form a strict subset of tensor networks which represent general fusion categories and wavefunctions of quantum spin systems.

The wavefunction is a fundamental concept in quantum mechanics that describes the quantum state of a system. It is a mathematical function that encodes information about the probabilities of outcomes for measurements on the system.
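
As a concrete illustration (using a hypothetical single-qubit state, not one drawn from the text above), the Born rule turns a wavefunction's amplitudes into measurement probabilities:

```python
import numpy as np

# A single-qubit wavefunction as a vector of complex amplitudes.
# Assumed example state: an equal superposition of |0> and |1>.
psi = np.array([1, 1j]) / np.sqrt(2)

# Born rule: measurement probabilities are the squared magnitudes
# of the amplitudes.
probs = np.abs(psi) ** 2

print(probs)        # ≈ [0.5 0.5]
print(probs.sum())  # ≈ 1.0 (the state is normalized)
```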

The Variational Quantum Eigensolver (VQE) is a variational quantum algorithm for finding the ground-state of a quantum system. VQE is typically used for quantum chemistry applications and to solve classical optimization problems.
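
A minimal classical sketch of the variational loop, assuming a toy single-qubit Hamiltonian H = X + Z and a one-parameter ansatz (on real hardware the expectation value would be estimated from measurement shots, and a proper optimizer such as COBYLA would replace the parameter sweep):

```python
import numpy as np

# Toy VQE: minimize <psi(theta)|H|psi(theta)> for H = X + Z,
# whose exact ground-state energy is -sqrt(2).
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = X + Z

def ansatz(theta):
    # Single-parameter ansatz: Ry(theta)|0>.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi

# Crude classical optimizer: a parameter sweep standing in for
# gradient descent or COBYLA.
thetas = np.linspace(0, 2 * np.pi, 1001)
best = min(thetas, key=energy)
print(energy(best))   # ≈ -1.414 = -sqrt(2)
```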

A device or mechanism that generates random numbers from a physical process, rather than a computational algorithm. While TRNGs produce random numbers, they often lack provable randomness and have been shown to be vulnerable to various attacks and characterization. Their output quality can be significantly enhanced through quantum entropy extraction techniques like those used in Quantum Origin.

A quantum software development platform produced by Cambridge Quantum. The heart of TKET is a language-agnostic optimizing compiler designed to generate code for a variety of NISQ devices, which has several features designed to minimize the influence of device error.

A special way of performing operations, such as gates, that is particularly simple and robust to noise. Essentially, a transversal gate applies a single physical gate to each physical qubit in a logical qubit (or between corresponding physical qubits of two logical qubits). Generally, the simpler the implementation of a logical gate, the better it tends to perform.

Threshold refers to the physical error rate of a family of codes - once you get below it, the logical error rate should decrease exponentially as you scale up the size of the code. Very roughly, increasing the “size” of the code means adding more physical qubits per logical qubit, and being able to correct larger errors.
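
The scaling can be sketched numerically. The snippet below assumes an illustrative threshold of 1% and the common heuristic p_L ∝ (p/p_th)^((d+1)/2) for a distance-d code; neither number is specific to any real device or code family:

```python
# Illustrative (not device-specific) below-threshold scaling heuristic:
#   p_L ~ A * (p / p_th) ** ((d + 1) / 2)
# with prefactor A = 1 and threshold p_th = 1e-2 assumed here.
p_th = 1e-2

def logical_error_rate(p, d, A=1.0):
    return A * (p / p_th) ** ((d + 1) / 2)

# Below threshold (p = 1e-3), growing the distance d suppresses errors:
for d in (3, 5, 7):
    print(d, logical_error_rate(1e-3, d))
# Above threshold (p = 2e-2), growing d makes things worse:
for d in (3, 5, 7):
    print(d, logical_error_rate(2e-2, d))
```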

A quantum random number generator that can be integrated into existing systems without additional hardware. Quantum Origin exemplifies this approach, allowing organizations to benefit from near-perfect randomness without specialized quantum hardware installations (such as QRNG chips or PCIe boards). This facilitates easier adoption and integration into existing security infrastructures, avoiding the environmental sensitivity, size constraints, and high costs associated with hardware-based QRNGs.

Shor’s algorithm is a quantum algorithm used for finding the prime factors of an integer.
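
The quantum speedup lies entirely in the period-finding step; the surrounding classical steps are simple. The sketch below brute-forces the period classically (feasible only for tiny numbers) to show how a period yields factors:

```python
from math import gcd

# Classical post-processing of Shor's algorithm. The quantum part finds
# the period r of f(x) = a**x mod N; here we find r by brute force,
# which is exactly the step that is hard classically for large N.
def period(a, N):
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    assert gcd(a, N) == 1
    r = period(a, N)
    if r % 2 == 1:
        return None  # odd period: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None  # trivial square root: retry with another a
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical(15, 7))  # → (3, 5)
```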

An algorithm that combines a quantum seed with a classical random source to produce near-perfect random numbers. In Quantum Origin, randomness extractors use peer-reviewed, cryptographer-verified methods to create volumes of near-perfect random output from existing random number generators, without requiring hardware changes.
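
As a toy illustration only, the inner-product two-source extractor below combines two bit strings from independent sources into one output bit; the extractors actually used in Quantum Origin are far more sophisticated and come with formal security proofs:

```python
# Toy two-source extractor: inner product over GF(2). Purely to
# illustrate combining two independent weak sources of randomness.
def inner_product_extractor(x_bits, y_bits):
    # Output one bit: the parity of the bitwise AND of the two inputs.
    assert len(x_bits) == len(y_bits)
    return sum(a & b for a, b in zip(x_bits, y_bits)) % 2

seed  = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for quantum-seed bits
local = [0, 1, 1, 0, 1, 0, 1, 1]   # stand-in for classical-source bits
print(inner_product_extractor(seed, local))  # → 0
```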

In quantum physics, a quantum state is a mathematical entity that embodies the knowledge of a quantum system. Quantum mechanics specifies the construction, evolution, and measurement of a quantum state. The result is a prediction for the system represented by the state. Knowledge of the quantum state, and the rules for the system's evolution in time, exhausts all that can be known about a quantum system.

A provably random initial value generated from a quantum computer. This seed provides a source with perfect min-entropy (one full bit of min-entropy per output bit), essential for creating the near-perfect random output of Quantum Origin. The quantum seed is a mandatory component that enables the production of near-perfect randomness without requiring specialized hardware.

A device or system that generates random numbers by exploiting the inherent randomness of quantum mechanical processes. There are several types of QRNGs, including:

  • Hardware-based QRNGs: These devices measure quantum phenomena directly, such as photon arrival times, electron tunneling, or shot noise. While they can produce high-quality randomness, they may be susceptible to environmental influences and hardware imperfections.
  • Optical QRNGs: These systems use quantum optics principles, often involving beam splitters or phase randomization, to generate random numbers. They can achieve high bit rates but may require careful calibration and maintenance.
  • Chip-based QRNGs: Integrated circuits that incorporate quantum phenomena for random number generation. While compact, they may face challenges in proving the quantum nature of their output.
  • Software-deployed QRNGs: These systems, like Quantum Origin, use quantum processes to generate a seed of provable randomness, which is then used to produce high-quality random numbers through classical algorithms. This approach combines the strengths of quantum randomness with the practicality of software deployment, offering scalability and ease of integration.

Quantum phase estimation is a quantum algorithm, often used as a subroutine in other algorithms, to estimate the phase corresponding to an eigenvalue of a given unitary operator.
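
A classical sketch of the ideal output statistics, assuming m ancilla qubits and an exactly supplied eigenstate: for an eigenvalue e^(2πiφ), the measured integer k satisfies k/2^m ≈ φ. The distribution below is the textbook closed form, computed directly rather than by simulating the circuit:

```python
import numpy as np

# Idealized m-bit phase estimation statistics. For eigenvalue
# exp(2*pi*i*phi), QPE outputs k with probability |amps[k]|**2,
# where amps[k] is the amplitude after the inverse QFT on the ancillas.
def qpe_distribution(phi, m):
    M = 2 ** m
    idx = np.arange(M)
    amps = np.array([np.sum(np.exp(2j * np.pi * idx * (phi - k / M))) / M
                     for k in range(M)])
    return np.abs(amps) ** 2

phi = 0.3125          # = 5/16, exactly representable with m = 4 bits
probs = qpe_distribution(phi, 4)
print(np.argmax(probs), np.argmax(probs) / 16)   # → 5 0.3125
```

Because this φ is exactly representable in 4 bits, the distribution concentrates entirely on k = 5; for other φ, the probability spreads over the nearest values of k/2^m.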

A software-based quantum random number generator that uses quantum processes to produce provably near-perfect random numbers for strengthening all security functions. Quantum Origin stands out for its ability to provide near-perfect randomness through a software deployment model, making it uniquely accessible and scalable for a wide range of applications, from key generation to general-purpose use in security systems.

The design and implementation of Natural Language Processing (NLP) models that exploit certain quantum phenomena such as superposition, entanglement, and interference to perform language-related tasks on quantum hardware.

A cryptographic technique that leverages quantum mechanical principles to securely distribute cryptographic keys between two parties. QKD systems typically use individual photons to transmit key information, relying on the principles of quantum mechanics, such as the no-cloning theorem and the observer effect, to detect any eavesdropping attempts. While QKD offers theoretical unconditional security for key distribution, it faces practical challenges in implementation, including the need for specialized hardware, limited distance, and vulnerability to side-channel attacks. Unlike software-based quantum randomness solutions, QKD requires a dedicated quantum channel, usually a fiber optic link or free-space optical path, between the communicating parties.

A quantum implementation of the discrete Fourier transform, applied to the amplitudes of a quantum state.
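
Concretely, on n qubits the QFT is the N × N unitary (N = 2^n) with matrix entries e^(2πijk/N)/√N, which the sketch below builds and checks for unitarity:

```python
import numpy as np

# The QFT on n qubits as an explicit matrix:
# F[j, k] = exp(2*pi*i*j*k / N) / sqrt(N), with N = 2**n.
def qft_matrix(n):
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
# Unitarity check: F @ F^dagger equals the identity.
print(np.allclose(F @ F.conj().T, np.eye(8)))   # → True
```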

Similar to Quantum Error Correction, but usually refers to a code with the power to herald the presence of an error but lacking the power to prescribe a correction. Therefore, upon detection, the data is usually discarded.

We take the original definition: “A quantum error-correcting code is defined to be a unitary mapping (encoding) of k qubits (2-state quantum systems) into a subspace of the quantum state space of n qubits such that if any t of the qubits undergo arbitrary decoherence, not necessarily independently, the resulting n qubits can be used to faithfully reconstruct the original quantum state of the k encoded qubits.”

An [[n,k,d]] code is a quantum error correction code which encodes k qubits in an n-qubit state, in such a way that any operation which maps some encoded state to another encoded state must act on at least d qubits. (So, for example, any encoded state which has been subjected to an error consisting of at most ⌊(d−1)/2⌋ Pauli operations can in principle be recovered perfectly).

This notation generalizes the notation [n,k,d] for classical error correction codes, in which k-bit "plaintext" strings are encoded in n-bit "codeword" strings, in such a way that at least d bits must be flipped to transform between any two codewords representing different plain texts. (In this context and in the quantum case, d is referred to as the code distance.) The double-brackets are used simply to denote that the code being referred to is a quantum error correction code rather than a classical code.

Another way to understand quantum error correcting codes is offered by John Preskill: “A quantum error-correcting code can be viewed as a mapping of k qubits (a Hilbert space of dimension 2^k) into n qubits (a Hilbert space of dimension 2^n), where n > k. The k qubits are the “logical qubits” or “encoded qubits” that we wish to protect from error. The additional n − k qubits allow us to store the k logical qubits in a redundant fashion, so that the encoded information is not easily damaged.”
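
From the distance definition, a distance-d code can correct any t = ⌊(d−1)/2⌋ arbitrary single-qubit errors. A few well-known codes, listed purely for illustration:

```python
# Number of arbitrary (Pauli) qubit errors an [[n, k, d]] code can
# correct: t = floor((d - 1) / 2).
def correctable_errors(d):
    return (d - 1) // 2

# Well-known example codes and their distances:
codes = {
    "Steane code [[7,1,3]]": 3,
    "five-qubit code [[5,1,3]]": 3,
    "surface code [[d^2,1,d]] with d = 5": 5,
}
for name, d in codes.items():
    print(name, "corrects up to", correctable_errors(d), "error(s)")
```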

A source of randomness based on fundamentally unpredictable quantum mechanical processes. Unlike classical entropy sources, quantum entropy sources leverage the inherent randomness of quantum phenomena, providing a higher degree of unpredictability crucial for cryptographic applications. Different QRNG technologies use various quantum processes as entropy sources:

  • Quantum computer-based (used by Quantum Origin): Utilizes entangled qubits in a quantum computer to generate initial quantum randomness. This method allows for Bell test verification to establish a lower bound on min-entropy. The output then undergoes two-source extraction to produce a near-perfect quantum seed, which is mathematically provable.
  • Photonic QRNGs: Exploit the quantum properties of light, such as: a) Photon counting: Measures the random arrival times of photons. b) Beam splitting: Uses the unpredictable path a photon takes when encountering a beam splitter.
  • Electronic QRNGs: Leverage quantum effects in electronic circuits, including a) Shot noise: Random fluctuations in electric current due to the discrete nature of charge carriers. b) Thermal noise: Random electron movement due to temperature.
  • Radioactive decay: Uses the unpredictable timing of radioactive particle emissions.

Most hardware-based QRNGs require additional processing like noise reduction and bias correction to improve their output. Quantum Origin's approach differs by using a quantum computer to generate initial randomness with a provable min-entropy bound, followed by a mathematically rigorous two-source extraction process. This method results in a near-perfect quantum seed, eliminating the need for the kind of extensive post-processing typically required by hardware QRNGs to address environmental and hardware-related imperfections.

The process of enhancing the output of a True Random Number Generator (TRNG) to produce near-perfect entropy by combining it with a quantum seed through advanced randomness extractors. This technique, central to Quantum Origin's functionality, allows for significant improvements in randomness quality without requiring changes to existing hardware infrastructure.

QAOA is a variational quantum algorithm for solving combinatorial optimization problems on quantum computers. It works by using a classical optimizer in conjunction with a quantum computer to prepare quantum states that are close to the ground state of a Hamiltonian encoding the problem of interest. As it often requires many circuits to optimize the preparation of these states, Quantinuum's scientists work on both improvements and alternative methods to QAOA in pursuit of a scaling advantage for combinatorial optimization.

There is no universally accepted definition of this term. John Preskill originally defined the related term “quantum supremacy” as “…computational tasks performable by quantum devices, where one could argue persuasively that no existing (or easily foreseeable) classical device could perform the same task, disregarding whether the task is useful in any other respect." Given this, we might then define quantum advantage as “the ability to execute an algorithm (or family of algorithms) on a quantum computer in a way that yields a resource savings (such as time, energy, or money) compared to the best-known classical methods running on the best existing classical hardware, especially when the algorithm(s) can be made classically intractable by scaling to larger size.” Another definition might be business-centered: “A quantum computer has realized quantum (economic) advantage when a firm solves a business problem using a quantum computer that they couldn’t solve any other way.” A less stringent definition might be “A quantum computer has realized quantum advantage when it solves a problem in significantly less time, money, or energy than any other known methods to solve that problem”. As you can see, there are many ways to define this term.

A Python interface for the TKET compiler.

Pseudo-threshold refers to the physical error rate at which a single code’s logical error rate matches the physical error rate. (At that point you break even for the code.)

An algorithm that generates a sequence of numbers with properties that approximate true randomness. PRNGs are deterministic, meaning they produce the same sequence when initialized with the same seed value. While computationally efficient, PRNGs are not truly random and may be vulnerable to prediction if their state becomes known. The term PRNG is commonly used interchangeably with Deterministic Random Bit Generator (DRBG).
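
The determinism is easy to demonstrate with Python's standard (non-cryptographic) Mersenne Twister PRNG:

```python
import random

# Determinism of a PRNG: the same seed reproduces the same sequence.
gen1 = random.Random(42)
gen2 = random.Random(42)
seq1 = [gen1.randrange(256) for _ in range(5)]
seq2 = [gen2.randrange(256) for _ in range(5)]
print(seq1 == seq2)   # → True: identical seeds give identical output

# A different seed produces an unrelated sequence (with overwhelming
# probability).
gen3 = random.Random(7)
seq3 = [gen3.randrange(256) for _ in range(5)]
print(seq3 == seq1)
```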

Random numbers whose quality can be mathematically proven, rather than just statistically estimated. This concept is central to Quantum Origin, offering certainty about randomness quality unattainable with traditional generators. It encompasses both the Bell Test used in quantum seed creation and the peer-reviewed mathematical proofs underlying the extraction process.

In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In regular perturbation theory, the solution is expressed as a power series in a small parameter ε. The first term is the known solution to the solvable problem. Successive terms in the series, at higher powers of ε, usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, often keeping only the first two terms: the solution to the known problem and the 'first-order' perturbation correction.

Perturbation theory is used in a wide range of fields and reaches its most sophisticated and advanced forms in quantum field theory. In quantum mechanics, it is used to approximate the states and energies of systems that differ only slightly from exactly solvable ones. The field in general remains actively and heavily researched across multiple disciplines.
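
A minimal worked example (chosen for illustration, not taken from the text): approximate the positive root of x² − 1 = εx by perturbing the solvable problem x² − 1 = 0, whose root is x₀ = 1. Writing x = 1 + εx₁ + … and matching powers of ε gives x₁ = 1/2, so the first-order solution is x ≈ 1 + ε/2:

```python
import numpy as np

# Perturbative vs exact solution of x**2 - 1 = eps * x.
eps = 0.1
exact = eps / 2 + np.sqrt(1 + eps**2 / 4)   # exact positive root
approx = 1 + eps / 2                        # first-order perturbation

# The error is of order eps**2 (here ~ eps**2 / 8 = 0.00125).
print(exact, approx, abs(exact - approx))
```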

In quantum mechanics, an observable is a physical quantity that can be measured, such as energy, momentum, or position. Mathematically, observables are represented by operators on the quantum state.

An operator is a mathematical object that acts on quantum states to produce another quantum state. For example, the Hamiltonian operator corresponds to the total energy of a system.
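
A small numerical illustration, using the Pauli-Z observable and the |+⟩ state as assumed examples: an operator maps one state to another, and the expectation value ⟨ψ|Z|ψ⟩ gives the predicted average measurement outcome.

```python
import numpy as np

# Pauli-Z operator and the |+> state.
Z = np.array([[1, 0], [0, -1]])
plus = np.array([1, 1]) / np.sqrt(2)

new_state = Z @ plus                  # operator acting on a state
expectation = plus.conj() @ Z @ plus  # <psi|Z|psi>

print(new_state)     # ≈ [0.707 -0.707], i.e. the |-> state
print(expectation)   # ≈ 0.0: |+> has no definite Z value
```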

In physics, the no-cloning theorem states that it is impossible to create an independent and identical copy of an arbitrary unknown quantum state, a statement which has profound implications in the field of quantum computing among others.
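A small simulation (an illustrative sketch, not from the glossary) shows why a naive copier fails: a CNOT gate duplicates the basis states |0> and |1>, but applied to a superposition it produces an entangled state rather than two independent copies.

```python
import numpy as np

# CNOT: flips the second qubit when the first qubit is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

zero = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# "Copying" a basis state works: CNOT|00> = |00> = |0>|0>
copied_zero = CNOT @ np.kron(zero, zero)
assert np.allclose(copied_zero, np.kron(zero, zero))

# But for |+> the output is the entangled Bell state, not |+>|+>
attempted_copy = CNOT @ np.kron(plus, zero)
print(np.allclose(attempted_copy, np.kron(plus, plus)))  # False
```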

[[n,k,d]] is common quantum error correction notation. An [[n,k,d]] code is a quantum error correction code which encodes k qubits in an n-qubit state. Here, n=number of physical qubits, k= number of logical qubits, d = distance of the code.

A conservative measure of unpredictability in a random number generator, with higher values indicating stronger randomness. Min-entropy quantifies the worst-case scenario of predictability in a system, making it crucial for cryptographic applications.
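Min-entropy has a simple closed form: H_min = -log2(max_i p_i), where p_i are the outcome probabilities. A minimal sketch (our own example distributions):

```python
import math

def min_entropy(probs):
    # H_min = -log2(max_i p_i): determined entirely by the single most
    # likely outcome, i.e. an attacker's best-case guessing probability.
    return -math.log2(max(probs))

uniform = [0.25, 0.25, 0.25, 0.25]  # a fair 2-bit source
biased  = [0.70, 0.10, 0.10, 0.10]  # an attacker guesses right 70% of the time

print(min_entropy(uniform))           # 2.0 bits
print(round(min_entropy(biased), 3))  # ~ 0.515 bits, despite 4 outcomes
```

Because it tracks only the worst case, min-entropy is always at most the Shannon entropy, which is why cryptographic standards prefer it.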

Lambeq is an open-source, modular, extensible high-level Python library for experimental Quantum Natural Language Processing (QNLP). The library allows the conversion of any sentence to a quantum circuit, based on a given compositional model and certain parameterization and choices of ansätze, and facilitates training for both quantum and classical NLP experiments.

The Kerr effect refers to the change in the refractive index of a material in response to an applied electric field. In quantum optics, it can describe how the propagation of light in nonlinear media is influenced by an external field.

InQuanto™ is Quantinuum’s proprietary state-of-the-art software platform designed to accelerate quantum computational chemistry research for our industrial & academic collaborators.

Radiofrequency Paul traps used in the quantum charge-coupled device architecture typically trap ions using a linear trap, where the rf rails and dc control electrodes that are used to confine the ions are spread out along a line and the ions line up along the axis of the trap. However, trapping ions in a two-dimensional grid enables more efficient sorting than in the more traditional linear traps. Junction transport is the process of moving an ion crystal through an intersection (or a junction) between two streets of a grid of linear Paul traps.

In physics, interference is a phenomenon in which two coherent waves are combined by adding their intensities or displacements with due consideration for their phase difference. The resultant wave may have greater amplitude (constructive interference) or lower amplitude (destructive interference) if the two waves are in phase or out of phase, respectively. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves.

Hartree-Fock theory is fundamental to much of electronic structure theory. It is the basis of molecular orbital (MO) theory, which posits that each electron’s motion can be described by a single-particle function (orbital) which does not depend explicitly on the instantaneous motions of the other electrons. The ubiquity of orbital concepts in chemistry is a testimony to the predictive power and intuitive appeal of Hartree-Fock MO theory. However, it is important to remember that these orbitals are mathematical constructs which only approximate reality. Only for the hydrogen atom (or other one-electron systems, like He+) are orbitals exact eigenfunctions of the full electronic Hamiltonian. As long as we are content to consider molecules near their equilibrium geometry, Hartree-Fock theory often provides a good starting point for more elaborate theoretical methods which are better approximations to the electronic Schrödinger equation.

In mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.
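The chord analogy can be demonstrated directly with NumPy's FFT (an illustrative sketch; the signal and frequencies are our own choices): a two-tone signal is transformed, and the peaks of the spectrum recover the constituent pitches.

```python
import numpy as np

fs = 1000                    # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)  # one second of samples
# A "chord" of two pitches: 50 Hz and 120 Hz at different amplitudes.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))          # magnitude of each frequency
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)  # frequency axis in Hz

# The two largest peaks sit at the constituent frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))  # [50.0, 120.0]
```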

Device independence is a concept in quantum information theory in which the security or functionality of a protocol does not depend on the specific implementation of the quantum devices used. This principle is particularly important in quantum cryptography and randomness generation.

In the context of randomness generation, device-independent protocols, like those used in Quantum Origin's quantum seed creation, ensure that the randomness quality is guaranteed. This is achieved through rigorous mathematical proofs based on observed violations of Bell inequalities.

Device independence provides a robust security framework, reducing reliance on hardware integrity and manufacturer trust. This approach is valuable in various quantum protocols, including quantum key distribution and randomness generation, offering security guarantees based on fundamental physical principles rather than specific hardware implementations.

Having a seed of guaranteed quality significantly reduces hardware trust requirements on the client-side. The mathematically proven extraction process, as detailed in the peer-reviewed paper "Practical randomness amplification and privatization with implementations on quantum computers" (Foreman et al., 2023), guarantees the amplification of input randomness to near-perfect randomness using the quantum seed. This process works effectively even with low-quality input sources, ensuring high-quality output regardless of the specific hardware implementation or potential imperfections in the local randomness source.

Deterministic Random Bit Generator (DRBG) is a formal term, often used in cryptographic standards, for what is commonly known as a Pseudorandom Number Generator (PRNG).

The process of creating cryptographic keys using random numbers, where the quality of randomness directly impacts security. The quality can be measured by how long an RNG can produce outputs before they become distinguishable from perfect randomness. For example, the National Institute of Standards and Technology’s (NIST) standard becomes distinguishable from perfect randomness with probability 2^-9 after 2^48 outputs, while Quantum Origin maintains indistinguishability with probability 2^-128 even after 2^300 outputs, providing both stronger security (128 bits vs 9 bits) and substantially more secure outputs.
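As a minimal sketch of where key material comes from (this shows only a generic OS-backed CSPRNG via Python's standard library; it does not depict Quantum Origin's API, which would instead supply the seed or entropy feeding such a generator):

```python
import secrets

# Derive a 256-bit symmetric key from the operating system's CSPRNG.
# The security of everything built on this key depends on the quality
# (min-entropy) of the randomness behind it.
key = secrets.token_bytes(32)  # 32 bytes = 256 bits of key material

print(len(key) * 8)  # 256 (bits)
```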

A cryptographic key is a bit string used to control cryptographic algorithms, such as those for data encryption and digital signatures. The security of cryptographic operations relies on both key length and the unpredictability, or min-entropy, of the random source that generates the key. Keys derived from provably random sources significantly enhance security, ensuring resistance against both brute-force attacks and future advances in computational power. Real-world security failures have highlighted the risks of low-quality randomness, underscoring the necessity for high-quality keys in cryptographic systems.

In the QCCD architecture, quantum operations (gates, measurement) occur within specified zones on the device. Additional auxiliary zones exist for storing qubits or for loading ions into the device.

Today’s quantum computers have too few qubits and too high error rates for fault-tolerant quantum computing. This constrains quantum circuits executable today to those requiring a short circuit depth. A way to constrain circuit depth is via the variational method. In this method a given computational task is formulated as an optimization problem such that the minimum of a cost function corresponds to the solution of the original task. Variational quantum algorithms compute the cost function using the measurement outcomes or expectation values of a short-depth parameterized quantum circuit. In typical implementations, the parameterized quantum circuits are evaluated on a quantum computer and parameters are optimized using classical optimization techniques. The feasibility of this approach for solving computational tasks at relevant scales is debated and subject to ongoing research.
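The variational loop can be sketched with a classically simulated toy (our own construction, not a real device run): a single qubit rotated by Rx(theta) from |0> has Z-expectation cos(theta), which serves as the cost function, and a simple gradient-descent optimizer tunes the circuit parameter.

```python
import numpy as np

# Toy "quantum" evaluation, simulated classically:
# after Rx(theta)|0>, the expectation value of Z is cos(theta).
def cost(theta):
    return np.cos(theta)

# Classical optimizer: plain gradient descent on the circuit parameter.
theta, lr = 0.5, 0.2
for _ in range(200):
    grad = -np.sin(theta)  # d cos(theta) / d theta
    theta -= lr * grad

print(round(cost(theta), 6))  # approaches -1: the qubit is driven to |1>
```

In a real variational quantum algorithm the call to `cost` would dispatch a parameterized circuit to a quantum computer and estimate the expectation value from measurement shots; only the optimizer runs classically.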

All computers, whether classical or quantum, should be able to perform arbitrary computations. Given an algorithm, the computer should be able to break or compile the algorithm down into smaller and smaller, simpler pieces. These small, simple pieces are the instruction set or “gate set” that the computer can execute. A gate set that is able to perform any algorithm has a special name: a universal gate set. For example, if one wants to calculate 3^2, rather than calculating the exponent directly, we can first break it down into multiplication: 3^2 = 3*3. We can then further break this down into addition: 3*3 = 3+3+3. To calculate the square of three, we only need to understand how to break this equation down (compiling) and perform simple addition (native instructions).
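The worked example above can be written as a few lines of code, with addition playing the role of the only "native instruction":

```python
# "Compile" exponentiation down to multiplication, and multiplication
# down to repeated addition, the only natively executed operation here.
def multiply(a, b):
    total = 0
    for _ in range(b):
        total += a          # only addition is executed natively
    return total

def square(a):
    return multiply(a, a)   # 3^2 compiles to 3*3 compiles to 3+3+3

print(square(3))  # 9
```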

At Quantinuum, we develop trapped-ion quantum technologies. Our systems “trap” charged atoms (ions) with electromagnetic fields so they can be manipulated and encoded with information using microwave signals and lasers. The design offers some distinct advantages, including high fidelities and longer coherence times (trapped-ion qubits maintain their quantum state longer) than other quantum computing technologies.

A quantum computer produces a default initial state for all qubits and measures in a default basis. It’s difficult to differentiate the errors associated with the state preparation from the errors associated with measurement so we often combine them into state preparation and measurement (SPAM) errors. Typically, a quantum algorithm is designed to put a set of qubits on a quantum computer into a desired quantum state, usually to solve a problem of interest.

Mid-circuit measurement is a feature of quantum computing devices that allows users to measure the state of a qubit partway through a computation rather than only at the end. The measured qubit can then be reset mid-circuit, enabling that qubit to be used for further computations in the quantum circuit. This “qubit reuse” enables quantum error correction as well as re-writing quantum circuits to use fewer qubits than would otherwise be necessary.

Quantum volume is a specific test of a quantum computer’s performance on complex circuits. The higher the quantum volume, the more powerful the system. Our 56-qubit System Model H2 achieved a record quantum volume of 2,097,152 (2^21) in August 2024.

Quantum error correction (QEC) is a set of techniques used to protect quantum information from errors due to decoherence and other sources of noise. Quantum error correction is one approach to fault-tolerant quantum computing that can reduce the effects of noise on stored quantum information, faulty quantum gates, faulty quantum state preparation, and faulty measurements. Effective quantum error correction allows quantum computers to execute algorithms of higher complexity or greater circuit depth.

Many proposals exist for the implementation of quantum error correction. A popular approach is to compute with sets of entangled physical qubits, called “logical qubits”, that enable the detection and correction of errors without breaking quantum physics’ “no-cloning theorem” while circumventing the “measurement problem”.

Copying quantum information is not possible due to the no-cloning theorem. To get around this you can spread the (logical) information of one logical qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto an entangled state of nine qubits.
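The idea of spreading one qubit's information over an entangled state can be demonstrated with the 3-qubit bit-flip repetition code, a small cousin of Shor's 9-qubit code (an illustrative simulation, not from the glossary): parity checks locate a flipped qubit without measuring, and thereby destroying, the encoded amplitudes.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def kron(*ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# Logical encoding: a|0> + b|1>  ->  a|000> + b|111>
a, b = 0.6, 0.8
state = np.zeros(8, dtype=complex)
state[0b000], state[0b111] = a, b

# A bit-flip error strikes the middle qubit.
state = kron(I, X, I) @ state

# Syndrome measurement: the parities Z1Z2 and Z2Z3 reveal which qubit
# flipped, while leaving the amplitudes a, b untouched.
s1 = np.vdot(state, kron(Z, Z, I) @ state).real  # -1 => qubits 1,2 disagree
s2 = np.vdot(state, kron(I, Z, Z) @ state).real  # -1 => qubits 2,3 disagree

if s1 < 0 and s2 < 0:  # both checks violated: the middle qubit flipped
    state = kron(I, X, I) @ state  # apply the correction
# (A full decoder would handle the other three syndrome patterns too.)

print(abs(state[0b000]), abs(state[0b111]))  # 0.6 0.8: logical state recovered
```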

Compilers used in classical computing translate code written in a high-level language understandable by a programmer into a lower-level language of instructions that get executed on the computer. Similarly, a quantum compiler takes code written in the language of quantum circuits and translates it to the set of instructions that will run on the quantum computer. On trapped-ion devices such as Quantinuum’s, the compiler translates the quantum circuit to the gate-pulse sequence performed on the qubits, along with the physical movement of qubits in the device.

Quantum annealers are machines set up specifically to perform adiabatic quantum algorithms to find good solutions for optimization problems. The quantum state is set up at the beginning, and the final configuration following the natural evolution represents a solution. Quantum annealers contrast with universal quantum computers that, as the name suggests, can be set up to run any quantum algorithm for a much broader range of problems by controlling the evolution of the quantum state over time. At Quantinuum, we build universal quantum computers.

There are different methods of removing “noise” from quantum data. One such method is called “post processing”, where results from a quantum calculation are compared against data from classical computers so that “noise” can be identified and removed after a computation is completed. This is a useful technique during this early stage of quantum computing to verify and validate calculations. But this will not be feasible as quantum computers scale and we begin tackling calculations too complex for classical computers.

Articles about quantum computing sometimes reference the “NISQ era.” Pronounced “nis-k,” this acronym stands for Noisy Intermediate-Scale Quantum Computing. It refers to near-term quantum computers on which full quantum error correction has not yet been implemented.

With mid-circuit measurement, qubits can be selectively measured at a point other than the end of a quantum circuit. The quantum information of a measured qubit collapses to a classical state (zero or one), but the non-measured qubits retain their quantum state. Based on the measured qubit, users can decide what actions to take further in the circuit, enabling much more dynamic and flexible quantum computer programming than would otherwise be possible. We were the first to incorporate this type of measurement into our commercial offerings.

The process of observing the state of a quantum system is known as measurement. Measuring a quantum system generally changes the quantum state that describes that system. The act of measurement will project the quantum state onto one of the eigenvectors (eigenstates) of the observable being measured. When your system is a qubit, it will collapse from a value anywhere on the Bloch Sphere to either a 0 or 1 (the “north” or “south” pole of the Bloch Sphere; this is called Z-basis measurement). Measurement is one of the key features that distinguishes quantum mechanics from classical mechanics: if one measures a quantum state, the full quantum information is lost and the quantum state changes. This is sometimes referred to as the “measurement problem”.
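A short simulated sketch of Z-basis measurement (our own example; the angle and shot count are arbitrary): the outcome probabilities are the squared amplitudes, and each shot collapses the qubit to 0 or 1.

```python
import numpy as np

rng = np.random.default_rng(7)

# A qubit on the Bloch sphere: |psi> = cos(theta/2)|0> + sin(theta/2)|1>
theta = np.pi / 3
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])

# Z-basis measurement: outcome probabilities are the squared amplitudes...
p0, p1 = np.abs(psi) ** 2
print(p0, p1)  # ~ 0.75, 0.25

# ...and the state collapses to |0> or |1> on each shot.
shots = rng.choice([0, 1], size=10_000, p=[p0, p1])
print(shots.mean())  # ~ 0.25, the fraction of shots that collapsed to |1>
```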

Magic states are states that can be used to enact a non-Clifford gate, which is required to achieve a universal gate set.

Many proposals exist for the implementation of quantum error correction. A popular approach is to compute with sets of entangled physical qubits, called “logical qubits”, that enable the detection and correction of errors without breaking quantum physics’ rules about measurement and how it affects systems.

Copying quantum information is not possible due to the no cloning theorem. In classical computers, error correction often employs redundancy: for example, if you duplicate each bit 10 times then it is easy to detect and correct a single bit flip. To get around the no cloning theorem in quantum error correction you can spread the (logical) information of one logical qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits.

Working with entangled units of qubits also allows one to circumvent quantum mechanics’ measurement problem: when a qubit is measured, its delicate quantum information collapses into a specific state and the richness of information is lost. Therefore, one must measure the errors, not the qubits themselves.

In quantum mechanics, the state of a physical system is represented as a vector in a Hilbert space: a complex vector space with an inner product.

In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system. Its spectrum, the system’s energy spectrum or its set of energy eigenvalues, is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.
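
For a two-level system the energy spectrum has a closed form, which a short pure-Python sketch can illustrate (the function name is illustrative):

```python
import math

# Energy spectrum of a two-level system: the eigenvalues of the 2x2
# Hermitian Hamiltonian H = [[a, b], [conj(b), d]] are the possible
# outcomes of a measurement of the system's total energy.

def spectrum(a, b, d):
    mean = (a + d) / 2
    gap = math.sqrt(((a - d) / 2) ** 2 + abs(b) ** 2)
    return (mean - gap, mean + gap)

# Example: H = sigma_z has energies -1 and +1.
print(spectrum(1.0, 0.0, -1.0))  # (-1.0, 1.0)
```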

Gates act on one or more qubits and perform operations modifying the quantum state. Quantum logic gates differ from conventional logic gates as they must correspond to unitary (reversible) operations.
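
The unitarity requirement (U†U = I) can be checked numerically; here is a minimal pure-Python sketch for 2×2 gates, using the Hadamard gate as an example:

```python
# A quantum gate must be unitary: U^dagger U = I, which makes it
# reversible. This checks the property for the Hadamard gate.

def dagger(m):
    # Conjugate transpose of a 2x2 matrix.
    return [[m[0][0].conjugate(), m[1][0].conjugate()],
            [m[0][1].conjugate(), m[1][1].conjugate()]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_unitary(u, tol=1e-9):
    p = matmul(dagger(u), u)
    identity = [[1, 0], [0, 1]]
    return all(abs(p[i][j] - identity[i][j]) < tol
               for i in range(2) for j in range(2))

s = 2 ** -0.5
hadamard = [[s, s], [s, -s]]
assert is_unitary(hadamard)
```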

In quantum mechanics, fidelity measures the wavefunction overlap between the actual state and the ideal state (for a pure ideal state |ψ⟩ and an actual state with density matrix ρ, it is ⟨ψ|ρ|ψ⟩). In practice, fidelity measures how close the state or operation is to the ideal version that we write down mathematically. For example, the fidelity of a two-qubit gate operation roughly measures how often you get the correct outcome - a fidelity of 99.9% means the operation completes successfully 999/1000 times. We were the first in the industry to reach 99.9% fidelity.
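
For two pure states the fidelity reduces to the squared overlap of the state vectors, which a short sketch can compute (the function name is illustrative):

```python
# State fidelity for pure states: F = |<psi|phi>|^2, the squared
# overlap between the actual and ideal state vectors.

def fidelity(psi, phi):
    overlap = sum(a.conjugate() * b for a, b in zip(psi, phi))
    return abs(overlap) ** 2

ket0 = [1.0, 0.0]
plus = [2 ** -0.5, 2 ** -0.5]   # (|0> + |1>)/sqrt(2)
print(fidelity(ket0, ket0))     # identical states: fidelity 1
print(fidelity(ket0, plus))     # ~0.5
```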

Fault-tolerance is a circuit design principle that prevents errors from cascading throughout a system (and, in the case of quantum computing, corrupting circuits). Today’s classical computers are fault-tolerant. Quantum computers must be made fault-tolerant as well to handle large calculations. One path towards fault tolerance is the successful implementation of quantum error correction.

In quantum computing, coherence time refers to the characteristic time during which a qubit can maintain its quantum state to a high degree before it decays significantly, due to something like spontaneous emission or absorption of a stray photon.

This is the longest path in the circuit from the data input to the output, representing the maximum number of gates executed on a single qubit.
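
Circuit depth can be computed by a simple scheduling pass: each gate starts after every qubit it touches has finished its previous gate. A minimal sketch, with gates represented as tuples of qubit indices (an illustrative representation, not a specific library's API):

```python
# Circuit depth as the longest chain of dependent gates.

def depth(gates):
    finish = {}  # qubit index -> number of layers completed so far
    for qubits in gates:
        layer = max((finish.get(q, 0) for q in qubits)) + 1
        for q in qubits:
            finish[q] = layer
    return max(finish.values(), default=0)

# H on q0, then CX on (q0, q1), while X on q2 runs in parallel: depth 2.
print(depth([(0,), (0, 1), (2,)]))  # 2
```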

A circuit is a sequence of operations performed in the quantum processing unit (QPU), including gates, measurements and resets. Operations in a circuit may be conditioned on the results of real-time classical co-computation.

A logical encoding can result in higher-fidelity quantum circuit elements than physical qubit implementations, but only under the right conditions. If physical operations have an error rate that is too high, encoding data and operations will further increase that error rate. In contrast, when your physical operations have an error rate that is below threshold, quantum error correction (QEC) has the potential to actually reduce error rates, as intended. The “break-even” point is the physical error rate below which QEC helps, and above which QEC hurts. Different quantum error correction codes, operations and circuit families will have different “break-even” points.

In classical computing, bit flip errors occur when a binary digit, or bit, inadvertently switches from zero to one (or vice versa). Quantum computers experience this error as well as phase flips: a phase flip occurs when the “phase”, or sign, of a qubit’s state is inadvertently switched. Both errors can cause qubits to lose their quantum state (or decohere). Classical computers often get around errors by cloning data to detect and correct them. However, this method does not work in quantum computing, so instead we need quantum error correction.
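
In terms of amplitudes, a bit flip is the Pauli X gate and a phase flip is the Pauli Z gate. A minimal sketch of their action on a single-qubit state [c0, c1] (illustrative function names):

```python
# Bit flip (Pauli X): swaps the |0> and |1> amplitudes.
# Phase flip (Pauli Z): negates the sign of the |1> amplitude.

def bit_flip(state):
    c0, c1 = state
    return [c1, c0]

def phase_flip(state):
    c0, c1 = state
    return [c0, -c1]

plus = [2 ** -0.5, 2 ** -0.5]   # (|0> + |1>)/sqrt(2)
minus = phase_flip(plus)        # (|0> - |1>)/sqrt(2)
print(bit_flip([1, 0]))         # [0, 1]
```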

In mathematics and computer science, an algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning). QAOA is an example of a quantum algorithm.

In 1998, researchers at the National Institute of Standards and Technology (NIST) laid out a proposal for building a quantum computer that uses movable ions as qubits. This is analogous to a charge-coupled device (CCD) camera, which stores and processes imaging information as movable electrical charges in coupled pixels. The QCCD computer, instead, stores quantum information in the internal state of ions that are transported between different processing zones using dynamic electromagnetic fields. Quantinuum’s quantum computers follow the QCCD architecture, which enables the low error rates of these devices and all-to-all connectivity.

In classical computing, the smallest unit of data is a binary digit or bit, which can be either 0 (off) or 1 (on). A quantum bit, or qubit, is the smallest unit of data in quantum computing. Instead of existing as 0s and 1s, the state of a qubit exists on the Bloch sphere. In our QCCD architecture, qubits are made from 2 energy levels in a trapped ion.

Quantum bits, or qubits, are the smallest unit of data in quantum computers. The quantum information stored in qubits is fragile - qubits tend to interact with their environment and one another, which changes the quantum state and corrupts the quantum information. We call this “noise”.

All quantum systems can be described by a linear combination of basis states. Basis states describe the possible outcome states of measurements. For example, a qubit is described by a point on the Bloch sphere, written as c₀|0⟩ + c₁|1⟩. The complex numbers c₀, c₁ determine the probabilities |c₀|², |c₁|² of measuring the qubit in either the |0⟩ or |1⟩ state of a particular measurement (a Z-measurement in this example), respectively. When a quantum system is written as such a sum, we say it is in a superposition of the outcome states.
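
The amplitude-to-probability rule above is a one-liner to check numerically (illustrative function name; a normalized state must have probabilities summing to 1):

```python
# Measurement probabilities from amplitudes: for c0|0> + c1|1>,
# P(0) = |c0|^2 and P(1) = |c1|^2, and they must sum to 1.

def probabilities(c0, c1):
    p0, p1 = abs(c0) ** 2, abs(c1) ** 2
    assert abs(p0 + p1 - 1) < 1e-9, "state must be normalized"
    return p0, p1

# Equal superposition: a 50/50 chance of measuring 0 or 1.
p0, p1 = probabilities(2 ** -0.5, 2 ** -0.5)
print(p0, p1)  # ~0.5 each
```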

Entanglement describes a set of quantum systems where the quantum state of each system in the group cannot be described independently of the state of the others, including when the systems are separated by a large distance. The topic of quantum entanglement is at the heart of the disparity between classical and quantum physics: entanglement is a primary feature of quantum mechanics not present in classical mechanics.
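
For two-qubit pure states there is a simple algebraic test of this "cannot be described independently" property: a state a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩ factors into two independent single-qubit states exactly when ad = bc. A minimal sketch (illustrative function name):

```python
# Entanglement test for a two-qubit pure state a|00> + b|01> + c|10> + d|11>:
# the state is a product of single-qubit states iff a*d == b*c.

def is_entangled(a, b, c, d, tol=1e-9):
    return abs(a * d - b * c) > tol

s = 2 ** -0.5
print(is_entangled(s, 0, 0, s))  # True:  Bell state (|00> + |11>)/sqrt(2)
print(is_entangled(s, s, 0, 0))  # False: product state |0>(|0> + |1>)/sqrt(2)
```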

The Bloch sphere is a geometrical representation of the pure state space of a two-level quantum mechanical system (qubit), named after the physicist Felix Bloch. The poles of the sphere correspond to the |0⟩ and |1⟩ states. Other points on the surface correspond to superpositions of the |0⟩ and |1⟩ states.
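
The standard parametrization maps a point (θ, φ) on the sphere to the state cos(θ/2)|0⟩ + e^(iφ) sin(θ/2)|1⟩. A minimal sketch (illustrative function name):

```python
import math

# Map a Bloch-sphere point (theta, phi) to the state vector
# cos(theta/2)|0> + e^(i*phi) sin(theta/2)|1>.

def bloch_state(theta, phi):
    return [math.cos(theta / 2),
            complex(math.cos(phi), math.sin(phi)) * math.sin(theta / 2)]

north = bloch_state(0, 0)               # the north pole is |0>
equator = bloch_state(math.pi / 2, 0)   # the equator holds equal superpositions
print(north)  # [1.0, 0j]
```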

A fundamental quantum mechanical experiment used to verify the presence of quantum entanglement and certify randomness quality. In quantum random number generation, Bell tests provide a rigorous, Nobel Prize-winning method to quantify the unpredictability of generated random numbers, offering certainty that surpasses traditional statistical tests.
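
One common form of a Bell test is the CHSH inequality: any local hidden-variable model obeys |S| ≤ 2, while quantum mechanics reaches 2√2. A minimal sketch using the textbook singlet-state correlator E(a, b) = −cos(a − b) at the standard optimal analyzer angles (illustrative function names):

```python
import math

# CHSH Bell test: for the singlet state, the correlator at analyzer
# angles (a, b) is E = -cos(a - b). Classically |S| <= 2; quantum
# mechanics violates this, reaching |S| = 2*sqrt(2).

def correlator(a, b):
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    return (correlator(a, b) - correlator(a, b2)
            + correlator(a2, b) + correlator(a2, b2))

# Standard angles for maximal violation.
s = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(s))  # ~2.828, i.e. 2*sqrt(2) > 2
```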

Also known as any-to-any, all-to-all connectivity denotes the ability to “connect” any qubit to any other qubit. In practice, this means moving our qubits around in space so that any two can be, for example, gated together. All-to-all connectivity comes for “free” with our architecture, while it is nearly impossible for architectures with fixed qubit locations, such as superconducting or NV center. 

All-to-all connectivity improves the efficiency and flexibility of quantum computers, eliminating the need for “swap gates” (which add noise), allowing for exotic error correcting codes (which have multiple benefits), and improving the computational power of the quantum processing unit (QPU).