Quantum Computing Brings New Error Correction Challenges
December 1, 2022

Authored By: Arnaud Carignan-Dugas, Research Scientist, and Stefanie Beale, R&D Engineer, Quantum Engineering Solutions Team, Keysight Technologies

 


Many in the quantum computing industry have touted the disruptive potential of quantum computers to power rapid growth and innovation in fields including medicine, materials science, and finance. However, one major barrier to realizing this potential is quantum computing's high susceptibility to noise and calibration errors.

Our ability to manage or reduce error rates in quantum computers will determine how quickly we can begin leveraging them for these innovative leaps. Understanding the impact of errors, and how well current techniques can compensate for them, gives insight into what stage of development the quantum computing industry has reached.

How Are Quantum Computing Errors Different from Classical Computing Errors?

Computing devices process information. Classical computers store information on bits, hardware memory elements with two discrete states labelled 0 and 1, and perform operations by manipulating the information stored on those bits according to program specifications.

Quantum computers have a hardware component that is analogous to the classical "bit": a "qubit" (or quantum bit). Qubits can store the same binary states allowed by a conventional computer, but quantum mechanical features, namely superposition and entanglement, also allow additional states to be stored and manipulated. Researchers posit that this extra capacity introduced by quantum mechanics will allow quantum computers to achieve performance that is impossible for classical computers; notably, quantum computing algorithms aim to solve dense, combinatorial problems that would require a prohibitively large amount of time on their classical counterparts.
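To make this concrete, a qubit's state can be written as a pair of complex amplitudes. The short sketch below is a purely illustrative NumPy example of our own, not tied to any particular quantum hardware or library; it shows a superposition state with no classical counterpart:

import numpy as np

# A qubit state is a normalized vector of two complex amplitudes.
ket0 = np.array([1, 0], dtype=complex)   # the classical-like state |0>
ket1 = np.array([0, 1], dtype=complex)   # the classical-like state |1>

# Superposition: an equal-weight combination of |0> and |1>.
plus = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(plus) ** 2)  # [0.5 0.5]: outcomes 0 and 1 are equally likely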


A computing error, quantum or not, is any undesired operation that replaces the state of memory with another state. In conventional computers, an error on a single bit is limited to an accidental flip from 0 to 1, or from 1 to 0. Because there are many more quantum states than conventional bit sequences, quantum errors can take many more forms of undesired state alteration.
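To give a taste of these extra error types, two elementary single-qubit errors can be written as 2x2 matrices: the bit flip, which has a classical analogue, and the phase flip, which does not. Another illustrative NumPy sketch:

import numpy as np

X = np.array([[0, 1], [1, 0]])   # bit flip: swaps |0> and |1>
Z = np.array([[1, 0], [0, -1]])  # phase flip: no classical counterpart

plus = np.array([1, 1]) / np.sqrt(2)  # the superposition (|0> + |1>)/sqrt(2)

print(X @ plus)  # unchanged: a bit flip does not disturb this state
print(Z @ plus)  # (|0> - |1>)/sqrt(2): the phase flip corrupts it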


Because qubits must leverage the effects of quantum mechanics, they are inherently small and very sensitive to interactions with their environment, which can introduce errors or destroy the stored state entirely. Below are some examples of noise sources that can be detrimental to a quantum computer’s ability to perform a calculation.

Sources of Quantum Computing Errors

  • External forces. Even small vibrations or variations in magnetic forces, electric currents, or ambient temperature can cause quantum computations to return incorrect results or, in some types of quantum computers, to lose the state of memory entirely.
  • Internal control. Since qubits are extremely sensitive to small fluctuations, the precision of the signals used to act on the stored states for computations must be very high. Any deviation from a perfect signal will result in errors.

What Is Computing Error Correction? 

Conventional computing errors typically occur because one or more bits unexpectedly flip. Error correction strategies have been developed to correct these bit flips and return the system to the expected state. Error correction was prevalent in early computing systems, before the technology was advanced enough to be robust to changes in the environment.

Today, classical computing error correction is usually unnecessary; it is reserved for cases where a failure would be catastrophic or where the computer will operate in an environment more likely to introduce errors, such as on space missions.

The simplest example of a classical code is the repetition code, in which every bit is copied to introduce redundancy:

0 -> 000

1 -> 111

This mapping from a state stored on one bit to the same state stored on (or encoded in) multiple bits is called “encoding”; hence, the use of the word “code” for specifying the error correction strategy.

In the above 3-bit repetition code, if we have a 0 state encoded as 000 and a bit flip error is introduced on the second bit, we will find the state 010. By looking at the state, we see that there are more 0s than 1s, and assuming the probability of error is low, it is safe to assume that the correct state is an encoded 0, so we correct back to 000.
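A minimal Python sketch of this encode-and-majority-vote procedure (the helper names are ours, for illustration only):

def encode(bit):
    # Encode one logical bit into three physical bits (repetition code).
    return [bit] * 3

def correct(bits):
    # Majority vote: assume the value held by most bits is correct.
    return [1] * 3 if sum(bits) >= 2 else [0] * 3

encoded = encode(0)      # [0, 0, 0]
encoded[1] ^= 1          # a bit-flip error on the second bit: [0, 1, 0]
print(correct(encoded))  # [0, 0, 0]: the single flip is repaired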

In general, error correction consists of three pieces:

  1. Encoding states into more bits
  2. Looking at the encoded state at regular time intervals
  3. Correcting the state based on the observation from step two

If the rate of errors is low, we can use error correction strategies to identify and correct changes as they occur. When the rate of errors is higher, we begin to run into problems. For example, imagine that we did not look at an encoded state 111 for a while and, in the meantime, two errors occurred, bringing it to, say, 001. If we looked at the state then, we would wrongly assume that the last bit had been flipped and "correct" it to 000, leaving the final state incorrect.

There are strategies to account for higher error rates, such as introducing more redundancy in the encoding. For example, we can use a 5- or 7-bit repetition code and employ the same strategies we have described for the 3-bit repetition code. In these cases, we can recover from up to 2- or 3-bit flip errors, respectively.
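Extending the majority vote to a longer code is straightforward; the illustrative sketch below shows a 5-bit repetition code recovering from two flips:

def correct(bits):
    # Majority vote over an odd-length repetition code.
    majority = 1 if sum(bits) > len(bits) // 2 else 0
    return [majority] * len(bits)

encoded = [1, 1, 1, 1, 1]  # logical 1 in the 5-bit repetition code
encoded[0] ^= 1
encoded[3] ^= 1            # two errors: [0, 1, 1, 0, 1]
print(correct(encoded))    # [1, 1, 1, 1, 1]: still recovered

# A third flip would tip the vote, and the decoder would "correct"
# to the wrong codeword, just as two flips fool the 3-bit code.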

These error correction strategies only work if the rate of errors is lower than the rate at which we can correct for a given code. Leaving more time between corrections results in more chances for bit flip errors to occur, so any latency in the system is problematic when systems are error prone. As a result, the biggest challenge for error correction has been speed — finding more effective and efficient ways to correct errors before they cause significant problems.


Why Is Quantum Error Correction so Challenging? 

As we begin to scale up quantum computers, we will need error correction strategies analogous to those developed for classical computers. Quantum error correction follows the same encoding, measurement, and recovery steps used for conventional computers, but applying these steps to quantum computers raises new challenges.

In classical computing, we look at the encoded state to see what went wrong to apply a correction. This is not possible with quantum computers. One fundamental tenet of quantum mechanics is that looking at a quantum state changes it. This means that we cannot measure the encoded state directly without destroying the information that we’re trying to preserve. For this reason, quantum researchers have developed methods that allow us to retrieve information about the errors on the state without measuring the state directly. These methods involve indirect measurements, which do not give us information about which logical state we have, and ideally, will not impact the state.
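A classical analogy illustrates the idea. In a real quantum code these indirect measurements correspond to parity checks made with extra "ancilla" qubits; the sketch below only mimics the classical logic of those checks:

def syndrome(bits):
    # Parity checks for the 3-bit repetition code. Each check compares a
    # pair of bits; together they locate a single flip without ever
    # revealing whether the encoded value is 0 or 1.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

print(syndrome([0, 0, 0]))  # (0, 0): identical for [1, 1, 1]
print(syndrome([0, 1, 0]))  # (1, 1): identical for [1, 0, 1], i.e.
                            # "middle bit flipped", logical value hidden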

Quantum Computers Require Larger Encodings 

Given how fragile quantum states are to their environment, it is likely that large encodings will be needed. That is, hundreds if not thousands of qubits may be required to encode a single qubit state. As noted by Science.org, Google researchers believe it may be possible to sustain a qubit indefinitely by expanding error correction efforts across 1,000 qubits.

Much like in classical computing, where there is uncertainty about which error occurred when a state is measured, quantum measurement results tell us only that one error from a given set of possible errors happened; we do not know for sure which. Since states are more complicated for qubits than for bits, there are more types of errors, resulting in more uncertainty about which correction will return us to the correct state. Finding the best method for choosing a correction is a difficult problem, and one where work is still ongoing.
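For small codes, the syndrome-to-correction mapping can be precomputed. A minimal sketch of such a lookup table for the 3-bit repetition code (again with illustrative names, assuming at most one flip between checks):

RECOVERY = {
    (0, 0): None,  # checks agree: no error detected
    (1, 0): 0,     # flip bit 0 back
    (1, 1): 1,     # flip bit 1 back
    (0, 1): 2,     # flip bit 2 back
}

def recover(bits):
    # The syndrome is the pair of parity checks described above.
    position = RECOVERY[(bits[0] ^ bits[1], bits[1] ^ bits[2])]
    if position is not None:
        bits[position] ^= 1
    return bits

print(recover([0, 1, 0]))  # [0, 0, 0]
print(recover([1, 0, 1]))  # [1, 1, 1]: same syndrome, same recovery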

If we know the noise acting on a system, we can calculate the best possible strategy for small codes. For larger codes, however, it becomes prohibitively expensive. Take, for example, the surface code, which is the most popular large quantum error correction code. Rather than pre-selecting corrections for each measurement outcome and using a lookup table, a classical algorithm is used to select recovery operations during every error correction step. This algorithm introduces significant latency.

Even for smaller codes using lookup tables, though, classical computers are still required to route measurement outcomes, select a recovery operation, and send that information back to the quantum computer. This introduces significant latency, thereby making the codes less effective. This is one major bottleneck to effective quantum error correction that many in the field are working actively to overcome.

Keysight is working with researchers to accelerate progress in the pursuit of a viable quantum computer. Learn more about quantum computing and Keysight’s involvement by visiting the Quantum Solutions page.
