*Authored by: Arnaud Carignan-Dugas, Research Scientist, and Stefanie Beale, R&D Engineer on the Quantum Engineering Solutions Team – Keysight Technologies*


Many in the quantum computing industry have touted the disruptive potential of quantum computers to empower rapid growth and innovation in fields including medicine, materials science, and finance. However, one major barrier to reaching this potential is quantum computing’s high susceptibility to noise and calibration errors.

Our ability to manage or reduce the error rates in quantum computers will determine the pace at which we reach the capacity to begin leveraging quantum computers for these innovative leaps. If we can understand the impact of errors and how well current techniques can compensate for them, we can gain insight into what stage of development the quantum computing industry has reached.

**How Are Quantum Computing Errors Different from Classical Computing Errors?**

Computing devices deal with the processing of information. Classical computers store information and perform operations on bits, which are hardware memory elements with two discrete states labelled 0 and 1. These computers perform operations by manipulating information stored on bits according to program specifications.

Quantum computers have a hardware component that is analogous to the classical “bit,” a “qubit” (or quantum bit). Qubits can store the same binary states allowed by a conventional computer, but quantum mechanical features—namely superposition and entanglement—also allow for additional states to be stored and manipulated. Researchers posit that this extra capacity introduced by quantum mechanics will allow quantum computers to achieve performance that is impossible for classical computers—notably, quantum computing algorithms aim to solve dense, combinatoric problems that would require a prohibitively large amount of time for their classical counterparts.

A computing error, quantum or not, is any undesired operation that replaces the state of memory with another state. In conventional computers, an error on a single bit is limited to an accidental flip from 0 to 1, or from 1 to 0. Because quantum computing features additional states beyond sequences of bits, errors can take many more forms: there are more quantum states than conventional bit sequences, leaving room for more types of undesired state alterations.
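As a minimal numpy sketch of this extra room for errors (treating a qubit state as a normalized 2-vector of amplitudes, an illustration rather than real hardware), a bit flip and a phase flip act differently on a superposition state; the phase flip has no classical counterpart:

```python
import numpy as np

# A qubit state sketched as a normalized 2-vector of amplitudes.
plus = np.array([1, 1]) / np.sqrt(2)   # the superposition state |+>

X = np.array([[0, 1], [1, 0]])   # bit flip (the only classical error type)
Z = np.array([[1, 0], [0, -1]])  # phase flip (no classical analogue)

print(X @ plus)  # bit flip leaves |+> unchanged
print(Z @ plus)  # phase flip maps |+> to the distinct state |->
```

The second output differs from the first only by a sign, yet it is a different quantum state; no single-bit classical error can behave this way.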

Because qubits must leverage the effects of quantum mechanics, they are inherently small and very sensitive to interactions with their environment, which can introduce errors or destroy the stored state entirely.

Below are some examples of noise sources that can be detrimental to a quantum computer’s ability to perform a calculation.

**Sources of Quantum Computing Errors**

- **External forces.** Even small vibrations or variations in magnetic forces, electric currents, or ambient temperature can cause quantum computations to return incorrect results or, in some types of quantum computers, to lose the state of memory entirely.
- **Internal control.** Since qubits are extremely sensitive to small fluctuations, the precision of the signals used to act on the stored states for computations must be very high. Any deviation from a perfect signal will result in errors.

**What Is Computing Error Correction?**

Conventional computing errors typically occur because one or more bits unexpectedly flip. Error correction strategies have been developed to correct these bit flips and return the system to the expected state. The usage of error correction was prevalent in early computing systems before the technology was sufficiently advanced to be very robust to changes in the environment.

Today, classical computing error correction is usually unnecessary; it is used mainly when a failure would be catastrophic or when the computer will operate in an environment that is more likely to introduce errors, such as on space missions.

The simplest example of a classical code is the repetition code, in which every bit is copied to introduce redundancy:

0 -> 000

1 -> 111

This mapping from a state stored on one bit to the same state stored on (or encoded in) multiple bits is called “encoding”; hence, the use of the word “code” for specifying the error correction strategy.

In the above 3-bit repetition code, if we have a 0 state encoded as 000 and a bit flip error is introduced on the second bit, we will find the state 010. By looking at the state, we see that there are more 0s than 1s, and assuming the probability of error is low, it is safe to assume that the correct state is an encoded 0, so we correct back to 000.

In general, error correction consists of three pieces:

- Encoding states into more bits
- Looking at the encoded state on a regular time interval
- Correcting the state based on the observation from step two
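For the 3-bit repetition code, the three steps above can be sketched in a few lines of Python (a toy illustration; the function names are ours, not part of any standard library):

```python
def encode(bit):
    """Step 1: copy the bit to introduce redundancy (0 -> 000, 1 -> 111)."""
    return [bit] * 3

def correct(encoded):
    """Steps 2 and 3: look at the encoded state and restore the majority value."""
    majority = 1 if sum(encoded) >= 2 else 0
    return [majority] * 3

state = encode(0)      # [0, 0, 0]
state[1] ^= 1          # a bit flip error on the second bit -> [0, 1, 0]
print(correct(state))  # majority vote recovers [0, 0, 0]
```

Because the majority vote assumes errors are rare, it recovers correctly from any single bit flip on the three bits.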

If the rate of errors is low, we can use error correction strategies to identify and correct changes as they occur. When the rate of errors is higher, we begin to run into problems. For example, imagine that we did not look at an encoded state 111 for a while and, in the meantime, two errors occurred, bringing it to, say, 001. If we looked at the state then, we would wrongly assume that the last bit was flipped and correct it to 000, leaving the final state incorrect.

There are strategies to account for higher error rates, such as introducing more redundancy in the encoding. For example, we can use a 5- or 7-bit repetition code and employ the same strategies we have described for the 3-bit repetition code. In these cases, we can recover from up to 2- or 3-bit flip errors, respectively.
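A short sketch of this trade-off, assuming simple majority-vote decoding: the same two flips that defeat the 3-bit code are handled by the 5-bit code:

```python
def majority_decode(bits):
    """Decode a repetition-code block by majority vote."""
    return 1 if sum(bits) > len(bits) / 2 else 0

# Two bit flips defeat the 3-bit code: 111 -> 001 decodes to the wrong value.
print(majority_decode([0, 0, 1]))        # 0 (incorrect; the encoded bit was 1)

# The 5-bit code still decodes correctly after the same two flips.
print(majority_decode([0, 0, 1, 1, 1]))  # 1 (correct)
```

In general, an n-bit repetition code with majority voting recovers from any pattern of fewer than n/2 bit flips.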

These error correction strategies only work if the rate of errors is lower than the rate at which we can correct for a given code. Leaving more time between corrections results in more chances for bit flip errors to occur, so any latency in the system is problematic when systems are error prone. As a result, the biggest challenge for error correction has been speed — finding more effective and efficient ways to correct errors before they cause significant problems.


**Why Is Quantum Error Correction so Challenging?**

As we begin to scale up quantum computers, we will need error correction strategies analogous to those developed for classical computers. Quantum error correction follows the same encoding, measurement, and recovery procedure used for conventional computers. However, there are new challenges to applying these steps to quantum computers.

In classical computing, we look at the encoded state to see what went wrong to apply a correction. This is not possible with quantum computers. One fundamental tenet of quantum mechanics is that looking at a quantum state changes it. This means that we cannot measure the encoded state directly without destroying the information that we’re trying to preserve. For this reason, quantum researchers have developed methods that allow us to retrieve information about the errors on the state without measuring the state directly. These methods involve indirect measurements, which do not give us information about which logical state we have, and ideally, will not impact the state.
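A classical analogy may help (a toy sketch, not the quantum procedure itself): parity checks on the 3-bit repetition code reveal *where* an error sits without revealing *whether* the stored value is 0 or 1, much as quantum syndrome measurements extract error information without measuring the encoded state directly:

```python
def syndrome(bits):
    """Parity checks on neighbouring bits. The result locates an error but
    carries no information about which value (0 or 1) is encoded."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Both encoded values give the same (trivial) syndrome when error-free...
print(syndrome([0, 0, 0]), syndrome([1, 1, 1]))  # (0, 0) (0, 0)

# ...and the same syndrome when the second bit is flipped.
print(syndrome([0, 1, 0]), syndrome([1, 0, 1]))  # (1, 1) (1, 1)
```

The syndrome depends only on the error, not on the encoded value, which is exactly the property quantum error correction needs from its indirect measurements.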

**Quantum Computers Require Larger Encodings**

Given how fragile quantum states are to their environment, it is likely that large encodings will be needed. That is, hundreds if not thousands of qubits may be required to encode a single qubit state. As noted by Science.org, Google researchers believe it may be possible to sustain a qubit indefinitely by expanding error correction efforts across 1,000 qubits.

Much like in classical computing, where there can be uncertainty about which error occurred when a state is measured, quantum computing measurement results tell us only that one error from a given set of possible errors happened; we do not know for sure which of these errors occurred. Since states are more complicated for qubits than for bits, there are more types of errors, resulting in more uncertainty about which correction will return us to the correct state. Finding the best method for choosing a correction is a difficult problem, and one where work is still ongoing.

If we know the noise acting on a system, we can calculate the best possible strategy for small codes. For larger codes, however, it becomes prohibitively expensive. Take, for example, the surface code, which is the most popular large quantum error correction code. Rather than pre-selecting corrections for each measurement outcome and using a lookup table, a classical algorithm is used to select recovery operations during every error correction step. This algorithm introduces significant latency.
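To make the lookup-table idea concrete, here is what pre-selected corrections look like for a tiny classical code (a toy illustration on the 3-bit repetition code, not the surface code): each possible syndrome maps directly to a correction, so decoding is a single table lookup rather than running an algorithm:

```python
# Syndrome -> correction lookup table for the 3-bit repetition code,
# using the parity checks s1 = b0 XOR b1, s2 = b1 XOR b2.
LOOKUP = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip bit 0
    (1, 1): 1,     # flip bit 1
    (0, 1): 2,     # flip bit 2
}

def decode(bits):
    """Compute the syndrome, look up the correction, and apply it."""
    s = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    fix = LOOKUP[s]
    if fix is not None:
        bits[fix] ^= 1
    return bits

print(decode([0, 1, 0]))  # [0, 0, 0]
```

For a code this small the table has only four entries; for large codes like the surface code the number of syndromes grows far too quickly for a table, which is why an algorithmic decoder, and its attendant latency, is needed instead.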

Even for smaller codes using lookup tables, though, classical computers are still required to route measurement outcomes, select a recovery operation, and send that information back to the quantum computer. This introduces significant latency, thereby making the codes less effective. This is one major bottleneck to effective quantum error correction that many in the field are working actively to overcome.

Keysight is working with researchers to accelerate progress in the pursuit of a viable quantum computer. Learn more about quantum computing and Keysight’s involvement by visiting the Quantum Solutions page.
