By Andre Saraiva, UNSW

The Google Quantum AI team published a paper in Nature called “Exponential suppression of bit or phase errors with cyclic error correction”. The title is well crafted: it condenses all the key information. They achieve exponential suppression of errors, the holy grail of quantum error correction (QEC). They do it cyclically, which for the first time reveals the stability of QEC methods as ancilla qubits are repeatedly measured and reset. But the most important word will likely go unnoticed by lay readers: the “or” is a reminder of how hard it is to correct qubit errors. With a small number of qubits, only one type of error can be corrected at a time.

The paper is yet another timely and ground-breaking result obtained with Google’s Sycamore processor (see here a recent Qnalysis of an application of Sycamore to quantum chemistry). The team under Hartmut Neven explored the very complex subject of QEC in a clear, almost textbook manner. The technical challenges that had to be overcome are thoroughly discussed, ranging from mundane crosstalk between nearby devices all the way to the – literally – otherworldly issue of high-energy cosmic particles, which cause occasional spikes in errors. This is the longest leap from the abstract theory of QEC to its practical implementation so far.

The exponential gain in accuracy with the number of qubits is, however, only achieved when addressing one type of error, either bit flips or phase flips. Measuring both simultaneously is much harder. This is related to Heisenberg’s uncertainty principle, famously exemplified by the impossibility of simultaneously determining the position and momentum of a particle. A qubit is usually not encoded in position and momentum, but a similar uncertainty principle applies. Monitoring bit flips via an ancilla disrupts its phase, so the same ancilla cannot be used to measure phase flips at the same time. Following this logic further, one can convince oneself that the only way to correct both types of errors is to arrange qubits in a two-dimensional grid, at a much higher cost in physical qubits per logical qubit. This is the basis of the family of QEC methods called surface codes. By reminding us all just how difficult full-fledged QEC is, this paper does the quantum computing community an important service.
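To make the “one error type at a time” point concrete, here is a toy Monte Carlo sketch of a bit-flip repetition code decoded by majority vote. It is not the stabilizer-based protocol of the Google experiment, and the physical error rate and qubit counts are arbitrary illustrative choices; it only shows how the logical error rate falls roughly exponentially as qubits are added, while phase flips go entirely unchecked.

```python
# Toy Monte Carlo: bit-flip repetition code with majority-vote decoding.
# Simplified sketch, not the stabilizer protocol used in the paper;
# p = 0.05 is an arbitrary illustrative physical error rate.
import random

def logical_error_rate(n_qubits, p, trials=100_000):
    """Estimate how often a majority vote over n_qubits copies fails,
    given each copy flips independently with probability p."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(n_qubits))
        if flips > n_qubits // 2:  # majority flipped -> uncorrectable logical error
            failures += 1
    return failures / trials

p = 0.05
for n in (1, 3, 5, 7, 9):
    print(f"{n} qubits: logical error ~ {logical_error_rate(n, p):.2e}")
# The logical error rate drops roughly exponentially with the number of qubits,
# but only bit flips are protected; phase flips pass through undetected.
```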

This also helps the general public understand that the comparison between qubit technologies cannot be boiled down to a single metric, like coherence time. The types of errors and the strategies to circumvent them can have a huge impact on the overhead for QEC. For example, qubits with fewer errors, like ions in a trap, can lead to dramatic reductions in the size of logical qubits. But qubits whose dominant error is a phase flip and which only rarely incur bit flips can be treated with methods that save resources by monitoring one kind of error more extensively than the other. Spins in silicon naturally have this bias in error rates, but the bias can also be engineered, as in Amazon’s proposal for cat-state qubits. Another potential advantage is the ability to entangle qubits beyond nearest neighbours: instead of sitting at fixed addresses in the processor array, qubits may move around (such as electron spins or ions in a trap) or be coupled at long range (for example through the collective motion of ions in a trap or by using flying qubits).

On the other hand, the difficulty of detecting and correcting two types of errors is a warning for certain qubit technologies. Two-dimensional scalability is a necessity, and connectivity beyond nearest neighbours will most likely be essential. Qubits that are distributed in a chain, such as spins in nanowires or ions in a linear trap, will not be able to scalably address the two main qubit error types. Other qubit technologies have very prominent additional errors, such as qubit loss in the case of photonic qubits or leakage for superconducting qubits.

So, what’s next?

From a technical point of view, Google manages to get their error rates low enough to start gaining an advantage when correcting a single error type. But their analysis shows that the projected errors for a surface code (which would correct both error types) are still too high to guarantee exponential error suppression. Before the qubit quantity increases, there is some work to be done on qubit quality.
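For reference, the paper quantifies this behaviour with an error-suppression factor Λ: roughly, the logical error per round of a distance-d repetition code falls as

```latex
\varepsilon_d \approx \frac{C}{\Lambda^{(d+1)/2}}
```

with C a constant, so exponential suppression requires Λ comfortably above 1, i.e. physical error rates well below the code’s threshold. The concern raised above is that the Λ projected for a full surface code, which must track both error types at once, is not yet comfortably above 1.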

The biggest villain is the measure-and-reset step required to check whether an error has occurred. This step is usually slow, which means that while an ancilla qubit is measured, the data qubits are left exposed to errors. This is a bit of a “the emperor has no clothes” situation – several qubit platforms have a similar issue, some of them with catastrophically bad error rates, but it is swept under the rug known in the jargon as SPAM (state preparation and measurement) errors. While in academic studies it is fine to work around these errors, ultimately errors in measure-and-reset incur costs in error correction that draw from the same budget as all other types of errors. Significant emphasis should be put on these types of errors when comparing qubit platforms.
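As a rough back-of-the-envelope illustration of why slow measure-and-reset matters, the sketch below estimates the dephasing error a data qubit picks up while idling through one measurement-and-reset cycle. The duration and coherence time are illustrative assumptions chosen for the sake of the arithmetic, not measured values for Sycamore or any other platform.

```python
# Back-of-envelope: idling error accumulated by a data qubit while an ancilla
# is measured and reset. Numbers are illustrative assumptions only.
import math

t_measure_reset = 1.0e-6  # assumed measure-and-reset duration (s)
T2 = 20e-6                # assumed data-qubit dephasing time (s)

# Simple exponential dephasing model: phase-flip probability after idling for t
p_idle = 0.5 * (1.0 - math.exp(-t_measure_reset / T2))
print(f"idle error per QEC round ~ {p_idle:.3f}")
# Percent-level idling error per round enters the same error budget as gate
# and readout errors, which is why slow measure-and-reset is so costly.
```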

Dr. Saraiva has worked for over a decade providing theoretical solutions to problems in silicon spin quantum computation, as well as other quantum technologies. He works on MOS silicon quantum dot research and commercially oriented projects at the University of New South Wales (UNSW).

July 17, 2021