By Andre Saraiva, Diraq

The most jaw-dropping applications for quantum computers require trillions or more operations on qubits. But even the best engineering on top of the best error mitigation strategies will only give you a thousand or so operations before all the quantum “goodness” of your computer is gone. The only solution to this conundrum is to sacrifice several qubits to form one large cluster that cooperates to protect the quantum information better. This collective entity is called a logical qubit.

Making and controlling good logical qubits is no easy feat. Poorly controlled qubits will work against each other, instead of protecting each other. Moreover, to identify any potential errors and correct them, it is necessary to be able to repeatedly measure some qubits while keeping others alive.

Quantinuum, the result of a merger between Honeywell Quantum Solutions and Cambridge Quantum, had already been working on error correction for a while. Their quantum processors are now large and accurate enough that they can meaningfully try out some of these techniques. Last year they created single logical qubits capable of tracking errors in real time. They have now pushed further and entangled two of these logical qubits.

To understand the precise value of this progress, one needs to understand the challenges in error correction. In their blog post, Quantinuum describes the main achievements of this work as follows:

1. The first demonstration of entangling gates between two logical qubits done in a fully fault-tolerant manner using real-time error correction

2. The first demonstration of a logical entangling circuit that has higher fidelity than the corresponding physical circuit

Non-experts will agree – there is a lot to unpack here. The key term is “fault tolerance”. Quantum error correction does not get rid of errors completely – it just reduces the probability of an error occurring. Error-correcting schemes, called “codes”, may or may not be able to reduce error rates to values as small as the user wants. A code that can systematically reduce the error rate by sacrificing an increasingly larger number of qubits is said to be fault tolerant.
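A common rule of thumb (not specific to Quantinuum's codes, and with illustrative numbers for the threshold and prefactor) captures what "fault tolerant" buys you: below a threshold error rate, enlarging the code suppresses the logical error rate exponentially; above it, adding qubits only makes things worse. A minimal sketch, assuming the standard scaling form:

```python
# Illustrative sketch: a standard rule of thumb for fault-tolerant codes says
#   p_logical ≈ A * (p / p_th) ** ((d + 1) // 2)
# where p is the physical error rate, p_th the code's threshold, and d the
# code "distance" (larger d means sacrificing more physical qubits).
# The values of A and p_th below are assumptions for illustration only.
def logical_error_rate(p, p_th=0.01, d=3, A=0.1):
    """Rough logical error rate for a distance-d code."""
    return A * (p / p_th) ** ((d + 1) // 2)

# Below threshold (p < p_th), growing the code suppresses errors...
below = [logical_error_rate(0.001, d=d) for d in (3, 5, 7)]
# ...above threshold (p > p_th), growing the code makes errors worse.
above = [logical_error_rate(0.02, d=d) for d in (3, 5, 7)]
```

This is why "fault tolerance" is a property of the whole scheme, not of any single qubit: the same act of sacrificing more qubits either helps or hurts depending on whether the physical hardware is good enough to start with.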

The first accomplishment by Quantinuum was to entangle two logical qubits – two groups of physical qubits – while sticking to the rules that allow the logical qubits to be protected ever more strongly by making these groups bigger.

Importantly, the operations on individual physical qubits were good enough that the concerted dynamics of the whole group performed better than the individual qubits, confirming the key ingredient for error correction – the suppression of errors with redundant encoding of quantum information in a logical qubit.
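The idea that a group can outperform its members has a simple classical analogue: the three-bit repetition code with majority voting. This is far simpler than the quantum codes Quantinuum used (which detect errors via syndrome measurements without directly measuring the data qubits), but it shows the same arithmetic of redundancy. A toy Monte Carlo sketch:

```python
import random

# Toy classical analogue of redundant encoding: store one logical bit in
# three copies, flip each copy independently with probability p, and
# recover the logical value by majority vote. The vote fails only when
# two or more of the three copies flip.
def logical_flip_probability(p, copies=3, trials=100_000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(copies))
        if flips > copies // 2:  # majority vote gives the wrong answer
            failures += 1
    return failures / trials

p = 0.05                             # per-copy error probability
p_logical = logical_flip_probability(p)
# Analytically: 3*p**2*(1-p) + p**3 ≈ 0.0073, well below the raw 0.05.
```

Because two simultaneous flips (probability of order p²) are much rarer than one (order p), the encoded bit fails less often than any single copy – provided p is small enough, which is exactly the condition Quantinuum's physical qubits had to meet.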

Quantinuum even went as far as testing two different error correction strategies, using two different quantum processors – the 20-qubit H1-1 and the 12-qubit H1-2. The codes tested were the five-qubit code and the color code. The five-qubit code – one of the most economical codes in terms of the number of physical qubits needed – was outperformed by the color code. This is because the color code demands fewer operations per cycle of error correction, leading to a better error budget and, ultimately, making it more competitive against noisy qubits.

This is the tip of the iceberg. The H1-series processors consist of ions floating over reconfigurable traps made from electrodes in a CMOS chip, which means they can shuffle qubits around. This trait permits the connectivity between qubits to be rearranged and more advanced codes to be tested. Moreover, real-time qubit measurement and decision making should allow for interesting feed-forward applications in the near future.

This result was based not only on the exceptional quality of the Quantinuum processors, but also on significant progress in the classical computing that supports quantum operations. The tight integration between the quantum processor and a classical processor, together with fast software optimised for the characteristics of the H1 processors, will allow for significant progress in the science of error correction.

What is next?

One of the main challenges faced by error correction lies not on the quantum side of things, but in the classical processing that goes along with it. Quantinuum made excellent progress in this direction, with their reconfigurable traps allowing for real-time measurement of qubits, fast decision making based on tightly integrated classical processing, and active error correction over several cycles. However, when all of these ingredients are brought together, the advantage of the collective logical qubit over individual qubits is hampered. That is because interpreting where errors occurred and deciding how to correct them takes a few precious milliseconds – enough time for the qubits to lose some of their coherence.

Another important challenge moving forward is to keep the malleable connectivity between qubits while increasing the number of qubits sufficiently for large-scale quantum computation. It is unclear to what extent this is merely an engineering problem, or whether there is a fundamental limit to the ability to shuffle qubits around – at which point the processor would need to be broken down into cells containing some number of qubits, each interacting only with neighbouring cells, effectively limiting the range of the qubit connectivity.

Additional information about this research can be found in a blog posting on the Quantinuum website here and a preprint of the full technical paper posted on arXiv here.

Dr. Saraiva has worked for over a decade providing theoretical solutions to problems in silicon spin quantum computation, as well as other quantum technologies. He currently is the Head of Solid-State Theory for Diraq, an Australian start-up developing a scalable quantum processor based on CMOS quantum dots.

August 4, 2022