By Dr. Andre Saraiva, UNSW
Late in June this year, quantum computing professor Christopher Monroe wrote to Nature describing how COVID-19 restricted access to his lab at the University of Maryland. Perhaps unexpectedly, he reported that the remotely controlled experiments his team had set up were delivering ‘the best data (they) have ever seen’. This means a lot coming from a group that was already demonstrating extremely high-fidelity operations before the noise levels came down. How much better could their ion-trap-based qubits get? Now, the anticipation is over. In a preprint manuscript led by Monroe and Marko Cetina from Duke University, they demonstrated the holy grail of large-scale quantum computing – fault tolerance. In their own words, this means “an encoded logical qubit whose logical fidelity exceeds the fidelity of the entangling operations used to create it”.
While the result has not been peer reviewed yet, this work is a natural consequence of the years of developments incorporated into their architecture. Their quantum computer starts by removing an electron from single ytterbium atoms, turning them into ions that can be trapped above a microfabricated chip inside a vacuum chamber (which sits at room temperature). They laser-cool just the ions themselves – not the chamber – obtaining extremely high-quality spin qubits with coherence times of a few seconds (limited only by fluctuations in the magnetic field of the external magnet).
The recent progress in high-quality control tools developed by the group is what made these results possible. Some years ago, they demonstrated individual optical addressability using a multichannel acousto-optic modulator (their current quantum computer has 32 channels, so they are ready for even larger multiqubit experiments in the near future). All-to-all coupling is achieved by converting the spin qubit states into motional states for a brief period (or even only virtually), such that two spin qubits set to couple with the collective oscillations will effectively interact with each other. This is called the Mølmer-Sørensen interaction.
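To make the Mølmer-Sørensen interaction concrete, here is a minimal numerical sketch (in Python with NumPy; the angle choice and state labels are illustrative, not taken from the paper). The two-qubit version of the gate acts as exp(−iθ/2 · X⊗X), and at θ = π/2 it takes two uncoupled qubits in |00⟩ to a maximally entangled state:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
XX = np.kron(X, X)          # the X⊗X coupling the motion mediates
I4 = np.eye(4, dtype=complex)

def molmer_sorensen(theta):
    # Since (X⊗X)^2 = I, the exponential has a simple closed form:
    # exp(-i θ/2 X⊗X) = cos(θ/2) I - i sin(θ/2) X⊗X
    return np.cos(theta / 2) * I4 - 1j * np.sin(theta / 2) * XX

U = molmer_sorensen(np.pi / 2)          # fully entangling angle
out = U @ np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
print(np.round(out, 3))  # (|00> - i|11>)/sqrt(2), a Bell-type state
```

Because (X⊗X)² = I, no matrix-exponential routine is needed; the same trick is how the gate is usually analysed on paper.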
Logical qubits are implemented by encoding 0s and 1s in the collective state of several physical qubits. Fault tolerance is then reached when the logical qubit outperforms each separate physical qubit – which is not easy to achieve, because the more qubits one operates, the higher the probability of introducing an error. But eventually, in the limit of many qubits and many stabilizing operations, fault tolerance should be achievable. They use only 9 qubits in their fault-tolerant logical qubit implementation (what is called a [[9,1,3]] Bacon-Shor code), which is an impressively small set. Another 4 ions are used as ancillas to measure collective properties of subsets of the logical qubit (called stabilizers), whose outcomes reveal hints of errors without disturbing the encoded information.
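As a rough illustration of how those 4 stabilizer measurements flag errors, here is a sketch assuming the standard 3×3-grid layout of the [[9,1,3]] Bacon-Shor code (the qubit numbering is my own, not the paper's): the two X-type stabilizers cover pairs of adjacent columns, the two Z-type stabilizers cover pairs of adjacent rows, and an error is flagged by the stabilizers it anticommutes with:

```python
# Qubits 0..8 laid out row by row on a 3x3 grid.
# Stabilizers as 9-character Pauli strings (I = identity):
STABS = [
    "XXIXXIXXI",  # X-type, columns 0-1
    "IXXIXXIXX",  # X-type, columns 1-2
    "ZZZZZZIII",  # Z-type, rows 0-1
    "IIIZZZZZZ",  # Z-type, rows 1-2
]

def anticommute_count(p, q):
    # Two Pauli strings anticommute at a site if both are non-identity
    # and different (X vs Z, etc.)
    return sum(1 for a, b in zip(p, q)
               if a != 'I' and b != 'I' and a != b)

def syndrome(error, stabilizers):
    # Bit i is 1 if the error anticommutes with stabilizer i
    return tuple(anticommute_count(error, s) % 2 for s in stabilizers)

# A Z error on the centre qubit (qubit 4) trips both X-type checks:
print(syndrome("IIIIZIIII", STABS))  # (1, 1, 0, 0)
```

Note that a Z error anywhere in the same column produces the same syndrome – a hallmark of this subsystem code, where such errors differ only by "gauge" operators that do not affect the logical information.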
They also demonstrate a non-fault-tolerant implementation of a “magic state”. This is a key resource for universally programmable quantum computers: magic states, which can be purified through “distillation”, allow the machine to perform non-Clifford gates and thus reach the parts of the Hilbert space that make a quantum computer impossible to simulate efficiently on classical computers (beyond the so-called Clifford group).
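To see why such a state is special, a quick sketch: the single-qubit magic state |T⟩ = (|0⟩ + e^{iπ/4}|1⟩)/√2 has a Bloch vector pointing *between* the axes, whereas every single-qubit stabilizer state – everything Clifford operations alone can prepare from |0⟩ – sits exactly on one of the six axis points (±1, 0, 0), (0, ±1, 0), (0, 0, ±1):

```python
import numpy as np

# The single-qubit magic state |T> = (|0> + e^{i pi/4}|1>)/sqrt(2)
ket_T = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bloch vector components <X>, <Y>, <Z>
bloch = [np.real(ket_T.conj() @ P @ ket_T) for P in (X, Y, Z)]
print(np.round(bloch, 3))  # [0.707, 0.707, 0.0] - off every axis
```

Since the state sits off the stabilizer "octahedron", it cannot be produced by Clifford operations alone – hence the need to prepare it directly and clean it up by distillation.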
So, what’s next?
Well, many things. The most immediate one is backing up the claims with peer review – which hopefully will not be a problem, since this manuscript is already making a lot of noise in the community. Secondly, it is now time to ask how far this quantum computer can go. After all, with only 15 qubits they have demonstrated what Google’s 53-qubit system hasn’t. And they still have a few channels left to use.
Now, this does not mean we are done. The error correction code they demonstrated does not have a threshold like the surface code does. This means that errors build up as more logical qubits are added, and eventually this form of encoding will also reach a ceiling – no matter how accurate each logical qubit may be. The surface code, on the other hand, has a provable threshold: a tipping point below which error correction removes errors faster than they accumulate, so that a long-term, large-scale computation can be sustained.
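The threshold behaviour can be caricatured with the common heuristic p_logical ≈ (p/p_th)^((d+1)/2) for a distance-d surface code. The numbers below are purely illustrative (a ~1% threshold is a commonly quoted ballpark, and the formula breaks down far above threshold, where "probabilities" exceed 1 – the point is only the direction of the trend):

```python
P_TH = 0.01  # assumed threshold, ~1% is a commonly quoted figure

def logical_error(p, d):
    # Heuristic scaling of logical error rate with code distance d
    return (p / P_TH) ** ((d + 1) / 2)

for p in (0.001, 0.02):  # one point below threshold, one above
    print(p, [logical_error(p, d) for d in (3, 5, 7)])
# Below threshold, the logical error shrinks exponentially as d grows;
# above it, making the code bigger only makes things worse.
```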
More can be learned by following the Twitter thread being written by one of the co-authors, quantum information scientist Kenneth Brown.
Dr. Saraiva has worked for over a decade providing theoretical solutions to problems in silicon spin quantum computation, as well as other quantum technologies. He works on MOS Silicon Quantum Dot research and commercially-oriented projects at the University of New South Wales (UNSW).
September 27, 2020