By Andre Saraiva, UNSW

It has been almost a month since Amazon released a detailed analysis of a quantum computer architecture of its own, and people are still digesting all the technical details and dense mathematics. In their own words, the analyses in the document “can be broadly classified into three categories: 1) a hardware proposal; 2) a physical-layer analysis of gate and measurements errors; and 3) a logical-level analysis of memory and computation failure rates”. In a previous article we discussed the proposed physical qubits and what is known about them, as well as the stellar author list and the broader significance of Amazon initiating an in-house quantum computer fabrication effort.

We now look into the aspect of the document that is explored most extensively – the full-scale quantum error correction scheme. For non-experts, it may come as a surprise how little is known about quantum error correction and how it would be performed on a realistic qubit platform. Attention has mostly been devoted to mathematical demonstrations of important yet arcane concepts such as thresholds, stabilisation and universality. Most studies on the topic are somewhat theoretical, with oversimplified models of noise and crosstalk and, worse yet, little attention to the question of “when will I have a useful quantum computer?”

Amazon’s calculations, on the other hand, try to bridge the gap between theory and implementation by starting from the nuts and bolts of the qubit model: where things could go wrong, and how those faults affect the qubits. This makes their error model far more realistic than that of your average academic study. Nevertheless, a theoretical model is only as good as its inputs. Since Amazon’s qubit plans differ somewhat from most current implementations (we will compare them in a follow-up article), there is a long list of unknowns, and many potential sources of error will need to be considered as things evolve in the lab.

Still, Amazon’s paper does a great job of taking a first jab at the problem of quantum error correction. Firstly, they devise specific strategies that take advantage of the bias between different types of single-qubit errors – bit-flip error rates are exponentially suppressed in cat qubits, becoming much rarer than phase-flip errors. This is a key head start for their qubits, and a reflection of the importance of communication between the theorists devising the full-scale quantum computer architecture and the experimentalists implementing the qubits.
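To get a feel for the numbers, here is a toy Python sketch of that bias. The scaling laws are the standard ones quoted for dissipative cat qubits (bit-flip rates falling off exponentially with the mean photon number, phase-flip rates growing roughly linearly with it), but the base rate below is an invented placeholder, not a figure from the paper:

```python
import numpy as np

# Toy model of cat-qubit noise bias (illustrative scaling only):
#   bit-flip rate   ~ kappa * exp(-2 * nbar)   (exponentially suppressed)
#   phase-flip rate ~ kappa * nbar             (grows with photon number)
kappa = 1e3  # assumed base error rate in Hz -- a made-up placeholder

for nbar in (2, 4, 8):  # mean photon number of the cat state
    bit_flip = kappa * np.exp(-2 * nbar)
    phase_flip = kappa * nbar
    print(f"nbar = {nbar}: phase/bit bias ~ {phase_flip / bit_flip:.1e}")
```

At 8 quanta the bias reaches many orders of magnitude, which is exactly what lets an error correction code concentrate on a single error type.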

Magic states – that is what Amazon’s quantum computer is all about

Because of these biased error rates, they were able to develop a very efficient method for creating magic states. This is potentially the key advantage of the qubit design they are proposing. What are magic states, you ask? Well, get ready, because here is where things get complicated.

The type of qubit architecture that Amazon is adopting is based on a 2D array of qubits with fixed physical positions that interact only with their immediate neighbours. For these architectures, only a handful of error correction codes have been developed. They analyse two examples – the repetition code and a variant of the surface code. A code is basically a recipe for repetitive measurements and operations that detect whether a certain set of qubits is committing errors and steer them back in the right general direction (the definition of “direction” is scientifically complicated, so we will leave it at that). Think of herding sheep – each sheep might stray from the path a bit, but collectively they move in the right direction. In this analogy, the physical qubits are the individual sheep, and a logical qubit is the herd.
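As a caricature of that herding, here is a classical repetition-code sketch in Python. A quantum repetition code does the analogous job for a single error type (the dominant phase flips, in Amazon’s case) using stabiliser measurements instead of directly reading the bits, but the majority-vote logic is the same idea:

```python
import numpy as np

rng = np.random.default_rng(7)

def repetition_round(logical_bit, n_phys=5, p_flip=0.05):
    """Copy the logical bit onto n_phys 'physical' bits, flip each
    independently with probability p_flip, then decode by majority vote."""
    bits = np.full(n_phys, logical_bit)
    bits ^= (rng.random(n_phys) < p_flip).astype(int)
    return int(bits.sum() > n_phys // 2)

trials = 100_000
failures = sum(repetition_round(0) != 0 for _ in range(trials))
print(f"physical flip rate: 0.05, logical failure rate: {failures / trials:.5f}")
```

With five physical bits the logical failure rate drops to roughly one in a thousand – the herd strays far less than any single sheep, provided each sheep is reasonably well behaved to begin with.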

The problem is that these stabilisation strategies make the logical qubits move only along certain specific paths, missing out on most of the possible directions they could take in the very large space of qubit configurations (Hilbert space, for the jargon fans). The paths covered by your groups of qubits are so sparse that they provide no quantum advantage – motion along them can be simulated efficiently on a classical computer. In the sheep analogy, it is as if you could only steer the herd along main roads and never off the roads into the pastures.

Magic states are a trick to let the logical qubits explore the full potential of universal quantum computing. Basically, every so often, qubits prepared in special states are brought into the lattice of logical qubits (details of how to do that are traditionally muddy). These states, when injected into the array, shift the logical qubits’ paths in non-trivial directions, allowing them to explore the Hilbert space in its full magnificence. The problem is obvious – how do you guarantee that the magic states are not faulty themselves?
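For the curious, here is a minimal numpy sketch of the textbook version of the trick – injecting a so-called T magic state by “gate teleportation”. This is the generic scheme, not Amazon’s Toffoli-state variant, and the magic state is assumed to be perfect:

```python
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])  # the non-Clifford gate we want
S = np.diag([1, 1j])                      # Clifford correction gate
X = np.array([[0, 1], [1, 0]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])  # control = first qubit

# Magic state |A> = T|+> = (|0> + e^{i pi/4}|1>) / sqrt(2)
magic = T @ (np.array([1, 1]) / np.sqrt(2))

def inject_T(psi, outcome):
    """Apply T to `psi` by consuming one magic state. `outcome` is the
    Z-measurement result on the data qubit (random 50/50 on hardware)."""
    joint = CNOT @ np.kron(magic, psi)   # entangle ancilla with data
    anc = joint[[outcome, 2 + outcome]]  # project data qubit onto |outcome>
    anc = anc / np.linalg.norm(anc)
    if outcome == 1:                     # outcome 1 needs the correction S.X
        anc = S @ (X @ anc)
    return anc

psi = np.array([0.6, 0.8])               # an arbitrary data state
for m in (0, 1):
    out = inject_T(psi, m)
    phase = out[0] / (T @ psi)[0]        # remove the irrelevant global phase
    assert np.allclose(out, phase * (T @ psi))
print("Both measurement outcomes reproduce T|psi>.")
```

The point of the trick: everything consumed at run time is an “easy” operation (a CNOT, a measurement, a Clifford correction), while the hard non-Clifford part has been pushed offline into preparing the magic state – which is why its fidelity matters so much.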

Normally, that would mean the magic state factory would itself have to be an error-corrected quantum processor (perhaps a little sector of your full array of qubits). That means a huge overhead just to create these magic states.

Amazon’s approach is to exploit the types of errors in their qubits, and the possible interaction strategies between them, to create what they call “bottom-up Toffoli” states. These states can be prepared fault-tolerantly with high fidelity at the physical qubit level, without the need to encode every single qubit of the factory into a logical qubit.

Too confusing? Well, the bottom line is that the harmonisation of their qubit hardware with their error correction code will allow them to perform useful quantum computations with a few thousand qubits. And the word “useful” is not to be taken lightly.

Prospects for Quantum Supremacy and Quantum Advantage

Amazon’s preprint manuscript tries to set realistic expectations for their quantum computer. A major positive is their clear distinction between quantum supremacy and quantum advantage. They claim to be able to create programmable, quantum-error-corrected qubit states that outperform any classical computer at around 1,000 physical qubit unit cells (what they call the ATS, or Asymmetrically Threaded SQUID), and to outperform classical machines on quantum chemistry calculations at around 32,000 qubits.

Now, the first claim – essentially that they would achieve quantum supremacy with 1,000 qubits – might come as a surprise. After all, Google claimed supremacy with just 53 superconducting qubits, and a similar number of single-photon modes was used in the recent demonstration of photonic quantum supremacy by the University of Science and Technology of China (USTC). So, what is the difference?

Amazon’s claim is based on a fully programmable, quantum-error-corrected approach. Google’s supremacy experiment did not use error correction, and relied on driving noisy qubits into random configurations whose statistics are hard to compute with classical computers. The far less noisy realisation from USTC, using Gaussian Boson Sampling, relied on a fixed photonic circuit, meaning the computer is not programmable and only ever samples from the same distribution. The authors of Amazon’s paper are talking about a much more realistic, potentially useful type of supremacy. Maybe we should have a separate word for it?

Now, regarding quantum advantage, they chose a quantum chemistry calculation – finding properties of the Hubbard Hamiltonian – to compare the potential of their designed quantum computer against the state-of-the-art classical computers available. Notice that this is an application that could potentially also be attempted without full error correction, on what are called Noisy Intermediate-Scale Quantum (NISQ) computers. That is, again, not what the authors are talking about. They are discussing a fully error-corrected scheme, which is why the expectation is that 32,000 qubits will be needed before questions about drugs, protein folding and advanced materials start to be answered.
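To make the target concrete, here is a sketch that solves the smallest interesting instance – a two-site Hubbard Hamiltonian – by brute-force exact diagonalisation in numpy. The values of t and U are arbitrary; the point is that this matrix grows as 4^(number of sites), which is precisely the wall an error-corrected quantum computer is meant to sidestep:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilation on one fermionic mode

def lower(mode, n_modes):
    """Jordan-Wigner annihilation operator for one of n_modes fermion modes."""
    ops = [Z] * mode + [a] + [I2] * (n_modes - mode - 1)
    return reduce(np.kron, ops)

# Two-site Hubbard model: modes 0,1 = site 1 (up, down), modes 2,3 = site 2
t, U = 1.0, 4.0
c = [lower(m, 4) for m in range(4)]
n = [ci.conj().T @ ci for ci in c]                   # number operators

hop = sum(c[s].conj().T @ c[s + 2] for s in (0, 1))  # site 1 <-> site 2 hopping
H = -t * (hop + hop.conj().T) + U * (n[0] @ n[1] + n[2] @ n[3])

# A 16 x 16 matrix here, but 4^sites in general -- hopeless classically at scale
print("lowest eigenvalue across all fillings:", np.linalg.eigvalsh(H).min())
```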

Do I really look like a guy with a plan?

Perhaps the most striking criticism of Amazon’s plan is that it only has a beginning and an end, but lacks the middle part. The ideas for the individual qubits are sound, and the way they enable efficient strategies for quantum error correction is really powerful. But between these two points there is a lot of ground to cover and, by betting on an unproven technology, Amazon is taking on all the risk itself.

The first challenge will be to experimentally demonstrate the theoretical protection that cat states should enjoy. The paper assumes resonators operated as highly excited quantum harmonic oscillators (approximately 8 quanta). In practice, experimental investigations have failed to show error protection for cat states above 3 or 4 quanta (see here for the same technology as Amazon’s, and our article here for an alternative method).

The second challenge will be to realise the coupling between the superconducting resonators and the nanomechanical resonators that is meant to boost the lifetime of the qubits. Firstly, there will be fabrication challenges in reconciling the chemical processes needed for the acoustic resonators with the ultraclean chemistry of the superconducting films. Secondly, once these two components are brought together, their individual performances will be affected, which might put a hole in the resource estimates Amazon has performed so far.

Will it be possible to fit 32,000 qubits on a single chip? Back-of-the-envelope estimates based solely on the size of individual qubits can be encouraging, but as soon as one accounts for multiplexers, heating effects and so on, the numbers start to look daunting.
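For a sense of scale, here is that back-of-the-envelope arithmetic in Python; the unit-cell footprint is an invented placeholder, not a number from the paper:

```python
# All numbers below are illustrative assumptions, not figures from Amazon.
n_qubits = 32_000
cell_side_mm = 0.5                       # assumed footprint of one unit cell
area_mm2 = n_qubits * cell_side_mm ** 2  # qubits alone, no routing overhead
side_mm = area_mm2 ** 0.5
print(f"~{area_mm2:,.0f} mm^2, i.e. roughly a {side_mm:.0f} mm x {side_mm:.0f} mm"
      " chip before any wiring, multiplexing or thermal overhead")
```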

Now, the real challenge is still on the engineering side. The sheer number of wires needed to individually control and read out each qubit makes it impossible to do so from outside the fridge. Every quantum engineer out there designing printed circuit boards and wiring quantum devices was thoroughly impressed by the Herculean wiring effort in Google’s Sycamore, at just 53 qubits. To go beyond that, significant developments in cryogenic control will need to be in place – see, for instance, Intel’s Horse Ridge cryo-CMOS controller. But then again, this difficulty is faced by pretty much everyone in the field.
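The wiring arithmetic is equally blunt; the lines-per-qubit figure below is an illustrative assumption:

```python
# Illustrative only: lines_per_qubit is an assumption, not from the paper.
n_qubits = 32_000
lines_per_qubit = 2  # e.g. one control + one readout line, no multiplexing
print(f"{n_qubits * lines_per_qubit:,} coaxial lines into the fridge "
      "without multiplexing or cryogenic control electronics")
```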

In part 3 of this series, later this month, we will describe how Amazon’s qubit technology compares with other qubit technologies and discuss its advantages and disadvantages versus the other approaches. And if you missed part 1 of this series, you can view it by clicking here.


Dr. Saraiva has worked for over a decade providing theoretical solutions to problems in silicon spin quantum computation, as well as other quantum technologies. He works on MOS silicon quantum dot research and commercially oriented projects at the University of New South Wales (UNSW).

January 2, 2021