By Andre Saraiva, UNSW

Following in the historical footsteps of classical computers, the quantum industry is now approaching an era akin to that of the first vacuum-tube processors of the early fifties. Bulky and hard to integrate, the most successful qubit systems demonstrated so far cannot be scaled much beyond tens to hundreds of qubits. Significant progress must be made before true scalability is achieved, which will probably require the invention of a quantum version of the integrated circuit.

In the meantime, the brute-force solution for building bigger quantum processors is to wire up two or more distinct chips (or other processing units) so that quantum processing can be distributed, allowing for a lower density of wires and more efficient cooling of the devices. This solution could push the number of qubits into the few thousands, which would already be a remarkable step forward. In recent weeks, three major demonstrations of remote quantum processing units being linked together came out of labs at Rigetti, the Max Planck Institute and the University of Chicago.

Similar to the old vacuum tubes, current qubits require individualised control. This means that for every additional qubit, a new wire, a new laser beam or a new microwave frequency needs to be added to the setup. In the quantum world, this problem is especially acute because each qubit is typically a very small particle or device, so the control hardware quickly outgrows the qubits themselves. Moreover, more control channels mean more pathways for heat to get into the quantum processor. In a recent paper, researchers from Intel and QuTech in the Netherlands discussed this “tyranny of the numbers” in the quantum context, painting a realistic picture of the challenges ahead for scaling up current qubit technologies. Spreading the qubits across different chips provides an immediate workaround, but how many processing units can one realistically bring together?
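To get a feel for why the wiring overhead matters, consider a rough back-of-the-envelope estimate. The numbers below (cooling power, heat leak per line, one line per qubit) are illustrative assumptions rather than specifications of any particular system, but they show how quickly a single refrigerator's budget is exhausted.

```python
# Rough sketch of the "tyranny of the numbers": if every qubit needs its own
# control line, and every line leaks some heat into the coldest stage of the
# refrigerator, the cooling budget caps the qubit count. All values below are
# illustrative assumptions, not measured figures for any specific platform.

COOLING_POWER_W = 20e-6   # assumed cooling power at the coldest stage (~20 microwatts)
HEAT_PER_LINE_W = 10e-9   # assumed heat leak per control line (~10 nanowatts)
LINES_PER_QUBIT = 1       # assumed one dedicated control line per qubit

max_qubits = COOLING_POWER_W / (HEAT_PER_LINE_W * LINES_PER_QUBIT)
print(f"Rough ceiling: about {int(max_qubits)} qubits per refrigerator")
```

Under these assumptions, a single refrigerator tops out at a couple of thousand control lines; spreading the qubits across several refrigerators multiplies that budget.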

Some people in the industry do not consider this question to be rhetorical at all. Indeed, once qubits are freed from having to coexist on the same chip, large-scale distributed quantum computing could possibly be realised. The real problem is not the size of the multi-chip system, but how to cool it down and control it. For this purpose, links can be made as long as tens of meters, allowing different chips to sit in different cryogenic systems or even in different rooms.

Such a quantum network comes with special requirements, because it must support quantum entanglement between the qubits in these remote processors. This can be achieved only if quantum particles with good coherence properties can be exchanged between the qubits, which in turn requires the quantum links to be sufficiently noiseless.
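As a toy illustration of why link noise matters, the short simulation below prepares an ideal Bell pair shared between two modules and applies a depolarizing channel to the "remote" half, standing in for an imperfect link. The noise model and numbers are illustrative assumptions, not a description of any of the experiments mentioned above.

```python
import numpy as np

def bell_state():
    """Density matrix of the ideal shared pair |Phi+> = (|00> + |11>)/sqrt(2)."""
    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)
    return np.outer(psi, psi)

def depolarize_second_qubit(rho, p):
    """Depolarizing noise with error probability p on the qubit sent over the link."""
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])
    out = (1 - p) * rho.astype(complex)
    for P in (X, Y, Z):
        K = np.kron(I, P)                      # noise acts only on the remote half
        out += (p / 3) * K @ rho @ K.conj().T
    return out

rho_ideal = bell_state()
for p in (0.0, 0.01, 0.05, 0.10):
    rho_noisy = depolarize_second_qubit(rho_ideal, p)
    fidelity = np.real(np.trace(rho_ideal @ rho_noisy))   # overlap with the ideal pair
    print(f"link error p = {p:.2f} -> entanglement fidelity = {fidelity:.3f}")
```

In this simple model the fidelity of the shared pair falls as roughly 1 - p, so even a few percent of link error eats directly into the quality of the distributed entanglement.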

Distributed quantum processing also has some side benefits. For instance, completely separating the setups is the most efficient way to guarantee that the errors in one module are uncorrelated with the errors in another module, which is a requirement for efficient quantum error correction algorithms. Moreover, it would allow radically different qubit technologies to be connected. For instance, spin qubits operating at high magnetic fields in one refrigerator could be connected to superconducting qubits (which do not tolerate magnetic fields well) in another refrigerator, and these two could in turn be linked to ion-based qubits, which require optical setups that cannot fit in the small bore of a dilution refrigerator.

So, what is next?

In the near term, significant work needs to be done for this technology to be viable in full-scale devices. The error rates in these early demonstrations are promising, yet not small enough to pass the test of fault tolerance. Quantum error correction protocols require very low error rates (typically no more than about one error in every thousand operations). So we will be seeing a lot of progress in this regime of two coupled chips before we start discussing truly distributed quantum computing. Meanwhile, theory may be able to provide input on the ideal size of each on-chip module and the connectivity needed between chips to achieve large-scale computing that is resistant to errors. Most of these calculations so far have assumed a monolithically integrated two-dimensional array of qubits. The new rules of the game will almost certainly admit a strategy for connecting processors that optimises the efficacy of quantum error correction protocols.
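To see why that roughly one-in-a-thousand figure matters, the sketch below uses the standard rule of thumb that a distance-d surface code suppresses the logical error rate roughly as (p / p_th)^((d+1)/2). The threshold p_th and the prefactor are assumed, order-of-magnitude values, not numbers taken from the demonstrations discussed here.

```python
# Rule-of-thumb scaling of the logical error rate for a distance-d surface code:
#   p_logical ~ A * (p / p_th) ** ((d + 1) // 2)
# Both the threshold p_th (~1%) and the prefactor A are assumed round numbers,
# used only to illustrate the behaviour around the fault-tolerance threshold.

P_TH = 1e-2   # assumed threshold error rate
A = 0.1       # assumed prefactor

def logical_error_rate(p, d):
    return A * (p / P_TH) ** ((d + 1) // 2)

for p in (1e-2, 5e-3, 1e-3):        # physical (or link) error rate
    for d in (5, 11, 21):           # code distance
        print(f"p = {p:.0e}, d = {d:2d} -> p_logical ~ {logical_error_rate(p, d):.1e}")
```

Below the threshold, adding distance (that is, spending more physical qubits per logical qubit) suppresses errors exponentially; at or above it, extra qubits buy essentially nothing, which is why the error rates of inter-chip links need to come down before distributed architectures can be made fault tolerant.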

Dr. Saraiva has worked for over a decade providing theoretical solutions to problems in silicon spin quantum computation, as well as other quantum technologies. He works on MOS Silicon Quantum Dot research and commercially-oriented projects at the University of New South Wales (UNSW).

March 10, 2021