As anyone who has looked at the Qubit Technology scorecard on this web site has seen, there is a very wide divergence of technologies being pursued to implement quantum computing. The primary driver for this is the different approaches to combating decoherence. Some will argue that the best approach to building a qubit is a technology like superconducting, which can leverage the decades of experience semiconductor manufacturers have invested in improving yields, process control, and equipment to build chips with billions of transistors at nanometer dimensions. In fact, all current superconducting qubit devices are built today in fabs that were originally constructed for semiconductors. A conceptual breakthrough came about 20 years ago with the development of quantum error correction codes that can correct any type of quantum error, including both bit-flip and phase-flip errors. Using these codes, one can take a technology with relatively short coherence times and still achieve a desired level of reliability by concatenating error correction circuits, i.e., repeatedly applying another layer of error correction on top of the error-corrected qubits of the previous layer. As long as gate fidelities stay above a certain threshold, one can reach any desired level of reliability provided one uses enough error correcting qubits.
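The core idea can be illustrated with a toy classical simulation. This is a minimal sketch, not a real quantum code: it models only the bit-flip half of the problem using a 3-bit repetition code with majority-vote decoding, and the error rate and trial count are arbitrary choices for illustration.

```python
import random

def logical_error_rate(p, trials=100_000, seed=42):
    """Estimate the logical error rate of a 3-bit repetition code
    when each bit independently flips with probability p."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        # count how many of the 3 redundant bits flipped this round
        flips = sum(rng.random() < p for _ in range(3))
        # majority vote decodes correctly with 0 or 1 flips, fails with 2+
        if flips >= 2:
            failures += 1
    return failures / trials

p = 0.01                      # physical error rate, assumed below threshold
p_logical = logical_error_rate(p)  # roughly 3*p**2, an improvement over p
print(p, p_logical)
```

Because one layer takes the error rate from p to roughly 3p², concatenating a second layer takes it to roughly 3(3p²)², and so on: below threshold, each layer of redundancy buys a multiplicative improvement in reliability, which is why "enough error correcting qubits" can reach any target.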

One very important parameter that is not discussed enough is the Physical-to-Logical qubit ratio. Although people talk a lot about the number of qubits in various designs, they almost always mean logical qubits, and the required Physical-to-Logical ratio may vary anywhere up to 10,000:1, depending upon the underlying quality of the qubits and the desired reliability level. Some studies of different qubit implementations estimate that various universal machines available in the future will require ratios of roughly between 1,000:1 and 5,000:1. For anyone used to error correction in the classical world, these numbers are astounding: ratios for classical error correction are at worst 2:1 for a RAID Level 1 (mirrored) implementation and typically much lower (perhaps one parity bit for every 8 data bits, a 1.125:1 Physical-to-Logical ratio).
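As a quick check on the classical figures quoted above, the ratio is just stored bits divided by useful bits; the function name here is an illustrative choice, not an established term of art.

```python
def physical_to_logical(physical_bits, logical_bits):
    """Redundancy overhead expressed as an N:1 Physical-to-Logical ratio."""
    return physical_bits / logical_bits

raid1 = physical_to_logical(2, 1)    # mirrored disk: every bit stored twice -> 2.0
parity = physical_to_logical(9, 8)   # one parity bit per 8 data bits -> 1.125
print(raid1, parity)
```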

This is why other quantum scientists are pursuing potentially more robust technologies, such as topological qubits, that may have much better gate fidelities and coherence times. Their argument is that although superconducting qubits may be much easier to manufacture right now, these other technologies could have orders of magnitude longer coherence times and require far less error correction, allowing them eventually to overtake superconducting technology. For example, to implement a quantum machine with 100 logical qubits, a more robust technology might only need a 10:1 Physical-to-Logical ratio, or 1,000 physical qubits, while a machine built with a less robust technology might need a 1,000:1 ratio, or 100,000 physical qubits. This advantage in required qubit count could easily compensate for any manufacturing disadvantage the more robust technologies may have.
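The worked example above is a one-line calculation: the physical qubit count scales linearly with the Physical-to-Logical ratio, so a 100x better ratio means a 100x smaller machine for the same logical capacity. The ratios are the illustrative figures from the text, not measured values for any particular technology.

```python
def physical_qubits(logical_qubits, ratio):
    """Physical qubits needed for a machine with the given logical capacity."""
    return logical_qubits * ratio

robust = physical_qubits(100, 10)          # more robust technology: 1,000
less_robust = physical_qubits(100, 1000)   # less robust technology: 100,000
print(robust, less_robust)
```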

However, one of the more recent developments is a variety of techniques that can provide some error relief much more efficiently than the brute force concatenation method described above. Examples include VQE (Variational Quantum Eigensolver), QAOA (Quantum Approximate Optimization Algorithm), QVECTOR, and others in development. These algorithms may take advantage of correction that occurs naturally within the algorithm itself, or leverage the observation that quantum errors may not occur randomly. In the latter case, by understanding which types of errors are more likely to occur, one can design an algorithm that provides stronger correction for those errors. Use of these techniques could potentially lower the required Physical-to-Logical qubit ratio and make the less robust technologies more competitive.
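The biased-noise idea can be sketched with another toy classical simulation. This is an assumption-laden illustration, not any of the named algorithms: it models a channel where X (bit-flip) errors dominate Z (phase-flip) errors, protected by a 3-bit repetition code that spends all its redundancy correcting X. The point is that matching the correction to the dominant error type beats spreading the same total error budget evenly.

```python
import random

def tailored_failure_rate(p_x, p_z, trials=100_000, seed=7):
    """Failure rate of an X-only repetition code under biased noise
    (toy classical model; p_x = bit-flip rate, p_z = phase-flip rate)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        x_flips = sum(rng.random() < p_x for _ in range(3))
        z_error = any(rng.random() < p_z for _ in range(3))
        # the code corrects a single X flip; 2+ X flips or any Z error fails
        if x_flips >= 2 or z_error:
            failures += 1
    return failures / trials

biased = tailored_failure_rate(p_x=0.01, p_z=0.0001)    # noise matches the code
unbiased = tailored_failure_rate(p_x=0.005, p_z=0.005)  # same total error budget
print(biased, unbiased)
```

Under the biased channel almost every error is of the type the code corrects, so the failure rate is far lower than under the unbiased channel, even though the total physical error rate is comparable, which is the intuition behind tailoring correction to the likelier error.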

So right now it is hard to predict which approach will win out. However, one thing that provides optimism is that having multiple potential approaches improves the odds that quantum computing becomes a commercial reality. Although many of the approaches being tried today will probably fail, the expectation is that at least one of them will work out well and make quantum computing a success.

Emmanuel Xagorarakis at 5:26 am
Please consider the possibility that error correction in managing QC could be no issue at all, by unifying the classical (reliable) computer with the quantum one. This would apparently be achieved by applying the Binary Code in QC. But this has to be a different binary code, for it must be able to mathematically and functionally include superposition and entanglement.

I invite you to consider my academic publication on the Binary Code for the Quantum Computer. The Code arises from elementary and wholly new number theory, proving by pure math the Tesla 3, 6, 9 sequence. It is a paper of less than 8 pages, and it is published in parallel on two academic websites:

1) in the university homepage of distinguished Professor Doron Zeilberger

2) in the SORITES Scholarly Journal of Analytic Philosophy.

So here are the links:

1) http://sites.math.rutgers.edu/~zeilberg/

in the Links option,

http://sites.math.rutgers.edu/~zeilberg/khaver.html

at the end of the first part of the list by the title, “Here is another masterpiece by Emmanuel Xagorarakis”

http://sites.math.rutgers.edu/~zeilberg/akherim/Xagorarakis19.doc

2) http://sorites.org/

http://sorites.org/room/index.htm

by the title “The Singularity Algorithm of Human Vs Computer and the Binary Code for the Quantum Computer by Manolis Xagorarakis”

It is elementary logic offering genuinely new concepts in number qualities and relations. Please take a look at it!