A recent article by Mikhail Dyakonov published in IEEE Spectrum titled “The Case Against Quantum Computing” caught our eye, and we thought it would be worth a comment.  In the article, the author argues that any strategy that relies on manipulating an unimaginably huge number of variables with high precision is doubtful, and he raises serious concerns about the future of quantum computing.

We don’t agree with the assessment and would like to point out a couple of major reasons why we do expect quantum computing to become viable in the coming years.

If At First You Don’t Succeed, Try, Try Again

One interesting fact about the invention of the light bulb in 1879 was that Thomas Edison tried 6,000 different materials until he found one that would last long enough to make a commercially viable incandescent light bulb.  (That initial material was carbonized bamboo which was replaced later by tungsten when he figured out how to manufacture a tungsten filament.)

As you can see on our Qubit Technology page, we are now tracking 58 different organizations working to develop quantum hardware in 76 different projects within 8 different technology areas.  While we would agree that many of those efforts will not succeed, we are encouraged by the breadth of different approaches being taken to find a viable qubit architecture. Our expectation is that a few of them will succeed and, in fact, that some of the early successes will later be replaced by even better approaches, much like Edison’s experience with light bulb filaments.

The Rise of Approximate Quantum Computing

One factor the author appears to be neglecting entirely is that quantum computers do not need to be accurate down to the last qubit to be useful.  For many problems, even the 2nd or 3rd best answer may be good enough.  This can be particularly true in optimization problems.  In fact, many quantum computer systems run the same problem multiple times (each run is sometimes called a shot).  The results of the different runs may vary a little, but this can be an advantage, because having a few different choices is sometimes helpful.  Consider a quantum solution to a travelling salesman problem where a quantum computer is programmed to find the route with the fewest miles. If the results show 3 or 4 different answers that are close to optimal, but not necessarily the very best, the salesman can finalize the choice by including an additional parameter, like which route provides the most frequent flyer points!
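To make the idea concrete, here is a minimal sketch in plain Python of that post-processing step.  The routes, mileages, and point values are invented for illustration; no quantum hardware is involved, and the 5% tolerance is just an example choice:

```python
# Sketch: choosing among near-optimal routes returned by repeated "shots".
# All route data below is hypothetical, standing in for quantum output.

# Each entry: (route, total miles, frequent-flyer points earned)
shot_results = [
    ("A-B-C-D", 1020, 1500),
    ("A-C-B-D", 1000, 1200),   # the shortest route found
    ("A-B-D-C", 1015, 2100),
    ("A-D-B-C", 1300,  900),   # clearly worse; will be filtered out
]

best_miles = min(miles for _, miles, _ in shot_results)
tolerance = 0.05  # accept any route within 5% of the best distance

near_optimal = [r for r in shot_results
                if r[1] <= best_miles * (1 + tolerance)]

# Break the tie with a secondary objective: maximize frequent-flyer points.
chosen = max(near_optimal, key=lambda r: r[2])
print(chosen[0])  # prints "A-B-D-C": slightly longer, but the most points
```

The quantum computer only has to land in the neighborhood of the optimum; the cheap classical filter at the end applies whatever secondary criterion the user actually cares about.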

Also, one of the more significant recent developments in quantum computing, which the author does not consider at all, is the discovery of optimization algorithms that use a hybrid classical/quantum computing model, such as QAOA (the Quantum Approximate Optimization Algorithm) and VQE (the Variational Quantum Eigensolver).  These algorithms iterate between classical and quantum calculations and are designed to be resilient to quantum errors.  As a result, the massive error correction discussed in the article may not be as necessary.  Algorithms of this type are likely to be among the very first to show success on NISQ (Noisy Intermediate-Scale Quantum) machines.
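The structure these hybrid algorithms share can be sketched without any quantum hardware at all.  In the toy below, the quantum circuit evaluation is replaced by an assumed stand-in: a cost function cos(θ) corrupted by Gaussian noise to mimic shot noise on a NISQ device.  A simple classical optimizer (finite-difference gradient descent) still drives θ toward the minimum despite the noise, which is the point of the variational approach:

```python
import math
import random

random.seed(0)  # reproducible noise for this illustration

def noisy_expectation(theta):
    """Stand-in for a quantum circuit evaluation: the true cost is
    cos(theta), corrupted by shot noise as on a NISQ device."""
    return math.cos(theta) + random.gauss(0, 0.02)

# Classical outer loop: minimize the noisy cost by finite-difference
# gradient descent, the role a classical optimizer plays in VQE/QAOA.
theta, step, eps = 0.3, 0.4, 0.1
for _ in range(100):
    grad = (noisy_expectation(theta + eps)
            - noisy_expectation(theta - eps)) / (2 * eps)
    theta -= step * grad  # descend toward the minimum of the cost

print(theta)  # hovers close to pi, the true minimum of cos(theta)
```

Each individual cost evaluation is wrong by a few percent, yet the iteration still converges, illustrating why these hybrid loops can tolerate imperfect qubits without full error correction.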

The Bear in the Woods

At the end of the day, the measure for success is not how perfect the calculations can be but rather how well quantum computers can provide commercial value in providing solutions to real-world problems as compared to classical computing.

Let’s illustrate this with a story.  Two friends decide to go camping in the woods one weekend.  We will name them Alice and Eve.  In the middle of the night they hear a noise outside their tent and see a bear starting to charge at them.  At that point, Eve tells Alice, “Let’s run!”  Alice turns to Eve and says, “Are you kidding?  That bear is fast; we can never outrun him.”  And Eve replies, “I don’t have to outrun the bear.  All I have to do is outrun you!”

Similarly, quantum computers do not have to provide perfect answers to problems that can be a bear to solve.  All they have to do to declare success is provide answers that are superior in cost, quality, or practicality to those that can be obtained using a classical computing approach.