*By Andre Saraiva, Diraq*

For the impatient reader, here are the oversimplified answers to the questions we aim to address in this article:

*“How do I put a price to quantum computation?”*

The resources needed are 1) qubits and 2) runtime. The unit of quantum computation should be the qubit-second. However, at the moment, that is not how quantum computation services are priced.

With more qubits you can run computations in fewer steps. Multiplying the number of qubits by the number of operations (or the operation time) gives a rough estimate of the required resources.

*“How many qubit-seconds do I need?”*

Each run of a useful quantum computation takes between 100 billion qubit-seconds and 500 billion qubit-seconds.

Most useful problems require testing parameters and variants of a problem, which means quantum computation units may be sold in the future by the teraqubit-second.

Since quantum computers have variations in clock rates and operation fidelity, this number may be off by a factor of ten or more. These estimates are based on current best knowledge and certain assumptions about qubit quality that are optimistic yet realistic. The exact requirements may improve significantly in the next decade with better algorithms and error correction codes.

*“How much does a qubit-second cost?”*

On average, each qubit-second costs approximately $0.05 USD currently (June 2022).

At current prices, a quantum computing campaign for a significant problem (see examples below) would cost more than $10 billion USD.

This cost is, obviously, expected to scale down very significantly in the next decade. The main question this article poses is *how much will a qubit-second cost in a decade?*

**Why do we care now what the price of quantum computations will be in ten years?**

The market for quantum computing is certainly impressive. Investments, both private and governmental, aim to capitalize on the revolutions ahead in drug design, materials, cybersecurity and other multi-billion-dollar fields. The expectation is that consumers will be willing to pay top dollar to stay ahead of the competition and unlock unique opportunities created by quantum algorithms.

However, tech companies know that it is often hard to convert findings from computer simulations into viable products. It is currently unclear how much companies should budget in order to unlock quantum computing capabilities for their R&D ambitions.

This article shows that the current price tag for cloud-based quantum computing services, combined with the best knowledge in quantum resource estimation, implies that a single run of a useful quantum computation would cost more than the entire R&D budget of the top-spending company in the world, Amazon.

In other words, the race for quantum computing is not only a race for scaling up qubit quality and quantity, but a race for scaling down the costs as well.

**How do we measure resources for quantum computing?**

The setup for the problem is relatively straightforward. Take an example of a problem that can be solved in a quantum computer but not in a classical computer. Identify the best possible algorithm to solve it and count how many operations are needed. Then, identify the error correction protocol to guarantee that all the necessary operations will compute with minimal error. Map these to a qubit architecture with a set clock rate. The result is an estimated total number of qubits and total runtime.

The procedure described above is, however, extremely difficult and requires profound knowledge in diverse areas. Some compilation tools have been developed for the few-qubit, NISQ-era computers. However, when it comes to large problems that rely on universal fault-tolerant quantum computation, compilation is still crafted by hand.

The first unknowns are the future hardware specifications such as operation fidelity, connectivity between qubits and clock rate. We may estimate them based on the assumption that engineering efforts will push qubits to their best possible operating regime even when they are integrated by the millions – a view that may be contested by less optimistic scientists and engineers.

For this article we assume operation fidelities of 99.9%, clock rates of 1 MHz and connectivity between neighbouring qubits distributed in a two-dimensional array. These are common assumptions among researchers in resource estimation; while it is theoretically possible to significantly improve these numbers in some architectures, these are parameters that have been demonstrated and tested in prototypes.

A more difficult question is whether future quantum algorithms will be significantly more efficient than current versions. New ideas introduced in the past have led to improvements as large as hundred-fold. In this sense, it is possible that the estimates provided here are inflated.

The final issue is the actual spatial distribution of qubits and how to optimally perform the operations in a fault-tolerant manner. For instance, in “A Game of Surface Codes: Large-Scale Quantum Computing with Lattice Surgery”, Daniel Litinski (currently working at PsiQuantum) shows how larger arrays of qubits can be used to perform operations in fewer steps. This is why some researchers argue that the product of the number of qubits and the runtime is the best way to express the requirements for a quantum computation (we will see examples of such an argument throughout the rest of the article).

With the assumptions above, all quantum computation resources can be expressed in qubit-seconds.

**How many qubit-seconds are needed for a useful quantum computation?**

A useful quantum computation is basically an instance of a problem of interest that can be calculated in a quantum computer significantly faster than in a classical computer. The problems that require the least quantum computational resources to have an advantage over classical computers are those that can leverage quantum algorithms with exponential speed-ups over their classical counterparts.

Precisely worked-out examples include:

- The main enzyme responsible for drug metabolism, cytochrome P450 (CYP), promotes oxidation through mechanisms that remain unknown, and simulating it will require 7.8 billion operations on 1434 logical qubits [Goings, Joshua J., et al. “Reliably assessing the electronic structure of cytochrome P450 on today’s classical computers and tomorrow’s quantum computers“]. In order to perform this many operations, each logical qubit will need to comprise approximately 9745 physical qubits.
**This totals 109 billion qubit-seconds**.

- The mechanism for biological fixation of nitrogen is a coveted chemical process that could significantly reduce the price and energy consumption of fertilizer production. It is based on the FeMo cofactor, in a process that is not yet understood. This process can be simulated in a quantum computer with 2196 logical qubits in 32 billion operations [Lee, Joonho, et al. “Even more efficient quantum computations of chemistry through tensor hypercontraction“].
**This totals 448 billion qubit-seconds**.

- In order to factor a 2048 bit product of two primes, a quantum computer will require approximately 25 billion operations in 14238 logical qubits [Gidney, Craig, and Martin Ekerå. “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits.”].
**This totals 432 billion qubit-seconds**.
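As a sanity check, the arithmetic behind these totals can be reproduced for the CYP example, where all the inputs are quoted above (the 1 MHz clock rate comes from the hardware assumptions stated earlier):

```python
# Back-of-envelope check of the CYP qubit-second total, using the
# assumptions stated in this article: a 1 MHz clock rate and the figures
# quoted for the CYP simulation.

def qubit_seconds(logical_qubits, physical_per_logical, operations, clock_hz):
    """Total qubit-seconds = (physical qubits) x (runtime in seconds)."""
    physical_qubits = logical_qubits * physical_per_logical
    runtime_s = operations / clock_hz
    return physical_qubits * runtime_s

# Cytochrome P450: 1434 logical qubits, ~9745 physical qubits per logical
# qubit, 7.8 billion operations at a 1 MHz clock rate.
cyp = qubit_seconds(1434, 9745, 7.8e9, 1e6)
print(f"CYP: {cyp / 1e9:.0f} billion qubit-seconds")  # ~109 billion
```

The same formula applies to the other two examples, given their respective physical-qubit overheads.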

**How much does a qubit-second currently cost?**

Customers can currently purchase quantum computing services over the cloud. However, since prices are currently dominated by the cost of starting each shot or job and the general upkeep of a cloud service, access is not priced by number of qubits or runtime (runtimes are currently very short, because qubits completely decohere within a few operations).

Just as an exercise, however, we can estimate such prices based on typical runtimes and the known number of qubits in each system. Here are examples of prices in June 2022.

We looked up the prices charged by AWS Braket (selling access to quantum processors from IonQ, OQC and Rigetti), IBM (selling access to their own processors) and Google Cloud (selling access to IonQ’s quantum computer).

IBM sells access to their 27-qubit Falcon system for $1.75 USD per second.

Rigetti sells access to their 40-qubit Aspen-11 system for an estimated $0.35 USD per second (assuming 1 ms runtime per shot).

Oxford Quantum Circuits sells access to their 8-qubit Lucy system for an estimated $0.35 USD per second (assuming 1 ms runtime per shot).

IonQ sells access to their 11-qubit IonQ device for an estimated $1.00 USD per second (assuming 10 ms runtime per shot, because IonQ’s computer has a lower clock rate than average).

__On average, a qubit-second is currently priced at $0.05 USD.__

The most expensive system is IonQ’s, priced at an estimated $0.09 USD per qubit-second (the estimate is even higher considering the price charged on Google Cloud). The least expensive system is Rigetti’s, currently priced at a bit less than $0.01 USD per qubit-second.
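The per-qubit-second figures and the $0.05 average follow directly from the per-second rates and qubit counts listed above (the runtime-per-shot estimates are already folded into the per-second figures):

```python
# Rough per-qubit-second prices implied by the June 2022 cloud rates listed
# above. Each entry is (USD per second of access, number of qubits).

systems = {
    "IBM Falcon":       (1.75, 27),
    "Rigetti Aspen-11": (0.35, 40),
    "OQC Lucy":         (0.35, 8),
    "IonQ":             (1.00, 11),
}

price_per_qubit_second = {
    name: usd_per_s / qubits for name, (usd_per_s, qubits) in systems.items()
}
average = sum(price_per_qubit_second.values()) / len(price_per_qubit_second)

for name, p in price_per_qubit_second.items():
    print(f"{name}: ${p:.3f}/qubit-second")
print(f"Average: ${average:.2f}/qubit-second")  # ~$0.05
```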

Once again, at this price the costs of running a useful quantum algorithm would be impractical, amounting to tens of billions of dollars per run. This means that all technologies need to significantly reduce their costs in order to deliver reasonable prices to customers.
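A back-of-envelope sketch of those per-run costs, combining the qubit-second totals from the examples earlier with the roughly $0.05 average price:

```python
# Cost per run at the current average price of ~$0.05 USD per qubit-second,
# applied to the qubit-second totals from the three worked examples.

PRICE_PER_QUBIT_SECOND = 0.05  # USD, June 2022 average

runs = {
    "CYP simulation":     109e9,  # qubit-seconds
    "FeMo cofactor":      448e9,
    "RSA-2048 factoring": 432e9,
}

for name, qs in runs.items():
    cost_billions = qs * PRICE_PER_QUBIT_SECOND / 1e9
    print(f"{name}: ${cost_billions:.1f} billion USD per run")
```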

**Predicting future costs of qubit-seconds, and the sustainability of the quantum computing market**

We found no public information about the future costs of quantum computing services from any vendor. However, calculating such costs should be viable for most quantum hardware makers, assuming that they stick to their current plans for scaling up their quantum processors.

The focus is, of course, on the variable costs. Given the extremely high volume of resources that customers will need to purchase, we expect fixed costs, such as the maintenance of cloud access and the costs of hardware development, to be strongly diluted, and the bulk of the price tag to be set by the costs of running what will most likely be a large, complex machine.

Another assumption is that the classical computing that will accompany the quantum processor will not scale with the size of the processor. In other words, the prices for classical computations associated with the quantum computation will become diluted once quantum computers reach a certain size. This may not turn out to be true, but it is the vision that most qubit-makers have for their ultimate quantum computers.

For instance, dilution refrigerators impose a steep electricity bill and significant wear-and-tear costs, and they require attention from highly specialised quantum engineers. A quantum computing approach that requires integrating hundreds or thousands of refrigerators will have a baseline cost that limits how low the price of a qubit-second can go. For these technologies, integrating more qubits in each cryogenic setup will translate into saved dollars and greater economic competitiveness.

To be explicit, a technology that can only harbour a thousand qubits per fridge will require ten thousand fridges to run any of the algorithms mentioned earlier. Since each fridge consumes approximately 50 kW, we are talking about a total power consumption of 500 MW, three times the average consumption of the CERN particle accelerator. If instead engineers are able to cram 10,000 qubits into a single fridge, this power consumption drops to a more manageable 50 MW, comparable to some of the largest data centres around the world.
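The fridge arithmetic above can be sketched as follows, assuming roughly ten million physical qubits (the scale implied by the worked examples earlier) and 50 kW of power per dilution refrigerator:

```python
import math

# Fridge count and total power draw for a ~10-million-physical-qubit machine,
# for the two qubits-per-fridge scenarios discussed in the text. The 50 kW
# per fridge figure is the approximation quoted above.

PHYSICAL_QUBITS = 10_000_000
KW_PER_FRIDGE = 50

for qubits_per_fridge in (1_000, 10_000):
    fridges = math.ceil(PHYSICAL_QUBITS / qubits_per_fridge)
    power_mw = fridges * KW_PER_FRIDGE / 1_000
    print(f"{qubits_per_fridge} qubits/fridge -> {fridges} fridges, {power_mw:.0f} MW")
```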

Not many corporations have deep enough pockets to justify computational research at a cost of more than a few hundred dollars per run. Combine this high cost with the somewhat limited range of problems that offer a quantum advantage, and the total market for quantum solutions involving expensive machines integrating several cryogenic setups might be a lot smaller than some believe. Quantum computing solutions can only create a full-scale sustainable market if the price of quantum computations is cut by a factor of at least a million from the current asking price. This is not necessarily a big ask – a similar scaling of costs happened in the semiconductor industry with the price tag of each transistor.

In summary, there is a need to think about the costs of quantum computation and how they dictate the sustainability of the market. Ultimately, some variation of the qubit-second may end up becoming the standard unit, but the discussion of strategies to tame the costs of quantum computing needs to start now, while we are still deciding which technologies to invest in. More importantly, we need solid science in resource estimation to get these numbers right, with a focus on practical, useful problems and their realistic costs. There is no room for unfounded optimism.

*Dr. Saraiva has worked for over a decade providing theoretical solutions to problems in silicon spin quantum computation, as well as other quantum technologies. He is currently the Head of Solid-State Theory at Diraq, an Australian start-up developing a scalable quantum processor.*

June 29, 2022