Quantum Computing Report

Quantum for AI: Costs from a Diagnostic Benchmark

by Amara Graps

Training hybrid quantum neural networks (HQNNs) and quantum neural networks (QNNs) can be costly: as in up to $100K per qubit. If this is the price, would you want to train your QNN? In this article, I present a 2023 research finding that may make managers take notice. Yes, there are caveats.

Quantum for AI

In an earlier article, I presented the two-sided question: AI for Quantum? or Quantum for AI? In this article, I focus on Quantum for AI, in the larger context of benchmarking. To be clear about what “Quantum for AI” means:

  • Quantum machine learning
  • Quantum artificial intelligence
  • Quantum neural networks
  • Automatic classification of quantum states

The concept of “Quantum for AI” describes how the quantum field can enhance artificial intelligence. Quantum machine learning (QML) makes quantum computing noise-resilient for applications such as image classification and stability evaluation by introducing techniques like parametrized quantum circuits (PQCs) and noise-adaptive circuit search (QuantumNAS). Quantum artificial intelligence (QAI) provides more processing capacity through hybrid models and quantum algorithms, assisting with fault detection and sustainability initiatives. By automatically classifying quantum states, quantum neural networks (QNNs) simplify quantum information tasks and show promise for language and power-grid analysis.
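
To make “parametrized quantum circuit” concrete, here is a minimal sketch of a two-qubit QNN using the open-source PennyLane library. This is my own toy illustration, not the architecture from any of the papers discussed below; the gate layout and parameter counts are assumptions.

    # Minimal parametrized quantum circuit (PQC): the basic building block of a QNN.
    # Toy illustration only; not the model used in the benchmarks discussed below.
    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=2)   # classical simulator backend

    @qml.qnode(dev, diff_method="parameter-shift")
    def qnn(inputs, weights):
        # Encode two classical features as rotation angles.
        qml.RY(inputs[0], wires=0)
        qml.RY(inputs[1], wires=1)
        # Trainable layer: parameterized rotations plus entanglement.
        qml.RZ(weights[0], wires=0)
        qml.RZ(weights[1], wires=1)
        qml.CNOT(wires=[0, 1])
        # The expectation value serves as the model output.
        return qml.expval(qml.PauliZ(0))

    inputs = np.array([0.5, -0.3], requires_grad=False)
    weights = np.array([0.1, 0.2], requires_grad=True)
    print(qnn(inputs, weights))                      # forward pass
    print(qml.grad(qnn, argnum=1)(inputs, weights))  # gradient via the parameter-shift rule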

Quantum Neural Network Training Cost

In a diagnostic benchmark paper, Kordzanganeh et al., 2023, Benchmarking simulated and physical quantum processing units using quantum and hybrid algorithms, the authors examine the cost of training a network, which is a major factor in the development and testing of QNNs and HQNNs.

The researchers explain that the price structure of publicly accessible QPUs is set up so that the number of distinct quantum circuits that must be evaluated during training determines the training cost. When the parameter-shift approach is used for gradient computations, the number of distinct quantum circuits is proportional to the number of trainable parameters. As a result, the training costs for the QNN and HQNN increase linearly with the number of qubits. For a growing number of epochs and training samples, this quickly leads to millions or billions of unique quantum circuits that need to be initialized and evaluated on the QPU. The ‘money’ chart is Figure 3b from their paper, shown in the next Figure.
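
To see why the circuit count blows up, here is a back-of-the-envelope sketch of the scaling the authors describe. It is my own illustration, with an assumed three trainable parameters per qubit; the exact accounting in the paper differs in detail.

    # Rough count of distinct quantum circuit evaluations needed to train a QNN
    # with parameter-shift gradients. Illustrative scaling only.

    def circuits_per_training_run(n_params, n_samples, n_epochs):
        forward = 1                # one circuit per sample to evaluate the loss
        gradient = 2 * n_params    # parameter shift: two evaluations per parameter
        return (forward + gradient) * n_samples * n_epochs

    # Assumption: roughly three trainable rotation angles per qubit.
    for n_qubits in (2, 10, 20):
        n_params = 3 * n_qubits
        total = circuits_per_training_run(n_params, n_samples=100, n_epochs=100)
        print(f"{n_qubits:>2} qubits -> {n_params} parameters -> {total:,} circuits")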

Figure. The training cost and inference cost (inset) for various publicly available QPUs as of June 2023. From Kordzanganeh et al., 2023, Figure 3b.

The authors explain the computation: assume that the model is trained over 100 epochs with 100 training samples and 1,000 circuit shots per expectation value. Prices for the IBM Falcon r5.11 are quoted in CHF and converted at a rate of 1.06 USD = 1.00 CHF. Note that the IBM Cloud Estimator service provides a circuit-training technique that could lower IBM prices by up to two orders of magnitude, as described in the paper’s Sec. III A.
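
Applying those assumptions, a minimal cost sketch looks like the following. The per-shot price is a hypothetical placeholder, not a vendor quote; only the 100 epochs, 100 samples, 1,000 shots, and the 1.06 USD/CHF rate come from the description above.

    # Converting a circuit count into a rough training-cost estimate under the
    # article's assumptions. PRICE_PER_SHOT_CHF is a hypothetical placeholder.

    CHF_TO_USD = 1.06            # conversion rate quoted for the IBM Falcon r5.11
    SHOTS_PER_CIRCUIT = 1000     # shots per expectation value
    PRICE_PER_SHOT_CHF = 0.0001  # placeholder per-shot price, for illustration only

    def training_cost_usd(n_circuits):
        total_shots = n_circuits * SHOTS_PER_CIRCUIT
        return total_shots * PRICE_PER_SHOT_CHF * CHF_TO_USD

    n_circuits = 610_000  # the 10-qubit example from the previous sketch
    print(f"~${training_cost_usd(n_circuits):,.0f}")  # ~$64,660 with these placeholders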

My rough estimate, reading the logarithmic left axis and aligning at 10 qubits, gives:

  • IBM: ~10^3.2 ≈ $1,585K / 10 qubits = $158.5K per qubit
  • IonQ: ~10^3.1 ≈ $1,259K / 10 qubits = $126K per qubit
  • OQC, Rigetti: ~10^1.6 ≈ $40K / 10 qubits = $4K per qubit
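
For transparency, the arithmetic behind these readings is just a log-to-linear conversion followed by division by the 10-qubit alignment point; the axis readings themselves are my eyeball estimates from the figure.

    # Convert eyeballed readings of the figure's logarithmic cost axis (in $K)
    # into approximate per-qubit training costs at the 10-qubit alignment point.
    axis_readings_log10 = {"IBM": 3.2, "IonQ": 3.1, "OQC/Rigetti": 1.6}

    for vendor, exponent in axis_readings_log10.items():
        cost_k_usd = 10 ** exponent   # total training cost in $K at 10 qubits
        per_qubit = cost_k_usd / 10
        print(f"{vendor:>12}: ~${cost_k_usd:,.0f}K total -> ~${per_qubit:.1f}K per qubit")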

The authors pair the above calculation with a comparison of hardware platforms and software development kits (SDKs) to find the quickest and most economical path to creating unique quantum algorithms. QNNs, that is, quantum network architectures in their most general form, are used to execute this benchmark. According to the diagnostic benchmark, the QMware basiq simulator is better for circuits with 2–26 qubits, AWS SV1 for circuits with 28–34 qubits, and QMware basiq for circuits with 36–40 qubits.

The authors conclude with predictions on when, as quantum hardware develops and the number of available qubits rises, quantum circuit execution will be able to offer a sizable runtime advantage over simulator hardware. The runtime of quantum simulators increases exponentially with circuit size, while that of quantum processors grows only linearly, so publicly available QPUs already provide runtime improvements over simulator hardware for large-qubit circuits. QNNs with very few qubits can tackle challenging data science and industrial problems due to the exponential computational space. Thus, reducing the cost and improving the accuracy of QPUs while successfully incorporating them into the conventional infrastructure is the key to the success of quantum computing. The trick is to harness the hybrid interaction between classical and quantum machines to seamlessly take advantage of the best performance of simulators and QPUs, depending on the application case.
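
To make that scaling argument concrete, here is a toy crossover model. The constants are assumptions chosen for illustration, not the paper’s fitted runtimes.

    # Toy model: exponential simulator runtime vs. roughly linear QPU runtime.
    # Constants are illustrative assumptions, not fitted to the paper's data.
    SIM_BASE_SECONDS = 1e-4       # simulated runtime of a 1-qubit circuit (assumed)
    QPU_SECONDS_PER_QUBIT = 0.5   # per-qubit QPU time incl. overheads (assumed)

    def simulator_runtime(n_qubits):
        return SIM_BASE_SECONDS * 2 ** n_qubits   # state-vector cost doubles per qubit

    def qpu_runtime(n_qubits):
        return QPU_SECONDS_PER_QUBIT * n_qubits

    crossover = next(n for n in range(1, 64) if simulator_runtime(n) > qpu_runtime(n))
    print(f"With these toy constants, the QPU overtakes the simulator at ~{crossover} qubits")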

The caveats?

  1. The authors work at one of the benchmarked cloud vendors, so it is not an objective study.
  2. We should track follow-ups to this research as the costs fall. As of today, the paper has been cited 33 times, but Connected Papers doesn’t yet show a community follow-up confirming the results.
  3. It’s a diagnostic benchmark, using the definition of ‘diagnostic benchmark’ in Amico et al., 2022, published in IEEE (see also the arXiv version). Since benchmarks are one of the critical pieces of quantum technology progress, we’ll talk about those benchmark definitions next time.

We can think about benchmarks as a way of both: 

  1. measuring progress and 
  2. checking the accuracy of quantum resource estimators (QREs).

Quantum Resource Estimator (QRE)

Costs are an issue, so try to estimate beforehand the seconds, minutes, hours, and days of QPU time your training activity will need. The essential component of this Use Case is a variational quantum algorithm (VQA), in which the parameter-shift rule, run on the QPU, is one part. You can start by determining which Use Case most closely matches your variational, parameter-shift method by studying Quetschlich et al., 2024, Utilizing Resource Estimation for the Development of Quantum Computing Applications.
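
As a first cut before consulting a full QRE, a back-of-the-envelope time estimate can be as simple as multiplying the circuit count by an assumed per-circuit QPU execution time. SECONDS_PER_CIRCUIT below is a placeholder; replace it with figures from a resource estimator or your vendor’s documentation.

    # Back-of-the-envelope QPU-time estimate for a variational, parameter-shift
    # training run. SECONDS_PER_CIRCUIT is an assumed placeholder value.
    SECONDS_PER_CIRCUIT = 0.2   # assumed time per circuit, including shots and latency

    def qpu_time_seconds(n_params, n_samples, n_epochs):
        n_circuits = (1 + 2 * n_params) * n_samples * n_epochs  # parameter-shift count
        return n_circuits * SECONDS_PER_CIRCUIT

    secs = qpu_time_seconds(n_params=30, n_samples=100, n_epochs=100)
    print(f"{secs:,.0f} s = {secs/3600:,.1f} h = {secs/86400:,.1f} days")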

GQI has a Use Case Tracker that can provide additional assistance. In GQI’s Use Case Tracker, one can choose “QML” under “Problem domain to industry mapping” and then navigate to the “Computational Approaches” section on the right to check the Use Case type. These are the primitives of quantum computing; refer to the following Figure. This will help you determine which quantum resource estimation use case is most like your own. Then, you can use a QRE with table values for several algorithms, as published in our Quantum Resource Estimator article here, which is provided by GQI with support from Microsoft.

Figure. Distribution of used computational approaches in QML in GQI’s Use Cases Tracker. (*) 

(*) For those eager to explore the interactive quantum computing Use Case dashboard, which offers the ability to filter results by specific companies, geographies, technology approaches, or hardware compatibilities, please don’t hesitate to contact info@global-qi.com

October 17, 2024
