In December 2023, the U.S. Defense Advanced Research Projects Agency (DARPA) selected five teams for its Quantum Benchmarking program, with the goal of finding applications that could be used to benchmark progress in quantum computing, as well as performing an initial estimate of the quantum hardware resources necessary to solve some of these problems.

The three teams focusing on benchmarks and applications were led by the University of Southern California, HRL Laboratories, and L3Harris, while the two teams focusing on estimating quantum processor requirements were led by Rigetti Computing and Zapata Computing. The Phase II research will continue until March 2025, but the teams have started to post their initial research in preprints on arXiv. Seven preprints from this research have already been posted and three more are in preparation. You can view a full list provided by DARPA here.

These studies are all geared towards fault-tolerant quantum computers, and the applications chosen all require heavy computation. For example, a paper titled “Fault-tolerant resource estimation using graph-state compilation on a modular superconducting architecture,” produced by a team including Rigetti Computing, Aalto University, the University of Technology Sydney, and Zapata AI, includes a Table II that provides resource requirements for several different sizes of the Fermi-Hubbard simulation, the transverse Ising simulation, and the Quantum Fourier Transform (QFT) algorithm. The cases listed in the table show physical qubit counts ranging from 1.60 to 1.93 million qubits, runtimes between 769 µs and 151 minutes, and energy consumption ranging from 0.13 watt-hours to 1.48 megawatt-hours. So an initial lesson is that these applications won’t be ready to run anytime soon.
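
As a quick sanity check on those figures (our own back-of-the-envelope arithmetic, not a calculation taken from the paper), the two endpoints of the quoted energy range are consistent with a roughly constant total system power draw on the order of 600 kW, as the short Python sketch below shows.

```python
# Back-of-envelope check on the quoted Table II endpoints (a sketch, not
# figures from the paper): if the reported energy scales roughly linearly
# with runtime, both endpoints imply a similar total system power draw.

def implied_power_watts(energy_wh: float, runtime_s: float) -> float:
    """System power (W) implied by an energy figure (Wh) over a runtime (s)."""
    return energy_wh * 3600.0 / runtime_s

# Shortest run quoted: 0.13 watt-hours over 769 microseconds
print(f"{implied_power_watts(0.13, 769e-6) / 1e3:.0f} kW")      # ~609 kW

# Longest run quoted: 1.48 megawatt-hours over 151 minutes
print(f"{implied_power_watts(1.48e6, 151 * 60) / 1e3:.0f} kW")  # ~588 kW
```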

Another observation from GQI is that, at first glance, none of these analyses appear to exploit hybrid quantum-classical parallelism to a significant extent. They also currently assume surface-code error correction with its associated overheads. There is a tremendous amount of ongoing research into more efficient codes, such as quantum low-density parity-check (qLDPC) codes, as well as other error correction codes and innovative hardware approaches that could significantly reduce the number of physical qubits needed. GQI is also optimistic that algorithmic techniques will advance and that new ways of implementing these algorithms with fewer qubits and lower gate counts will appear in the future.
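
To give a rough sense of where the surface-code overhead comes from, here is a minimal sketch using commonly quoted rules of thumb for the rotated surface code: roughly 2d² physical qubits per logical qubit, with the logical error rate suppressed as roughly 0.1·(p/p_th)^((d+1)/2). The physical error rate, threshold, and logical qubit count below are illustrative assumptions, not parameters taken from any of the preprints.

```python
# Illustrative surface-code overhead scaling (assumed parameters, not
# figures from the DARPA preprints).
# Rules of thumb: ~2*d^2 physical qubits per logical qubit, and a logical
# error rate of roughly 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2).

P_PHYS = 1e-3      # assumed physical error rate
P_TH = 1e-2        # assumed surface-code threshold (~1%)
N_LOGICAL = 4000   # assumed logical qubit count for a mid-sized algorithm

for d in (11, 15, 19, 23, 27):
    p_logical = 0.1 * (P_PHYS / P_TH) ** ((d + 1) / 2)
    total_physical = N_LOGICAL * 2 * d * d
    print(f"d={d:2d}  logical error rate ≈ {p_logical:.0e}  "
          f"physical qubits ≈ {total_physical / 1e6:.2f} million")
```

Under these assumptions, pushing the logical error rate low enough for deep circuits quickly drives the total into the millions of physical qubits, which is why more efficient codes or better physical error rates could change the picture substantially.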

Nonetheless, this Quantum Benchmarking effort should prove very useful to the quantum community, helping it better understand which applications are well-suited for a quantum solution and giving hardware engineers targets for the capabilities their machines will need in order to be useful.

For additional information about this program, we recommend you review a news release provided by DARPA here, a listing of both the posted and pending preprints developed by the teams here, a press release from Zapata that describes their work on the program here, and a LinkedIn post from Rigetti that discusses their participation in the program and the preprint they recently posted here.

June 21, 2024