Quantum Computing Report

Diagnostic Benchmarks for Hybrid Quantum Computing

by Amara Graps

Which quantum computing modality is best suited for which Use Case, beyond what we know for the gate-based versus annealing types?

In an earlier QCR article about the Ansatz Zoo, I presented one way to move us closer to the answer: by tracking the ansatzes for particular Use Cases.

Another way to move us closer to the answer: with Diagnostic Benchmarks. 

The Diagnostic Benchmarks that are the focus of this article were developed by the QED-C team led by T. Lubinski, in papers [1] and [2]. In an earlier article, I wrote about Diagnostic Benchmarking as defined by Amico et al., 2023, who provided examples of running the QED-C benchmarks.

The QED-C Benchmarking Suite

What is the QED-C? The Quantum Economic Development Consortium (QED-C) is a coalition of business associations, governmental agencies, and academic institutions formed in 2018 following the U.S. National Quantum Initiative Act. It established a Technical Advisory Committee (TAC) to examine the state of standards development in quantum technologies and to suggest strategies for promoting economic development via standards. With input from QED-C members active in quantum computing, the Standards TAC developed the suite of Application-Oriented Performance Benchmarks for Quantum Computing as an open-source effort in 2021-2022 (source: Acknowledgements in Paper [2]). The QED-C Benchmarking Suite was officially released in 2023. The next two papers are connected: Paper [1] sits closer to the hardware, with “quantum volume” measurements, while Paper [2] applies more complex Use Cases, with a scalable HHL linear equation solver benchmark and a VQE implementation of a Hydrogen Lattice simulation.

Paper [1] Lubinski, Thomas, Sonika Johri, Paul Varosy, Jeremiah Coleman, Luning Zhao, Jason Necaise, Charles H. Baldwin, Karl Mayer, and Timothy Proctor. 2023. “Application-Oriented Performance Benchmarks for Quantum Computing.” arXiv. http://arxiv.org/abs/2110.03137.

Paper [2] Lubinski, Thomas, Joshua J. Goings, Karl Mayer, Sonika Johri, Nithin Reddy, Aman Mehta, Niranjan Bhatia, et al. 2024. “Quantum Algorithm Exploration Using Application-Oriented Performance Benchmarks.” arXiv. http://arxiv.org/abs/2402.08985.

Paper [1] presents the QED-C suite, which maps a quantum computer’s fidelity as a function of circuit width and depth, thereby probing the performance of a quantum computer on different algorithms and small applications as the problem size is varied. The suite is designed to be independent of hardware topology: each benchmark circuit is transpiled to a specific, device-independent basis (a gate set). In addition to measuring the fidelity of quantum execution results, the suite benchmarks various components of the execution pipeline to give end users a practical assessment of solution quality and time.
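To make that concrete, here is a minimal sketch, written in plain Qiskit rather than the QED-C suite itself, of the core loop behind such a benchmark: build a circuit of known width, transpile it to a fixed device-independent gate set, run it, and score the measured output distribution against the ideal one. The particular gate set, the simulator backend, and the use of the Hellinger fidelity as the score are my illustrative choices, not necessarily the suite’s defaults.

```python
# Minimal sketch of an application-oriented benchmark loop (illustrative only).
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import QFT
from qiskit.quantum_info.analysis import hellinger_fidelity
from qiskit_aer import AerSimulator

width = 4                                      # circuit width (number of qubits)
qc = QuantumCircuit(width)
qc.x(range(width))                             # prepare |1111>, so the ideal output is known
qc.append(QFT(width), range(width))            # benchmark kernel: QFT ...
qc.append(QFT(width).inverse(), range(width))  # ... followed by its inverse returns |1111>
qc.measure_all()

basis_gates = ["rx", "ry", "rz", "cx"]         # example device-independent gate set
tqc = transpile(qc, basis_gates=basis_gates, optimization_level=1)

backend = AerSimulator()                       # swap in a real device's backend here
counts = backend.run(tqc, shots=1000).result().get_counts()

ideal = {"1" * width: 1000}                    # ideal output distribution for this circuit
print("width:", width, "depth:", tqc.depth(),
      "fidelity:", hellinger_fidelity(counts, ideal))
```

Repeating this for increasing widths and depths yields the (width, depth, fidelity) triples that populate the volumetric charts described next.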

The quantum computer’s fidelity is shown as a color-coded output mapped onto a volumetric benchmarking plot. First, a family of circuits is selected to serve as a benchmark; second, circuits from this family are chosen and run on the hardware being benchmarked; and third, the hardware’s observed performance on these circuits is plotted as a function of the circuits’ width and depth. The findings are represented in “volumetric space,” which is defined as depth × width.
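As a rough illustration of the third step, the sketch below (plain matplotlib) renders a volumetric-style view: one color-coded fidelity value per (depth, width) cell. The grid values are placeholders, not data from any of the cited papers.

```python
# Volumetric-style plot sketch with placeholder fidelity data (illustrative only).
import numpy as np
import matplotlib.pyplot as plt

widths = [2, 3, 4, 5, 6]        # benchmarked circuit widths (qubits)
depths = [4, 8, 16, 32, 64]     # benchmarked (transpiled) circuit depths

# Placeholder fidelity grid: rows indexed by width, columns by depth
fidelity = np.array([
    [0.98, 0.95, 0.90, 0.80, 0.65],
    [0.97, 0.93, 0.86, 0.72, 0.55],
    [0.95, 0.90, 0.80, 0.63, 0.45],
    [0.93, 0.86, 0.72, 0.55, 0.35],
    [0.90, 0.80, 0.63, 0.45, 0.25],
])

fig, ax = plt.subplots()
im = ax.imshow(fidelity, origin="lower", cmap="RdYlGn", vmin=0.0, vmax=1.0)
ax.set_xticks(range(len(depths)))
ax.set_xticklabels(depths)
ax.set_yticks(range(len(widths)))
ax.set_yticklabels(widths)
ax.set_xlabel("Circuit depth")
ax.set_ylabel("Circuit width (qubits)")
ax.set_title("Volumetric view (placeholder data)")
fig.colorbar(im, ax=ax, label="Result fidelity")
plt.show()
```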

The next Figure shows a mash-up of charts of hardware benchmarked in two papers: Paper [1] and Amico et al., 2023. Paper [1] presented results for simulated fidelities as well as for real quantum hardware, so I combined the charts for IBM, Honeywell, and IonQ. The work by Amico et al., 2023 applied the QED-C suite to IBM hardware to demonstrate the importance of disclosing the optimization techniques used in the benchmarks, which will be necessary for a fair comparison across devices and over time. The four charts show IBM, IBM optimized, Honeywell, and IonQ (no quantum volume, more qubits). They are not benchmarking the same thing, yet they are tantalizing for the possibility of getting us closer to the answer: Which modality is better suited for which Use Case?

Figure. Mash-up of figures from Paper [1] and Amico et al., 2023. Top-Left: IBM hardware, no optimization, Quantum Volume (QV) = 32. Top-Right: IBM hardware, optimized with layout selection, dynamical decoupling, and measurement error mitigation, QV = 128. Bottom-Left: Honeywell hardware, QV = 1024. Bottom-Right: IonQ hardware, 21 qubits (hence the different y-axis); no QV was identified. An apples-to-oranges comparison.

The applications in the Figure (Hamiltonian Simulation, Quantum Fourier Transform, and Phase Estimation) look intriguingly different on the optimized IBM chart and the Honeywell chart.

Could it be that ion traps run these quantum algorithm primitives more efficiently? Because the devices have different quantum volumes, however, this must remain an interesting observation rather than a conclusion. So for now:

As GQI’s Doug Finke put it in a recent Laser Focus article:

We aren’t at the point yet where we can definitely say which quantum applications will be able to provide commercially useful results on which machines. But the one thing that makes us optimistic is the diversity of innovative approaches and rapid advances organizations are making in both hardware and software to get us to the point where the systems can be used for quantum production for useful applications.

A new favorite benchmarking paper is a third paper, [3] by Proctor et al., 2024, which elegantly describes a method to classify benchmarks further, beyond ‘Standardized’ and ‘Diagnostic’, by using Abstraction and Complexity axes. With those axes, the paper introduces a multidimensional capability metric for assessing quantum computer performance, allowing stakeholders to track and extrapolate the growth of quantum capabilities over time. The study also identifies the limitations of existing benchmarks and proposes a roadmap for developing challenge problems that can effectively measure quantum utility. See the next Figure.

Figure. Benchmark types. Benchmarks differ in the complexity of the object whose performance they assess, from individual logic gates to complete computer systems, and in the abstraction level of their objectives, which can range from solving a computational problem to executing certain low-level quantum circuits. The figure illustrates the abstraction and complexity of a few key benchmarks and benchmark families. Source: Paper [3], Proctor et al., 2024.

Paper [3] Proctor, Timothy, Kevin Young, Andrew D. Baczewski, and Robin Blume-Kohout. 2024. “Benchmarking Quantum Computers.” arXiv. http://arxiv.org/abs/2407.08828.

You can see how benchmarks and integrated quantum computers interact in Figure 3 of Proctor et al., 2024. In their Figure 3, four key categories are highlighted: Computational Problem Benchmarks, Compiler Benchmarks, High-Level Program Benchmarks, and Low-Level Program Benchmarks.

According to Proctor et al., 2024, benchmarks assess the collective performance of one or more parts of an integrated quantum computer’s “stack”: its qubits, compilers, routers, and so on. They do this by assigning a task to a certain level of the stack and then analyzing the output from that level or the one below. Benchmarks that enter and exit the stack at different levels, and so assess essentially different aspects of performance, are grouped into separate benchmark categories. As a result, for this article’s example, it is critical to identify which component of the IBM quantum stack was benchmarked for the Top-Left and Top-Right charts in the Figure above.
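As a toy illustration of that grouping, the sketch below tags each of the four categories from Figure 3 with where its task enters a simplified stack and where its output is scored. The layer names and the entry/exit assignments are my own illustrative assumptions, not a reproduction of the paper’s figure.

```python
# Toy sketch: benchmark categories characterized by stack entry/exit (illustrative only).
SIMPLIFIED_STACK = [
    "application / computational problem",
    "high-level program",
    "compiler & router",
    "low-level program",
    "qubits",
]

# category -> (level the task enters at, level whose output is scored)
BENCHMARK_CATEGORIES = {
    "Computational Problem Benchmarks": ("application / computational problem",
                                         "application / computational problem"),
    "High-Level Program Benchmarks":    ("high-level program", "high-level program"),
    "Compiler Benchmarks":              ("high-level program", "low-level program"),
    "Low-Level Program Benchmarks":     ("low-level program", "low-level program"),
}

for name, (enters_at, exits_at) in BENCHMARK_CATEGORIES.items():
    exercised = SIMPLIFIED_STACK[SIMPLIFIED_STACK.index(enters_at):]
    print(f"{name}: enters at '{enters_at}', scored at '{exits_at}'; "
          f"exercises: {', '.join(exercised)}")
```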

GQI’s Quantum Tech Stack Approach

Now that our Diagnostic Benchmarks have been further delineated into ‘High-Level’, ‘Low-Level’, ‘Holistic’, and ‘Component’ categories, we can consider which part of the Quantum Tech Stack each applies to. If you compare Figure 3 of Proctor et al., 2024 with the next Figure, you’ll have your answer.

Figure. The Quantum Tech Stack, for understanding to which part of the stack a Diagnostic Benchmark would apply. (*)

(*) The Quantum Tech Stack concept is threaded throughout GQI’s method of analyzing quantum technology developments. This slide is from GQI’s Quantum Hardware State of Play, a 47-slide deck that steps through the stacks, with the latest accompanying developments and guidance on how to evaluate them. If you are interested in seeing this State of Play, or any of the other State of Plays (Quantum Technology; Quantum Safe; Quantum Sensing, Imaging, and Timing; Quantum Software; Quantum Landscape), please don’t hesitate to contact info@global-qi.com.

October 22, 2024
