Measuring and comparing the performance of different quantum computers can be a complex task. As a first step, IBM developed a metric called Quantum Volume in 2017 to give equal weight to qubit error rates and qubit count. Although this measure is a vast improvement over simply counting qubits, it has limitations. (See our previous article on The Pitfalls of Overreliance on the Quantum Volume Metric.) IonQ modified the concept slightly and proposed its own version called Algorithmic Qubits, and Atos proposed another benchmark called Q-Score based upon the Max-Cut problem. But none of these benchmarks is universally accepted as one that accurately reflects the “goodness” of quantum computers under all circumstances. For example, none of them measures total execution time, which could be an important factor in some environments.

Now the Standards and Performance Metrics Technical Advisory Committee (Standards TAC) of the Quantum Economic Development Consortium (QED-C) has introduced an open source suite of quantum application-oriented performance benchmarks designed to measure the effectiveness of quantum computing hardware at the application level. The benchmark suite runs a variety of commonly used quantum algorithms, such as the Quantum Fourier Transform, Grover’s Search, Amplitude Estimation, and others, over a varying number of qubits and plots the associated circuit depth and average result fidelity, as shown in the diagram below. An advantage of this approach is that it exercises algorithms that could actually appear in applications rather than an arbitrary workload such as randomized benchmarking. It also is not constrained to square configurations (i.e., where the number of circuit levels equals the number of qubits). Understanding the results of such benchmarks is more complicated than comparing a single number like Quantum Volume, but that should be expected because quantum computers are complex devices.

Example Plots of Simulation Results using the QED-C Benchmarks
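To make the idea concrete, the short sketch below illustrates the general approach of an application-oriented benchmark: build an application circuit (here, a Quantum Fourier Transform round trip) at increasing qubit counts, run it, and record the transpiled circuit depth along with how close the measured results come to the known ideal answer. It is written with Qiskit, one of the platforms the suite supports, but it is not taken from the QED-C code; the circuit construction, the simple success-rate fidelity proxy, and the Aer simulator backend are all illustrative choices rather than the suite’s actual scoring method.

```python
# Illustrative sketch only (not the QED-C suite's code): sweep an application
# circuit across qubit counts, record transpiled depth, and score how often the
# measured result matches the known ideal answer.
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import QFT
from qiskit_aer import AerSimulator

def qft_roundtrip_circuit(num_qubits: int) -> QuantumCircuit:
    """Encode the basis state |0...01>, apply QFT then inverse QFT, and measure.

    The ideal outcome is the original basis state, so results are easy to score."""
    qc = QuantumCircuit(num_qubits, num_qubits)
    qc.x(0)
    qc.compose(QFT(num_qubits), inplace=True)
    qc.compose(QFT(num_qubits).inverse(), inplace=True)
    qc.measure(range(num_qubits), range(num_qubits))
    return qc

backend = AerSimulator()   # swap in a real device backend to benchmark hardware
shots = 1000

for n in range(2, 7):      # sweep the circuit width
    tqc = transpile(qft_roundtrip_circuit(n), backend)
    counts = backend.run(tqc, shots=shots).result().get_counts()
    ideal = "0" * (n - 1) + "1"                 # expected bitstring
    fidelity = counts.get(ideal, 0) / shots     # crude proxy, not the suite's metric
    print(f"{n} qubits: transpiled depth {tqc.depth()}, result fidelity ~ {fidelity:.3f}")
```

On an ideal simulator this sweep reports fidelities near 1.0; on real hardware (or a noisy simulation) the fidelity falls off as circuit width and depth grow, which is exactly the behavior the QED-C plots are designed to expose.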

The code for the benchmarks has been posted on GitHub, and it was created with the goals of being easy to use, hardware agnostic, and compatible with the Qiskit, Cirq, and Braket software platforms. A news release from the QED-C announcing these benchmarks can be found here, an arXiv paper that describes them in more detail can be found here, and the GitHub repository with the code is located here.
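For readers who want to try the suite themselves, the repository’s documentation walks through running an individual benchmark. The snippet below shows roughly what that looks like; the module path and the keyword arguments (min_qubits, max_qubits, max_circuits, num_shots, backend_id) are assumptions based on the repository’s layout and example notebooks at the time of writing, so treat this as a sketch and consult the README for the authoritative usage.

```python
# Rough sketch of invoking one benchmark from the QED-C suite (Qiskit variant).
# Module path and parameter names are assumptions based on the repository layout;
# check the repo's README for current usage.
import sys
sys.path.insert(1, "quantum-fourier-transform/qiskit")

import qft_benchmark

qft_benchmark.run(
    min_qubits=2,                   # smallest circuit width in the sweep
    max_qubits=8,                   # largest circuit width in the sweep
    max_circuits=3,                 # number of circuits generated per width
    num_shots=1000,                 # shots per circuit
    backend_id="qasm_simulator",    # or the name of a real device backend
)
```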

October 12, 2021