There is a lot of research underway in quantum machine learning to understand how quantum computers can speed up machine learning applications. In particular, many researchers believe that future quantum computers will be able to speed up the training of Boltzmann machines and neural networks.
However, less attention has been paid to how classical machine learning can be used to optimize the performance of quantum computers themselves. Two recently released papers show how classical machine learning techniques can tackle the problem of finding the optimal sequence of control pulses that implement a machine's quantum gates and maximize their performance. This is a complex optimization problem, but solving it can have a significant positive impact on the quality of the calculations run on the machine. As an example, see our article from last August describing how IBM increased the Quantum Volume of one of its machines from 32 to 64 solely by making changes to the software and qubit control pulses.
The first paper came from a team at Intel and was titled “Designing high-fidelity multi-qubit gates for semiconductor quantum dots through deep reinforcement learning”. (See here for the preprint posted on arXiv.) The research, presented at the recent IEEE Quantum Week conference, shows how deep reinforcement learning can be used to design optimal control pulses that achieve high-fidelity multi-qubit gates in Intel's quantum dot qubit design. Although the optimizations were evaluated on a qubit simulator, the analysis indicated they could achieve a two-qubit CZ gate with a fidelity greater than 99.9% and a gate duration as short as 21 nsec.
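To give a flavor of the reinforcement learning approach, here is a deliberately simplified sketch, not the paper's actual algorithm or qubit model. A "pulse" is reduced to a short vector of segment amplitudes, the simulated reward is a toy gate fidelity that peaks when the summed amplitude hits a target rotation angle, and a Gaussian policy over the amplitudes is trained with the basic REINFORCE policy-gradient rule. All of the numbers (segment count, noise level, learning rate, reward function) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SEG = 4          # number of pulse segments (assumed)
SIGMA = 0.1        # fixed exploration noise of the Gaussian policy
LR = 0.01          # policy-gradient learning rate
TARGET = np.pi     # desired total rotation angle (toy stand-in for a real gate)

def fidelity(pulse):
    """Toy gate fidelity: equals 1 when the summed amplitude hits TARGET."""
    return np.cos((pulse.sum() - TARGET) / 2.0) ** 2

theta = np.full(N_SEG, 0.5)   # policy mean: the current best pulse guess

for step in range(500):
    # Sample a batch of candidate pulses from the Gaussian policy
    actions = theta + SIGMA * rng.standard_normal((64, N_SEG))
    rewards = np.array([fidelity(a) for a in actions])
    baseline = rewards.mean()  # batch-mean baseline for variance reduction
    # REINFORCE: grad log-prob of a Gaussian policy is (a - theta) / sigma^2
    grad = ((rewards - baseline)[:, None] * (actions - theta)).mean(0) / SIGMA**2
    theta += LR * grad         # gradient ascent on expected fidelity

print(f"final fidelity of mean pulse: {fidelity(theta):.3f}")
```

In the actual paper the policy is a deep neural network and the reward comes from simulating the quantum-dot gate dynamics, but the overall loop (propose pulses, score their fidelity, nudge the policy toward higher-reward pulses) has this shape.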
The second paper came from a team at Q-CTRL and describes a TensorFlow-based toolset for characterizing and suppressing the impact of noise and imperfections in quantum hardware. In particular, the team used TensorFlow's efficient gradient calculation tools to optimize the many control variables involved. Q-CTRL provides an overview of this research in a blog article and has also posted a technical paper on arXiv here that provides case studies of its use on both trapped ion and superconducting quantum computers.
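The core idea of treating pulse design as gradient-based optimization can be sketched on a toy single-qubit model. The snippet below optimizes the amplitudes of eight pulse segments so that their combined propagator implements an X gate; a finite-difference gradient stands in for TensorFlow's automatic differentiation so the example stays dependency-free. This is only an illustration under assumed dynamics, not Q-CTRL's toolset or API.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def propagator(omegas, dt=1.0):
    """Product of per-segment propagators exp(-i * omega * dt * X / 2)."""
    U = I2
    for w in omegas:
        U = (np.cos(w * dt / 2) * I2 - 1j * np.sin(w * dt / 2) * X) @ U
    return U

def fidelity(omegas, target=X):
    """Phase-insensitive gate fidelity |tr(target^dag U)|^2 / d^2."""
    U = propagator(omegas)
    return abs(np.trace(target.conj().T @ U)) ** 2 / 4.0

def grad(omegas, eps=1e-6):
    """Finite-difference gradient (autodiff plays this role in the real toolset)."""
    g = np.zeros_like(omegas)
    base = fidelity(omegas)
    for k in range(len(omegas)):
        shifted = omegas.copy()
        shifted[k] += eps
        g[k] = (fidelity(shifted) - base) / eps
    return g

omegas = np.full(8, 0.5)          # initial guess: 8 equal-amplitude segments
for _ in range(200):
    omegas += 0.2 * grad(omegas)  # gradient ascent on gate fidelity
```

With automatic differentiation, the same gradient ascent scales to many more control variables and to cost functions that also penalize noise sensitivity, which is the regime the Q-CTRL paper targets.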
We expect there will be more uses of AI to improve the performance of quantum computers, and it will be interesting to see what additional applications are found in the future.