We attended the Adiabatic Quantum Computing Conference (AQC) held at the NASA Ames Research Center on June 25-28, 2018. The conference showed continued progress in the technology and drew a total of 223 registrants from around the world.
For those who think that quantum annealing will be imminently superseded by gate level quantum computing, we would advise that you are sorely mistaken. There were presentations representing five different efforts to create either a quantum or quantum-inspired solution aimed at providing improved solutions to combinatorial optimization problems. These include:
· D-Wave with their line of superconducting quantum annealers.
· IARPA’s QEO (Quantum Enhanced Optimization) program to develop improved architectures for superconducting based quantum annealers.
· Fujitsu’s Digital Annealer program to create specialized classical hardware that can solve optimization problems faster than general-purpose processors.
· NTT/NII/Stanford/University of Tokyo/Osaka University/Tohoku University/Tokyo University of Science joint effort to create a photonics based Coherent Ising Machine (CIM, also known as a Quantum Neural Network).
· Google’s experimental program to research their own quantum annealing architecture.
D-Wave led off by describing some of the key goals in their next generation processor development project. These include improvements in annealing control, connectivity, system overhead, and environmental noise control. The annealing controls are a continuation of some recent hardware changes made in the D-Wave 2000Q and include better control of annealing offsets, annealing pause, annealing quench, and a reverse anneal capability.
These features are becoming increasingly important as researchers look to implement hybrid classical-quantum algorithms for improved performance. The reverse anneal feature allows one to partially solve an optimization problem classically and then load the solution into the quantum computer to find even better optimizations. This process can be repeated multiple times to get improved results. Google is taking advantage of this feature to implement a parallel tempering algorithm which they described at the conference. Since this feature was only announced eight months ago, this is the first conference where researchers reported preliminary investigations into its use. We expect a lot more next year.
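The classical/quantum alternation described above can be sketched in plain Python. This is a toy illustration of the control flow only: the `refine` step below is an ordinary classical local search standing in for a reverse anneal on real hardware, and all problem coefficients and function names are ours, not D-Wave's.

```python
import random

def ising_energy(h, J, s):
    """Energy of a +/-1 spin configuration s for fields h and couplers J."""
    e = sum(h[i] * s[i] for i in range(len(s)))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

def greedy_start(h):
    """Cheap classical starting point: align each spin against its local field."""
    return [-1 if hi > 0 else 1 for hi in h]

def refine(h, J, s, flips=200, seed=0):
    """Stand-in for one reverse-anneal refinement pass: try random single-spin
    flips, keeping only those that do not raise the energy."""
    rng = random.Random(seed)
    s = list(s)
    best = ising_energy(h, J, s)
    for _ in range(flips):
        i = rng.randrange(len(s))
        s[i] = -s[i]
        e = ising_energy(h, J, s)
        if e <= best:
            best = e
        else:
            s[i] = -s[i]  # revert a flip that made things worse
    return s, best

# Toy 4-spin problem (coefficients chosen arbitrarily for illustration).
h = [0.5, -0.3, 0.2, -0.4]
J = {(0, 1): -1.0, (1, 2): 0.8, (2, 3): -0.6}

s = greedy_start(h)
start_energy = ising_energy(h, J, s)
for round_no in range(3):  # seed classically, refine, and repeat
    s, e = refine(h, J, s, seed=round_no)
```

Each pass starts from the best configuration found so far, mirroring how a reverse anneal seeds the quantum processor with a candidate solution rather than a uniform superposition.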
A key feature described in the next generation processor will be improved connectivity. D-Wave described an architecture which they call Pegasus that will increase connectivity from 6 degrees to 15 degrees. In addition, there will be more flexibility in the coupling to include non-stoquastic terms. These improvements will make it easier to embed an optimization problem on the hardware. D-Wave described a 680 qubit test chip they have developed to prove out this architecture and indicated their plans to create a version with over 5000 qubits based upon this architecture.
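To see why higher connectivity matters for embedding, consider a simple degree count: if a logical variable has more neighbors than a physical qubit has couplers, the embedding must chain several physical qubits together to represent it. The sketch below checks only this necessary condition; real minor embedding also depends on the hardware graph's topology, and the function names are ours for illustration.

```python
from collections import Counter
from itertools import combinations

def over_degree(edges, hardware_degree):
    """Return the variables whose logical degree exceeds a physical qubit's
    degree; embedding those requires chaining physical qubits together."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {v: d for v, d in deg.items() if d > hardware_degree}

# A fully connected 8-variable problem: every variable has degree 7.
K8 = list(combinations(range(8), 2))

over_degree_6 = over_degree(K8, 6)    # degree-6 coupling (Chimera-style)
over_degree_15 = over_degree(K8, 15)  # degree-15 coupling (Pegasus-style)
```

On degree-6 hardware every one of the eight variables exceeds the physical degree and must be chained, while degree-15 coupling accommodates all of them directly.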
Sometimes when the qubit count is increased, noise problems can get worse and the overhead times for programming and readout will get longer. However, D-Wave is targeting a 2X improvement in noise characteristics for improved accuracy, and they aim to maintain or even improve the programming and readout times versus the existing 2000Q despite a more than 2.5X increase in qubit count.
Although Google’s 72 qubit gate level chip gets a lot of publicity, Google has also developed a small-scale 9 qubit quantum annealing chip, which they reported on in their presentation. It uses flip chip semiconductor technology, and they are exploring additional architectures including an 8 qubit fully-connected chip.
In much the same way that NVIDIA has developed specialized GPU accelerators that offer improved performance for machine learning applications over standard microprocessors, Fujitsu has developed a custom CMOS implementation called a Digital Annealer specifically designed to solve quadratic unconstrained binary optimization (QUBO) problems. Their first generation version supports 1024 variables and provides all-to-all coupling which is a great advantage for embedding problems. In addition, since the machine uses digital logic, noise and coherence time issues should not be a problem. Fujitsu described use of the machine in solving stochastic search problems and indicated that their next generation machine will support as many as 4000 to 8000 variables. An arXiv paper which more fully describes this machine was released during the conference and you can read it here.
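A QUBO problem asks for the binary vector x that minimizes x^T Q x. As a minimal sketch (the tiny Q matrix is ours for illustration and is not from Fujitsu's paper), brute force works for a handful of variables but scales exponentially, which is exactly the gap specialized annealing hardware aims to close:

```python
from itertools import product

def qubo_energy(Q, x):
    """Objective x^T Q x for binary vector x and an upper-triangular dict Q."""
    return sum(q * x[i] * x[j] for (i, j), q in Q.items())

def brute_force_min(Q, n):
    """Exhaustive search over all 2^n binary vectors -- fine for tiny n,
    hopeless in general."""
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Toy 3-variable instance; diagonal entries act as linear biases since x*x = x.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0, (0, 1): 2.0, (1, 2): 2.0}
best = brute_force_min(Q, 3)
```

Here the positive off-diagonal terms penalize setting adjacent variables together, so the minimizer turns on only the non-adjacent variables.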
Two papers from Stanford and Tohoku Universities were presented that described Coherent Ising Machines (also called Quantum Neural Networks). This architecture also provides all-to-all connectivity, which can be highly advantageous in configuring problems. Public access is now available for this machine and you can view the article we previously wrote about this architecture here.
We also previously wrote about the IARPA QEO program in an article here. Several papers were presented from Lockheed Martin, Northrop Grumman, MIT Lincoln Labs, and the University of Southern California that show early results from this program. From a hardware standpoint, MIT Lincoln Labs described the chip level portion of the implementation using a three tier chip stack built with their flip chip technology.
In addition to presentations on the hardware, there were numerous presentations and poster sessions that covered theoretical, software, and applications topics related to adiabatic quantum computing. One key buzzword you may hear more frequently is “quantum-inspired”. As research continues on true quantum based devices, folks developing classical techniques for optimization are not standing still. New classical algorithms are being investigated, custom digital architectures have been developed, and research into hybrid classical/quantum algorithms is increasing. So even for those skeptics who are unsure if true quantum computing will ever be commercially viable, one result is already apparent: the competition has had a positive impact on improving classical optimization algorithms.
We are hopeful that the presentations and videos from the conference will be posted online within the next few weeks. If so, you should be able to find them at the conference web site here. In the meantime, the agenda for the conference can be found here and the abstracts of the presentations and poster sessions are here.
The 2019 AQC Conference will be held in Innsbruck, Austria.