I had the privilege of attending the 2016 Adiabatic Quantum Computing Conference, hosted by Google in Venice (Los Angeles), California, from June 26 to 30. At the end of the conference, Google also hosted tours of its Quantum Computing Lab in Santa Barbara (see their special logo) and showed attendees some of the labs it uses for fabrication and testing of its quantum computers. This was the fifth conference in the series and by far the largest, with 180 attendees from around the world. As another sign of increasing interest, so many proposals for Contributed Talks were submitted that the organizers could accept fewer than 50% of them. The result was a good range of talks covering many different aspects of the field, with both theoretical and experimental results. I will summarize some of the significant trends that I saw.
Google started the conference with a presentation describing a parallel tempering architecture that combines classical systems with adiabatic annealing machines. The classical systems would perform simulated annealing to capture thermal optimizations, while the adiabatic annealers would capture quantum tunneling optimizations. The idea is that each system works on what it does best, providing the best of both worlds. Google also described its efforts to develop two different hardware qubit types, one for logic-gate and one for adiabatic quantum computers. The former is based upon the XMON qubit developed at UCSB, while the latter uses a fluxmon qubit architecture along with a neocortex architecture for increased qubit-to-qubit coupling. With the logic-gate qubits, Google is preparing a test bed of about 49 qubits to implement a quantum supremacy experiment. Although this would be a contrived problem, constructed to show that a quantum circuit can calculate in seconds a function that would take a large classical computer several days, it would provide definite proof that a quantum approach can be superior. Google, as well as MIT Lincoln Labs, also mentioned 3D approaches that use a chip-on-chip design to increase density and performance. By separating the chips that perform logic and control functions from the qubit chips themselves, they can optimize each chip for its respective function and improve performance. Additional potential hardware improvements were discussed, including methods to reduce flux noise, additional coupler types for use with non-stoquastic Hamiltonians, ways of achieving more flexible multi-qubit interactions, and techniques for modifying the annealing schedule for improved performance.
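To make the parallel tempering idea concrete, here is a minimal, purely classical sketch on a toy 1-D Ising chain. This is an illustration of the general replica-exchange technique, not Google's actual implementation; the problem size, temperatures, and couplings are all made up for the example, and the quantum-tunneling side of the hybrid is not modeled at all — every replica here uses thermal (Metropolis) moves.

```python
import math
import random

random.seed(0)

N = 12  # spins in a toy 1-D Ising chain (illustrative size)
J = [random.choice([-1.0, 1.0]) for _ in range(N - 1)]  # random couplings

def energy(s):
    """Ising chain energy E = -sum_i J_i * s_i * s_{i+1}."""
    return -sum(J[i] * s[i] * s[i + 1] for i in range(N - 1))

# One replica per temperature; hot replicas explore broadly,
# cold replicas refine toward low-energy states.
temps = [0.1, 0.5, 1.0, 2.0]
replicas = [[random.choice([-1, 1]) for _ in range(N)] for _ in temps]

for sweep in range(2000):
    # Thermal (Metropolis) single-spin flips within each replica --
    # the "simulated annealing" half of the hybrid described above.
    for s, T in zip(replicas, temps):
        for _ in range(N):
            i = random.randrange(N)
            old = energy(s)
            s[i] = -s[i]
            dE = energy(s) - old
            if dE > 0 and random.random() >= math.exp(-dE / T):
                s[i] = -s[i]  # reject the flip
    # Replica-exchange step: swap states between neighboring temperatures
    # with the standard parallel-tempering acceptance probability.
    for k in range(len(temps) - 1):
        dB = 1.0 / temps[k] - 1.0 / temps[k + 1]
        dE = energy(replicas[k]) - energy(replicas[k + 1])
        if random.random() < math.exp(min(0.0, dB * dE)):
            replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]

best = min(energy(s) for s in replicas)
print(best)  # lowest energy found across replicas
```

In the hybrid architecture described in the talk, some replicas would run on classical hardware while an annealer contributes tunneling-driven moves; the exchange step is what lets each side hand its best candidates to the other.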
Several talks discussed using Restricted Boltzmann Machines (RBMs) to apply adiabatic quantum computers to machine learning applications. Given the tremendous amount of attention being paid to machine learning in general, some people believe it could be the “killer application” that significantly drives the growth of quantum annealing.
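For readers unfamiliar with RBMs, here is a minimal, purely classical sketch of one trained with one-step contrastive divergence (CD-1) on a toy dataset. All sizes and parameters are illustrative. In the quantum-annealing proposals discussed at the conference, the annealer's role is to supply samples from the model distribution — the quantity that the single Gibbs step crudely approximates here.

```python
import math
import random

random.seed(1)

NV, NH = 6, 3  # visible and hidden units (toy sizes)
W = [[random.gauss(0, 0.1) for _ in range(NH)] for _ in range(NV)]
b = [0.0] * NV  # visible biases
c = [0.0] * NH  # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_h(v):
    """P(h_j = 1 | v) = sigmoid(c_j + sum_i v_i W_ij); returns (probs, sample)."""
    p = [sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(NV))) for j in range(NH)]
    return p, [1 if random.random() < pj else 0 for pj in p]

def sample_v(h):
    p = [sigmoid(b[i] + sum(h[j] * W[i][j] for j in range(NH))) for i in range(NV)]
    return p, [1 if random.random() < pi else 0 for pi in p]

# Toy training set: two repeated binary patterns.
data = [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 20
lr = 0.1

for epoch in range(200):
    for v0 in data:
        ph0, h0 = sample_h(v0)
        _, v1 = sample_v(h0)   # one Gibbs step: the "negative phase" sample
        ph1, _ = sample_h(v1)
        # Gradient step: data statistics minus model statistics.
        for i in range(NV):
            for j in range(NH):
                W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
        for i in range(NV):
            b[i] += lr * (v0[i] - v1[i])
        for j in range(NH):
            c[j] += lr * (ph0[j] - ph1[j])

# Reconstruction check: the trained RBM should roughly reproduce a pattern.
_, h = sample_h([1, 1, 1, 0, 0, 0])
p, _ = sample_v(h)
print([round(x, 2) for x in p])
```

The appeal of the quantum approach is that the negative-phase sample above (`v1`, `ph1`) is exactly the part that is expensive to compute classically for large models.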
There were also several discussions of simulation approaches and comparisons, including Simulated Annealing, Quantum Monte Carlo, and other methods. One observation I have is that a side benefit of quantum computing is the competition it provides to those developing classical optimization algorithms. As a result, the classical algorithms are getting better too, so achieving quantum supremacy will require aiming at a moving target.
Several significant ideas were presented that will extend and enhance the capabilities of future adiabatic quantum computers, and this surely bodes well for success in the years ahead. The 2017 Adiabatic Quantum Computing Conference will be held in Japan.
For more information on the talks discussed at the conference, you can access the listing of abstracts here.