The IEEE held its second annual Quantum Week last week, and it contained a wealth of information about developments and outlooks for quantum technology. It is one of the largest and most comprehensive conferences covering quantum technology and provides a good overview of the activity going on in both quantum research and the quantum industry. Although there are many academic conferences as well as industrial summits about quantum technology these days, the IEEE aims this conference at what it calls the engineering gap, bridging those two worlds. The conference lasted a week and offered over 320 hours of program material, including 10 world-class keynotes, 19 tutorials, 23 workshops, 48 technical papers, 18 panels, 29 posters, and over 30 exhibitors in the virtual Exhibit Hall. To help visitors review the material presented in the sessions, the IEEE has recorded most of them and will keep the videos available for the next two months. If you have not registered yet, you can go to the IEEE Quantum Week 2021 web page, look at the program, register, and view the videos that interest you.

Although we are not able to summarize the entire event in this report, we will cover the ten keynotes that were presented and mention some of the comments that we found most noteworthy. The summaries below only provide the highlights of these talks, and we encourage our readers to view the full videos of the sessions if they can. These videos have been posted online and will be available until the end of the year.

Krysta Svore, Microsoft – Shifting Left to Scale Up: Shortening the Road to Scalable Quantum Computing

Krysta delivered the first keynote of the week. Her key theme was that the industry needs to “Shift Left”. By that she meant that developments in the computer industry typically have three components: hardware development, software development, and integration and test. In many cases these components are developed sequentially, but she emphasized that overlapping the phases and moving the software and integration-and-test phases earlier can provide significant benefits for quantum technology. By paying more attention to hardware/software integration, the industry can reduce time to market, improve product quality, lower resources, cost, and risk, and increase revenue. She mentioned three ways this “shifting left” can be accomplished: 1) innovating on algorithms and knowing the hidden costs, 2) prioritizing interoperability, and 3) growing a diverse community for innovation.

An interesting point was made about what is needed to achieve what Microsoft calls “Quantum Practicality”, the point where a quantum computer can process an application that provides either scientific or commercial advantage. Because of the great disparity in gate speed between classical transistors and quantum gates, you need a significant algorithmic advantage for quantum to pay off. In particular, Microsoft believes that an algorithm providing a quadratic speedup (think Grover’s algorithm) will not be able to show enough of an advantage over a classical solution to pay off. Algorithms will need to show a superquadratic speedup (for example, cubic or exponential) to make sense. This rules out for practical use a number of the quantum algorithms developed in the theoretical world, because they only provide a quadratic speedup. In addition, she felt that leadership quantum acceleration will require much larger machines. The starting point for showing quantum leadership would be scientific understanding applications, which would need at least 30,000 physical qubits with error rates of 10⁻⁶. And for applications like cryptanalysis or chemical and material science simulations, over 1 million physical qubits will be needed to start providing leadership-class solutions. These numbers indicate to us that machines of this size are still several years away, and we would not expect to have them until later in this decade.
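
To see why the gate-speed disparity undermines quadratic speedups, consider a back-of-the-envelope comparison. The operation rates below are our own illustrative assumptions, not Microsoft’s figures:

```python
# Rough break-even estimate for a quadratic (Grover-style) speedup.
# Illustrative assumptions: a classical machine executes ~1e9 ops/second,
# while an error-corrected quantum computer manages ~1e5 logical
# ops/second -- a 10,000x gate-speed disadvantage.

CLASSICAL_OPS_PER_SEC = 1e9
QUANTUM_OPS_PER_SEC = 1e5   # assumed logical gate rate after error correction

def break_even_problem_size() -> float:
    """Find N where sqrt(N) quantum ops take as long as N classical ops."""
    # Classical time: N / CLASSICAL_OPS_PER_SEC
    # Quantum time:   sqrt(N) / QUANTUM_OPS_PER_SEC
    # Equal when sqrt(N) = CLASSICAL_OPS_PER_SEC / QUANTUM_OPS_PER_SEC
    ratio = CLASSICAL_OPS_PER_SEC / QUANTUM_OPS_PER_SEC
    return ratio ** 2

N = break_even_problem_size()
print(f"Break-even problem size: N ~ {N:.0e}")                     # ~1e+08
print(f"Classical time at break-even: {N / CLASSICAL_OPS_PER_SEC:.1f} s")
```

Even at this break-even point the quantum run only matches the classical one, and every additional constant factor of overhead pushes the crossover out to still larger problems, which is why superquadratic speedups look far more attractive.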

Microsoft is performing much research to come up with clever algorithms that reduce the number of qubits and gate levels needed to implement various applications. For example, she mentioned new research into a type of error correction code that uses dynamically generated logical qubits. In addition, Microsoft is doing much work applying quantum-inspired algorithms to its customers’ problems in order to help those customers achieve performance improvements right now.

Finally, some of Microsoft’s recent improvements to the Azure Quantum capabilities were mentioned. The system can now work with quantum programs written in the Qiskit and Cirq programming languages. They have created a Quantum Intermediate Representation called QIR, based upon the classical LLVM intermediate representation, that can make it easier to translate between higher-level languages like Q#, Qiskit, and Cirq and different backends. In addition, Microsoft will soon be putting on the cloud a free full state simulator that can simulate more qubits than can be achieved with the current local simulator. They have also been researching new hashing algorithms for use with this simulator that could allow it to simulate systems with over 100 qubits in certain situations. Another simulator now available in preview mode is an Open System Simulator that can provide simulations with noise models.
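
As a quick illustration of why a cloud-hosted full state simulator can exceed a local one, the memory for a state vector doubles with every added qubit. A simple sketch:

```python
# Memory required for a full state-vector simulation of n qubits:
# 2**n complex amplitudes, typically 16 bytes each (two 64-bit floats).

def statevector_memory_gib(n_qubits: int, bytes_per_amplitude: int = 16) -> float:
    return (2 ** n_qubits) * bytes_per_amplitude / 2 ** 30

for n in (30, 34, 40, 50):
    print(f"{n} qubits: {statevector_memory_gib(n):,.0f} GiB")
# 30 qubits fit in a 16 GiB laptop, but 40 qubits already need ~16 TiB,
# which is why techniques like hashing are needed to push toward 100 qubits.
```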

Alan Baratz, D-Wave Systems – Quantum is Ready, But are You?

In contrast to Krysta’s conservative forecast that it would take a quantum computer of at least 30K qubits to provide leadership acceleration, Alan Baratz of D-Wave indicated that he has customers who are in pilot or full deployment right now with D-Wave’s 5,000 qubit Advantage quantum annealing machine. Specific customers he mentioned include Canadian supermarket chain Save-On-Foods, which has reduced the run time of a scheduling task from 25 hours classically to under 2 minutes on the D-Wave machine; Volkswagen, which has seen an 80% reduction in waste and time for an automobile paint shop application; bank BBVA, which has reduced the time it takes for a portfolio optimization problem from 1 day on a Tensor processor to under 3 minutes; OTI Lumionics, which is using it for an OLED materials application; and Menten AI, which is using the annealer for protein folding calculations.

Of particular interest to us was his assertion that quantum annealers will always be the better choice for optimization problems over gate-based quantum computers. His argument is that the VQE or QAOA algorithms that would be used on NISQ computers require heavy classical computing support that takes a lot of time, and that for future error-corrected quantum computers the error correction overhead will be very significant and also take too much time. If this turns out to be true, it would represent a significant advantage for D-Wave, because our data suggests that optimization problems may comprise as much as one-third or more of the total usage of quantum computing, and D-Wave may have much of this market to itself.
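
For readers who want to see what the annealing workflow looks like, here is a minimal sketch using D-Wave’s open-source dimod package. The toy problem and penalty weight are our own; on D-Wave’s Leap service, the local brute-force solver below would be replaced by a sampler backed by an Advantage annealer:

```python
# Minimal QUBO sketch with dimod (pip install dimod). Toy problem: pick
# exactly two of three items (x0, x1, x2) while preferring the cheap ones,
# using the penalty P * (x0 + x1 + x2 - 2)**2 to enforce the constraint.
import dimod

costs = {0: 1.0, 1: 3.0, 2: 2.0}
P = 10.0  # penalty weight (assumption: chosen large relative to the costs)

Q = {}
for i, c in costs.items():
    # Expanding P*(sum - 2)**2 over binary variables (x**2 == x) gives
    # a linear term of -3P per variable, pairwise terms of 2P, offset 4P.
    Q[(i, i)] = c - 3 * P
for i in range(3):
    for j in range(i + 1, 3):
        Q[(i, j)] = 2 * P

bqm = dimod.BinaryQuadraticModel.from_qubo(Q, offset=4 * P)
best = dimod.ExactSolver().sample(bqm).first   # brute force; fine for 3 bits
print(best.sample, best.energy)                # expect x0=1, x2=1, x1=0, energy 3.0
```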

Alan also discussed some of the recent announcements the company has made, including plans to develop a gate-based machine, a new constrained variable solver, an upgrade to their 5,000 qubit machine called Advantage Performance, and other improvements. For more details on these, you can read our article from earlier this month that reported on these developments. One new item he mentioned was a plan to expand D-Wave’s Leap quantum cloud service to include machines based in Germany and Southern California. Today, this service is only supported by machines located at D-Wave’s headquarters in Vancouver, Canada.

Jay Gambetta, IBM Quantum – Challenges and Directions of Quantum Computing with Superconducting Qubits

Jay described some of the near- and medium-term development activities of the IBM quantum group as it continues to advance its machines. He stressed that they are working on three main goals: increasing the number of qubits, increasing the quantum volume, and increasing the number of circuits that can be processed per unit time.

We expect IBM will formally announce their next 127 qubit machine, codenamed Eagle, within the next few weeks, with a goal of having a 1,121 qubit machine codenamed Condor by 2023. To continue this scaling they are planning on using a variety of technologies, including laser annealing to fine tune the qubit frequencies, scalable multiplexed measurements that reduce the ratio of readout amplifiers to qubits from one amplifier for every qubit to one amplifier for every 8 or 10 qubits, multilevel wiring within the chips, and higher density I/O cables for the electronic signals. At this point, they are not planning on using cryo-CMOS control chips or multichip architectures up to the 1,121 qubit level. But to go beyond 1,121 qubits they are starting to research the use of quantum interconnects and multiple processor chips to create a distributed network.
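
A quick calculation, using the qubit counts from the talk, shows why multiplexed readout matters at the Condor scale:

```python
# Readout-amplifier count with and without multiplexed measurement.
qubits = 1121                                # Condor target
for qubits_per_amp in (1, 8, 10):
    amps = -(-qubits // qubits_per_amp)      # ceiling division
    print(f"1 amplifier per {qubits_per_amp:>2} qubits: {amps} amplifiers")
# 1121 -> 141 or 113 amplifiers: roughly a 10x cut in readout hardware
# and in the associated wiring that must run into the cryostat.
```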

For the Condor machine in 2023, IBM has an aggressive goal of achieving a 99.99% gate fidelity, which would be over an order of magnitude improvement over today’s systems. Jay described new gate architectures that can reduce noise, faster gate times, faster readouts, and dynamic circuits. He showed the results of some experiments that reduced certain two-qubit gate delays from about 400 nanoseconds to about 90 nanoseconds. Another experiment reduced readout times from 5.2 microseconds to about 0.33 microseconds. Providing faster gate and readout times without impacting coherence or gate fidelity also improves quality because it allows more gate operations to occur during a qubit’s coherence lifetime. He also showed one experiment that improved qubit coherence times from the current 100 microseconds to 300 microseconds. Other improvements include better compilers that can reduce the number of qubits and gate levels required, and dynamic circuits that allow measuring a qubit, resetting it, and reusing it later in the program. Dynamic circuits can reduce the number of qubits and gate levels needed to execute a program. IBM is also working on new generations of control electronics which will be more flexible and easier to use.
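
To make the “more operations per coherence lifetime” point concrete, here is a small calculation using the figures quoted in the talk:

```python
# Why faster gates improve quality: more sequential operations fit inside
# one coherence lifetime. Figures are the ones quoted in the keynote.
experiments = {
    "400 ns gates, 100 us coherence": (400e-9, 100e-6),
    " 90 ns gates, 100 us coherence": (90e-9, 100e-6),
    " 90 ns gates, 300 us coherence": (90e-9, 300e-6),
}
for label, (t_gate, t_coherence) in experiments.items():
    print(f"{label}: ~{t_coherence / t_gate:,.0f} gates per lifetime")
# 250 -> ~1,100 -> ~3,300 sequential gates: over a 10x gain in usable depth.
```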

One of the more interesting discussions was the emphasis on improving circuit runtimes. They are redesigning the Qiskit runtime architecture in order to improve efficiency and are working on a new measure for circuit execution speed that will provide a measure of circuit depth per second. This measure will take into account not only gate delays, but also the time to set up the circuit, reset times, and other factors, and it will be measured on programs that execute multiple shots, perhaps with different parameters. We expect that IBM will provide more information about this new measure later this year and that they will put as much importance on it as they have on Quantum Volume.
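
To make the idea concrete, here is a rough sketch of how such a throughput metric might be computed. Both the formula and the timing figures are our own illustration, not IBM’s definition:

```python
# A "circuit layers per second" style throughput sketch (our illustration).
def layers_per_second(depth, shots, t_layer, t_reset, t_setup):
    """Total executed circuit layers divided by total wall-clock time."""
    time_per_shot = depth * t_layer + t_reset
    return depth * shots / (t_setup + shots * time_per_shot)

# Illustrative numbers: depth-100 circuit, 1,000 shots, 200 ns per gate
# layer, 50 ms of one-time compile/load overhead.
slow = layers_per_second(100, 1000, 200e-9, 250e-6, 50e-3)  # passive reset wait
fast = layers_per_second(100, 1000, 200e-9, 5e-6, 50e-3)    # active qubit reset
print(f"passive reset: {slow:,.0f} layers/s, active reset: {fast:,.0f} layers/s")
# Reset time dominates here, which is exactly why the new measure counts
# more than just gate delays.
```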

For higher efficiency out of the QPUs, they have developed two low-level programs called Circuit Runner (or Sampler) and Estimator. The Circuit Runner takes one or more circuits, compiles them, executes them, and optionally applies measurement error mitigation. The Estimator estimates the value of an observable for an input circuit. They will also provide Nested and Custom Qiskit runtime programs that can provide simpler execution or more execution speed.
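
As an illustration of the Sampler/Estimator division of labor, here is a minimal sketch using Qiskit’s local reference primitives. The cloud-hosted runtime programs expose the same two concepts as managed services, so treat this only as a conceptual stand-in:

```python
# Sampler vs. Estimator on a Bell state, using Qiskit's local reference
# primitives (qiskit.primitives, available in recent Qiskit releases).
from qiskit import QuantumCircuit
from qiskit.primitives import Sampler, Estimator
from qiskit.quantum_info import SparsePauliOp

bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)

# Sampler ("Circuit Runner") role: run the circuit, return output probabilities.
sampled = Sampler().run(bell.measure_all(inplace=False)).result()
print(sampled.quasi_dists[0])                 # ~{0: 0.5, 3: 0.5}

# Estimator role: estimate an observable's expectation value for the circuit.
estimated = Estimator().run(bell, SparsePauliOp("ZZ")).result()
print(estimated.values[0])                    # ~1.0 for a Bell state
```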

Another area they are researching falls in the general category of trading classical resources for quantum resources. One item they are looking at is something they call Entanglement Forging. This involves taking a larger quantum circuit, splitting it into multiple smaller circuits, and then using a classical computer to combine the results together. They are also looking at using a similar approach to achieve error mitigation by running multiple circuits and then using the classical computer to stitch the results together to provide a more accurate answer. And they are looking at approaches that use error mitigation techniques on some types of gates and error correction techniques on other types of gates within the same program. The bottom line is that IBM believes it can use quantum and classical resources together to run programs with reduced qubit counts and gate depth, while still achieving better accuracy.
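
The simplest special case of this split-and-recombine idea is a state that factors into two independent halves. The sketch below shows only that zero-entanglement core; the actual Entanglement Forging technique extends it with a classically computed weighted sum over Schmidt terms:

```python
# If a 4-qubit state is a product of two 2-qubit halves, the expectation of
# A (x) B factors into <A> * <B>, so two small simulations replace one big one.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, SparsePauliOp

half_a = QuantumCircuit(2)
half_a.h(0)
half_a.cx(0, 1)              # entanglement is allowed *within* a half
half_b = QuantumCircuit(2)
half_b.rx(0.7, 0)
half_b.cx(0, 1)

# Two small simulations, combined classically...
ev_a = Statevector(half_a).expectation_value(SparsePauliOp("ZZ")).real
ev_b = Statevector(half_b).expectation_value(SparsePauliOp("ZI")).real
combined = ev_a * ev_b

# ...match one large simulation of the full 4-qubit circuit.
full = QuantumCircuit(4)
full.compose(half_a, [0, 1], inplace=True)
full.compose(half_b, [2, 3], inplace=True)
ev_full = Statevector(full).expectation_value(SparsePauliOp("ZIZZ")).real
assert np.isclose(combined, ev_full)
print(combined, ev_full)
```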

Urbasi Sinha, Raman Research Institute – Photonic Quantum Science and Technologies

Professor Sinha described the research in quantum technology being performed at the Quantum Information and Computing lab of the Raman Research Institute in Bengaluru, India, which specializes in photonic quantum technology. She described how they are using photonics to create qutrits and qudits, including a pump beam modulation technique for generating qudits. She also described research they are performing into quantum state interferography, as well as their experimentation with Quantum Key Distribution (QKD). They have demonstrated a new variant of the B92 protocol that achieves higher bit rates and lower bit error rates. They are also working to develop India’s first satellite QKD system.

Prineha Narang, Harvard University – Building Scalable Quantum Information Systems

Professor Narang discussed some of the physics research being performed in her lab at Harvard. The talk discussed predicting new artificial atom qubits in 2D hosts, understanding and controlling spin-photon interfaces, algorithms for correlated quantum matter, and hybrid quantum architectures towards quantum network science.

Brian Neyenhuis, Honeywell Quantum Solutions – Honeywell’s Commercial Trapped-Ion Quantum Computer

Brian’s keynote address provided a clear description of Honeywell’s ion trap computer built with their Quantum Charge Coupled Device (QCCD) technology. He described the architecture in some detail and showed a good animation of how the one- and two-qubit gates are implemented, including the process of moving ions around and merging them to execute the two-qubit gates. He described their work to measure the Quantum Volume of their machines, which now stands at 1024. He also described the mid-circuit measurement capability built into the device and went into some detail about an experiment they performed to implement the [[7,1,3]] error correcting code (sometimes called a color code or Steane code) to correct single qubit errors. He admitted that they have not yet reached the breakeven level, where the error rate of the logical qubit formed by the grouping of physical qubits is lower than the error rate of an individual physical qubit, but indicated that achieving this is one of their next areas of research. Additional information about this research is available in a pre-print paper posted on arXiv (arXiv:2107.07505). Brian also indicated that the current Honeywell computer available to customers is the H1-1 version with 10 qubits. They are also testing a model H2 version with more qubits arranged in a “racetrack” configuration in their labs. We expect to hear more about the H2 version within the next few months. Honeywell has also started development work on an H3 version that will have a two-dimensional topology and even more qubits, but that version is still a few years away.
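
For reference, the [[7,1,3]] notation means 7 physical qubits encoding 1 logical qubit with code distance 3, which is enough to correct any single-qubit error. The sketch below uses Qiskit’s Pauli algebra to verify that the code’s six standard stabilizer generators commute, as any valid stabilizer group must; the generator list is the textbook Steane construction, not Honeywell’s published circuit:

```python
# Check that the six textbook Steane-code stabilizer generators commute.
from itertools import combinations
from qiskit.quantum_info import Pauli

stabilizers = [Pauli(p) for p in (
    "IIIXXXX", "IXXIIXX", "XIXIXIX",   # X-type generators (Hamming [7,4] rows)
    "IIIZZZZ", "IZZIIZZ", "ZIZIZIZ",   # Z-type generators on the same supports
)]

# Every pair of generators must commute for a valid stabilizer group.
assert all(a.commutes(b) for a, b in combinations(stabilizers, 2))
print("All 6 Steane-code generators commute.")
```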

James Clarke, Intel Labs – From a Grain of Sand to a Quantum Computer

James Clarke talked about the quantum development activities at Intel. A primary strategy is to leverage as much technology and experience as they can from their current semiconductor manufacturing expertise. Although they did work with superconducting qubits in the early days of their research, they are now focused on spin qubits (also known as quantum dots) because these can essentially be viewed as single electron transistors with similar structures. The qubits are arranged in a two-dimensional topology with nearest neighbor connectivity. Intel has developed some interesting technology, including a cryogenic prober that provides a significant speedup in collecting characterization data on new devices to evaluate their performance. He also mentioned their Horse Ridge quantum control chip, which can operate at a temperature of 4 kelvin; they are seeing very similar performance from this chip versus room temperature control electronics. Another thing not generally known that he mentioned is that Intel is developing a full stack solution, including the software. They are developing their own quantum programming language that is also based upon the classical LLVM representation, similar to Microsoft’s approach. Although Jim did not provide details on Intel’s quantum roadmap, he did provide a glimpse into Intel’s commercialization strategy. They expect to commercialize the quantum technology in a manner similar to how they commercialize their current microprocessor products: Intel will develop PDKs (Platform Development Kits) and SDKs (Software Development Kits) and work with customers and partners to have them develop machines based upon the Intel technology. So we do not expect that Intel will enter the cloud business and offer quantum computing services on its own.

Sonika Johri, IonQ – Approaching Quantum Advantage with IonQ Quantum Computers

Sonika first provided an overview of IonQ’s ion trap architecture and then discussed five recent research papers, written with various software partners, that used IonQ’s computers to find potential ways that quantum advantage could be achieved. These papers include research on Generative Quantum Learning of Joint Probability Distribution Functions, Generation of High Resolution Handwritten Digits with an Ion Trap Computer, Nearest Centroid Classification on a Trapped Ion Quantum Computer, Low Depth Amplitude Estimation on a Trapped Ion Quantum Computer, and Application Oriented Performance Benchmarks for Quantum Computing. Although quantum chemistry was not covered in these particular papers, she mentioned that the company is also actively researching potential applications in that area.

The last paper described above, on application oriented performance benchmarks, was created as part of a QED-C project that compared machine performance from several different providers. She showed data indicating that IonQ’s next generation machine outperformed the others. But we would point out that everyone is always working on a next generation machine, so these types of comparisons aren’t permanent and could change rather quickly when someone else introduces a new machine. She concluded by saying that we are currently seeing rapid progress in both quantum algorithms and quantum hardware, and she felt that the introduction of new quantum advantage algorithms and use cases will look more like an upward slope than a step function.

David Dean, Oak Ridge National Laboratory – Pursuing the Scientific Grand Challenge of Developing Quantum Information Science

David started by providing a quick overview of the history and basic concepts of quantum technology. He then described some of the quantum research funding activities being undertaken by the U.S. Department of Energy. Funding has been steadily increasing since FY 2017 and reached $250 million in FY 2021. The department runs two quantum scientific testbeds, QSCOUT at the Sandia National Laboratory and AQT at the Lawrence Berkeley National Laboratory. It also funds five National Quantum Information Science Research Centers based out of five of the national laboratories.

One of them is the Quantum Science Center (QSC) at Oak Ridge National Laboratory, which Dr. Dean heads. The QSC includes about 100 researchers across 16 different institutions. They are pursuing three broad areas, material science, computational science, and sensors, and he provided examples of some of the projects being pursued in each. He also described something called the Innovation Chain being used at QSC. This is a framework to identify transitions and manage collaborations between projects. It starts with fundamental science and then proceeds to device research, then prototypes, then applications, and finally to systems. The end goal is to help integrate solutions into commercial systems to bolster U.S. economic competitiveness. The discussion concluded with a Q&A on workforce training and development. The QSC itself has 40 postdocs and 12 students working in the center, and across the five DOE centers the total number of postdocs and students is in the hundreds. QSC has a number of programs to provide training, including summer schools, training sessions with industry, exchange programs, research grants, and journal clubs.

Anthony Megrant, Google Quantum AI – Progress and Challenges Toward an Error-Corrected Quantum Computer

Anthony and his team are working to develop error correction for Google’s quantum computers. Google’s goal is to create a machine with 1,000,000 physical qubits and 1,000 error-corrected logical qubits by 2029, with a logical qubit error rate of less than 10⁻¹⁰. They are working on a milestone to demonstrate one logical qubit at this level with a machine that has 1,000 physical qubits. The machine would be a monolithic processor and look similar to their current Sycamore processor, but would need a slightly larger cryostat with more cooling capability and some miniaturized components so that everything can fit. To scale up from 1,000 physical qubits, Google will arrange them in groups of tiles until they reach the 1,000,000 level.

They are currently planning to use a surface code for the error correction, which arranges each logical qubit in a square array of physical qubits with a 2D topology and provides correction for both bit and phase errors. For their current research, they are using a repetition code with the qubits connected in a linear 1D line. However, in order to capture as many potential causes of error as possible, this line is physically embedded in a 2D array so that it can pick up things like crosstalk error. The repetition code can be thought of as a 1D version of the surface code that can correct either bit errors or phase errors but, unlike the full surface code, not both at the same time.
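
To illustrate the repetition-code idea, here is a minimal classical simulation with majority-vote decoding. Real experiments measure stabilizers via ancilla qubits rather than reading the data qubits directly, so this is only a sketch of the error-suppression math for bit flips:

```python
# Repetition code, classically: copy a bit across n physical bits, flip
# each independently with probability p, then decode by majority vote.
import random

def logical_error_rate(n_bits: int, p: float, trials: int = 100_000) -> float:
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(n_bits))
        if flips > n_bits // 2:      # majority vote decodes incorrectly
            errors += 1
    return errors / trials

p = 0.01                             # assumed physical error rate
for n in (1, 3, 5, 11):
    print(f"{n:>2} bits: logical error ~ {logical_error_rate(n, p):.2e}")
# Below threshold, each added pair of bits suppresses the logical error
# further; above threshold, adding bits would make it worse.
```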

One of the achievements he reported was a 100X improvement in error rate when going from using 5 qubits to 21 qubits. This is important because if the physical qubits don’t have an error rate below a particular threshold, the logical error rate will actually get worse as you add more qubits to your error correction code instead of getting better.
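
As a back-of-the-envelope check on that 100X figure, the textbook scaling for a below-threshold code is a logical error rate proportional to Λ^(−(d+1)/2), where Λ is the suppression factor and d the code distance. If we read the 5-qubit and 21-qubit chains as distance-3 and distance-11 codes, counting data plus measure qubits (an assumption on our part), the implied Λ is:

```python
# Implied error-suppression factor from a 100X improvement going from
# distance 3 to distance 11 (our reading of the 5- and 21-qubit chains).
d_small, d_large, improvement = 3, 11, 100
exponent = (d_large + 1) / 2 - (d_small + 1) / 2   # = 4 suppression steps
Lambda = improvement ** (1 / exponent)
print(f"Implied suppression factor: Lambda ~ {Lambda:.2f}")
# ~3.2x fewer logical errors each time the code distance grows by 2.
```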

Some of the improvements he mentioned include better one-qubit and two-qubit gates; a new reset gate that can reset not only the |1> state to |0>, but also the |2> state to |0>; a 2X improvement in readout errors and a 5X reduction in readout time; and dynamical decoupling applied to neighboring qubits during measurements to reduce crosstalk. Other areas they are working to improve include materials, packaging, the cryostat, fabrication, and the electronics and wiring, in addition to the qubit architecture. They will continue this work to hit their goal of creating an error-corrected logical qubit with a 10⁻¹⁰ error rate, and once that occurs the focus will shift to scaling up so they can meet their 2029 goal.

October 23, 2021