Quantum Computing Report

IBM Reveals More Details about Its Quantum Error Correction Roadmap

Conceptual Picture of IBM’s Starling, Planned to Be the Company’s First Fault Tolerant Quantum Computer. Credit: IBM.

IBM had previously indicated its intention to create fully fault tolerant quantum computers (FTQC), with the ultimate goal of producing a processor codenamed Blue Jay that offers 2,000 logical qubits and supports circuits with 1 billion gates by 2033. This roadmap will also provide key technologies needed to meet the goals of DARPA's Quantum Benchmarking Initiative, in which IBM is participating.

But what some people may not realize is that moving the product line from its current NISQ architecture to an FTQC architecture requires changes up and down the stack, including wafer processing technology, qubit topologies, circuit design, mid-stack software, and program compilation. The company has been quietly working on this for several years, and it has now released many more details on what it is planning, along with an indication of the progress it has already made.

Perhaps the key decision the company needed to make is which error correction code to use. Traditionally, companies working on superconducting technology have considered what is known as the surface code. As shown in the picture below, this code is straightforward and works well for qubit topologies laid out in a two-dimensional grid: each qubit is connected to its four nearest neighbors, and the qubits are divided into data qubits and check qubits.
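The grid structure is easy to picture in code. The short Python sketch below is purely illustrative (the grid size and the simple checkerboard split between data and check qubits are our assumptions, not IBM's actual layout); it just shows how a two-dimensional grid gives each interior qubit exactly four nearest neighbors.

```python
# Illustrative sketch only: a simplified 2D grid of qubits, not IBM's actual layout.
# Interior qubits have exactly 4 nearest neighbors; edge and corner qubits have fewer.
SIZE = 5  # assumed grid dimension for the example

def neighbors(row, col, size=SIZE):
    """Return the up/down/left/right neighbors that stay inside the grid."""
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(row + dr, col + dc) for dr, dc in steps
            if 0 <= row + dr < size and 0 <= col + dc < size]

for row in range(SIZE):
    for col in range(SIZE):
        # Alternate data and check qubits in a checkerboard pattern (a simplification).
        role = "data" if (row + col) % 2 == 0 else "check"
        print(f"qubit ({row},{col}): {role}, {len(neighbors(row, col))} neighbors")
```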

Diagram of the Qubit Layout in a Surface Code Chip. Credit IBM.

However, the surface code has a very significant drawback: it is extremely inefficient. One measure we look at is the physical-to-logical qubit ratio, and it is very high for the surface code. Some estimates are that it would take on the order of 1,000 physical qubits to create one logical qubit in a surface code implementation, and IBM wanted to find something an order of magnitude more efficient.
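Where the roughly 1,000-to-1 estimate comes from can be seen with a standard rule of thumb (a textbook approximation, not an IBM figure): a distance-d rotated surface code uses d² data qubits plus d²−1 check qubits per logical qubit, and useful logical error rates typically call for distances in the low-to-mid twenties.

```python
# Standard rule of thumb (not an IBM figure): a distance-d rotated surface code
# uses d*d data qubits plus d*d - 1 check qubits per logical qubit.
for d in (11, 17, 23, 27):
    physical_per_logical = 2 * d * d - 1
    print(f"distance {d}: about {physical_per_logical} physical qubits per logical qubit")
```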

One of the keys to creating a much more efficient error correction code is a qubit topology with higher qubit-to-qubit connectivity, and that is what IBM is pursuing. The company has created a code based upon qLDPC (quantum low-density parity check) techniques, which it calls the bivariate bicycle code and claims is roughly 10 to 14 times more efficient. As shown in the chart below, it can achieve comparable error correction using a code that requires only 144 physical data qubits to produce 12 logical qubits, compared to surface code versions that use between 1,452 and 2,028 physical data qubits. (Note: These qubit counts do not include the additional ancilla qubits needed for the check measurements, but that does not affect the main point about relative code efficiency.)
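The 144-to-12 claim can be checked numerically. The sketch below follows our reading of the bivariate bicycle construction in IBM's published papers; the specific parameters (a 12×6 torus with polynomials A = x³ + y + y² and B = y³ + x + x²) are taken from that published construction rather than from this announcement, so treat them as an assumption. It builds the two parity check matrices and verifies that 144 data qubits encode 12 logical qubits.

```python
# Sketch of the bivariate bicycle construction, following our reading of IBM's
# published 144-data-qubit code (l=12, m=6, A = x^3 + y + y^2, B = y^3 + x + x^2).
import numpy as np

def shift(n):
    """n x n cyclic shift (permutation) matrix."""
    return np.roll(np.eye(n, dtype=int), 1, axis=1)

def gf2_rank(mat):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    a = mat.copy() % 2
    rank = 0
    n_rows, n_cols = a.shape
    for col in range(n_cols):
        pivot = next((r for r in range(rank, n_rows) if a[r, col]), None)
        if pivot is None:
            continue
        a[[rank, pivot]] = a[[pivot, rank]]
        for r in range(n_rows):
            if r != rank and a[r, col]:
                a[r] = (a[r] + a[rank]) % 2
        rank += 1
    return rank

l, m = 12, 6
x = np.kron(shift(l), np.eye(m, dtype=int))   # "x" generator on the l x m torus
y = np.kron(np.eye(l, dtype=int), shift(m))   # "y" generator on the l x m torus

A = (np.linalg.matrix_power(x, 3) + y + np.linalg.matrix_power(y, 2)) % 2
B = (np.linalg.matrix_power(y, 3) + x + np.linalg.matrix_power(x, 2)) % 2

Hx = np.hstack([A, B]) % 2          # X-type check matrix
Hz = np.hstack([B.T, A.T]) % 2      # Z-type check matrix
assert not ((Hx @ Hz.T) % 2).any()  # CSS commutation condition holds

n = 2 * l * m
k = n - gf2_rank(Hx) - gf2_rank(Hz)
print(f"{n} data qubits encode {k} logical qubits")  # expect 144 and 12
```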

Comparison of IBM’s Bivariate Bicycle Code versus Two Implementations of the Surface Code. Credit IBM

However, to implement this new code, the company needed to completely change the qubit topology on its chips to support 6-way connectivity instead of the 2-way or 3-way connectivity used on previous generations of chips.

Layout of IBM’s Current Heron Generation of Processor which Supports 2 or 3-Way Connectivity using a Heavy-Hex Topology. Credit IBM.

New Topology Showing 6-Way Connectivity (Only 1 Qubit Shown in Full). Credit IBM.

This change requires a very significant change in wafer processing technology to implement what IBM is calling C-couplers, which provide long-range connections to the two additional qubits on the chip. The C-couplers require one or more additional metal layers on top of the qubit layer to establish the connection, allowing a bridge over the neighboring qubits to reach a non-local qubit. This new metal plane is shown as the pink layer that sits on top of the qubit wafer. IBM has built test chips that implement this and indicates that it is achieving two-qubit error rates between non-local qubits (roughly between 2×10⁻³ and 5×10⁻³) similar to those between two neighboring qubits.

Diagram Showing the Qubit and Interposer Wafers. Credit: IBM

IBM's error correction architecture is built around this concept, but there is still much more to be done. In order to do anything useful with error corrected qubits, one needs to be able to apply gate operations. This had previously been a serious weak point for those pursuing qLDPC based approaches, because universal gate schemes were not well established. IBM has now plugged that gap. It requires adding more qubits to form what is called the logical processing unit (LPU), which controls the operation of gates from the Clifford group. This approach finally gives QPUs something like the microarchitecture a conventional chip designer might recognize.
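To make the Clifford-group notion concrete, the short Qiskit snippet below (our illustration, assuming Qiskit is installed; it is not IBM's LPU implementation) builds a small circuit from H, S, and CNOT gates, all members of the Clifford group, and converts it to a Clifford tableau.

```python
# Illustration only (requires Qiskit); this is not IBM's LPU logic.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Clifford

qc = QuantumCircuit(2)
qc.h(0)        # Hadamard: Clifford
qc.s(0)        # phase gate: Clifford
qc.cx(0, 1)    # CNOT: Clifford
tableau = Clifford(qc)   # succeeds because every gate is in the Clifford group
print(tableau)
```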

In the diagram below, the torus shape in the lower right corner represents the code qubits with their 6-way connectivity. The structure above the torus, also shown in a larger image to the left, represents the additional circuitry needed for the LPU. In this diagram, the code block requires 288 physical qubits (the 144 data qubits plus a matching set of 144 check qubits) for 12 logical qubits, which IBM calls the gross code, and the LPU block uses 100 additional physical qubits.
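Putting those figures together gives a rough sense of the overhead per logical qubit in a single block; this is just back-of-the-envelope arithmetic using only the numbers quoted above.

```python
# Back-of-the-envelope overhead for one 12-logical-qubit block, using the figures above.
code_block_qubits = 288   # 144 data + 144 check qubits (the gross code)
lpu_qubits = 100          # additional qubits for the logical processing unit
logical_qubits = 12

per_logical = (code_block_qubits + lpu_qubits) / logical_qubits
print(f"~{per_logical:.1f} physical qubits per logical qubit, including the LPU")
```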

Diagram Showing a 12 Logical Qubit Block with an Attached Logical Processing Unit (LPU). Credit IBM.

Of course, with only 12 logical qubits one still could not do anything useful, even if the qubits worked perfectly. So the next step is to scale up by implementing an additional coupler type, called the L-coupler, that connects multiple modules together. The L-coupler is implemented with cables and can connect modules located as much as one meter apart. Strikingly, it does this by borrowing concepts from distributed quantum computing, an approach more normally associated with trapped ion, photonic, or color center-based qubit platforms. At a time when many are struggling to explain how they will access the benefits of modular scaling, IBM is bringing a bold new approach to superconducting qubits.
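To get a feel for the scale of modularity involved, a simple estimate using only the figures mentioned in this article, and ignoring LPUs, magic state factories, and every other source of overhead, shows how many 12-logical-qubit blocks a Blue Jay class machine would need to tie together with L-couplers.

```python
# Rough module-count estimate using only figures quoted in this article;
# ignores LPUs, magic state factories, and other overhead.
import math

target_logical_qubits = 2000     # Blue Jay goal stated above
logical_per_block = 12           # one gross-code block

blocks_needed = math.ceil(target_logical_qubits / logical_per_block)
print(f"~{blocks_needed} twelve-logical-qubit blocks to reach {target_logical_qubits} logical qubits")
```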

Finally, there is one other element to add: a magic state factory. As mentioned above, the LPU can implement gates from the Clifford group, but that is not enough to provide a universal quantum computer capable of processing any quantum algorithm. Specifically, important gates such as the T gate and the Toffoli gate are not part of the Clifford group and cannot be synthesized from a combination of Clifford gates. Special circuitry called a magic state factory is required to enable them, using a magic state distillation process that creates accurate quantum states from multiple noisy ones. With these, the processor has a full set of universal gates available to process any quantum algorithm. The diagram below shows a full FTQC processor with all of these elements, including the L-couplers and the magic state factory. In choosing magic state distillation, IBM is using an approach, with its associated overheads, that is common in the field.
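The non-Clifford nature of these gates is easy to see in software. The Qiskit snippet below (again our illustration, assuming Qiskit is available) shows that a T gate cannot be converted into a Clifford operator, and that Qiskit's standard Toffoli decomposition falls back on T and T-dagger gates.

```python
# Illustration (requires Qiskit): T and Toffoli sit outside the Clifford group.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Clifford

t_circuit = QuantumCircuit(1)
t_circuit.t(0)
try:
    Clifford(t_circuit)
except Exception as err:
    print("T gate is not a Clifford operation:", err)

toffoli = QuantumCircuit(3)
toffoli.ccx(0, 1, 2)
# The standard decomposition uses H, CX, T, and Tdg gates.
print(toffoli.decompose().count_ops())
```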

Diagram of a Complete Fault Tolerant Quantum Computer with all the Elements. Credit IBM.

An important element of a fault tolerant quantum computer is the classical circuitry needed to very quickly decode the check measurements, determine whether an error has occurred, and decide what corrective action should be applied. IBM has developed a new heuristic decoder, called Relay-BP, based upon an improved belief propagation algorithm. This decoder provides high accuracy, eliminates two-stage decoding for faster operation, can be efficiently implemented in an FPGA or ASIC, and outperforms other decoding approaches that could otherwise be used for the bivariate bicycle code.
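The basic job of any such decoder, turning a pattern of check (syndrome) measurements into a correction quickly enough to keep up with the hardware, can be illustrated with a deliberately tiny toy example. The sketch below uses a three-qubit repetition code and a precomputed lookup table; it is a stand-in for the concept only, not for belief propagation or for IBM's Relay-BP.

```python
# Toy stand-in for the decode step: a 3-qubit bit-flip repetition code with a
# precomputed syndrome lookup table. Real decoders such as Relay-BP handle far
# larger codes and run as fast FPGA/ASIC logic, but the job is the same:
# syndrome in, correction out.
SYNDROME_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # only check Z0Z1 fired  -> flip qubit 0
    (1, 1): 1,     # both checks fired      -> flip qubit 1
    (0, 1): 2,     # only check Z1Z2 fired  -> flip qubit 2
}

def measure_syndrome(bits):
    """Parity checks between neighboring qubits of the repetition code."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode_and_correct(bits):
    correction = SYNDROME_TABLE[measure_syndrome(bits)]
    if correction is not None:
        bits[correction] ^= 1
    return bits

print(decode_and_correct([0, 1, 0]))  # single flip on qubit 1 -> corrected to [0, 0, 0]
```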

A conceptual representation of how IBM will physically implement the architecture described above can be seen in the following diagram. The dilution refrigerators are depicted as the hexagonal systems in the middle, with rectangular cabinets containing the gas handling units and electronics beside them.

Conceptual Diagram of IBM’s Starling and Blue Jay Processors with Associated Gas Handling and Electronics Cabinets. Credit IBM.

IBM's plan is to complete each of the developments described above in incremental steps with the series of internal innovation machines shown below. At the same time, IBM is continuing to focus on improving the coherence times and gate fidelities of the individual qubits, because the better the quality of the qubits you start with, the more effective the quantum error correction codes will be.

Progression of IBM’s Development Roadmap for FTQC Processors. Credit IBM.

In conjunction with this release of its development plans for fault tolerant quantum computers, IBM has updated its overall roadmap charts to reflect the changes. IBM's roadmaps fall into two categories. The Innovation Roadmap describes the machines the company plans to develop for internal test and development use; these machines are not intended to be released for general use, at least not in the timeframe shown on the charts. The Development Roadmap shows the quantum processors that will be available to IBM customers, as well as the timelines for the supporting software and infrastructure that will be available for use with these machines.

Innovation Roadmap Showing Quantum Processors IBM Will be Developing for Internal Testing
Development Roadmap Showing IBM’s Quantum Processors and Associated Software Available for Customer Use

Other Updates in IBM’s Roadmap

Sharp-eyed readers may notice that IBM has replaced the previous Flamingo series of processors with a new series called Nighthawk for the 2025 to 2028 timeframe. The important difference lies in the qubit topology. While Flamingo used a heavy-hex topology similar to the earlier Heron and other implementations, which provided qubit-to-qubit connectivity to 2 or 3 neighbors, the Nighthawk design implements a square lattice topology, using some of the technology developed for the FTQC designs, and achieves a qubit-to-qubit connectivity of 4. Although the Nighthawk series will have fewer qubits per module than Flamingo (120 versus 156), it should actually provide better performance for most applications due to the reduced number of SWAP gates needed to implement a given algorithm.
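The SWAP-overhead argument can be sanity-checked with Qiskit's transpiler. The snippet below is only a rough illustration; the random test circuit, the lattice sizes, and the default routing passes are our arbitrary choices and do not correspond to the actual Heron or Nighthawk devices. It routes the same circuit onto a heavy-hex style map and onto a square grid and compares the resulting two-qubit gate counts.

```python
# Rough illustration only (requires Qiskit); lattice sizes and the test circuit
# are arbitrary choices, not the actual Heron or Nighthawk devices.
from qiskit import transpile
from qiskit.circuit.random import random_circuit
from qiskit.transpiler import CouplingMap

test_circuit = random_circuit(12, depth=20, max_operands=2, seed=7)

layouts = {
    "heavy-hex (degree 2-3)": CouplingMap.from_heavy_hex(3),  # 19-qubit heavy-hex lattice
    "square grid (degree 4)": CouplingMap.from_grid(4, 5),    # 20-qubit square lattice
}

for name, cmap in layouts.items():
    routed = transpile(test_circuit, coupling_map=cmap,
                       basis_gates=["cx", "rz", "sx", "x"],
                       optimization_level=1, seed_transpiler=11)
    # Inserted SWAPs become extra CX gates, so the CX count reflects routing overhead.
    print(f"{name}: {routed.count_ops().get('cx', 0)} CX gates after routing")
```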

Another big area of improvement over the past year is runtime performance. This involves optimizations on the classical side and in the interface between the quantum processor and the classical parts of the system. By using techniques such as parallel compilation, parameterized circuits, job scheduling, code pipelining, and optimized data movement, the company is achieving much more throughput from its quantum processors. The company measures this with a parameter called CLOPS (Circuit Layer Operations per Second). As an example, IBM Torino, one of the processors currently available on its network, today has a CLOPS measure of 210K; a year ago, the same processor measured 3.8K, roughly a 55-fold improvement. For further details, see the section describing Massive Improvements in Runtime Performance in our report from IBM's developer conference last November.
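One of the techniques mentioned above, parameterized circuits, is simple to illustrate. The Qiskit sketch below is our own example, not IBM's runtime code: it builds a single circuit template with a free parameter and binds many values to it, so the expensive compilation work can be done once rather than once per circuit.

```python
# Illustration of the parameterized-circuit idea (requires Qiskit);
# this is our example, not IBM's runtime implementation.
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

theta = Parameter("theta")
template = QuantumCircuit(1)
template.rx(theta, 0)
template.measure_all()

# Bind 100 different angles to the same template instead of building and
# compiling 100 separate circuits.
bound_circuits = [template.assign_parameters({theta: 0.1 * k}) for k in range(100)]
print(f"{len(bound_circuits)} circuits generated from one parameterized template")
```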

Additional Reading

IBM has been quietly publishing papers on arXiv and in other technical journals describing its research into fault tolerant quantum computers. For those who want to examine this work in more detail, we provide the list below.

Conclusion

IBM has a well-thought-out plan for developing fault tolerant quantum processors. Importantly, the company has looked at all the required pieces and has a good understanding of what needs to be done to finish the job. Successful execution of the roadmap is now the crucial next step.

For more about IBM's announcement of these plans, you can read a press release posted on its website here, read a blog post available here, and watch two different videos posted on YouTube here and here.

Addendum – GQI’s Quantum Hardware Roadmaps State of Play Presentation

GQI has created an in-depth analysis of all the publicly available roadmaps (19 in total) from companies developing quantum hardware, including an assessment of the Technology Readiness Level (TRL) of each layer of the stack for each provider. See the previous article published in the Quantum Computing Report by GQI, How GQI Evaluates Quantum Processor Development Roadmaps, which describes GQI's process for creating these assessments. Contact GQI at info@global-qi.com to find out how you can obtain our Quantum Hardware Roadmaps State of Play analysis.

June 10, 2025
