Ish Dhand, CEO and co-founder of QC Design, is interviewed by Yuval Boger. They discuss the company’s focus on advancing fault tolerance in quantum computing and its architectures and software tools that help manufacturers build scalable and reliable quantum computers. They also talk about QC Design’s “blueprints” for building fault-tolerant systems, its “boosts” for improving existing hardware, and much more.

Transcript

Yuval Boger: Hello Ish, and thank you for joining me today.

Ish Dhand: Thanks for having me here.

Yuval: So who are you, and what do you do?

Ish: My name is Ish Dhand. I am a quantum computing researcher by training with over 10 years of experience in the field. In my previous role, I headed the architecture team at Xanadu, where I led the development of Xanadu’s blueprint for fault tolerance. Currently, I am the CEO and co-founder of QC Design. We are the fault tolerance company. We help quantum computing manufacturers build scalable and reliable quantum computers. And we do this by developing the most efficient architectures for fault tolerance with real-world hardware. We offer licenses to these fault tolerance architectures, much like ARM offers licenses to power-efficient chip designs.

Yuval: Let’s work through a use case. So let’s assume I’m making a computer using superconducting qubits. Obviously, I want to get to large-scale fault tolerance. How do you help me? How are you embedded in my design? At what stage do you get involved, and what does a project look like?

Ish: So many great questions. Let’s start with what we offer right now, and maybe we can go into the timeline a little bit later. What we offer are licenses to our software and to architectures. Our software is more like the AutoCAD of fault tolerance. It helps our customers, quantum computing manufacturers, understand which hardware imperfections impact fault tolerance and in what way.

The more important part of our offering is the architectures. We offer two kinds of architectures: blueprints and boosts. A blueprint is basically a detailed recipe for a hardware manufacturer to build and operate a working fault-tolerant quantum computer. This means: how do you build the qubits? How do you connect the qubits together? How do you manipulate the qubits with laser and microwave fields? And how do you process the massive amounts of data coming out of the device to find the errors and correct them? That’s a blueprint.

A boost is more like a firmware upgrade for fault tolerance. By implementing a boost, a manufacturer can get better logical performance from the same hardware. As you can imagine, a blueprint is focused on a specific hardware platform, while a boost is typically useful for many different hardware platforms. So that’s how we help quantum computing manufacturers get to fault tolerance: by offering licenses to our software, which is the design tool, and to the designs themselves, the blueprints and the boosts.

Yuval: Let’s try to go deeper. Assuming again that I’m a superconducting manufacturer, I’ve built qubits, I’ve built, say, a 50-qubit machine, just for discussion. So the connectivity is known, I was able to implement one- and two-qubit gates, and I have the whole infrastructure ready. What now? Can you help me optimize the pulses to reduce the noise? Are you providing me with firmware that says, “Well, if these are your qubits, this is how you implement a logical qubit”? How does it work?

Ish: Yeah, very cool. Thanks for asking. What the architecture does is focus on building the first logical qubits and, beyond that, logical circuits. In practice, what could this mean? Could it not be that a hardware manufacturer that’s just building for NISQ is on the same path as building for fault tolerance? I think not. Yes, both NISQ and fault tolerance generally need better-quality qubits, but the full picture is much more nuanced. When you build for fault tolerance, you have to optimize completely different aspects.

Let’s talk about NISQ. In NISQ, you’re optimizing for things like error mitigation, which is not really necessary for fault tolerance. You’re also aiming for many different high-quality gates: lots of gate variety is great in NISQ because you can take the user’s algorithm and compile it down to the shallowest circuits. Fault tolerance is different. Here what you need is a large number of qubits with the right kind of quality. What does the right kind of quality mean? Different hardware imperfections have a completely different impact on logical qubit performance. A very well-known example in the field is the difference between Pauli errors and erasure errors. For certain architectures, a fault-tolerant quantum computer can tolerate 1% Pauli errors but 10% erasure errors. And if the hardware manufacturer has, let’s say, 2% Pauli errors and 5% erasure errors, then in their development they can safely start to ignore the erasure errors and focus their attention on the Pauli errors. That is where they get the best bang for the buck in terms of hardware cost and hardware effort.
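As a rough illustration of that prioritization logic, here is a minimal sketch that ranks imperfections by how far each one sits from its assumed threshold. The thresholds and measured rates are the example numbers from the conversation, not real device data or QC Design’s tooling:

```python
# Illustrative only: rank hardware imperfections by how far each sits from an
# assumed fault-tolerance threshold. The thresholds and measured rates below
# are the example numbers from the conversation, not real device data.
thresholds = {"Pauli error": 0.01, "erasure error": 0.10}   # tolerable physical rates
measured   = {"Pauli error": 0.02, "erasure error": 0.05}   # the manufacturer's current rates

for kind in thresholds:
    ratio = measured[kind] / thresholds[kind]
    status = "ABOVE threshold -- fix this first" if ratio > 1 else "below threshold -- lower priority"
    print(f"{kind:14s} measured/threshold = {ratio:.2f} ({status})")

# Pauli error    measured/threshold = 2.00 (ABOVE threshold -- fix this first)
# erasure error  measured/threshold = 0.50 (below threshold -- lower priority)
```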

Now, this is why it’s really crucial for manufacturers to identify which imperfection has the biggest impact on the logical qubit performance. So our architectures and our software help manufacturers do exactly that. 

Yuval: What would be the ratio between physical and logical qubits? 

Ish: Yeah, that’s a very deep and nuanced question. People typically talk about 1,000 physical qubits per logical qubit for a 99.9%-fidelity qubit. This is a simplification. It assumes that only one kind of imperfection acts on the qubit. In reality, if you’re a hardware manufacturer, you know better than me that there are 20 different sources of noise acting in parallel on your quantum computer. These 20 different sources of noise have a complex interplay and very different impacts on performance. So I would say that at this stage, it’s impossible to say what the overhead would be. It depends on which platform we are looking at and on the parameters of those 20 different imperfections. And in the end, it’s exactly this question that our software helps answer.
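For a sense of where the oft-quoted 1,000:1 figure comes from, here is a common back-of-the-envelope estimate. It is a sketch assuming a generic surface code with the textbook scaling p_L ≈ A·(p/p_th)^((d+1)/2) and roughly 2d² physical qubits per logical qubit; it is not QC Design’s tailored model and ignores the interplay of noise sources Ish describes:

```python
import math

def surface_code_overhead(p_phys, p_target, p_th=1e-2, A=0.1):
    """Rough physical-qubit overhead per logical qubit for a generic surface code,
    using the textbook scaling p_L ~ A * (p_phys / p_th)**((d + 1) / 2).
    All parameters here are common assumptions, not a tailored architecture."""
    if p_phys >= p_th:
        raise ValueError("Physical error rate is above threshold; no distance suffices.")
    # Smallest code distance d with A * (p_phys/p_th)**((d+1)/2) <= p_target.
    d = 2 * math.log(p_target / A) / math.log(p_phys / p_th) - 1
    d = math.ceil(d - 1e-9)
    if d % 2 == 0:
        d += 1                      # surface-code distances are conventionally odd
    return d, 2 * d * d             # ~2*d^2 data + ancilla qubits per logical qubit

# A 99.9%-fidelity qubit (p = 1e-3) targeting a 1e-12 logical error rate:
distance, physical_per_logical = surface_code_overhead(p_phys=1e-3, p_target=1e-12)
print(distance, physical_per_logical)   # about d = 21 and ~900 physical qubits per logical
```

Swapping in different noise assumptions changes the answer substantially, which is exactly the point Ish makes about platform- and imperfection-dependent overheads.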

Yuval: If the ratio is 1000:1 or something of that nature, I’m not aware of a 1000-qubit computer today. I’m aware of those that are in development but not those that are available today. Do you have customers that are using your products today?

Ish: Yeah, so this question is more about the timeline. At what stage does a quantum computing manufacturer need our architectures and software? I would say from the earliest stages of development. As we discussed before, NISQ optimization is very different from fault tolerance optimization, so a customer with access to our architectures can move much faster than a customer without this access. It’s the same with software: a customer that’s working on going from 50 qubits to 200 qubits without degrading the quality of the qubits can use our designs and our software to know which aspects of the hardware they really need to focus on and which ones they can safely ignore. So I would say that from the earliest stages, one needs access to this fault tolerance architecture and software. And actually, coming back to this overhead of 1000:1: what we’re also doing is developing architectures tailored to the hardware. This means that you can probably get away with much lower overheads. So in that sense as well, we bring fault tolerance closer to reality.

Yuval: We’ve spoken about superconducting qubits, but obviously, there are other modalities. Would your software be equally applicable to all quantum modalities?

Ish: Yeah, that’s a great question. So we can talk about the general design kit that we have, which is basically the AutoCAD for fault tolerance. A small part of that is Plaquette, which is available on GitHub today as open source. This, to the best of our knowledge, is the most powerful fault tolerance simulator. It has many different error correction codes built in and many real-world hardware imperfections built in. So depending on what imperfections your hardware has, Plaquette can simulate them, whether it’s qubits that are just missing or erased, qubits that leak out of the qubit subspace, or plain bit flips. The software can handle all of these kinds of errors, and together these different kinds of errors define the landscape of errors on different hardware platforms. So yes, the software works for many different matter qubit platforms today. There are add-ons that we have that are tailored for specific platforms, and these we basically offer as licenses to hardware manufacturers.
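To make the idea of such a simulator concrete, here is a toy Monte Carlo in pure Python. This is not Plaquette’s actual API, just a minimal illustration of the kind of question these tools answer: it estimates the logical error rate of a simple distance-d repetition code under independent bit-flip (Pauli) and heralded erasure noise.

```python
import random

def logical_error_rate(d=9, p_flip=0.08, p_erase=0.08, shots=200_000, seed=7):
    """Toy Monte Carlo in the spirit of a fault-tolerance simulator: a distance-d
    repetition code under independent bit-flip (Pauli) and heralded erasure noise,
    decoded by majority vote over the surviving qubits. Not the Plaquette API."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(shots):
        flipped = erased = 0
        for _ in range(d):
            if rng.random() < p_erase:
                erased += 1          # erasure is heralded: the decoder knows to discard it
            elif rng.random() < p_flip:
                flipped += 1         # a Pauli flip is silent: the decoder must infer it
        survivors = d - erased
        if survivors == 0 or 2 * flipped == survivors:
            failures += rng.random() < 0.5   # total loss or a tie: the decoder guesses
        elif 2 * flipped > survivors:
            failures += 1                    # a majority of survivors are corrupted
    return failures / shots

# At equal physical rates, the silent flips dominate the logical failures; the
# heralded erasures, whose locations are known, are far easier to tolerate.
print(logical_error_rate(p_flip=0.08, p_erase=0.0))   # flips only
print(logical_error_rate(p_flip=0.0, p_erase=0.08))   # erasures only
```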

Yuval: In your experience, what would you think is the ideal platform to implement large-scale fault-tolerant computing?

Ish: I think the race is too early to call. It’s a marathon. We need lots and lots of qubits and we need the right quality of qubits. Different platforms come with their own strengths and weaknesses. I would say it’s too early to say whether it’s one of the five or six major platforms, or whether it’s a combination of these platforms, or whether it’s something completely new that’ll come up as we work towards fault tolerance. 

Yuval: Some architectures require interconnects to scale up to a larger number of qubits. The individual blocks that are interconnected may or may not be a thousand qubits large. Are the interconnects a concern when it comes to error correction?

Ish: I think this becomes yet another kind of imperfection in the hardware. And there are already several methods that deal with precisely this kind of imperfection. So if a certain subset of the qubits has a much higher error rate or a certain set of the connections between qubits has a much higher error rate, then the error correction methods focus on those and prioritize getting rid of those errors. So I don’t think it’s something that’s a deal breaker. I do think that all interconnect architectures need to think very carefully about what fault tolerance schemes they apply.
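One standard way decoders account for such non-uniform noise is log-likelihood weighting, as used in matching-based decoders. The sketch below uses hypothetical error rates and is not necessarily the specific method Ish has in mind; it just shows how noisier interconnect links become “cheaper” for the decoder to blame:

```python
import math

# Hypothetical per-location error rates: on-chip operations vs. an interconnect link.
error_rates = {"on-chip gate": 0.001, "interconnect link": 0.02}

# In matching-based decoders, an error location with probability p is typically
# assigned a weight w = log((1 - p) / p); lower weight means the decoder more
# readily attributes observed syndromes to that location.
for location, p in error_rates.items():
    weight = math.log((1 - p) / p)
    print(f"{location:18s} p = {p:.3f}  decoder weight = {weight:.2f}")

# on-chip gate       p = 0.001  decoder weight = 6.91
# interconnect link  p = 0.020  decoder weight = 3.89
```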

Yuval: You mentioned the open-source package is available. How about the product itself? Where are you in your development? Is it something that people could license today?

Ish: Yeah, absolutely. If you’re a quantum computing manufacturer on any platform, you can, first, license our software to design for fault tolerance and to understand how different hardware imperfections impact logical qubit performance. But you can also start to license our architectures. We have several different architectures, blueprints and boosts, already developed, and these architectures are already available today. To begin with, we’re focused on two platforms: photonics and spins. And we’re rapidly moving into many other platforms. Quantum error correction and fault tolerance are obviously critical to manufacturers, and I would say that’s a strategic capability that a manufacturer should develop in order to compete in the future.

Yuval: Do you feel that companies want to outsource that to companies like yours, or would they prefer to develop that expertise in-house?

Ish: That’s a great question. I agree: if possible, companies would prefer a situation where a lot of this development happens in-house, and this is how things have been traditionally for the big hardware players. I’m thinking about PsiQuantum, Google, IBM, Xanadu, and so on, who have massive 10-, 20-, 30-person fault tolerance teams and have their own blueprints. But this is a small fraction of the 50 or so quantum computing manufacturers out there today, a number that’s rapidly growing. The rest of these companies, the small to mid-sized quantum computing companies, don’t have access to fault tolerance talent. They don’t have access to the massive simulation and design tools that are necessary for fault tolerance. So already today, and over the next one to two years, our main customers are small to mid-sized quantum computing companies that are looking to get into fault tolerance, looking for a fast path to fault tolerance to actually start to compete with the well-funded companies. But this is just the near term. We’ve seen how things evolved in the classical semiconductor industry: big teams of silicon engineers in companies like Apple and Samsung still use designs made by ARM. So I do imagine that in the long run, our fault tolerance designs will be useful across the board for companies big and small.

Yuval: Tell me a little bit about the company if you can. Where are you based? How large are you? How are you funded? How did it start? Anything that you’re able to share?

Ish: Sure. We’re based in Ulm, Germany, a stone’s throw from Munich. It’s a small city with a spectacular university and is also home to over 10 quantum startups, so it’s a really great place to be a quantum computing company like ours. The company was founded by myself and Professor Martin Plenio. He’s one of the founders of quantum information and computation. He came up with many new ideas in quantum computing and has 30 years of experience in quantum information and eight years of experience in founding and running startups. We founded the company around two years ago. We were able to get funding from top European deep tech investors like VSquared Ventures, Quantonation, and Salvia, and we were also awarded a BMBF grant from the German Federal Ministry of Education and Research. All of this has allowed us to put together a team of over 12 great employees. Building for fault tolerance really needs expertise in fault tolerance itself, but also in hardware, and we’ve been lucky to be able to get the top experts in the world in these two areas working together to build the best architectures.

Yuval: Professionally speaking, what keeps you up at night?

Ish: I think a fault-tolerant quantum computer would be the ultimate computing machine: the most powerful computer that any civilization anywhere in the universe can build under the known laws of physics. So how do we get there? What’s the most cost- and energy-efficient design for a quantum computer that’s fault-tolerant? And what’s the most feasible path to getting to this design? That’s what keeps me up at night.

Yuval: And last hypothetical, if you could have dinner with one of the quantum greats, dead or alive, who would that person be?

Ish: Yeah, I’d love to dine with my co-founder, Martin Plenio. He’s an incredible scientist. He started many fields within quantum technology. He knows the ins and outs of every major hardware platform. But not only that, he’s an incredible chef. So that’s an opportunity that I just can’t say no to.

Yuval: Okay, but it sounds like you could have dinner with him multiple times a year, to say the least. Who else?

Ish: In that case, I’d have to pick Einstein, who, by the way, like QC Design, was born in Ulm. He was one of the first people or, in fact, the first person to really understand the implications of quantum physics in terms of the counterintuitive effects. As all of us work towards building a fault-tolerant quantum computer, it would be great to understand what Einstein would think about the implications of such a fault-tolerant quantum computer, a device that basically shows all of these counterintuitive effects and uses these effects on a massive, massive scale.

Yuval: Excellent. Ish, thank you so much for joining me today.

Ish: It’s been my pleasure.

Yuval Boger is the chief marketing officer for QuEra, the leader in neutral-atom quantum computers. Known as the “Superposition Guy” as well as the original “Qubit Guy,” he can be reached on LinkedIn or at this email.

October 23, 2023