Tom Lubinski, Chief Software Architect for Quantum Circuits, Matt Langione, a partner at the Boston Consulting Group, and Pranav Gokhale, VP of Software at Infleqtion, are interviewed by Yuval Boger. Tom, Matt, Pranav and Yuval talk about quantum benchmarking – why benchmarks are important, what types of benchmarks exist, whether end-users should care, standardization efforts and much more.

Transcript

Yuval Boger: Hello, Tom. Hello, Matt. Hello, Pranav. Thanks for joining me today.

Matt Langione: Hey, Yuval.

Pranav Gokhale: Thanks for having us.

Yuval: So Tom, who are you, and what do you do?

Tom Lubinski: Oh, I get to go first. I am working as the chief software architect for Quantum Circuits, a company that is developing a quantum computer based on superconducting cavity resonator technology. For the last four years, I’ve also been involved in the QED-C, that’s the Quantum Economic Development Consortium. I was the chairman of the Standards and Benchmark Committee for three years, and then, in the last two, I have taken leadership of the quantum computing group within the Standards and Benchmarking committee. I’ve been spending quite a bit of my time focusing on drawing the community together to develop a common understanding of benchmarks and standards that might be applicable to quantum technologies, in particular computing.

Yuval: Excellent. And Matt, who are you, and what do you do?

Matt: Sure, so I’m Matt Langione. I’m a partner at Boston Consulting Group, and I lead our quantum computing topic globally. So the primary focus for me is really vendors, whom we help with things like strategic assessments, business models, things like that; investors, whom we help with diligence; increasingly, end users, whom we help onboard quantum capabilities; as well as governments and other such bodies that are thinking about national quantum strategies. The benefit of the purview from my role is that we do get to see all aspects of the ecosystem, and from that perspective, have developed a view on what kinds of uses benchmarks can be put to in quantum computing.

Yuval: And Pranav, you know what’s coming. So who are you, and what do you do?

Pranav: My name is Pranav, and I am the VP of quantum software at Infleqtion, which is the newly coined name for the umbrella that encompasses both ColdQuanta and Super.tech. Super.tech is actually a company that I co-founded, spun out from the University of Chicago, where we did research on connecting the dots from applications of quantum computing to the actual hardware. And benchmarking, as you might imagine, is a critical piece of that, connecting the dots from applications to hardware.

Yuval: So we’re here to talk about quantum benchmarking. So Tom, maybe I can ask you who needs them and why? If I were to go and buy a gaming computer, I would know what to look for, I may look at some popular benchmarks, but why do I need benchmarks for a quantum computer?

Tom: First of all, I think it’s important to understand that benchmarks exist at different levels. At the hardware vendor or provider level, the companies who are building the quantum computers need to develop benchmarks and have access to lots of academic work on evaluating the components that make up their machines. They, of course, want to measure their own progress, and, year-to-year, how their systems evolve. One of the things that we found in the QED-C group is that the level of benchmarking done by the developers of these machines is often different from what users of the machines want to have access to.

Users need a different kind of benchmarking so that they can understand which of these various machines might actually help them address a problem that they have. From my perspective, it’s a very different type of benchmarking that is required at the user level. It’s not so much just for evaluating or comparing different devices, but also to understand what quantum computers are even capable of at this point because they are very early in their development cycle. Understanding what they can actually do is another thing that is important to derive from benchmarks.

Yuval: And Matt, you see lots of different groups, as you mentioned, investors, vendors, and end customers. Who are these benchmarks important for, or if they’re important for everyone, who are they most important for?

Matt: Well, I mean, that is how I would effectively break it down. So the constituents, as I mentioned, that I work with: you’ve got vendors, you’ve got investors, you’ve got users and governments. I would say, from a vendor perspective, it’s great to have a common measure of progress across the industry. I mean, you see what a benchmark like ImageNet did for AlexNet, and for the development of machine learning more broadly. Of course, apples-to-apples comparisons often break down in quantum, but it is important, I think, to have a common target to strive for and to compare progress against. I do think it stimulates competition in a productive way. So that’d be sort of the vendor case.

From an investor perspective, it’s really to get a measure of what companies are promising and to give them a little bit more investment certainty, as one of many inputs into the analysis that they do. And don’t get me wrong, things like talent, technical roadmap, and other factors are at least as important as benchmarks, but benchmarks are an important figure in the calculation.

For users, I think what’s really important is to understand how to use benchmarking not just in terms of, “What is the best technology today for my use case?” but also in terms of the extent to which the technology roadmap itself can be benchmarked at a more generic level. I think that’s some of the work that the SupermarQ suite actually tackled to a certain extent, and the QED-C as well: application-defined benchmarking, I think, is particularly useful there. The point being that even if you’re not comparing vendors or, say, physical implementations, you’re thinking about where you may want to invest resources if, say, you’re a pharma company versus a finance company; I think that’s an important use case. And then for governments, I think it’s a little bit more about ensuring that resources are deployed across the different research areas, and I think benchmarking can be helpful there as well. So those would be the ways I think things break down across the four constituents that we serve.

Yuval: I can understand why low-level benchmarks could be important for vendors: “This month, I had this qubit fidelity, and in three months, I have something better.” But Pranav, do users care about such low-level benchmarks? I mean, after all, some would talk about fidelity, some would talk about connectivity, and some would talk about two-qubit gates, and so there are all these different parameters from which everyone can pick and choose what makes them look good, as opposed to the application level. What do you see when you talk to your customers?

Pranav: Yeah, your question is really, in some sense, the motivation for why application benchmarking is so important. And it is worth acknowledging that the low-level benchmarks that preceded application benchmarking, the clock rate, the hertz, the number of qubits, things like that, do matter too. It’s kind of the same as when you go to Best Buy and buy the latest computer with the latest hard disk: you do see on the sticker, “Oh, it’s 2.6 gigahertz, it’s one terabyte.” And those things matter. But if you’re actually looking to get the best machine for your actual business use case, then in the classical world, which I think is a good analogy here, people who are doing machine learning look at MLPerf benchmarks specifically for image classification or language translation, and people who do video editing, photo editing, maybe podcast editing, look at performance benchmarks for Photoshop, et cetera.

And I think that’s the link here: ultimately, that’s what customers and users care about too. Yes, the clock speed you have matters, it influences things like latency. And, to use Matt’s example, if you’re a pharma company, low-level properties like connectivity can certainly matter, but they can also sometimes be compiled away. That’s why it’s much more actionable to have something like, “Oh, this machine can run VQE six times better than this machine.” Maybe one thing to add is a time dimension here too: “This machine did performance X last year, but this year it’s doing performance 10X. I should be paying more attention to this machine, because if it’s 10X again next year, that matters a lot.” So I’d say the short answer is that users care about both, but ultimately, users with business needs are going to care about the application-level results the most.

Yuval: Tom, touching on the earlier point, because there are so many different low-level benchmarks, and every vendor can pick the one that makes them look good, what is the focus of the QED-C benchmarking group? Is it low-level, or high-level, or everything? Where do you start?

Tom: When we first began to work on this project, our goal, the charter for the QED-C, was to identify ways in which standards and benchmarks could help advance economic development. In other words, how can we help the builders of quantum devices actually succeed better? The conclusion we came to was that there was a wide variety of choices at the component level, but what was missing was some way for users to more quickly get their hands on measurements of how these machines could perform at higher levels.

We decided that the application level was more valuable to the community because there was a dearth of choices there. If someone wants to evaluate two machines side-by-side, they would essentially have to write the code themselves and exercise it on different machines in order to understand anything, and that can be a big job. So we decided we could fill that gap by making some easy-to-use benchmarking capabilities available to the community to make it easy for them to do this without writing all the code themselves. Basically, we’re saving them time, we’re saving them money, by providing these benchmarks to them. That was the focus of our effort.

Yuval: Pranav, you mentioned that some of the problems could be compiled away, which I interpret as there being different implementations, I can do VQE one way here and the other way there, and what’s best for one computer may not be best for the other. How do you make benchmarks that no one is going to object to because they say, “Well, we can code the same thing better on our specific machine”?

Pranav: Yeah, good question. Let me first lean on the fact that one of the objectives of the podcast is educating and informing, so let me give you an example of what I mean by this compiling away, and then talk later about how to address it. If you have a system that only has qubits in a line, the lowest possible connectivity you can have, there are fair critiques that that quantum computer is worse than a quantum computer that has full connectivity, where every qubit is connected to every other qubit. Now, that’s often true, but it’s not always the case. So let me give an example: if you go to a sports game, a softball game or a basketball game, at the end, all the players high-five each other. And if you think about it, that’s also a line of qubits, well, players, where everyone high-fives every other player, and they only move in a line. And so that’s an analogy to something that’s known as a swap network.

It’s this idea that emerged in 2019 or 2016 that basically shows that even with linear connectivity, just a line of qubits, you can get your computer to run programs that would simulate having full connectivity, having every player high-five every other player, even though you only move one spot in a line at a time when you go down that train of high-fives. So if I back up now, what we do in SupermarQ, which is the benchmark suite that we developed, as well as what other benchmark suites out there do, including QED-C’s, is to specify the problem, not always directly in the sense of, “This is the quantum circuit, you have to implement it exactly like this.” Instead, we specify it at the level of a more fundamental problem. For example, “Here is the chemistry problem that we want to solve.” And then we leave it to the compiler: IBM gets to use its best compiler, any vendor gets to use its best compiler.

And then it’s up to the quantum computer to figure out, “How do I want to address this specific problem?” If you have low connectivity, you can insert this thing that I mentioned called the swap network, the train of high-fives, to deal with the fact that you have low connectivity. If you have a full-connectivity device, you don’t need that. And so the short answer is that by specifying benchmarks at a level which gives hardware vendors the flexibility to figure out how they want to implement that benchmark, we try to resolve this problem, because, essentially, for every machine, we’re showing the result of, “This is how it would run if you use the best software tools that this vendor has.” We’re not saying, “This is the very specific circuit, down to every qubit link, that we want to run.” That would start to run up against issues of, “Well, my hardware has this capability that this one doesn’t have,” and vice versa. So, short answer: define the benchmarks at a slightly more fundamental level.
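
To make the “compiled away” point concrete, here is a minimal sketch using Qiskit’s transpiler; it illustrates the general idea Pranav describes, not the code that SupermarQ or the QED-C suite actually uses. A circuit that asks for two-qubit gates between every pair of qubits is routed onto a device whose qubits sit in a line, with the transpiler inserting SWAPs, much like the train of high-fives.

```python
# Minimal illustration of "compiling away" low connectivity (not SupermarQ/QED-C code):
# a circuit that wants two-qubit gates between every pair of qubits is routed onto a
# device whose qubits sit in a line, with SWAPs inserted as needed.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

n = 5
all_to_all = QuantumCircuit(n)
for i in range(n):
    for j in range(i + 1, n):
        all_to_all.cx(i, j)                 # demands full (all-to-all) connectivity

line = CouplingMap.from_line(n)             # qubits connected only to nearest neighbors
routed = transpile(all_to_all, coupling_map=line, optimization_level=3)

print("two-qubit gates requested:", all_to_all.count_ops().get("cx", 0))
print("gates after routing to a line:", routed.count_ops())
```

Both circuits implement the same logical problem; the routed one simply pays a SWAP overhead, which is exactly the kind of trade-off an application-level benchmark lets each vendor’s own toolchain handle in its own way.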

Yuval: Matt, when you speak with customers, I’m sure you often have a broader scope than just quantum computing, so, for instance, how quantum computing integrates with the rest of the IT infrastructure, within high-performance computing centers, and so on. So does the issue of speed, compute speed, also come up in discussions when comparing different computers?

Matt: For sure. I think for sure. Speed’s an important piece of the puzzle. Even if you just take it out of the quantum realm, and put it into traditional, say, deep neural networks, there are some problems that are compute-bound and others that are memory-bound, and there’s all sorts of research into what the right architecture to address those kinds of problems are. And we’ve kind of all read horrifying articles about the ballooning costs, both to companies, but also to the environment of training a lot of these neural networks. So certainly speed’s important. I would say, also the extent to which there are resource challenges, both at the compute level, and increasingly, FTEs required to handle the compute, those things are important as well.

Within quantum, speed’s an important thing, and quality and scale are other things that we focus on. IBM, in particular, has emphasized the importance of these three metrics: scale being something like the number of qubits, quality being the quality of circuits, so low operation errors, and then speed, for which they’ve talked a lot about CLOPS, circuit layer operations per second. Those three things are really, really, really critical within quantum. But if your question is a little bit about what users are asking us beyond those core metrics, often it is about costs: whether it’s the cost of the hardware itself or the cost to operate the hardware, those things are really important.

And then also the timeline to implement any algorithm, which is partly a function of some of the things that Pranav is talking about, say, the maturity of the hardware over time, and how benchmarking allows you to roadmap that a bit. And, of course, part of it is the degree to which the problem itself is efficient for quantum computing in general, or the extent to which there’s actually a lot of custom work that needs to be done in order to do that translation from a business challenge to a quantum-advantage solution, as it were.

Yuval: And a follow-up on that, Matt, I know you published an article about benchmarking a few months ago. So in this podcast until now, we spoke about QED-C, and we spoke about the Super.tech, now Infleqtion benchmark. Are there other packages, or other vendors that you think users might be interested in looking at beyond these two?

Matt: Well, so there are a number of different approaches to this, and I guess in the article, what we did was think about the landscape as a spectrum, from system benchmarks on the one hand to application benchmarks on the other. We’ve talked mainly in this podcast, rightly, about application benchmarks, partly because Tom’s QED-C benchmarks and Pranav’s SupermarQ benchmarks are both on the application end of the spectrum. And I do think that application end of the spectrum is the most useful, at least for users. If you’re a pharma thinking about implementing quantum computing, taking an application approach is certainly the thing we see most of them doing, and the thing that, ex cathedra, I would assume is most useful for them. Now, on the other hand, system benchmarks are important as well, especially in stimulating… well, maybe competition among vendors, or even just getting vendors to talk the same language so that some of their development efforts can actually benefit from advances from one vendor to the other.

And on that end of the spectrum, you see things, and I had started to mention IBM, like quantum volume and CLOPS, which are a lot more like the traditional speed sorts of benchmarks that you’d see, like TOPS, for example, in the classical world. And those are really important, too; they have quite different sorts of use cases, but those are also important. Another couple that I might mention would be the Sandia National Laboratories benchmarks that use mirror circuits, and then, somewhere maybe splitting the difference a little bit, the quantum LINPACK research out of UC Berkeley and Berkeley National Lab. I think those are also pretty meaningful efforts that sit somewhere between system benchmarking and application benchmarking.

Yuval: Tom, I wanted to ask you about the status of the standardization efforts for the various benchmarks that we mentioned, qubit-level benchmarks, or application-level, or something in between: which are the most advanced right now in terms of being ready for users to use? And related to that, do you expect a convergence of the various benchmarking efforts, that everyone’s going to come together under one set, or do you think there will continue to be half a dozen or even more different benchmarking suites?

Tom: Well, there’s a lot to say here. I think this is a very complicated and complex landscape. In terms of “standards”, or which benchmarking approaches might be ready for standards, I would say probably zero. I don’t think we’re ready to standardize anything, the hardware’s changing and the programming approaches are changing. So I’d be reluctant to say that standards are around the corner. People talk about the LINPACK standard for classical computing, and ask, “Can we get to that level?” We may eventually, but it’s really early right now.  I think that, in the quantum community, what you’re seeing is the surfacing of a number of different approaches. There are component level benchmarks, there are system level benchmarks, the Sandia work on volumetric benchmarking, we looked at all of that when we put together our QED-C project. The QED-C is a consortium of all the different providers and vendors, and we tried to identify what we can do that would benefit all of the members of the community. How could we bring them together with a common goal?

Our goal became, and continues to be, to try to reduce fragmentation and to integrate the various approaches into something that can be commonly accepted and perhaps eventually grow into a de facto approach to benchmarking. There are new benchmark projects surfacing: besides SupermarQ, there are the QPack Scores that came out of a group in Europe, there’s QUARK, a lot of different projects are emerging. The QED-C would like to see some sort of unification. Practically, can we achieve that? That’s yet to be seen. I personally would like to see less fragmentation and more collaboration and unification in the community. But it is very early, and we have to expect a lot of different approaches.

Yuval: Pranav, Matt mentioned cost, is there a cost component to the benchmarks? It would cost so many dollars to run it on IBM, and more or fewer dollars to run it on Honeywell for instance?

Pranav: That’s a great question. In the current benchmarks that we’ve done and others have done, cost is not part of the equation. And it’s for two reasons, one is that practically speaking, it’s often challenging to estimate cost. Vendors offer different packages to different people, they’re not always transparent. The other piece is that no current quantum solutions are, in some sense, cost-effective over what you could do with a well-tuned supercomputer that’s running in some nice fridge, something like that.

The other thing is that, because of the asymptotics of the quantum solution, at some point, not today, but at some point, the solution quality that you can get with quantum just easily outperforms what you would get by simply multiplying the number of CPU cores. And so we’re a little bit more concerned with just basic quality of solution at this stage. That said, I will point to some interesting work that Igor Markov has done, at the University of Michigan I think, looking at estimating the cost in energy for solving certain problems with a quantum advantage. I think there’s good theory work here, but it has yet to be brought into the benchmarking world. It’s something we should look into as a community.

Yuval: But Matt, just to come back to you for a second, it would seem like users would really care about cost, right? I mean, part of the budget is not just people but cloud time or on-premise time.

Matt: Yeah, and I think that there’s probably two layers of it, and Pranav addressed what might be the harder of the two. It wasn’t exactly what I was thinking, but perhaps what I should have been thinking, oftentimes conversations with Pranav lead me in this direction. I think the thing that he’s talking about is, as part of the case for quantum computing, there may be some really, really valuable arguments to be made about the efficiency of quantum algorithms in certain use cases. Of course, there’ll be some in which it’s much less efficient and others in which it’s much more efficient, and that’s, I think, at the heart of the Markov work. And that I do actually think is really, really important to pursue, but to Pranav’s point, I haven’t seen anything there that’s quite yet convincing. I haven’t seen an example of a client onboarding quantum computing for this reason yet, though I think that it’s a great future avenue.

I was referring a little bit more to within quantum computing, and within different physical implementations, certain experiments can be much more costly if run on one piece of hardware versus another. And there, I have seen folks looking for guidance and advice, not from BCG, it’s not our sweet spot, but from others on, “Running this specific algorithm, will we be at a major cost disadvantage if we do it on, say, ion traps versus superconducting qubits?” And I have seen some really, really large disparities in cost for specific algorithms with different physical implementation. So certainly, like resource estimation work for those that are already experimenting with quantum computing, I think is valuable now, and will only become increasingly valuable as the algorithms and the circuits that folks try to run are more and more complex.

Yuval: And you could even envision comparing different cloud vendors if you’ve got the same computer available on both. Pranav, I wanted to get back to you on the quality of the benchmark. If I were doing an image classification benchmark, I think it’s very easy for a human to say, “This is a cat, this is a dog,” and therefore to assess the quality of the image classification network. But if I do a portfolio optimization benchmark on quantum, how would I know that the result is better or worse than something else? Maybe I’m just getting a much better result. It sounds much more difficult than cat versus dog, or hotdog versus not hotdog, if you go back to Silicon Valley.

Pranav: Fair. Right. I think there are two pieces to this. The first is that there are certainly human-intuition-type problems, where it’s easy to construct a cost function just mentally: “This is a hotdog, this is not.” But just because there isn’t necessarily human intuition on a portfolio optimization problem doesn’t mean there isn’t a mathematical expression. Ultimately, if you’re a financial firm and you’re solving for minimum volatility at a certain return, that is the very direct head-to-head comparison that can be made. Even though it’s not the warm, fuzzy feeling of deciding whether this is a cat or a dog, it’s certainly, if you’re a financier, a good feeling to know that you’ve got the lowest-volatility portfolio, that you did better on this quantum computer than on that quantum computer or that classical computer. So essentially, the short answer is that there’s always a way to mathematically formulate the objective and just compare to that objective.
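
As a concrete illustration of such an objective, here is the standard textbook mean-variance formulation of “minimum volatility at a certain return” (not necessarily the exact formulation any given firm or benchmark suite uses):

$$ \min_{w}\; w^{\top}\Sigma\,w \quad \text{subject to} \quad \mu^{\top}w \ge r_{\text{target}}, \qquad \sum_{i} w_i = 1, $$

where $w$ are the portfolio weights, $\Sigma$ is the covariance matrix of asset returns, and $\mu$ is the vector of expected returns. Whichever machine returns the smallest value of $w^{\top}\Sigma\,w$ while meeting the constraints wins that head-to-head comparison.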

But then the other piece is that, in the, say, up-to-40-qubit regime, we can also compare against simulation. You can run and see, “If I ran this experiment on a perfect quantum computer, which you can simulate up to 30 or 40 qubits, what output would it give? How many shots of each result would it give?” And then you can do a variety of statistical tests of, “How does the output that I got from my noisy quantum computer compare to the idealized quantum computer?” Granted, that doesn’t scale to 50-plus qubits or 100 qubits, but at least in the current regime, it’s a second approach we can take, which is just, “Does this match what we would get from a perfect, noiseless quantum computer?” So in both cases, there’s a mathematical approach which ultimately corresponds to business value, even if it’s not a human-intuition question like, “Is this a cat or a dog?”
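
One simple way to make that “compare against the ideal” step concrete is a sketch like the following; the classical Hellinger fidelity used here is just one of several statistical tests a suite might apply, and the counts shown are hypothetical.

```python
# Compare measured counts from a noisy device against an ideal simulated distribution
# using the classical (Hellinger) fidelity: 1.0 means the outputs match perfectly.
from math import sqrt

def normalize(counts):
    total = sum(counts.values())
    return {bitstring: c / total for bitstring, c in counts.items()}

def hellinger_fidelity(ideal_counts, device_counts):
    p, q = normalize(ideal_counts), normalize(device_counts)
    overlap = sum(sqrt(p.get(k, 0.0) * q.get(k, 0.0)) for k in set(p) | set(q))
    return overlap ** 2

ideal = {"000": 500, "111": 500}                         # perfect simulator output (e.g. a GHZ state)
device = {"000": 430, "111": 410, "010": 90, "101": 70}  # hypothetical noisy-hardware counts
print(round(hellinger_fidelity(ideal, device), 3))       # ~0.84 for these counts
```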

Yuval: Tom, I wanted to ask you, other than joining QED-C, what would you like to see vendors, end-users, or governments do to help move standards forward or to help make them easier to interpret?

Tom: Since a lot of these projects are open source, the best thing that can be done is to actually use them and contribute commentary, suggestions, or feedback on the value that it provided. I think actual utilization is very important in my mind. 

I did want to add one quick comment to what Pranav was talking about on the cost, as you mentioned. From our perspective, we’ve done several rounds of work on our benchmarking. In the most recent round, we have focused on something that we call the time versus quality trade-off, because in many  problems like machine learning or optimization, you wouldn’t necessarily expect a quantum computer to provide an exact answer. Rather, it is, “How good an answer can you get? How close can I come to the optimal?”

As Pranav said, there are a lot of already existing classical formulations for what they call an optimality gap, and how close your solution is coming to optimal. You can evaluate a solution based on criteria like, “If I run it for 30 seconds, I get this quality answer, but if I run it for 60, I get a better quality answer, and what’s my trade-off there?” And at some point, that trade-off may become more valuable than what they can get in a classical solution. That’s what we would be looking for. It is important to view that not as “did you get the answer or not”. Rather, it’s this time versus quality trade-off that’s really important in quantum computing.
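
For readers unfamiliar with the term, one common way to write the optimality gap Tom refers to, for a minimization problem, is:

$$ \text{gap}(t) \;=\; \frac{f_{\text{found}}(t) - f_{\text{opt}}}{\lvert f_{\text{opt}} \rvert}, $$

where $f_{\text{found}}(t)$ is the best objective value obtained within runtime $t$ and $f_{\text{opt}}$ is the optimal (or best-known classical) value. Plotting this gap against runtime captures exactly the time-versus-quality trade-off: a machine that reaches a small gap in 30 seconds may still be beaten by one that reaches a smaller gap in 60.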

Yuval: And Matt, same question to you. What would you like to see vendors or others do to help you advise your clients with regard to the best quantum computer for their particular application?

Matt: I think that transparency around system performance is extremely important, and I do think that the spirit of transparency is there. But to the extent possible, making as much information available as possible, so that we can more readily insert the fundamental system pieces into our cross-implementation, cross-vendor benchmarking, to me, that’s just the number one thing: making sure that there’s a lot of transparency around the physical systems, at this stage in the technology’s development. That’ll make it easier for us. And then, candidly, for Tom and Pranav, when they think about the benchmarking that they’re doing, or the benchmarking suites that they set up, ensuring that there’s an apples-to-apples comparison is, again, just much, much easier if the system specs are really transparent.

Yuval: As we get close to the end of our conversation, I have one more question for you, Tom. What do you do when the solutions are not gate-based? I mean, you have companies like Pasqal or QuEra that do analog Hamiltonian simulation, and you have D-Wave, of course, that does annealing. How can you benchmark a solution if it’s not based on a gate-based architecture?

Tom: That’s a very good question, and in fact, a subject that has been at the forefront of our discussions and group meetings since the beginning. We didn’t want to limit ourselves to gate-model computing only. In the most recent work we did, we incorporated annealing as a form of solution. This goes right to what Pranav said earlier: we are defining the problem that you’re solving, not in terms of the specific circuit, but what is the definition of the problem? What is the optimization problem? What is the machine learning problem? Here’s a data set, and I’m going to derive some answers from that data set. That’s the problem. And then it doesn’t matter whether you use a gate model, an annealing machine, cold atoms, or photonics; it doesn’t matter. How quickly can you get a quality answer, and measuring that trade-off, is what’s really critical. So we have made our suite hospitable to different computing technologies in this way. This is a big, complicated problem that isn’t going to be solved overnight.

Yuval: So we’re now at the lightning round, so Pranav, to you, and then Matt, and then Tom, if you could have dinner with one of the quantum greats, dead or alive, who would that person be?

Pranav: I would probably go with, it’s an easy answer, but Richard Feynman, in many ways the founding pioneer of our field, but people often quote him talking about simulating quantum systems with quantum computers as the killer application of quantum computers, and there’s actually a lot more depth to that statement than what’s commonly quoted. So I’d like to have a conversation with him about it, maybe we can interest him in some benchmarking too.

Matt: For me, it would be, I think Feynman as well. But among the living, maybe I’ll say Aaronson. I think both Feynman and Aaronson obviously have a role in the history of quantum computing, but also I think that their thoughts on extensions of quantum mechanics into philosophy would make for great dinner conversation. So that’d be my choice.

Tom: That is a really tough question, because it would be a great pleasure to sit down and have dinner with any one of them, I would say… Did you say only alive or anyone?

Yuval: Dead or alive, but not both dead and alive.

Tom: Well, I have a particular interest in Pauli, I have a few questions I’d like to dig into, so I might want to get together with him. But certainly, any of the others would be a very engaging conversation.

Pranav: Yuval, how about you?

Yuval: I just ask the questions here, but it is a good one. So Pranav, Matt, and then Tom, how can people get in touch with you? And what kind of people are you most interested in hearing from?

Pranav: My email address is pranav.gokhale@infleqtion.com. And I’m interested in talking to anyone who would like to learn about our benchmarking methods, as well as trying to inform the next generation of benchmarks, and maybe stay tuned on what’s coming next for us.

Matt: So for me, it’s langione.matt@bcg.com. And I think really anyone can reach out; certainly for anyone from our traditional client base, we will have perspectives for you, probably on your thorniest problems, whether you’re vendors, users, investors, or governments. But increasingly, we’re seeing a lot of action from users, and I think that’s probably the most exciting development in the technology, because with any deep tech, typically, the early interest is among the providers and the investors first, and then it gravitates toward users. So I think it’s a good benchmark of the maturity of the industry that we’re seeing a lot of interest from users, and I would encourage users to keep reaching out.

Tom: I am reachable at tlubinski@quantumcircuits.com. I would say that I see benchmarking as a methodology that enables users to explore the potential of quantum computing. To me, it’s a means to an end. What I am particularly interested in is uncovering just how quantum computers can actually be taken advantage of in the coming years: what new types of programming and algorithmic approaches might we conceive of? Benchmarking is a good way to measure progress, but I am even more interested in exploring new ways to use quantum computers.

Yuval: And just to get back to your question, Pranav, I think for me the answer would be Einstein, and one reason, is because he expressed so much hesitation and uncertainty about quantum effects, and I would love to see if the development since his time has changed his mind. And interestingly, Einstein doesn’t come up often, or actually ever, when I asked this question before. So that would be my answer. Well, gentlemen, thank you so much for joining me today. It’s been a great pleasure.

Pranav: Likewise. Thanks so much.

Tom: All right, thank you.

Yuval Boger is an executive working at the intersection of quantum technology and business. Known as the “Superposition Guy” as well as the original “Qubit Guy,” he can be reached on LinkedIn or at this email.

February 6, 2023