Murray Thom, VP of Product Management at D-Wave; Sam Lorenzo, CEO at InspirationQ; and Sergio Gago, Managing Director – Quantum Computing at Moody’s Analytics, are interviewed by Yuval Boger. Murray, Sam, Sergio, and Yuval discuss quantum and non-quantum approaches to solving the Travelling Salesperson Problem, benchmarking for TSP, how companies should approach optimization problems, and much more.

Transcript

Yuval Boger: Hello, Murray, Sam, and Sergio. Thank you so much for joining me today.

Sergio Gago: Hey, Yuval. Good morning.

Sam Lorenzo: Nice to be here.

Murray Thom: Nice to be here.

Yuval: So, Murray, who are you and what do you do?

Murray: My name’s Murray Thom. I’m the Vice President of product management at D-Wave Systems. D-Wave is the leading provider of quantum computing systems, software and services. We’re producing two models of quantum computing systems, and we’re laser-focused on bringing people business value in their practical quantum hybrid applications.

Yuval: And Sam, who are you and what do you do?

Sam: My name is Samuel Fernandez Lorenzo. My first background is in physics; I have a Ph.D. in quantum technologies. Previously, I led research on quantum computing in finance at BBVA, and now I’m co-founder and CEO of Inspiration-Q. Inspiration-Q is a spinoff of CSIC, the largest public research institution in Spain. Our mission is essentially to offer quantum advantages in the short term, so we develop our own quantum-inspired and quantum algorithms, as well as specialized solutions, at the moment in the finance industry, but with a very pragmatic vision, because we prefer to work only with those backends that may provide short-term advantages.

Yuval: Excellent. And Sergio, who are you and what do you do?

Sergio: Thanks, Yuval, for having me here today. My name’s Sergio. I am the managing director of quantum computing at Moody’s, and I’m also the editor of the Quantum Pirates newsletter, sharing with everyone the news of what happened in the week in quantum. I am a lover and long-time partner of the traveling salesman problem, which I think is what brought us here today.

Yuval: Excellent. I should have asked you who are you and what do you NOT do? Because you do so many things. But let’s talk about the TSP, the traveling salesperson problem, which is a famous optimization problem. Murray, would you explain to us what TSP is about and why it is so popular and important?

Murray: Sure. Well, let me take a shot at it. And Sergio, being the fan that he is, can probably improve on the answer that I give. The traveling salesman problem is a very common, well-known problem. It’s about a salesperson who needs to go out and visit a number of customer sites, and they need to find the optimal route through those sites so they can reduce the distance traveled. If they have customer sites A, B, C, and D, should they visit them in that order, or should they start with B, then go to A, then to C and to D?

So, basically, if you have a small number of sites, you can set up all the combinations, look at them, and say, “Okay, which one is the shortest?” But as you add new sites, the problem complexity grows exponentially. So, it’s a fantastic problem in the sense that it’s an example of an NP-hard optimization problem, where you’ve got this huge solution space that you’re trying to look through, with a variety of solutions. On average, solutions are not particularly good. With quite a bit of work, you can improve them and get good operating solutions, then you have to do an enormous amount of work to find optimal solutions, and more work still if you want to prove that those solutions are optimal, because the solution spaces are so large.
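
As a rough illustration of why that solution space explodes, here is a minimal brute-force sketch in generic Python (nothing specific to any vendor's solver): for N sites there are (N-1)!/2 distinct closed tours, so enumerating them all stops being practical beyond roughly a dozen sites.

```python
# Minimal brute-force TSP sketch: enumerate every visiting order and keep the shortest.
# Illustrative only -- with (N-1)!/2 distinct tours, this is hopeless beyond ~12 sites.
from itertools import permutations
import math

def tour_length(order, dist):
    """Total length of a closed tour that visits the sites in the given order."""
    return sum(dist[a][b] for a, b in zip(order, order[1:] + order[:1]))

def brute_force_tsp(dist):
    """Return the optimal closed tour (site 0 fixed as the start to avoid rotations)."""
    n = len(dist)
    best_order, best_len = None, math.inf
    for perm in permutations(range(1, n)):
        order = [0, *perm]
        length = tour_length(order, dist)
        if length < best_len:
            best_order, best_len = order, length
    return best_order, best_len

# Example: four sites A, B, C, D as a symmetric distance matrix.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(brute_force_tsp(dist))  # -> ([0, 1, 3, 2], 18)
```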

So, in my experience at D-Wave, we’ve been helping people tackle their industrial optimization problems, scheduling problems, logistics. Traveling salesman is a really helpful example to raise with people so they can make a connection to the problem types that are suitable for quantum computing technology and to the high-value applications in industrial optimization. However, in our experience, immediately after starting there you have to work quite a bit further to get into people’s actual industrial challenges.

Traveling salesman is actually an important component of industrial optimization challenges, but there are usually a lot of other constraints and factors you’re trying to take into account. The heuristics we might use to find good solutions, the choices we make, and the way we optimize them all get complicated by the real-world situations people are dealing with.

Yuval: And I think traveling salesperson is not just about salespeople, right? They all do things on Zoom anyway these days, but it could be a UPS truck that’s got multiple deliveries, and even the optimization criteria may vary from problem to problem. Maybe you want the minimal amount of time, maybe you want the most fuel-efficient route. Is that correct?

Murray: Yeah, that’s an excellent point. There are companies that have looked at routing vehicles for garbage collection, for instance; Groovenauts and Mitsubishi have done work like that with our systems in order to reduce CO2 emissions. So, really, that’s an application where people are trying to impact some large global challenges: there’s benefit to the business in terms of increased efficiency, and there’s benefit to the world in terms of reduced emissions. Just like you said, that’s the practical reality. It’s not just one vehicle visiting all sites, so you also have choices to make in terms of how different resources, in this case vehicles, get assigned to different areas of the routes that are going to be run. That’s also not unlike manufacturing floors, where people are delivering resources from the shipping location where they arrive out to the manufacturing locations. There you’ve got a fairly complicated vehicle routing problem, as you’re trying to imagine that material moving across the floor and arriving at those fabrication locations just in time.

Yuval: And Sergio, I think that TSP is not a quantum problem, necessarily. It was formulated long before quantum came into vogue. Could you tell us a little bit about various classical, quantum, or quantum-inspired approaches to solve these kinds of optimization problems?

Sergio: Right. Indeed, the TSP is not necessarily a quantum problem. It’s a problem that potentially, hopefully, can be solved in a more efficient way with quantum computers or quantum-inspired algorithms. But my guess is that it is one of the most explored, analyzed, and studied problems in computational complexity in the world. There may be others, but surely this is one of the top five, if I had to guess. The other point is that there is not just one TSP or vehicle routing problem. There are actually, to my knowledge, at least three of them, right? One of them is the decision, or satisfiability, problem: if I have this budget or this amount of available kilometers or routes to cover, is there a solution, yes or no?

This version may be easier or harder than the others. Then there is the actual optimization problem: “Hey, for this graph, for this specific list of nodes or cities or parts that my robot has to traverse in a factory, what is the best route, the most efficient,” as you said, “in terms of energy spent or kilometers or centimeters traversed?” Then there is the search problem itself: give me, if it exists, a route with this specific weight or these metrics that I want as output. The optimization problem is the one that’s NP-hard, I believe, whereas the decision version is NP-complete. So, we talk about one single problem, and then of course we add how many hubs or depots we want to place, how many different buses or delivery boys we can put on the map.
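
As a sketch only, the three flavors Sergio distinguishes can be written as thin wrappers around the same tour-cost machinery. These reuse the brute-force helper from the earlier example purely for clarity; a real decision or search solver would not simply call an optimizer like this.

```python
# The three flavors of the problem, phrased over the same instance. Illustrative only:
# the decision and search versions are implemented lazily on top of the brute-force
# optimizer from the previous sketch, which a real solver would not do.

def tsp_decision(dist, budget):
    """Decision ('satisfiability') version: is there any closed tour within the budget?"""
    _, best_len = brute_force_tsp(dist)
    return best_len <= budget

def tsp_optimization(dist):
    """Optimization version: what is the shortest closed tour?"""
    return brute_force_tsp(dist)

def tsp_search(dist, target):
    """Search version: return a tour meeting the target length, if one exists."""
    best_order, best_len = brute_force_tsp(dist)
    return best_order if best_len <= target else None
```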

So, there have been hundreds, possibly thousands of researchers working on the TSP, and I think we humans have become really good at finding heuristic solutions. There’s one thing that’s common and, I think, unbeatable, at least classically: this is an NP-family problem, so of course the complexity is going to grow exponentially or worse with the size of the problem. And of course we have all those challenges around it, with a couple of universities in the States trying to beat each other all the time. A few decades ago there was the Christofides algorithm, which is still one of the main ones out there that people are using today. Branch and bound is quite famous, not just for the TSP but for many optimization problems out there, and then there is simulated annealing, which I guess is a nice way into D-Wave.

Simulated annealing is one that competes all the time with two other families: genetic algorithms, and the more naive ones like nearest neighbor and the like, which rely a lot on what type of distance matrix you have. Nearest neighbor may be the quickest and easiest solution for one specific graph but a terrible one for another. So, I think one good question to consider is: is there a better algorithm for a given type of TSP or for a specific data set? That would be a nice discussion to have.
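
Sergio’s point about nearest neighbor is easy to see in code. Here is a minimal sketch of the greedy construction (again generic, not tied to any particular product): it runs in roughly quadratic time, but the tour it returns can be far from optimal on some distance matrices, which is exactly the instance dependence he describes.

```python
# Nearest-neighbor construction: always hop to the closest unvisited site.
# Fast (roughly O(N^2)) but, as discussed above, quality depends heavily on the instance.
def nearest_neighbor_tour(dist, start=0):
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[current][j])  # greedy choice
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour

# On the small four-site matrix from the earlier sketch this happens to return the
# optimal tour, but in general there is no such guarantee.
```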

Yuval: And Sam, you said that you focus on near-term solutions to these kinds of problems. Could you describe the technology that you’re using today? How would your company solve a TSP problem? And if I were a customer listening to this podcast today, excited by this problem, and I reached out to you now in 2022, how large a TSP problem could I solve with your technology?

Sam: One aspect that I would like to highlight, often overlooked, is that the mathematical encoding of a use case is as important as the backend or the algorithm you use to solve it. Now, I like D-Wave machines, and it is surely one of the backends that we plan to work with soon. However, D-Wave is at the moment restricted to a single encoding, the quadratic unconstrained binary optimization, aka QUBO, problem. You can encode the TSP in this mathematical shape, but it’s not efficient, as you get an encoding that grows with N squared, N being the number of places you want to visit. So, the potential speed advantage you might get using D-Wave for this particular use case, you may very well lose with this encoding. Now, to overcome this at Inspiration-Q, we are developing a specific quantum-inspired algorithm for the TSP. Indeed, we already have a first solution, and I think there is room for improvement to tackle problems with thousands of cities.
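
To make the N-squared point concrete, here is a rough sketch of the standard one-hot QUBO encoding of the TSP, a generic textbook construction rather than Inspiration-Q’s algorithm or D-Wave’s specific formulation: one binary variable per (city, position) pair, so N cities already require N squared variables, plus penalty terms enforcing that each city appears exactly once and each position holds exactly one city.

```python
# Sketch of the standard one-hot QUBO encoding of the TSP (a generic textbook
# construction, not any vendor's proprietary formulation). Variable x[(i, t)] = 1
# means "city i is visited at position t", so N cities need N*N binary variables.
from collections import defaultdict

def tsp_qubo(dist, penalty):
    n = len(dist)
    Q = defaultdict(float)  # Q[(u, v)] is the QUBO coefficient for variables u and v

    # Objective: add dist[i][j] whenever city i at position t is followed by city j.
    for t in range(n):
        for i in range(n):
            for j in range(n):
                if i != j:
                    Q[((i, t), (j, (t + 1) % n))] += dist[i][j]

    # Constraint: each city i occupies exactly one position, penalty * (sum_t x[i,t] - 1)^2.
    for i in range(n):
        for t in range(n):
            Q[((i, t), (i, t))] -= penalty
            for t2 in range(t + 1, n):
                Q[((i, t), (i, t2))] += 2 * penalty

    # Constraint: each position t holds exactly one city, penalty * (sum_i x[i,t] - 1)^2.
    for t in range(n):
        for i in range(n):
            Q[((i, t), (i, t))] -= penalty
            for i2 in range(i + 1, n):
                Q[((i, t), (i2, t))] += 2 * penalty

    return dict(Q)  # the constant offset from expanding the squares is dropped here

# Ten cities already mean 100 binary variables before any solver sees the problem.
```

A dictionary in this shape is the kind of object QUBO samplers generally consume; the point of the sketch is simply that the variable count, and hence the search space, blows up before any solver ever sees the problem.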

Yuval: And Murray, I’ll let you augment that in a second, but what does quantum-inspired mean? Sam, that’s for you, please.

Sam: Okay. Quantum-inspired algorithms are actually an emerging technology in their own right. It’s not just simulating how a quantum computer works; it’s using all the mathematical tools and methods developed over the last decades by the people working in the quantum information field. There has been a lot of improvement in the methods they’ve employed to simulate, understand, and study quantum systems, and now we take all those methods and apply them to problems like the TSP and others, use ideas or encodings taken from the problems we are starting to tackle with D-Wave and other technologies, and combine all these ideas to produce new algorithms and new techniques.

So, yeah, this is a whole emerging technology that in many cases we’ve seen is able to improve on previous classical algorithms, and in some way the collection of all these techniques is essentially what we call quantum-inspired technology. The benefit is that we can tackle problems in a way that is also compatible with future quantum backends, so I think it’s a very pragmatic way to approach this quantum future, and that’s what we do.

Yuval: And Murray, you’ve published hundreds of different applications and use cases of optimization. What’s your current state of the art for TSP-type problems?

Murray: Well, that’s an interesting question. When we’re thinking about the approach that D-Wave is taking to the traveling salesman problem, we’re really focused on practical applications, and it’s very important to recognize the distinction between the types of algorithms you’re going to use to solve the traveling salesman problem and what the quantum processor does. If you think about past industrial revolutions, the first industrial revolution was powered by the steam engine, which produced rotary motion, and the information age was powered by classical computing, which can do addition and multiplication blazingly fast.

So, quantum computers are able to draw solutions from these QUBO problems that Sam was introducing with very high efficiency. Those problems are combinatorial in nature, which means you’re looking through a huge space of combinations. Let’s say I would like to pick a set of players for a sports team. I have a pool of players, and I’m going to pick the ones that are the best. But once those players are on the team, they don’t have a particular order; they’re on the court playing together. In a traveling salesman problem, the order of the cities you visit makes a difference. If you visit A and then B and then C, that’s not the same solution as A and then C and then B. For exactly that reason, the problem formulation issue that Sam was just describing means you don’t want to try to brute force it directly onto a quantum annealer.

So, if you just take a very natural, straightforward description of a traveling salesman problem and put it onto one of our quantum processors, you might get tens of cities. If you compare that to, let’s say, the best in the world, like Concorde, you can get up to tens of thousands of cities. What’s more, you can prove optimality at hundreds or thousands of cities in just a few seconds or minutes. One of the reasons why classical algorithms like Concorde can do so well is that there are a lot of simple observations you can make: when you’re routing a single vehicle across a route, it’s on a surface. It’s not moving in three dimensions or higher dimensions. By recognizing all of those constraints about mapping vehicles on the surface of the earth, you can actually help classical computers do extremely well at large sizes.

But to the discussion we were having earlier, there are many problems in vehicle routing or planning the deliveries of fleet vehicles where you have a number of other constraints at their heart. The way we look at applying quantum annealing to this problem is by finding encodings where the approach is actually very much like the classical one in that solution space: you start with a feasible solution, you find areas of high cost, and then you look to make exchanges, swaps that keep you in the feasible space. It’s a loop that doesn’t break. You can use the types of problem-solving that quantum annealing provides to do those kinds of movements in the solution space in a hybrid algorithm.
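
The swap-based loop Murray describes is in the spirit of classical 2-opt local search. A bare-bones sketch of that classical half might look like the following; the non-local, quantum-assisted reconfigurations he goes on to describe are not represented here.

```python
# Bare-bones 2-opt local search: reverse a segment of the tour whenever doing so
# shortens it, staying inside the feasible space the whole time. This illustrates
# only the classical "local swap" half of the hybrid loop; the non-local
# reconfigurations described next are where the quantum annealer would come in.
def two_opt(tour, dist):
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]        # edge (a, b) before the segment
                c, d = tour[j], tour[(j + 1) % n]  # edge (c, d) after the segment
                # Reversing tour[i:j+1] swaps edges (a,b) and (c,d) for (a,c) and (b,d).
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour
```

Getting stuck in a local minimum of a loop like this, where no single reversal helps, is exactly the situation that motivates the non-local moves discussed below.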

It’s important for people to keep in mind that it’s not about quantum or classical. We’ve probably done about 10 years of benchmarking, learning what the best capabilities of those two technologies are in order to understand how they solve problems. But they have complementary skills, right? Classical computers are very good at local updates, and quantum computers seem to be remarkably good at non-local updates, where they can move between large areas of the solution space. That’s an important part of problem-solving: when you get locked into a particular solution, just doing one swap or exchanging two cities is not going to get you out of it. You need to reconfigure a portion of the problem. That’s where we’re applying quantum computing in these hybrid solutions, and that hybrid approach is what’s allowed us to get to industrial optimization challenges in vehicle routing, when we’ve looked at support vehicles delivering water or food supplies or vaccines out to locations.

Sergio: So, if I may add, thanks, Murray, for mentioning Concorde. If I’m not mistaken, that’s basically very low-level C optimizers and linear programming that try to optimize the tiniest bit, both in the memory for the combinations and in the actual formulation of the problem, which also echoes what both of you are saying: the domain is super important, and so is how you do the mathematical formulation. But I would like to dig a bit deeper on the efficiency, whether it is tied to a specific problem or to the more general formulation, if we take, I don’t know, the traditional Ising formulation for the TSP.

So, we have Concorde, we have OR-Tools, which is also really, really efficient and works pretty well for a lot of real-world problems with the sizes that we have today. We also have companies like Gurobi, for example, doing really good classical optimization with very specific proprietary algorithms. So, when we talk about efficiency, are we talking about orders of magnitude on solving the problem, or small steps, or maybe finding heuristic solutions that we could not find before?

Murray: Yeah, that’s a great question. I think it’s very natural for practitioners working in this space to think of quantum annealing as a kind of heuristic, something that’s part of a technology stack they’re putting in place in order to build solutions in their application domains. Certainly, industrial optimization brings a lot of challenges with it, not just “Can I come up with a good plan?” but “Can I come up with a good plan that a workforce is actually going to use?” Because the driver on the road is the one in the traffic who can choose to deviate from a plan. So, you’ve got a lot of those hard factors and also those soft factors as you’re looking to make those improvements.

Certainly, as we have been looking at and understanding how quantum mechanics can be turned into useful work for us, we realize that proving optimality, even in the sub-pieces the quantum annealer is actually contributing, is not the quantum computer’s role, right? The quantum computer is basically helping to do these fast reconfigurations. So, I would say that by the time you see its effect in the business-quality result, you’re not looking for something that’s going to suddenly change the business operations capability by 1,000 times. I mean, it would be fantastic and terrible if that happened, in terms of the disruption to the business from that kind of rapid change. You’re looking for something that more modestly increases the efficiency.

Although, practically, what we find is that as soon as you increase the efficiency, the business will then grow, or they’ll say, “Oh, there’s some other factor that we would like to include now, that we’ve always had to exclude from our problem solving,” and they’ll bring that in and the complexity will go up. So, really, anyone who’s running a business that is bound by problems this complex will always hit a ceiling. No quantum computer or classical computer will ever be large enough to reach your ambitions in terms of how big a scope you would like to run.

So, by combining quantum and classical computers together, you’re effectively helping to push that ceiling as far up as you can, and you’re doing it in a way where you’re setting yourself up to take advantage of CPUs, GPUs, and QPUs, basically all of your compute resources, in the future. You certainly don’t want to be caught out: in materials science, for example, we’ve done some experiments in quantum material simulation where, in two product generations, we went from not being able to solve problems to being 3 million times faster. If that happens in your business, you’ve got to redesign all your application infrastructure, and that’s the negative part of disruption. But if you start building your applications with all these technologies incorporated together in a hybrid manner, you’re effectively getting all the gains of the technologies and of the algorithms as they change, on top of one another. That’s where I think people are seeing real value today as they’re using these technologies.

Yuval: I’ve learned from you guys that it’s important how you express the problem and that there are multiple ways to run it on the back end, whether classically or on a quantum computer. Now there are also hardware solutions, FPGAs, that are implementing Ising models. So, a question to all of you, maybe Sam first and then Murray and then Sergio, how should a customer go about choosing the best provider, the best technology to solve a TSP problem? I’m a customer, I’m motivated, I can understand that even a 5% improvement in my business has a huge financial impact. How should I go about making that choice?

Sam: Yeah. The best, I think, depends on your needs, right? I think that in any optimization problem, including the TSP, there are at least three dimensions we need to care about. These dimensions are interdependent, so we cannot reduce the problem to any one of them; we human beings try to oversimplify things many times. The first could be the number of variables, in this case the number of cities you want to visit. The second is the quality of the solution, and the third is the amount of resources, say, time, number of processors, etcetera. So, in summary, your budget, right? Depending on what you demand in this decision space, you may choose one option or another.

Maybe you want a fast online calculation with few resources, even if you don’t get the best solution, because what you really want is a fast answer. Or maybe, as someone was saying, it’s very important for you to have a very optimized solution, even if that takes more time and more resources. So, probably there is no single algorithm or backend that is going to be the most suitable in all circumstances, so you have to first ask yourself what your constraints are, and then analyze the best solution, knowing very well what you really want.

Murray: Okay. The way I would answer that question, because it’s a very difficult question, and the reason it’s challenging is that there are so many people coming at this problem, or experiencing this problem, in so many different ways, right? They might be thinking, “Do I want to build my own solution for this? Do I want to buy or purchase a solution for this?” And certainly there’s also a lot of depth to it. I mean, I have gone through exercises as a product manager and asked, “What is my customer’s experience?” Well, if you start doing a search for industrial optimization, the world of stuff that falls on you at that point is amazing. It’s like: become an expert in algorithms, then become an expert in tuning the parameters of those algorithms, then become an expert in analyzing which of those algorithms is contributing to which instances on a given day.

When I’m talking to math experts, that’s the world they’re immersed in. But a business leader who’s trying to pick a solution is thinking about things like, “Well, even if I do run on the best technology in the world, I have to make sure that I have high uptime and high-quality solutions, that my software is not going to break, and that this is going to be updated and maintained over time.” So, I think that people are looking at composite classes of factors when they’re trying to make this decision, and it’s sort of: I want good people, I want good processes, I want good technology, right? So, they’re going to meet with a team and ask, “Does this team have experience? Do they have the ability to understand my problem and design a solution that’s suited to my experts, their needs, and my operations?”

Then they’re also going to be looking at, “Okay, what are the processes we’re going to use? Are we going to be using the agile development method? Can we do continuous deployment? Am I working with a group of folks who are very conservative, where we need to gradually introduce this new process into operations?” That’s an important business decision at the time you’re picking a solution provider. Then on the technology side, in the best-case scenario I don’t want to be trying to pick winners, so that’s a big part of it. It’s our customers who basically drew us in the direction of building hybrid solutions. As Samuel was mentioning earlier about quantum-inspired methods, there was a lot of work done on CPUs with simulated annealing initially, and that was a big test bed against quantum annealers to see what they could use.

When the quantum annealers started beating simulated annealing, they started to explore and say, “The quantum processor actually has access to this big memory space. Let’s make a classical, quantum-inspired version of that using a GPU, and now we’ll create slices of the memory space of the solution spaces we’re searching,” and now you’re incorporating a graphics processing unit. Both of those have contributions to make; it’s not that one is better than the other. Now we have Nature papers showing that quantum annealing can reach high-quality solutions faster, fundamentally faster, than any classical approach can possibly do.

So, on the technology side, it’s kind of like, well, how do I get the best benefit on all of those things? And keep in mind that the technology is not just the processor, it’s also the deployment infrastructure, your ability to scale up and scale down. You don’t want to be using these resources when your problems are not actually running. So, people are thinking about it in that broader IT context and making sure that the solution suits them there. I think people, processes and technologies are how people judge those providers.

Sergio: I’m going to try to give the perspective of the non-vendor, or better said, the user of those algorithms, both classical and quantum. I will start by saying that I think both D-Wave’s and Inspiration-Q’s solutions, among others, are really good, really smart, and really efficient, having personally seen both of them. So first, kudos to both of you, because you’re definitely trying to do something that I’ve tried for most of my professional life, which is solving the TSP, right? Thank you for that. Hopefully we get there together soon. I think one of the problems we have out there is that in some cases we are mixing layers of conversation. We talk business, then we talk engineering, and then we talk algorithms, right? Or computational complexity.

If we keep the conversation on each layer, then we make sure that we are talking at the same level. But sometimes those conversations start to mix, and then it’s a bit chaotic. I think you touched, Murray, on a lot of those points: what the integration is going to be, what it will look like, how it will improve over time, how my problem will evolve over time. You touched on a really good point that we’ve seen over and over with AI, cloud computing, and the like: okay, now we have better machines or better technology to do things, so now let’s tackle more things. We are like gases; we always use all the available space at our disposal. If we start from the ground level, computational complexity, I think we need to come to an agreement, and I’m not sure whether today is the place for it or not, but there’s a lot of conversation on what BQP is, what NP is, and whether we can achieve quantum advantage for general NP problems, right?

If we enter that conversation, it opens a whole can of worms; we would probably need a three-hour podcast, or a 30-hour podcast, and more people to come in and comment. So, maybe we can agree on something that I think both of you said: sometimes it is about the local efficiency for your type of problem and improving what you have today. You don’t necessarily have to go into claiming quantum advantage or claiming P=NP; otherwise, some of us would have a lot of money in that regard. Let’s say that there is a business advantage in how we do things. From the user perspective, there is one thing that I would really like to see, which is more business-side benchmarks. On the TSP specifically, and many other problems as well, you can see plenty of the classical benchmarks you mentioned: simulated annealing, which is really good, and genetic algorithms, which tend to be very good for certain models or certain types of problems or certain data sets.

The same can be applied to many other problems, complex or not, at the end of the day. Perhaps it’s because classical algorithms have been around for much longer and are more mature in a way, and of course quantum is still in the earlier stages of the technology. But to your question, Yuval, about what would be nice to have, or how a client would choose which one: well, that kind of benchmark could be extremely useful in helping each other improve on what we have. I think that’s the beauty of quantum and quantum-inspired, that we keep pushing each other to improve the other thing, right? There are now post-quantum algorithms: suddenly we have much better classical encryption algorithms that have nothing to do with quantum, but the ecosystem itself helped push the boundaries of what we thought was possible.

So, for me, it’s pure benchmarking: say, “Hey, this is how far we can truly get in time, accuracy, or cost,” which are at the end of the day the three things that matter to companies, or a combination of the three. And sure, some cases will require speedups in calculation time at the expense of some accuracy, because, as you say, you need to be more reactive to soft variables. In some cases, you just want to get something as cheap as possible and put it out there in the market, and in some cases you want the bloody best solution you can get out there.

So, that is something that I think can help the industry in general a lot, and also the client base, so we remove part of the fog surrounding the ground level I was mentioning: does BQP contain NP? We’re pretty sure it doesn’t, right? Does that mean we cannot get some business advantage by using quantum-inspired algorithms or quantum annealers or a combination of hybrid algorithms that take the best of both worlds? I think the answer is that we can, but we need to navigate through the fog.

Yuval: My next-to-last question, and thank you, Sergio, for that. Touching on the point of benchmarks: if I were benchmarking a CPU, I could use PassMark, and if I were benchmarking a machine learning algorithm, I could use MLPerf. Is there, or should there be, a standard benchmark for TSP? Maybe Sam quickly and then Murray.

Sam: Yeah. I think the problem of benchmarking is a critical one, I agree, because for many problems, at least in quantum computing, we know theoretically that there should be an advantage. But in the case of optimization problems, and of course TSP is no exception, we don’t have that theoretical advantage in principle, as far as I know, and we won’t have it. We know that even if we have a quantum computer, the complexity of the problem is going to grow exponentially. Now, that doesn’t mean we are not going to find an advantage, because we are using heuristics, and with one processor or one algorithm we can still find an advantage by those means.

So, yeah, there is no way around benchmarking different solutions, and for real use cases it’s just a matter of finding real business value. I think that’s what we all have to do. In that sense, I think all of us have to make an effort to invite the whole quantum community to start doing these benchmarks and then compare solutions to find the truth, right? That is what matters in the end: to find value and to make progress in society using these technologies.
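
In the absence of a standard suite, the business-side comparison Sergio and Sam are calling for often reduces to a small in-house harness that records wall-clock time and solution quality (and, separately, cost) for each candidate solver on the firm’s own instances. Here is a hypothetical sketch; the solver names plugged in are placeholders rather than real products:

```python
# Hypothetical benchmarking harness: run each candidate TSP solver on the same
# instances and record wall-clock time and tour length. The solver names plugged
# in below are placeholders for illustration, not real products or services.
import time

def benchmark(solvers, instances, tour_length):
    """solvers: dict of name -> callable(dist) returning a tour (list of site indices)."""
    results = []
    for solver_name, solve in solvers.items():
        for instance_name, dist in instances.items():
            start = time.perf_counter()
            tour = solve(dist)
            elapsed = time.perf_counter() - start
            results.append({
                "solver": solver_name,
                "instance": instance_name,
                "seconds": elapsed,
                "tour_length": tour_length(tour, dist),
            })
    return results

# Usage sketch, reusing the heuristics from the earlier examples (wrappers around
# commercial or quantum-hybrid services could be dropped in the same way):
# solvers = {"nearest_neighbor": nearest_neighbor_tour,
#            "nn_plus_2opt": lambda d: two_opt(nearest_neighbor_tour(d), d)}
# results = benchmark(solvers, {"four_sites": dist}, tour_length)
```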

Murray: Yeah, I think that’s an excellent point. One of the great things about having these new emerging technologies is that we’re pushing each other to do better, just like the point Sergio was making, which I thought was excellent. Solving problems like the traveling salesman problem, where the complexity is explosive, is not like other problems where you can say, “Well, I can show you that this is the best way to solve the problem: the smallest number of additions you can turn this into is N, this is how many we do per nanosecond, and as a result this is the fastest way to do it.” It’s much more like, “Well, 100 people have tried this problem, this is how well they did, and I was able to do a little bit better than them.” It’s more of a relative comparison.

There’s a researcher named Carleton Coffrin at Los Alamos National Laboratory, and he remarked that what’s fantastic about quantum annealing is that he thinks it’s going to advance the entire optimization field, because people now have something to compare themselves to that’s not classical computing at all, so its fundamental methods and its scaling curves are going to be different. That is actually causing people to open their eyes to, “Well, maybe it is possible for a classical algorithm to do better,” and to explore that space better. We’re constantly looking for online benchmarks in a variety of spaces. Certainly, if you’re going to do native surface-of-the-earth, single-traveler TSP problems, we’re not going to do as well, and that’s because we’re not optimizing to do extremely well in that space; we’re talking with companies about the industrial problems they’re dealing with.

It might not be obvious to everyone, but there’s this no-free-lunch theorem in these algorithm spaces, which says that if you design an algorithm for a particular class of instances of a problem, and it’s the best algorithm for that class, it will absolutely be worse on another group of problem instances, right? So there are choices you have to make. For instance, if your optimization problem’s solution space is a bowl, the fastest algorithm is to run downhill. The moment you start to get some undulations in the surface of the bowl, that’s no longer the fastest algorithm. But both of those algorithms are important when you don’t know the structure of the problem you’re trying to solve, and herein lies the point Sergio is making about business benchmarks: how do people orient themselves when they’re trying to create industrial solutions in these incredibly complex spaces, right?

It’s very challenging at a detailed level with the technology stack people are looking at, because if I work with two groups on vacation scheduling, say nurses and people who work in airports, the context and the details of how they run their businesses actually change the quality of solutions and the approach, the formulation, as Samuel was saying, that you would take in those two cases. However, businesses do compare themselves, and certainly when we’re talking with people about Save-On-Foods and we say we took an important grocery optimization problem from 25 hours to under two minutes, there are other people running business operations who say, “Okay, I can understand some of the complexities of grocery optimization.” That’s a reference point. I think as we do better and get more of these use cases built, there will be more of these contextual reference points that Sergio is talking about, which will be important for businesses trying to make comparisons and orient themselves.

Yuval: Excellent. Sam, Murray, Sergio, how can people get in touch with you to learn more about your work?

Sam: Well, yes, visit our website, inspiration-q.com, or just get in touch with me directly through LinkedIn and Twitter or whatever. Yeah. You will be welcome.

Murray: Yeah. I mean, you can find me on LinkedIn, I guess, if you’re looking for me specifically. For D-Wave Systems, if you want to learn about our quantum computing systems, come and check out our website. You can also get free access to our quantum computers through our Leap cloud service; if you search for D-Wave and Leap, as in jump, you can sign up and get access to the systems and our open-source code examples. For developers, maybe the more natural way would be to go through our Ocean SDK on GitHub and our open-source examples there and start looking through our Python and C toolkits. Then for businesses, we often work with them through a phased approach, from discovery to proof of concept, called our Launch program. So, contacting us through sales is a great way to get connected, identify business problems and quantum or quantum-hybrid approaches that are suitable, and then start to build them out.

Sergio: I won’t be able to give anyone cloud access to any algorithm, because I don’t have that, but feel free to go to quantumpirates.substack.com, or look for me on LinkedIn as well, and look at what we do in quantum at Moody’s. I’m especially interested if you have a good solution for the TSP or any other exponential problem; I would really love to hear from you and then take on the challenge of improving on each other’s results, standing on the shoulders of giants, I guess.

Yuval: Excellent. Gentlemen, thank you so much for joining me today.

Yuval Boger is an executive working at the intersection of quantum technology and business. Known as the “Superposition Guy” as well as the original “Qubit Guy,” he can be reached on LinkedIn or at this email.

October 23, 2022