
Podcast with Joe Fitzsimons, CEO of Horizon Quantum Computing

Joe Fitzsimons, CEO of Horizon Quantum Computing, is interviewed by Yuval Boger. Joe describes the company’s approach of building software development tools that aim to accelerate classical code and make it run more efficiently on quantum hardware. They discuss the advantages and disadvantages of abstraction layers, the potential for quantum computing in chemistry, and much more.

Transcript

Yuval Boger: Hello, Joe, and thank you for joining me today.

Joe Fitzsimons: Thank you very much. Very happy to be here.

Yuval: So, who are you, and what do you do?

Joe: I’m the CEO of a company called Horizon Quantum Computing. Before I started Horizon, I was a professor of quantum computing; I’ve been working in the field for nearly 20 years now. At Horizon, we’re focused on building software development tools to make it easier to build programs that take advantage of quantum computing.

Yuval: At a high level, there are several companies that build software for quantum computers. What makes Horizon unique or what makes your approach unique?

Joe: The approach we’ve been taking is to recognize that it’s going to be very hard to take advantage of quantum computers if you don’t have a really in-depth knowledge of quantum algorithms and how to construct them. If you look at the numbers, really only a few hundred people have that level of knowledge. So what we’ve been doing is trying to build tools that make it easier to program these systems from a technical point of view, being able to do more with less code, but that also enable domain experts to take advantage of quantum computing in different domains like finance and pharma, but also things like the energy sector, automotive, aerospace, and so on. For us, what that has meant, our North Star, is that we are building towards being able to accelerate classical code, code written to run on a conventional computer. We want to be able to take legacy code, code that has been written for systems that have nothing to do with quantum computing, and make it run faster on quantum hardware.

At the moment, I think we’re probably the only ones that have capabilities in that direction. We’ve put quite a lot of effort into being able to, for example, accelerate a subset of MATLAB code: to break it apart, automatically construct quantum algorithms from that classical code, and then, the intention is to be able to compile that all the way down to run on hardware. Now, where we are at the moment, the first tools you’ll see coming out from us are a little bit lower down the chain than that. We have tech demos. You may have seen us at Q2B last year, for example, or the year before, where we’ve had demonstrations of accelerating MATLAB code. But what our focus is on right now is getting to early access with our integrated development environment that allows users to program at a somewhat higher level of abstraction than existing frameworks, but still not quite with classical code. What that means for us is programming in a quantum programming language that looks a little bit like BASIC. We call it Helium. It’s fully Turing-complete, so you’re not programming circuits, you’re writing programs that may have some indefinite runtime. And you’re doing it in a way where you can write subroutines, for example, in C or C++, and compile those directly down to extremely efficient quantum circuits. So that’s kind of what we’ve been building. It’s coming up to early access now, so there’ll be more updates at Q2B this year.
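As a concrete illustration of the kind of subroutine being described, here is a minimal sketch of a small, pure, integer-only C function of the sort one might imagine handing to such a compiler. The names and the Q16.16 fixed-point convention are illustrative assumptions, not Horizon’s actual syntax or API.

```c
/* Illustrative C subroutine: a small, pure, integer-only function.
 * Names and the Q16.16 convention are hypothetical. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t fix16_t;          /* Q16.16 fixed point: 1.0 == 65536 */

/* Returns 1 when the fixed-point input exceeds a threshold; a simple
 * predicate like this is the sort of routine that maps naturally onto a
 * reversible circuit, e.g. as part of an oracle. */
static uint32_t above_threshold(fix16_t x, fix16_t threshold)
{
    return (uint32_t)(x > threshold);
}

int main(void)
{
    fix16_t strike = 3 * 65536 / 2;                      /* 1.5 */
    printf("%u\n", above_threshold(2 * 65536, strike));  /* prints 1 */
    return 0;
}
```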

Yuval: If I were to play devil’s advocate on abstraction layers, I would say that abstraction layers are the best way to get code to be equally bad on all hardware platforms. How do you respond to that?

Joe: I think with a smile. So in some sense, you’re right. And if the approach we had been taking was to, for example, build up libraries for optimization algorithms or something like that, then I would 100% agree with you. But that’s not what we’re doing, and we’re not focused on those kinds of black-box algorithms. Rather, we’re focused on the way conventional compilers work. So we are taking source code and building an optimizing compiler that not only does the classical optimizations but also does quantum optimizations on the way down to construct a quantum program for the same task. At every layer it passes through, it is getting optimized for the processor, closer and closer to the hardware. So we’ve had to put in a lot of effort. We’ve built an entire stack. We don’t rely on any of the existing frameworks at any point in our code. So going from C or from Helium and compiling that down, everything in that process, from constructing quantum circuits to converting between instruction sets, doing the gate synthesis, compiling down to target particular hardware, and also taking things like while loops that you cannot run on current hardware and turning those into hybrid programs: all of that is us. So we’re doing all of this without any other quantum computing frameworks in there, except when it comes to export time. So if you want to export in QASM, for example, to target an IBM system or something like that, then, of course, we give you framework code that you can run on an IBM system. But all of the generation behind the scenes is not based on any of the existing frameworks or anything like that. We’ve built our entire stack to go the whole way down.

Yuval: If we look at one of the biggest computing revolutions, it was the transition from CPU-only to CPU plus GPU. And when you look at a GPU, it is programmed similarly to, but still differently from, a CPU. You have to think about the cores; you have to think about some local processing and so on. So, what have we learned from that transition from CPU to GPU, and how does it apply to the QPU transition?

Joe: That’s a great question. What I would say is that there are different ways you can think about this. If you are a developer working at a low level with GPUs, for example, then you need to be writing GPU-specific code. If you are trying to implement faster linear algebra algorithms, you need to be very close to the hardware. If, however, you are writing machine learning models, you don’t need to worry about the GPUs, not really. You just work within whatever Python libraries you’re using, and it’s taken care of for you. So there are different layers of abstraction going on in the classical world as well.

We’ve been building our system in such a way that it has layered abstraction. At the lowest layer, you can work directly with the hardware, with its native instruction set, constrained by the connectivity graph of the hardware and so on. But you can also work at a layer that is hardware-agnostic, where you can write a kind of general-purpose quantum assembly code that also allows for arbitrary flow control, loops, and so on, which can then be compiled down to target particular systems. Or you can work above that: you can work with Helium and with subroutines written in C and C++. And where we’re going is classical code; there’ll be several other layers above where we currently are. The intention here is that depending on your expertise, depending on where you’re contributing, you can dive in at whatever level of abstraction you want, make changes at that level, develop at that level, and leave all of the other layers automatically compiled. So, if you’re a quantum algorithms designer, maybe you don’t want to be all that close to the hardware. Maybe you want to be a little bit higher up in the abstraction layers, but not so high that it’s classical. You still want to be doing your quantum Fourier transforms and having full control over the system. If you’re working on quantum error correction, you may want to be a little lower down the stack. And if you’re a domain expert in the oil and gas industry, for example, then you probably don’t want to deal with quantum code at all. So we’ve been trying to build a system where there is the flexibility to dive in at the layer that you care about, the layer that you can contribute at, and leave what’s below it automatic, so that you do not need to worry about those lower levels of abstraction.

Yuval: Let’s talk a little bit about marketing this platform to customers. When you go to customers, I mean, I think it’s easy to get into the technical details. Well, what would you say are the top three benefits that a customer would have with your platform? Is it hardware independence? Is it the ability of non-domain experts to code? Is it something else? How would you pitch this to customers?

Joe: Sure. What I can say at the moment, in terms of how I view the market, is that what is critically important at this point in time is technical lead and getting to quantum advantage as soon as we can. When it comes to approaching particular customer groups and talking about what we can do for them, what anyone can do for them today is extremely limited. Until we’re at a point where quantum computing is affecting these customers’ bottom line, willingness to pay is going to be limited, because we’re not really contributing to their business yet. So really, for us, our goal is to get to useful quantum computing as soon as possible.

In terms of what makes our system different and why we think it contributes to that goal, it starts to allow new capabilities that are not possible in existing frameworks, and it starts to make it much easier to do quite complex things. If you want to program a really large quantum program, and I would say the largest ones we’ve explored so far have been in the range of about 50 trillion gates, then there are not very many options in terms of how you develop that kind of complex software. So we’ve been trying to build a system that is capable of developing both for systems today and for systems far into the future, so that we’re building a framework that will stand the test of time and that starts enabling new capabilities. For example, within our system it’s very easy to make programs that have indefinite runtime, to directly simulate a quantum Turing machine, for example. And that is something that’s extremely difficult if you try to construct it from scratch as some kind of hybrid program: unless you have mid-circuit measurements, it’s not really going to be possible, unless you think carefully about how to do it with postselection and all of these other things. For us it’s trivial. You express it in our language: we just write a repeat-until loop, and it’s going to run through that loop until it sees a particular value from a measurement, and then it will stop. Even though not all hardware can do that today, we compile that down to a hybrid program. And that’s completely abstracted from the user. They don’t need to worry about that hybrid program. The compiler already converts it to do the postselection for you, to run it as a series of circuits rather than a single circuit, and so on.
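To make that hybrid pattern concrete, here is a minimal sketch, in C, of what such a driver conceptually does when the hardware has no native mid-circuit feedback: submit the same fixed circuit repeatedly and postselect on the flag measurement. The function run_circuit_and_measure_flag() is a hypothetical placeholder for the backend call, simulated here with rand() so the sketch compiles and runs; it is not an actual Horizon or hardware API.

```c
/* Sketch of a repeat-until loop realized as a series of fixed circuits
 * with postselection on a measured flag bit. The backend call is a
 * hypothetical stand-in, simulated with rand(). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical backend call: submit one fixed circuit, return the measured
 * flag bit. Simulated here as succeeding with probability 1/4. */
static int run_circuit_and_measure_flag(void)
{
    return (rand() % 4) == 0;
}

int main(void)
{
    const int max_attempts = 1000;   /* bound on the unrolled loop */
    srand((unsigned)time(NULL));

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (run_circuit_and_measure_flag()) {
            /* Postselection succeeded: keep this shot's other results. */
            printf("accepted on attempt %d\n", attempt);
            return 0;
        }
        /* Flag not seen: discard the shot and resubmit the same circuit. */
    }

    fprintf(stderr, "flag never observed in %d attempts\n", max_attempts);
    return 1;
}
```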

So I would say a big part of what we’ve been doing is ease of use, ease of writing more complex systems. This is true both from the development perspective and from the deployment perspective. For us, the end point of compilation should be a deployed program. It shouldn’t be a single-shot run or a 10,000-shot run on a particular piece of hardware. It should be an API that the user can call with whatever inputs describe their problem, which will run the code on whatever hardware backend it has been compiled for and return the results through a standard API interface, so that they can build whatever frontend they want to process their results in whatever way they want. So if they want to incorporate it, they don’t need to be working in Python. They can incorporate it straight into JavaScript. They can incorporate it into MATLAB. They can incorporate it into whatever technology they’re building.
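As a sketch of what consuming such a deployed program could look like, here is a plain-C call to an HTTP endpoint using libcurl. The URL and the JSON fields are made-up placeholders, not a real Horizon interface; the only point is that a compiled-and-deployed quantum program exposed as a standard API can be called from any stack, not just Python.

```c
/* Hypothetical client call to a deployed quantum program's API.
 * Build with: cc client.c -lcurl */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    /* Placeholder endpoint and problem parameters. */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/v1/jobs/price-option");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "{\"volatility\": 0.2, \"maturity\": 1.0}");

    CURLcode rc = curl_easy_perform(curl);   /* response body goes to stdout by default */
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```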

Yuval: How does the platform deal with hybrid algorithms where part is classical and part is quantum? Do you expect users to use your quantum version of BASIC to write hybrid algorithms as well?

Joe: What I would say is there are perhaps two different categories that you can fall into here. There are hybrid algorithms where you’re thinking about doing many different shots: you have some classical logic that is processing the statistics from the previous set of shots that were run, to determine the next circuit to run. These are the variational algorithms that we’re all very familiar with at this point, I would say. But if you think about things like error correction, you also need to think about classical processing that is happening concurrently with the quantum circuit. And that’s somewhat different, because your classical processing has to be able to feed back into the circuit. That means thinking about code that is running maybe locally on FPGAs rather than on a nearby GPU system or something like that. We think about both of these within our system and how it is built. There is a simple way to implement basic functions classically, concurrently with your quantum algorithm, but you’re also able to include classical code that computes classical functions and runs in a sandboxed way. That allows for the development of both types of algorithms, but importantly, it allows for the development of these more advanced algorithms where you need concurrent classical processing happening live with the quantum processing that’s going on, which is clearly something we need as we move to fault tolerance and to more complex quantum programs. Now, if you ask how you would do a variational eigensolver or QAOA or something like this, what I would tell you is that our system is really designed for programming the quantum backend. By the quantum backend, I mean the quantum processor itself, as well as any classical control that sits with it. It’s not intended for running large compute loads; it’s intended for very fast functions. So it’s intended for pure quantum backend development, but if you were developing a variational circuit, we have a way of specifying inputs to be read at call time. So what you would do is specify each of the parameters that can be varied within your circuit as an input, then compile and deploy that program with those inputs marked. And then your front-end code that’s implementing stochastic gradient descent, in whatever framework, technology, or hardware you want, calls this API in the background with the specified parameters. Now, I will freely admit our system has been built to target more structured algorithms. My view has always been that there’s unlikely to be a big advantage in NISQ, except perhaps for chemistry. Now, I could be proven wrong, and I am not saying that it is not worthwhile for people to be exploring variational algorithms. It’s just that, personally, I don’t think that’s the direction of the future, and we have never been working toward that goal. You don’t see Horizon QAOA or Horizon VQE implementations. That’s not our core competency.
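As an illustration of that split, here is a minimal sketch, in C, of a front-end optimization loop of the kind described: the quantum program is compiled and deployed once with its tunable parameters marked as inputs, and the optimizer simply calls the deployed endpoint with new parameter values each iteration. The function evaluate_deployed_circuit() is a hypothetical placeholder for that API call, stubbed here with a classical surrogate so the sketch runs; it is not an actual Horizon interface.

```c
/* Front-end optimizer sketch: finite-difference gradient descent over a
 * deployed, parameterized quantum program. The evaluation call is a
 * hypothetical placeholder stubbed with a classical surrogate. */
#include <stdio.h>

#define N_PARAMS 2

/* Placeholder for "call the deployed quantum program with these inputs
 * and return the estimated expectation value". */
static double evaluate_deployed_circuit(const double *theta)
{
    /* Classical surrogate: a smooth bowl with minimum at (0.3, -0.7). */
    double a = theta[0] - 0.3, b = theta[1] + 0.7;
    return a * a + b * b;
}

int main(void)
{
    double theta[N_PARAMS] = {1.0, 1.0};
    const double lr = 0.1, eps = 1e-4;

    for (int step = 0; step < 200; step++) {
        /* Finite-difference gradient: each probe is just another API call. */
        for (int i = 0; i < N_PARAMS; i++) {
            double saved = theta[i];
            theta[i] = saved + eps;
            double up = evaluate_deployed_circuit(theta);
            theta[i] = saved - eps;
            double down = evaluate_deployed_circuit(theta);
            theta[i] = saved - lr * (up - down) / (2.0 * eps);
        }
    }

    printf("theta = (%f, %f), value = %f\n",
           theta[0], theta[1], evaluate_deployed_circuit(theta));
    return 0;
}
```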

Yuval: You mentioned getting to quantum advantage, and some people talk about the quantum equivalent of the GPT moment, where all of a sudden it’s clear that there’s something there for general use. What is that going to look like in your opinion, and how soon will it come?

Joe: That’s a tricky question, and I don’t think anyone really knows the answer to it. What I think is pretty clear is that the first real advantage is likely to be seen in chemistry. There are numerous reasons why you would believe this, but one of them is just that chemistry looks a lot like what’s happening in a quantum computer: they’re both quantum mechanical systems, and they’re both obeying the same equations. You might also think that you can get away with a higher error rate for chemistry calculations if you can make the errors in the quantum computer look a little bit like the noisy environment that a molecule is experiencing. You may not need to cancel out noise; it may just be a process of shaping the noise to look like the natural world, because the natural world is just not that clean. And yet chemistry works even in the complex environments of the real world. So that’s why I think there will be a first narrow advantage for chemistry.

But for us, we also need to care about getting from that first narrow advantage to a broader-based advantage, where you start to see an advantage across a large number of applications. And I think a lot of that depends not just on hardware advances but also on advances in algorithms and advances in dev tools. You can, of course, speculate about what timelines look like, but what I would say that is maybe not so obvious is that we’re in an interesting time now where advances in hardware and advances in software can each independently lead to a real-world quantum advantage. We seem to have convincing arguments at this point that there are at least a small number of quantum processors that are hard to simulate. With that being the case, the fact that we cannot yet make use of them to do real work leaves a couple of possibilities. Either they’re not capable of doing real work, which could be because they’re actually easy to simulate, or, if they’re not simulable, then why can’t they do real work? Maybe that’s just a gap in our understanding. So advances on the algorithm side can help bring us closer. Some of that will be theoretical advances in algorithms, and some of that will be advances in compilation, where we are just getting better at harnessing those systems, getting better at echoing out the noise, getting better at taking the problem we care about and making it as small as we possibly can. At the same time, advances are happening in hardware, so you’ve got these two things going on in parallel. Over the last year, we’ve seen quite a few interesting demonstrations and quite a lot of progress around error correction and fault tolerance. It’s clear we’re getting much closer to the goal of seeing a real, proper demonstration of complete fault tolerance, where you’re doing useful computation, you’re getting performance that is improved compared to the physical qubits, and where that encoding, as it grows, suppresses error further and further. We’ve seen all of these components demonstrated individually, and in many cases, we’ve seen collections of them demonstrated together. So we’re just getting to the first real fault-tolerant quantum computers. You can see that starting to be on the horizon, even if they are only two-qubit or three-qubit systems at first.

Yuval: Going back to customers, when you go to customers today with the products that you’re releasing that are now available for early access, do you tell them, hey, you could run this? Have you done benchmarking? I mean, have you published benchmarking? Can you say, well, you can run this, this code is faster with our framework, or is it that this algorithm is much easier to program this way? Lots of customers use Qiskit, and so I think that’s sort of their frame of reference. How do you compare to what they’re using today?

Joe: So look, I’d say there are two different ways you can make comparisons. But I do not think the correct point of comparison for us is other quantum programming frameworks, because we’re enabling functionality that is just not possible within those frameworks. For a start, those are circuit frameworks: they generate circuits, the circuits run, and you get results back. They don’t generate Turing machines. They also don’t have the capability of compiling classical code. So there’s not really a place where I can compare our performance on C compilation to anything else. There have been a couple of demonstrations where you see people implementing a unitary for some small function written in Python or something like that, but those usually use cosine-sine decompositions or something like this, which are exponentially bad. To give you an idea of how well we’ve been doing in terms of code generated from C, we tried a couple of problems. We talked about this, I’d say, maybe two years ago at Q2B. You may remember that Goldman Sachs had a paper out on options pricing, I think in 2021. The hard part of that algorithm, the bottleneck when using Monte Carlo methods, is actually just a classical computation: a classical subroutine that computes e to the minus x. So there’s an analysis that they did in terms of how many T gates it needs, how many Toffoli gates it needs. There was a subsequent paper about a year later that showed improved results. We tried compiling this from about 15 lines of C. So we just implemented that inverse exponential in a fixed-point manner. Okay, there’s some boilerplate code as well, but it’s about 15 non-trivial lines of C. We compiled it through our system with our default settings. What we found was that we outperformed the code that had been in both papers by a large margin, in some parameter ranges by up to a factor of 112 in terms of the reduction in the number of T gates or Toffoli gates. So this is a really large difference in performance.
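For context, here is a minimal sketch of what a roughly 15-line fixed-point implementation of e^(-x) can look like in C. This is not the code from the Goldman Sachs papers or the code Horizon compiled; the Q16.16 format, the range reduction, and the short Taylor polynomial are illustrative choices.

```c
/* Illustrative fixed-point e^(-x): not the routine discussed above,
 * just an example of what ~15 lines of fixed-point C can look like. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t fix16_t;              /* Q16.16 fixed point */
#define FIX_ONE 0x00010000            /* 1.0 */
#define FIX_LN2 45426                 /* ln(2) ~= 0.693147 in Q16.16 */

static fix16_t fix_mul(fix16_t a, fix16_t b)
{
    return (fix16_t)(((int64_t)a * b) >> 16);
}

/* e^(-x) for x >= 0, via range reduction x = k*ln2 + r and a short
 * Taylor polynomial for e^(-r) on [0, ln2). */
static fix16_t fix_exp_neg(fix16_t x)
{
    int32_t k = x / FIX_LN2;                  /* integer part of x/ln2 */
    fix16_t r = x - k * FIX_LN2;              /* remainder in [0, ln2) */
    if (k >= 31) return 0;                    /* underflows to zero */

    /* Horner form of 1 - r + r^2/2 - r^3/6 + r^4/24 */
    fix16_t p = FIX_ONE - r / 4;
    p = FIX_ONE - fix_mul(r / 3, p);
    p = FIX_ONE - fix_mul(r / 2, p);
    p = FIX_ONE - fix_mul(r, p);

    return p >> k;                            /* multiply by 2^(-k) */
}

int main(void)
{
    for (double x = 0.0; x <= 4.0; x += 0.5) {
        fix16_t fx = (fix16_t)(x * FIX_ONE);
        printf("x = %.2f  exp(-x) ~= %.5f\n", x, fix_exp_neg(fx) / 65536.0);
    }
    return 0;
}
```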

For the level of precision involved, the actual number of gates is pretty close to the square root of the original gate count. So we’re clearly getting good performance out of this, but it’s also limited by how good your C algorithm is: good C code does better than bad C code. But that’s a classical problem. So there are a lot of trade-offs you can make, and some of these do extremely well. We have, for example, special constructions that we can use if we’re targeting low-depth circuits or low T-depth. So we use different constructions for different types of gates. If you’re using the routine as part of an oracle, then again, we use special constructions that take into account that the phase incurred on each of the computational basis states doesn’t matter, because you’re just going to compute this thing, do some controlled operations off it, and then uncompute it. So we’re trying to take into account these kinds of optimizations, and we’re trying to come up with the right kinds of structures for being able to actively reuse qubits, for example, uncomputing them, making use of them again, recomputing things, and so on. As you can imagine, that’s a fairly complicated process, but we’ve been getting good performance out of it.

In terms of what benchmarks will look like overall, as I say, we’re about to start early access, and we’ll start seeing some examples there of how this looks applied to real code. But what I can say is that there really aren’t good points of comparison among other quantum frameworks, because what we’re doing is quite different from what many of them are doing.

Yuval: And as we reach the end of our conversation, I wanted to ask you a hypothetical. If you could have dinner with one of the quantum greats, dead or alive, who would that person be?

Joe: So I’ve been really fortunate. I’ve worked in this area for a long time, so I’ve had dinner with some really impressive people over the years: with Artur Ekert, with Anton Zeilinger, with Frank Wilczek, with quite a few very eminent physicists. And I guess if I were allowed to pick someone from the past, maybe it would be Richard Feynman, not just because he was a great physicist, but because I think he’d be quite funny at dinner, and I appreciate that in a dinner companion. But if we’re looking at who it would be today, then I have two answers for you. One is whoever runs @QuantumMemeing on Twitter, because, again, I’d like a bit of humor with dinner; I enjoy that. And the other is Ewin Tang. I’m not sure if you know Ewin. I think she’s a postdoc at Berkeley at the moment, but she’s had a huge string of incredibly impressive results on the theory of quantum computing, particularly in relation to dequantization of machine learning algorithms. I think she thinks about quantum computing in a different way than I do, and I think I’d have something to gain from being more exposed to that. I’ve never met her before, so I think that would be beneficial.

Yuval: Excellent. Joe, thank you so much for joining me today. 

Joe: Sure, no problem. Thanks for having me. Thank you.

Yuval Boger is the chief marketing officer for QuEra, the leader in neutral-atom quantum computers. Known as the “Superposition Guy” as well as the original “Qubit Guy,” he can be reached on LinkedIn or at this email.

January 1, 2024

 
