This photo shows the central working region of the device. In the lower section, the three large rectangles (light blue) are the two quantum bits, or qubits (right and left), and the resonator between them (center). In the upper, magnified section, driving microwaves through the antenna (large dark-blue rectangle at bottom) induces a magnetic field in the SQUID loop (smaller white square at center, whose sides are about 20 micrometers long). The magnetic field activates the toggle switch. The microwaves’ frequency and magnitude determine the switch’s position and the strength of the connection among the qubits and resonator. (Image: K. Cicak and R. Simmonds/NIST)

What good is a powerful computer if you can’t read its output or readily reprogram it to perform different tasks? Quantum-computer designers are constantly faced with these — and many other — challenges, but a new device from a team at the National Institute of Standards and Technology (NIST) may make them easier to solve.

The device includes two superconducting quantum bits — qubits — which are a quantum computer’s analog to the logic bits in a classical computer’s processing chip. At the heart of the new method is a “toggle switch” device that connects the qubits to a circuit called a “readout resonator,” which can read the output of the qubits’ calculations.

Ray Simmonds, a NIST physicist and one of the paper’s authors, sat down for an exclusive Tech Briefs interview, which you can read — edited for length and clarity — below.

Tech Briefs: What was the catalyst or inspiration for your work?

Simmonds: The inspiration was that a lot of the current architectures out there have focused on trying to scale up the number of qubits. Before everything scaled up, we worked on some of the very first examples of coupling two qubits and having them interact, which is the precursor to being able to do gates and to communication between the two quantum bits.

The original way we did that was to just take two pieces of wire and put them near each other so that they form a capacitor and the electrons on either side of these plates can interact and feel each other. In that way, the two qubits can exchange energy between each other, which allows the information to sort of slosh from one to the other.
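
That sloshing can be sketched in a few lines. The following is a toy model of mine with made-up numbers, not the NIST device: two resonant qubits share a single excitation through a fixed, always-on coupling, and the probability of finding the excitation swings fully from one qubit to the other.

```python
import numpy as np

# Toy model (illustrative numbers, not the NIST device): two resonant
# qubits share one excitation through a fixed capacitive-style coupling g.
g = 2 * np.pi * 10e6           # assumed coupling strength: 10 MHz
t = np.linspace(0, 100e-9, 5)  # snapshots over 100 nanoseconds

# On resonance the amplitudes obey c1' = -i*g*c2 and c2' = -i*g*c1, so an
# excitation that starts on qubit 1 sloshes back and forth:
p1 = np.cos(g * t) ** 2        # probability the excitation is on qubit 1
p2 = np.sin(g * t) ** 2        # probability it has moved to qubit 2

for ti, a, b in zip(t, p1, p2):
    print(f"t = {ti * 1e9:5.1f} ns   P(qubit 1) = {a:.2f}   P(qubit 2) = {b:.2f}")
```

With the assumed 10 MHz coupling, the excitation swaps completely every 25 nanoseconds. Notice that nothing in this model ever turns the exchange off, which is exactly the problem Simmonds describes next.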

In the early days, that was the easiest way to get them to interact with each other. We knew that was great … but, in fact, in the very first experiments, the coupling didn’t turn on and off. We would start off with everything just sitting there and we'd throw energy into one of the qubits … and then we would start seeing stuff happen. It was totally uncontrolled, basically.

That was great, but it’s not a switch that you can turn on and off. It's like having a hose that's open — you let water through it, and it flies through. What you really need to do is have a valve in the middle. That was the motivation. A lot of the circuits that are out there today are wired up in a way where they're statically connected. So, we thought we really need to come up with a way to make a switch. We needed some way to have interactions turn on and turn off at will.

We’re kind of thinking farther ahead, saying, ‘How do we build an architecture that has a bunch of switches between everything, so you can tailor-make it the way you like?’ And the other thing is that each of the switches should be software programmable; you send a pulse down and that turns these switches on and off. That means the connections between things are not just fixed on the chip.

Field-programmable gate arrays, or FPGAs, are chips that do computing with a ton of internal transistor logic gates. But with software you can go in and program the switches to get different gate arrangements for all kinds of different functions. You can take one chip and effectively turn it into many different processors. This was great for the chip makers because now they could make one chip that did tons of different things: you could sell one to an oscilloscope company, and then somebody else might use it for driving a screen or any of a vast number of applications. So, that makes it really flexible. In a way, we were trying to do something similar with the quantum computing architecture — to make something that has more connections that are software programmable.

This idea has been around for a while. People have come up with different ways of trying to make tunable coupling, but I think this one is a little different. For example, Google does use a coupler between their two qubits, which allows them to turn the coupling between them on and off, but it’s a fixed structure between the two qubits, and it’s only between those two. So, it will turn on and off, but you can’t change the arrangement of how the couplings work. It’s physically placed between those two things, and it’ll just turn something on and off, and that’s it.

Tech Briefs: Can you explain in simple terms how the technology works? What are the advantages?

Simmonds: The advantages are similar to regular computer chips. For example, a chip company will make one device and then many different people can use it because they can reprogram it to do whatever task they want. People are trying to work toward making a universal quantum computer, which works with gates kind of like a classical computer.

The hard part about making a universal quantum computer is that there are a lot of errors right now in our hardware. And correcting those errors is not simple. It's not quite as easy as a regular computer because, in a regular computer, you can simply go ask your bits “what's your value.” By knowing the value, you knew something was supposed to stay a one and it dropped to a zero, you can just return it back to being a one, no problem. And it's easy to know the value; that's not a big deal.

With quantum information, you can’t actually know what the bit values are. If you do, you get rid of the quantumness. So, you need to know that a switch flipped, that your bit flipped or your state changed, without ever learning what the state was.

I liken it to this: Imagine you’re in your office, there’s someone in the office next to you, and you hear their light switch flick. You know the state of the room changed, but you don’t know whether the light turned on or off; you only know that you heard it flip. Quantum error correction is kind of like that. You must determine that something changed, and what changed, without knowing what the state of the switch was. You don’t know if it’s on or off, but you know that it changed. And then you can correct it: you can flip it back to wherever you think it was. You don’t know what it is, but you know it flipped, so you just flip it again. That’s the weird situation you’re in.
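
That light-switch logic can be demonstrated numerically. Below is a toy sketch of mine, a three-qubit bit-flip repetition code rather than anything from the NIST paper: two parity checks report which qubit flipped (you "hear the switch") while the encoded amplitudes alpha and beta are never read out.

```python
import numpy as np

def x_on(state, k):
    """Apply a bit flip (Pauli X) to qubit k of a 3-qubit statevector."""
    out = np.empty_like(state)
    for b in range(8):
        out[b ^ (1 << k)] = state[b]
    return out

def parity(state, i, j):
    """Deterministic Z_i Z_j stabilizer outcome: +1 (agree) or -1 (disagree)."""
    val = sum(abs(a) ** 2 * (-1) ** (((b >> i) & 1) ^ ((b >> j) & 1))
              for b, a in enumerate(state))
    return round(val)

# Encode an unknown qubit alpha|0> + beta|1> as alpha|000> + beta|111>.
alpha, beta = 0.6, 0.8  # illustrative values; the checks never reveal them
logical = np.zeros(8, dtype=complex)
logical[0b000], logical[0b111] = alpha, beta

corrupted = x_on(logical, 1)   # a stray flip hits the middle qubit

s01 = parity(corrupted, 0, 1)  # do qubits 0 and 1 agree?
s12 = parity(corrupted, 1, 2)  # do qubits 1 and 2 agree?

# The syndrome pattern pinpoints the flipped qubit...
flipped = {(+1, +1): None, (-1, +1): 0, (-1, -1): 1, (+1, -1): 2}[(s01, s12)]
recovered = x_on(corrupted, flipped) if flipped is not None else corrupted

# ...and flipping it back restores the state, whatever alpha and beta were.
assert np.allclose(recovered, logical)
print(f"syndrome ({s01}, {s12}) -> flip on qubit {flipped}; state restored")
```

The syndrome pair identifies the flipped qubit purely from agreements and disagreements between neighbors, so the correction works for any encoded state, which is the whole point of the analogy.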

Quantum error correction's really hard to do, and that's what makes it difficult to make a universal computer out of it. That's the thing everyone wants, but it's probably the most difficult thing.

Another way of making something that works well but can handle having errors is with something called an analog simulator. In the late ’40s, ’50s, and maybe even into the early ’60s, there were analog computers, where they'd have all these gears and spools, and everything was done in a continuous fashion. So, in a way, a quantum simulator is kind of like that. It's an analog processor. You try to cook up something that operates or behaves in a similar way to the system that you want to study.

Then it'll have errors and things go wrong, which makes it a little noisy, but the behavior should sort of follow the thing that you're trying to emulate. In that way you can try to get a quantum system working that's not necessarily a universal computer, but it's going to be wired up in a certain way to solve a certain problem. So, if you have the technology that we're trying to develop, where you can have programmable connections, maybe you can set up your hardware to do a particular type of simulation. You could run it and get something out. Then if you want to do a different kind of simulation, you can reconfigure the hardware and try another simulation. You don't necessarily have to completely rebuild all new chips for that setup.

Just like the example of chip manufacturers, this would be for a quantum purpose: you can reorganize the hardware in a way that will simulate something you like, while the device stays flexible. And then, ultimately, if you could make a universal quantum computer, you would maximize the number of connections and build the architecture to be very flexible, because it's not really clear yet what the best way is to build this thing, or what's the best way to connect all the qubits together. So that's another piece.

Tech Briefs: How long is it going to be until we have a practical working quantum computer?

Simmonds: Maybe 20 years; I don't know. And again, this is an opinion. When you say, ‘practical working quantum computer,’ I'm thinking of a universal quantum computer where it can be programmed to do pretty much anything. I'm hoping that within 5-10 years there might be a quantum simulator that could show it can do a calculation better than, say, any classical algorithm.

If we could get a quantum simulator to perform better than any classical computer, that's still a huge advantage. Again, it's one of those things where it may be a very particular problem, not universal, not everything, but that would still be a big win. It would show the power of quantum computing that people keep talking about when they say, ‘Look at all the great things it's going to be able to do.’ But to have it actually do these things is very difficult.

Obviously, if you have a universal one, you can solve anything you like, you can just keep trying different things. But if we pick certain difficult problems that we can simulate, say producing certain chemicals or something like that, and we can do that in a simulator, and do those calculations faster than any other way, then we’ve already shown a benefit — even before we get to error correction, which is extremely difficult.

The reason I give the 20-year number is that right now we're trying to figure out the best tools to make a quantum computer, but it's not obvious what the winner will be, the best way to do these things. So, progress is slow — it's one of those things where if you get a major, major breakthrough, then, boom, suddenly everything just takes off. We've had little breakthroughs that have definitely pushed things. But we're still pretty far from where we need to be to pull off full error correction and make a real computer that's going to do anything we want.

Tech Briefs: Are you hoping that your work will contribute to speeding up the time when we see one?

Simmonds: I would say yes. With all these other pieces, if we've created an architecture that we can expand, and we're able to produce a quantum simulator sooner, something more flexible sooner, and learn faster, then we do think we may be able to speed things up with these techniques, which we call parametric coupling. We believe this strategy really has legs and can be used in many, many ways.

Tech Briefs: What were the biggest technical challenges faced throughout your research?

Simmonds: The challenge is learning how to simulate what you're going to make, like learning how to predict what it's going to do when you have a circuit layout. That is a technical challenge: How do you lay out the circuits properly and model what you think they're going to do so that they actually come out properly? That's tricky.

When you think of engineers, say they build a bridge, they draw the thing up and then they have to check if all the stresses are right. Will the thing collapse? Will it be strong enough? In a way, we have to do something similar. We have to lay out what the circuit looks like and whether we think it'll actually perform the way we think it should. So, that's one of the technical challenges. But we are getting better at the computer modeling, the classical modeling of how the circuit should react, to predict what it's going to do.

The other technical challenge, I would say, is making it and having those parameters that you tried to set, having them come out properly. One of the biggest problems for superconducting qubits in general is that they have defects inside them.

Sometimes when you make your circuit, it just works poorly. The layout is good, your parameters might not even be bad, so things kind of work the way they should, but they're noisy and there are extra defects in the materials that make them not operate the way they're supposed to. They're like little rogue qubits inside your circuit. They're also quantum mechanical, and their states can also change and move around. You didn't plan on them being there. It's like having more qubits around than you expected, and they interact with the ones that you made. That is a big challenge right now.

There’s a bigger program to try to fix those problems by making better materials and coming up with better ways to make things. But one thing we can do is warm up the refrigerator and then cool it back down again, and sometimes everything works; those little defects have sort of moved out of the way. It’s a tricky problem. Our circuits operate at 0.01 kelvin, while space, for example, is about 3 kelvin, so it’s super-duper cold in that fridge. In a regular computer, you turn on your chip and it either works or it doesn't work.

Tech Briefs: What are your next steps, short-term and long-term?

Simmonds: Well, the short-term goal is pretty simple. In this example we only had three elements: two qubits and a resonator in the middle. The resonator is the thing that helps you read out the two qubits. So, we were using one of the elements to ask: ‘What are the two qubits doing? What are your states?’ The obvious short-term goal is to add more elements. Can we do three qubits? Four qubits?

The longer-term goal is: How many can we put together? And then how do we add more and more and more so we can build up a whole architecture for either a simulator or a computer? What's the best way to do that, and how do you do it well without losing the benefits of the switches we have just created? If we start adding more things, does the whole switch network start to get dirty and muddied and stop working as well? Or can we maintain this great ability to switch things while adding more elements?

That's the scaling problem: As you add more, does it get way harder? Does everything start getting messed up, or does it maintain the quality you had on a small scale, so you can just make it bigger, get more and more, and everything's great?

Tech Briefs: Can you explain in simple terms how the technology works?

Simmonds: We use something called a parametric interaction. What that means is you take a parameter of the circuit, and you have a control knob that you basically turn up and down. You modulate that parameter: you can modulate how hard you change it up and down, and you can modulate the frequency at which you change it. What that does is kind of like a kid on a swing. If a kid is swinging back and forth and they wiggle their legs at twice the frequency they're swinging, that will pump up the energy in their motion. They amplify their motion. So, there are two knobs: they have to get the right frequency, and the amplitude of how hard they pump determines how fast they get themselves to swing higher and higher.
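
The swing analogy can be checked in a few lines. Here is a toy classical model (my own sketch with made-up numbers, not the team's circuit equations): modulate an oscillator's spring constant, its "parameter," at twice the natural frequency and the amplitude grows; pump at an off-resonant frequency and almost nothing happens.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy classical model of parametric pumping (illustrative numbers): wiggle
# the oscillator's spring constant -- the "parameter" -- at twice the
# natural frequency, like the kid pumping their legs on the swing.
omega = 2 * np.pi   # natural frequency: a 1 Hz "swing"
depth = 0.10        # how hard the parameter is wiggled

def swing(t, y, pump_freq):
    x, v = y
    w2 = omega**2 * (1 + depth * np.sin(pump_freq * t))  # modulated parameter
    return [v, -w2 * x]

t_eval = np.linspace(0, 30, 3000)
for pump_freq in (2 * omega, 1.3 * omega):  # resonant vs. off-resonant pump
    sol = solve_ivp(swing, (0, 30), [0.01, 0.0], args=(pump_freq,),
                    t_eval=t_eval, rtol=1e-8, atol=1e-12)
    growth = np.abs(sol.y[0]).max() / 0.01
    print(f"pump at {pump_freq / omega:.1f}x the swing frequency: "
          f"amplitude grew {growth:.0f}x")
```

Both of the speaker's knobs appear here: pump_freq sets whether the pumping is resonant at all, and depth sets how fast the amplitude builds once it is.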

What we're doing is a parametric process that most people don't hear much about, called frequency conversion. If you have two people swinging, each at their own frequency, and they're coupled to each other … when one swings, it will give its energy to the other; then one will stop, the other one swings the most, and then it comes back again. That's a coupled pendulum. When they're at the same frequency, they're basically sending energy back and forth.

What we have is something a little different. If you take a pendulum that's at one frequency and then a much shorter pendulum that's at a different frequency, they don’t exchange energy anymore. They're not at the same frequency.

But if you have a way to jiggle the connection between them at the frequency difference, then you basically make up for the energy that you need to get from one to the other and they'll start talking to each other again. And that's called frequency conversion, and that's essentially what we've made in an electrical element. We've made an electrical element that we can pump and push at a given frequency and amplitude, which allows the two qubits to actually exchange energy again and talk to each other.
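
Here is that pendulum picture as a runnable toy model (again, my own illustrative numbers rather than the device parameters): two detuned oscillators connected by a static coupling barely exchange anything, but jiggling the coupling at the difference of their frequencies lets an excitation transfer completely. The script compares "quanta" (energy divided by frequency) because, as Simmonds says, the pump itself makes up the energy difference between the two frequencies.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model of parametric frequency conversion (illustrative numbers, not
# the device parameters): two pendulums at DIFFERENT frequencies, with the
# coupling between them jiggled at the difference frequency.
w1, w2 = 2 * np.pi * 1.0, 2 * np.pi * 1.5  # two detuned pendulums
k0 = 3.0                                   # coupling strength

def pair(t, y, jiggle):
    x1, v1, x2, v2 = y
    # Static coupling vs. coupling modulated at the difference frequency:
    k = k0 * np.cos((w2 - w1) * t) if jiggle else k0
    return [v1, -w1**2 * x1 + k * x2,
            v2, -w2**2 * x2 + k * x1]

t_eval = np.linspace(0, 35, 7000)
for jiggle in (False, True):
    sol = solve_ivp(pair, (0, 35), [1.0, 0.0, 0.0, 0.0], args=(jiggle,),
                    t_eval=t_eval, rtol=1e-9)
    # Quanta (action) in pendulum 2, relative to what pendulum 1 started with:
    quanta2 = 0.5 * (sol.y[3]**2 + w2**2 * sol.y[2]**2) / w2
    quanta0 = 0.5 * w1**2 / w1
    label = "jiggled at the difference frequency" if jiggle else "held static"
    print(f"coupling {label}: peak transfer = {quanta2.max() / quanta0:.0%}")
```

In the device, per the photo caption above, the microwave pump's frequency and amplitude play the roles of the modulation frequency and strength in this sketch.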

I think the field’s really exciting, and I guess my last comment would be that even though quantum computing is super hard to make happen, and it could take 20 years, it has been theoretically proven that its capabilities are beyond anything we can possibly do with a regular computer — even with an AI computer.

If you have exponentially growing computational power, I don't think anything can match that. So, even if it takes 50 years it could be worth it, if you're doing something that's literally impossible to do with any other piece of technology. I think that's why people are so excited about this — it can essentially do the impossible.