Tech Briefs: What got you interested in this work?
Leila Bridgeman: I really enjoy doing math, so the prospect of applying it to important applications was very appealing to me. When I started my doctoral studies, I was interested in robust control theory and the use of conic sectors, which I thought were so much more mathematically general than the passivity-based control I was seeing in the literature. I found that the extra generality of conic sectors is very useful when you have time delays, especially when you want to do passivity-based control for networked systems.
Tech Briefs: What is passivity-based control?
Bridgeman: Passive systems can only dissipate or store energy but not create it. If you hook up two passive systems, whether in parallel or with negative feedback or with several different types of interconnections, the resulting network is automatically passive. It’s intuitively logical that if you hook up two systems, both of which are dissipating energy, then the whole network is going to dissipate energy as well. Since passivity means energy dissipation, it also means stability — if energy cannot be created in a system, it cannot sustain instability.
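In symbols, one common way to formalize this (a sketch; definitions vary slightly across the literature) is that a system with input u and output y is passive if the energy extracted through its input-output port can never exceed the energy initially stored:

```latex
% Passivity: extracted energy can never exceed the initial store (\beta \ge 0)
\int_0^T u^\top(t)\, y(t)\, dt \;\ge\; -\beta \qquad \text{for all } T \ge 0
```

If two subsystems each satisfy this inequality, standard arguments show that their negative-feedback interconnection satisfies it too, which is the network property described above.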
That's a hugely powerful framework because so many physical systems are passive. The basic electrical circuit elements, resistors, capacitors, and inductors, are passive. So is anything that follows Lagrangian dynamics, which includes most mechanical systems. For example, a robot arm where torques are the inputs and angular rates are the outputs is passive. Whether it's a satellite you're doing attitude control on, a boat, a vehicle, or a drone, if you are mapping torques to rates, those are all passive systems.
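As a quick worked check for the simplest such system, consider a single rigid body with moment of inertia J, torque input τ, and angular rate output ω (an illustrative derivation, not from the interview):

```latex
% Work done on the body equals the change in stored kinetic energy
\int_0^T \tau \,\omega \, dt
  = \int_0^T J \dot{\omega}\, \omega \, dt
  = \tfrac{1}{2} J \omega(T)^2 - \tfrac{1}{2} J \omega(0)^2
  \;\ge\; -\tfrac{1}{2} J \omega(0)^2
```

So the torque-to-rate map satisfies the passivity inequality above with β = ½Jω(0)²: the energy you can extract is bounded by what was stored at the start.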
As long as you have the right pairs of sensors and actuators, you can tell that a system is passive — you get the automatic passivity of subsystems. This is really important for the analysis of networked systems, because when you have a complicated interconnection of many agents interacting with one another, or a complicated circuit board with many subsystems, you want to be sure that you don't accidentally destabilize anything. But as long as you follow the rule that your interconnection structure preserves passivity, you can understand the stability of the entire complex architecture from understanding just the simpler subsystems.
Tech Briefs: What motivated you to look beyond passivity-based control?
Bridgeman: Passivity is not a cure-all: You can hook together two stable systems, and their interactions can destabilize each other. But as long as you hook them up so that each one can “perceive” that the other is passive, then their interconnection is going to be stable. If your sensors or actuators are imperfect, you’ll cease to perceive passivity. Also, for example, if I have a system and I'm giving it torques as inputs and I'm looking at rates as outputs, then I can tell that it's passive. But if I applied torque and looked at acceleration, I wouldn't perceive it as passive. Wireless interconnections, communication delays, and changing network structures obscure the passivity of the systems they connect. So, in those cases, you lose the assurance of stability.
I wanted to account for those issues, especially because more and more people want to implement controllers in the cloud to take advantage of its compute power, rather than relying on local hardware. The costs of that, however, are substantial communication delays within the wireless network.
I realized that conic sectors were a nice framework for dealing with the potential issues of wireless networks. Passivity is such a simple rule: you just make every subsystem passive and, if you restrict your interconnections, then everything works out very nicely. I was able to use conic sectors to quantify passivity violations in the context of two interacting systems.
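Roughly, in the classical formulation due to Zames (a sketch; Bridgeman's results generalize this considerably), a system with input u and output y is inside the conic sector [a, b] if

```latex
% Conic sector [a, b] with a < b: input-output pairs satisfy, roughly,
\int_0^T \big( y - a\,u \big)^{\!\top} \big( b\,u - y \big) \, dt \;\ge\; 0
  \qquad \text{for all } T \ge 0
```

Passivity is the special case [0, ∞]: divide the inequality by b and let b grow, and it reduces to the passivity inequality above. A system that slightly violates passivity, because of a delay for instance, can often still be placed in some finite cone, and stability of an interconnection can then be certified from the cones of its subsystems.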
We're thinking about large wireless systems, like a network of 100 drones. So, I wanted to figure out how to break down the problem of understanding the whole wireless network through understanding the subsystems, while accounting for factors such as time delays and digitization.
When I looked into the literature, what I saw was that one way people get stability assurances for a network is to restrict how things can interact with one another. They assume that every agent, each subsystem, is identical. They make assumptions about the linearity of the subsystems and require a constant structure for the interconnections of the network. I think those are overly restrictive requirements when we think of interacting drone networks.
I received an award from the Office of Naval Research (ONR) to create practical algorithms that take these factors into account.
Tech Briefs: Part of your project was to incorporate the concept of invariant sets. Can you explain that?
Bridgeman: The reason we're interested in invariant sets is because we want to design certifiably safe autonomy. From a state-space perspective, safety means that we establish allowable inputs that will result in outputs defined as safe.
If you think of driving a car, you want to respect the speed limit, stay on the road, and probably not hit anything. So, in the space of velocity and position, there's a set of allowable states. If you imagine that your car is driving toward a cliff, you're allowed to drive anywhere on the road exactly up to that cliff. And maybe your speed limit is 70 mph. So technically, you're allowed to be driving 70 mph in the direction of the cliff all the way up to its edge. But you have limits of actuation — your brakes can only brake so hard.
So, if you augment position by adding velocity, you shouldn't just implement a speed limit of 70 mph all the way up to the cliff. You should have kind of a line, such that, as you're about to hit the cliff, your speed limit is zero. Your speed limit could be reduced in proportion to how far you are from the edge of the cliff.
So, if we're trying to automate vehicle control, or any autonomous system, it's not just a matter of figuring out the set of allowable inputs and states, but also the set of actions and states that are allowable now and in the future — what we want is not just safety now, but safety in perpetuity.
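Here is what that looks like as a tiny invariant-set membership test for the cliff example, a minimal sketch with made-up numbers (the deceleration limit and the speed conversion are illustrative, not from the interview):

```python
import math

# Illustrative numbers (not from the interview): braking limit and speed cap.
A_MAX = 8.0     # maximum deceleration, m/s^2
V_LIMIT = 31.3  # roughly 70 mph, in m/s

def is_safe_forever(distance_to_edge: float, speed: float) -> bool:
    """Membership test for a simple invariant set for the cliff example.

    A state (d, v) is safe now AND in perpetuity if, braking as hard as
    possible, the car can stop before the edge: v^2 / (2 * A_MAX) <= d.
    """
    if distance_to_edge < 0.0 or speed < 0.0:
        return False
    stopping_distance = speed ** 2 / (2.0 * A_MAX)
    return speed <= V_LIMIT and stopping_distance <= distance_to_edge

def max_safe_speed(distance_to_edge: float) -> float:
    """The speed limit as a function of distance: it shrinks to zero at the edge."""
    return min(V_LIMIT, math.sqrt(2.0 * A_MAX * max(distance_to_edge, 0.0)))

print(is_safe_forever(200.0, 31.3))    # True: plenty of room to stop
print(is_safe_forever(20.0, 31.3))     # False: 70 mph, 20 m from the edge
print(round(max_safe_speed(20.0), 1))  # ~17.9 m/s: the reduced limit there
```

The safe set here is everything inside the braking envelope: any state in it can be steered so that all future states stay in it, which is exactly the "safety in perpetuity" idea.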
In any safety-critical system with given constraints, invariant sets are always the mathematics that underlies it. There are well-established algorithms that, given a mathematical model of the system dynamics, compute these invariant sets. But usually, the computations get really hairy if you have non-linear dynamics and non-convex sets, so dealing with them has been a longstanding problem.
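The best-known of those algorithms is a fixed-point iteration: start from the constraint set and repeatedly discard states that can leave it in one step. Below is a minimal, grid-based sketch for a toy linear system (an illustration of the general idea, not Bridgeman's method; practical implementations use polytopes or level sets rather than dense grids):

```python
import numpy as np

# Toy discrete-time dynamics x_{k+1} = A x_k (no control input, for brevity).
A = np.array([[0.9, 0.5],
              [-0.2, 0.9]])  # stable spiral: eigenvalues inside the unit circle

# Constraint set: the box |x1| <= 1, |x2| <= 1, sampled on a 41 x 41 grid.
xs = np.linspace(-1.0, 1.0, 41)
grid = np.array([[x1, x2] for x1 in xs for x2 in xs])

# Fixed-point iteration: S_{k+1} = {x in S_k : f(x) lands back in S_k},
# approximating "lands back in S_k" by proximity to a retained grid point.
candidate = grid.copy()
tol = 0.06  # just over the grid spacing of 0.05
while len(candidate) > 0:
    images = candidate @ A.T  # f(x) for every retained point
    inside_box = np.all(np.abs(images) <= 1.0, axis=1)
    dists = np.linalg.norm(images[:, None, :] - candidate[None, :, :], axis=2)
    keep = inside_box & (dists.min(axis=1) <= tol)
    if keep.all():
        break  # nothing was removed, so the set is (approximately) invariant
    candidate = candidate[keep]

print(f"retained {len(candidate)} of {len(grid)} grid points")
```

Grids like this blow up quickly with the number of states, which is one reason the six-plus-state problems mentioned later are so hard.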
With linear dynamics, for example, approaching that cliff, the speed you're allowed is proportional to your distance from its edge. However, even if you have a really simple, nice description of the allowable region, under non-linear dynamics the allowable invariant set can become a crazy, weird shape that is hard to do computations on, hard to find, hard to store, hard to do anything with. And even in the model-based world, the algorithms that search for those invariant sets often come with no guarantee that they'll find an invariant set, or even an approximation of one. Now, with the whole world obsessed with AI and data, it seems to me that there's less and less willingness to build up mathematical models of the systems we're trying to engineer and control.
In part, I think it's because we're trying to develop insanely complex systems for which it is very difficult to build physics-based, first-principles models, and I understand that. But the only option that leaves us is to collect data, observations of how the system responds to stimuli. That will give us the constraints our system has to adhere to, but we no longer have a model. So, if we're trying to do learned, data-driven control, as everyone is, and we want safety assurances, we have to be able to construct invariant sets based on data alone, without constructing a mathematical model. To me, this is one of the most important things we could be doing right now in control theory.
People are implementing learned controllers on physical systems, which terrifies me because we do not have a mathematical framework to certify their safety. I think that they're doing a lot of kludges and conservative workarounds to try and get safety into autonomous vehicles. But we really don't know what we're doing in this area. So that's what made me want to do this research.
I just organized, with some of my colleagues, a workshop on this data-driven invariance at the IEEE Conference on Decision and Control. There's been a lot of progress; I and others have found that this data-driven perspective allows us to potentially certify learned controllers. I think that there is a future horizon where we can have some safety certification in this autonomy, which also gets around a lot of the traditional roadblocks such as non-linear dynamics. To do this data-driven invariance, you have to try and infer what happens in a little region around your data point, based on the information you have.
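As a toy version of that inference step, suppose you are willing to assume a Lipschitz bound L on the unknown dynamics (my stand-in for the regularity assumptions this literature actually uses). Each recorded transition then certifies a whole neighborhood of states, and membership in the safe set can be checked from data alone:

```python
import numpy as np

# Observed transitions (x_i, f(x_i)) of an UNKNOWN map f. For the demo they
# secretly come from a contraction, but the certificate below never
# evaluates f itself; it only uses the recorded data.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(400, 2))  # sampled states
X_next = 0.8 * X                           # observed successor states

L = 0.8   # assumed Lipschitz bound on f; this must come from prior knowledge
r = 0.15  # radius of the neighborhood we try to certify around each sample

def in_set(x, margin=0.0):
    """Candidate safe set S: the box |x| <= 1, shrunk inward by `margin`."""
    return bool(np.all(np.abs(x) <= 1.0 - margin))

# If ||x - x_i|| <= r, Lipschitz continuity gives ||f(x) - f(x_i)|| <= L * r,
# so f(x) lies in the ball of radius L * r around the OBSERVED successor.
# If that ball is inside S, every state near x_i provably maps back into S,
# without ever knowing f. (A full algorithm must also make these certified
# neighborhoods cover all of S.)
certified = [i for i in range(len(X))
             if in_set(X[i]) and in_set(X_next[i], margin=L * r)]

print(f"certified {len(certified)} of {len(X)} sample neighborhoods")
```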
It's created a new landscape of algorithms for finding invariant sets, and then for using them to build safety into controllers and establish certification.
So, in my group, we've been looking at a number of different algorithms and trying to establish different perspectives on how to find invariant sets. We need to be exploring every avenue, since I don't know which class of algorithms is going to be best.
Tech Briefs: How do you judge which is the best?
Bridgeman: Yeah, that's a great question. At the annual National Science Foundation (NSF) program meeting for Dynamics, Controls and Systems Diagnostics (DCSD), they were talking about that. One of the conclusions was that we've been pretty bad at quantifying what makes a good controller; what we should use as metrics of goodness is an open question. However, my colleagues and I have been talking a lot about benchmarking to identify different aspects of what can be good or bad.
First of all, a certified invariant set is the region where we know we can operate safely. A worse algorithm might only manage to certify a small region, where a better algorithm would certify a larger one. So, the size of the region you can certify as safe for a given problem is one marker of how good an algorithm is. Another is how long the computation takes, and how many data points you need to establish a region of a given size as safe to operate in.
The other thing is how badly behaved a system you can work with. What's really pernicious for invariant sets are soft-landing problems, where, for example, you have a rocket and you have to land it perfectly on the ground. The difficult part is that the equilibrium point of your closed-loop system is exactly on the boundary of what's feasible. If you overshoot as you're trying to land, you crash the vehicle, so you need really high precision, and approximation error is really damaging there.
We've also thought of dynamics coming from a Julia map. Because its invariant sets are fractals, they have really complex boundaries. So, the question is how do we capture a complex boundary? If we have chaotic dynamics, how do we deal with that? If you have limit cycles where your system doesn't converge to an equilibrium point, but orbits — that is really tricky. And then also, if you don't know where your equilibria are, and you just want to say, here are my requirements, go and give me something — those are typically really tough problems.
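To make the Julia-map benchmark concrete: the filled Julia set of the map z ↦ z² + c is exactly the set of starting points whose orbits stay bounded, an invariant set with a fractal boundary. A standard escape-time sketch (an illustration; the constant c is a common textbook choice, not one from the interview):

```python
# Escape-time test for the filled Julia set of z -> z^2 + c: the set of
# seeds whose orbit never escapes. Its boundary is a fractal, which is
# exactly what makes such invariant sets hard to represent and compute.
C = -0.8 + 0.156j  # a standard choice of c with an intricate Julia set

def stays_bounded(z: complex, max_iter: int = 200) -> bool:
    """True if the orbit of z has not escaped after max_iter steps.

    Once |z| > 2 the orbit provably diverges, so we can stop early.
    (A finite iteration count only approximates true invariance.)
    """
    for _ in range(max_iter):
        if abs(z) > 2.0:
            return False
        z = z * z + C
    return True

# Coarse ASCII rendering of the invariant set over [-1.6, 1.6] x [-1, 1].
for row in range(21):
    y = 1.0 - row * 0.1
    line = "".join("#" if stays_bounded(complex(-1.6 + col * 0.05, y)) else "."
                   for col in range(65))
    print(line)
```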
So, computation time, memory usage, the complexity of your characterization and algorithms you use, and also the size of the invariant set, would be our metrics for comparison.
But I think the real questions include: Can you ensure a safe soft landing? Can you deal with limit cycles? Also, when there are more than six states, computation becomes really difficult; robotic systems, for example, usually have many joints and arms, and a drone might have 12 states. So, progress on those classic problems might be the most important metric for comparing algorithms.
Tech Briefs: So, if I were to make a simple sentence, describing what you're doing, would I be correct in saying you're trying to take a generalizable mathematical approach to solving concrete system problems?
Bridgeman: Yes, I would say so. And I specialize in robust control and safety-assured control because if I'm making engineering systems that can kill people, making them safe is the best thing I can be doing.
I’m also now working on a practical project with Dr. Patrick J. Codd, who has a joint appointment in the engineering and medical schools, trying to develop a robot that can autonomously ablate tumor tissue. It basically takes a laser and burns away the tumor tissue. We're trying to make practical algorithms that precisely remove the tumor while removing as little surrounding healthy tissue as possible. Typically, with cancer resection, there's the estimated area of the tumor and then there's a margin around it that they also want to resect because there's a continuum from tumor to healthy cells.
My colleague works on brain tumors — there you really want to have those margins quite precise. Sometimes you can't just resect the tumor plus a bit because there will be critical nerve structures and veins that you could damage, causing people to lose brain function. Getting higher precision on those things would mean that you resect more tumor while harming less tissue. That could achieve real improvements in survival rates and subsequent recovery and return of functionality. But uncertainty is a problem because there's very large variability in the ablation properties of tumor tissue. There is a lot of patient-to-patient variability, and all you really have is data-driven estimates. And most of the ablation models in the literature — the mathematical ones — are not very reliable. The ones that are kind of reliable can take minutes or even hours to predict seconds of evolution, which means that they can't really be incorporated into control algorithms.
So, I see that as an application where it's going to be really important to have data-driven controllers that incorporate robustness and critical safety constraints. We've submitted a paper that, if it gets accepted, will be the first in the literature on using a robot to autonomously resect a volume of tissue.

