Dr. David Lary of the University of Texas at Dallas is leading a research group that has developed a team of autonomous robotic devices that can be used at hazardous or difficult-to-reach sites to make surveys and collect data — providing more and faster insights than human beings are able to deliver.

Tech Briefs: What inspired you to use multiple autonomous devices to collect holistic sets of environmental data?

Dr. David Lary: Well, there are two parts to that journey. The first is the passion that drives me: I deeply want comprehensive, holistic sensing that keeps people out of harm's way and provides the actionable insights needed to make timely decisions. That is my motivation, but my actual journey to this point started, well, nearly 30 years ago.

When I did my PhD at Cambridge, the person who discovered the ozone hole, Joe Farman, was working close by. For my PhD I developed the first three-dimensional global model of ozone depletion. It was a chemical module that plugged into the global model used by the European Centre for Medium-Range Weather Forecasts, and with that plug-in I could do global simulations of ozone-related chemistry. So, the obvious question I wanted to ask was: how good is this model? To verify it I had to bring together as many data sources as I could: satellites, aircraft, ground-based sensors, and balloons. One of the pernicious things that I faced was inter-instrument bias, so I was looking for a way to deal with these biases. Although this was 30 years ago, I came across machine learning quite by chance. This was before it hit the big-time adoption that it has today, and I found it did a really good job. That started me looking into what else we could do with it. Along the way, we were the first people to develop chemical data assimilation, which is now used by agencies around the world as part of their air quality forecasting systems.

Aerial robots developed by UT Dallas researchers can carry several cameras, an array of onboard sensors and a downwelling irradiance spectrometer — all used to gather data for mapping and to learn the characteristics of environments. (Image Credit: Lakitha Wijeratne/UT Dallas physics doctoral student)

One of the things we do in data assimilation is pay a lot of attention to uncertainties. Part of my work at NASA was creating global remote-sensing data products. The way that works is, you use the remote-sensing information to create a data product on, say, the composition of the atmosphere, the land surface, or the water in the global oceans. You take the remote-sensing data from the satellite and compare it to the in-situ ground truth. Typically, collecting the training data to do that can take up to a decade or so. It is quite a task, because you want to sample as many different conditions and contexts as you are likely to encounter globally.

Our autonomous robotic team, in about 15 minutes, collected the same data volume as is typically used to create those satellite-based remote-sensing data products, albeit for one location. So, it could go into a new environment, one it has not seen before, and rapidly make precisely coordinated observations. In this case, the team was a boat and an aerial vehicle. We chose the boat because water access makes that a bit more challenging than a ground-based measurement.

This type of paradigm is useful not only for rapidly creating new data products and for the calibration and validation of satellite observations, but also for helping to keep people out of harm's way. If you have a very contaminated environment, or an environment in which there are threats to the humans who enter it, the robot team can go there and collect the appropriate data in a coordinated way.

The study in our paper used an aerial vehicle with a hyperspectral imager that collects a huge volume of data very rapidly. So, even our fastest data pipes right now, maybe 5G cellular communication, are not fast enough to handle the bandwidth required for streaming hyperspectral images. We address that problem with on-board processing that lets us create these data products on the fly, using on-board machine learning, and then stream them. The final data product, say the abundance of a contaminant, is a much smaller amount of data that we can easily stream in real time.
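To make that concrete, here is a minimal sketch of the kind of on-board edge processing described above: each hyperspectral cube is reduced to a small derived product map before anything is streamed. This is an illustrative Python example under stated assumptions, not the team's actual pipeline; the placeholder `model` stands in for whatever trained on-board estimator is used.

```python
import numpy as np

def cube_to_product_map(cube: np.ndarray, model) -> np.ndarray:
    """Reduce a (rows, cols, bands) hyperspectral cube to a single-band
    product map, e.g. estimated contaminant abundance per pixel."""
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands)      # one spectrum per pixel
    product = model.predict(spectra)       # tiny output: one value per pixel
    return product.reshape(rows, cols)

# Rough bandwidth illustration: a 1000 x 1000 pixel cube with 300 float32 bands
# is about 1.2 GB, while the derived single-band product map is about 4 MB,
# which is easy to stream in real time over a cellular link.
```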

So, it's really the ability to quickly gather comprehensive data that could be used to keep people out of harm's way, to characterize ecosystems, or to be part of an emergency response, say after a hurricane has inundated a neighborhood near a chemical plant, or any number of such applications: harmful algal blooms, oil spills, or various agricultural uses.

It's designed to be a flexible set of components. Just as we're now used to having an app store on our phones, or Tesla cars receiving over-the-air updates, these are software-defined sensors with their own app store that can be updated to improve their capabilities over time.

Tech Briefs: You have flying sensors and ground sensors and you send the information back to where? How is all that processed? Where is it processed?

Dr. Lary: Think of this as an ensemble of smart sensors. There's a set of things. First, there's the software-defined sensor: a smart sensing package that combines a physical sensing system (say a camera, a hyperspectral camera, a thermal camera, a mass spectrometer, or any other sensing device) with some software and machine learning wrapped around it, which then provides calibrated and/or derived data products. Most sensors will need calibration of some kind.

By coupling the sensor with a software/machine-learning wrapper we can do a complicated calibration that gives us a much more flexible system. That software-defined sensor can also have an app store of its own. One or more of these software-defined sensors can sit on a platform that provides the sensor with power, time and location stamps for all the data it produces, communication connectivity, and, where relevant, mobility.
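As a rough illustration of that idea, here is a hedged Python sketch of a software-defined sensor: a raw sensing device wrapped with a machine-learning calibration learned against a co-located reference instrument. The class name, method names, and choice of regressor are assumptions for illustration, not the team's actual software.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class SoftwareDefinedSensor:
    """A physical sensor plus a learned calibration wrapper (illustrative)."""

    def __init__(self, read_raw):
        self.read_raw = read_raw                    # callable returning raw reading(s)
        self.calibration = RandomForestRegressor()  # learned raw -> reference mapping
        self.trained = False

    def calibrate(self, raw_samples: np.ndarray, reference_values: np.ndarray):
        """Learn the mapping from raw readings to a trusted reference instrument."""
        self.calibration.fit(raw_samples, reference_values)
        self.trained = True

    def read(self):
        """Return a calibrated, timestamped measurement."""
        raw = np.atleast_2d(self.read_raw())
        value = float(self.calibration.predict(raw)[0]) if self.trained else float(raw.ravel()[0])
        return {"value": value, "timestamp": np.datetime64("now")}
```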

Tech Briefs: So, this is a physical platform?

Dr. Lary: Yes — in this example we had two platforms. We had the robotic aerial vehicle with the hyperspectral camera, the thermal camera, and a few other sensors on board. The second platform was the robotic boat, which had a whole suite of sensors in the water underneath it, including sonar and various composition sensors, and, on top of it, an ultrasonic weather station.

The software-defined sensor plus the platform forms a sentinel. A sentinel is something that can typically move around, make measurements, and process and/or stream data.

Researchers at UT Dallas have developed an autonomous team of robotic devices that can be used at hazardous or difficult-to-reach sites. Working together, the robots can make surveys and collect thousands of data records in just a few minutes. (Image Credit: Lakitha Wijeratne/UT Dallas physics doctoral student)

Multiple sentinels working together form a robot team whose members cooperate to provide more capabilities than any of them has on its own. In this case, the aerial robot with its sensors is cooperating with the water robot — the robotic boat and its sensors. Since they're on the same network, the aerial robot, by design, glides over the same path as the boat. The boat measures what's in the water while the aerial robot looks down at the water from above with its hyperspectral camera and, using machine learning, learns the mapping from what the hyperspectral camera sees to the composition of the water. Once it has learned that mapping, it can rapidly fly over a much wider area and provide, say, a wide-area composition map of the oil concentration, the chlorophyll abundance, the dissolved organic carbon, or whatever component of the water we're interested in.

We can do that, having never seen that environment before — the robot team cooperates to gather the training data, and the machine learning uses that training data to create new data products like the wide-area composition map. Once the model has been trained, the sensing can be done from the aerial hyperspectral measurements alone; they can be processed on board the aerial vehicle and the results streamed in real time. Normally, that is such a huge volume of data that the number crunching takes a substantial time, and because it's so large you can't stream it in real time. Being able to edge-process it on board and then stream it not only gives you a new capability, it also reduces the latency, the delay in even being able to do such a task.
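Here is a hedged sketch of that coordinated-survey workflow: pair each in-situ boat measurement with the hyperspectral spectrum recorded at the same place and time, learn the spectrum-to-composition mapping, then apply it across a whole scene. The file names, variable names, and choice of regressor are illustrative assumptions, not the team's actual code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Co-located training pairs gathered along the boat's track (hypothetical files):
# spectra: (n_samples, n_bands) hyperspectral pixels; in_situ: (n_samples,) boat
# measurements of, e.g., chlorophyll abundance.
spectra = np.load("colocated_spectra.npy")
in_situ = np.load("colocated_chlorophyll.npy")

X_train, X_test, y_train, y_test = train_test_split(spectra, in_situ, test_size=0.2)
model = RandomForestRegressor(n_estimators=200)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))  # sanity-check the learned mapping

# Wide-area step: apply the learned mapping to every pixel of a larger scene.
scene = np.load("wide_area_cube.npy")                # (rows, cols, n_bands)
rows, cols, bands = scene.shape
chlorophyll_map = model.predict(scene.reshape(-1, bands)).reshape(rows, cols)
```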

Tech Briefs: It's like what they talk about with sensors for edge processing to reduce the amount of data you have to send.

Dr. Lary: Exactly.

Tech Briefs: Would you consider the work that you’ve just done to be prototyping?

Dr. Lary: Yes, we have to start somewhere, so this is our first step.

Tech Briefs: How would you foresee it being used practically — say there's a disaster and the authorities want to use your system, what would they do?

Dr. Lary: This prototype is just one instance of a much more comprehensive vision; it's the minimal implementation of what could be a multi-robot team. Here we had just two robots, the aerial one and the robotic boat. We chose those two because sensing water poses access challenges. But this team could easily have many more members, say a walking ground robot, or an amphibious vehicle that could carry the entire robot team into a potentially hazardous environment and deploy it remotely.

It could respond to an oil spill like Deepwater Horizon, where we saw the pictures of the impacts on wildlife and fisheries, and so on — and oil spills are happening all the time. Or there could be chemical spills. For example, when Hurricane Harvey hit Houston, with its large number of processing plants, those facilities were heavily inundated and some nearby neighborhoods were surrounded on three sides by contaminated water. Volatile organic compounds in the water ended up out-gassing, which caused severe respiratory issues — people didn't know what they were breathing, but they knew it was affecting them. Workers going in to clean up were also affected by the contaminated water.

With our sensing system, you'd know exactly what you're dealing with, so you could tailor your response appropriately. But it could just as well apply to other cases like harmful algal blooms. Or even if there's no disaster, this type of capability can be used to characterize ecosystems and to survey infrastructure such as roads, railways, and bridges, where the autonomous robots with their sensing can rapidly take detailed measurements.

Now imagine a different scenario. Say you have aerial robots much like those in this example, with hyperspectral, thermal, and, say, synthetic aperture radar looking at the texture of a surface, coupled with a ground robot that has ground-penetrating radar looking for voids or other faults. Whether it's a tunnel or a road, cavities form with use and weathering. There are also many different scenarios you could use this for in agriculture. It's designed to be comprehensive sensing that, like Lego blocks, you can put together, plug and play. You would rapidly be able to use them for a whole variety of real-life use cases, where real-time data-driven decisions lead to more transparency and keep people out of harm's way.

Tech Briefs: If someone wants to use this system, would they need to have custom-built robots and drones, or would you have a package that you could mount on an existing device? How do you envision this becoming practical?

Dr. Lary: I've had to struggle for many years with getting things to work together. It's one thing to buy the equipment; it's another thing for the components to work together. Everything we bought was off the shelf, because our effort went into, for want of a better word, the smarts: the software integration.

Having said that, a key step for these software-defined sensors, for which we're using the machine learning, is to calibrate against a reference or learn on the fly. We're using the same type of idea for air quality, distributing low-cost sensors across cities that have been calibrated against really expensive reference sensors. We can deploy sensors on a neighborhood scale, which previously would have been prohibitively expensive.

By calibrating the low-cost sensors against a reference, in much the same way that this robot team calibrated the hyperspectral measurements made by the remote-sensing capability against the in-situ composition of the water, you can achieve things that would otherwise be very, very challenging.

It's really the network of sensors, the network of autonomous sentinels working together cooperatively using machine learning, that lets you do far more than any of those components could do on their own.

Tech Briefs: Do you foresee this being commercialized, built by private companies or do you see the government getting involved with it? What do you see going forward?

Dr. Lary: My dream going forward is to have a store where individuals, municipalities, or companies can have ready access to these types of capabilities, and not just get the sensors but also the back-end services, so that when you plug and play this stuff together it just works, and you don't have to go through a long development process. The National Science Foundation categorizes this as a cyber-physical system. Cyber-physical systems are basically sensing systems coupled with algorithms that help you make better, timely decisions.

So, my dream for all of this, and what several of us are working towards — and we welcome partners of all kinds — is to have a cyber-physical social observatory. It would have to be a national facility just like an astronomical observatory with a giant telescope, because no one else can afford to do something on that scale.

Imagine now you have a set of sensing capabilities with multiple components, which in our project are aerial robots and a robotic boat. But our system actually has nine sentinel types for various types of situations. We can use remote sensing from satellites and weather radars. Besides aerial vehicles, we have street-level sensors streaming air quality, light intensity, ionizing radiation, and so on, 24/7. We have walking robots, we have electric ground vehicles and robotic boats, and then we also have wearable sensors.

We also want to have multiscale sensing, starting from the global big picture provided by a satellite. So, say we go back to the Hurricane Harvey example. Long before Hurricane Harvey made landfall, we could see it with satellites, and then, as it got closer to landfall, with weather radar. But the minute it makes landfall, the micro-environment details become critical. The exact height of particular streams can make a really big difference to the local environment. So, we want information on both the global large scale and the hyper-local scale, because you and I live at that very localized scale. To be able to sense both the large scale and the local scale simultaneously, we really need multiple sentinels.

But then wearable sensing is also really important. For example, take some of the parallel work we're doing. You see news headlines saying that poor air quality makes you “dumb.” But how dumb is dumb? Which pollutants make us dumber than others? So, in one of our studies, we're using comprehensive biometric sensing, measuring over 16,000 parameters a second, together with comprehensive environmental sensing of about 2,000 environmental parameters, to see how the environmental state affects our autonomic response.

All of this is designed both to provide the holistic sensing that keeps people out of harm's way and to find the unseen elephant in the room that may be impacting our health. Once we realize what that is and can quantify it, there's normally an obvious path of data-driven decisions to make things better, and then we can take the appropriate next steps to monitor our progress.

That is really my dream — to be a catalyst for this holistic sensing that keeps people out of harm's way: sensing in service of society. We have many prototypes that we are trying to bring to the point where they are usable. So, we always welcome partnerships to help expedite that — from governments, individuals, local municipalities, community groups, and companies. We are working with all of those types of entities.

Tech Briefs: Sounds like you're inventing a whole brand-new kind of infrastructure.

Dr. Lary: We're trying. It's basically driven by needs. Holistic information can make such a big difference by giving us what we need to make appropriate decisions. It would be non-trivial to do that without the appropriate infrastructure.

Tech Briefs: It sounds wonderful, I just hope it can be implemented someday.

Dr. Lary: Me too, me too. We've come a long way. I think we're making the first step.

The other bit, which I didn't get to, is outside the physical: you can have things like school absenteeism, which then leads to poor learning outcomes. But it often turns out that the absenteeism is due to things like asthma, and the asthma is due to high pollen or air pollution; it's actually a cascade of effects, the social interacting with the environmental. We don't want this to be just a one-way thing. We want the triggers for the observations to be both what we see from the sensing directly and societal issues like clusters of health outcomes, or algal blooms that can take out fisheries, or an oil spill. It's a two-way interaction.

Having a data platform that can bring together all of these environmental parameter layers with societal layers like mortality trends, absenteeism, cancer incidence, and so on can make the decision-making process that helps individuals so much more transparent and effective.

Tech Briefs: What are you working on now? What's your very next step?

Dr. Lary: The very next step for this particular robot team, the boat plus the aerial vehicle, is to enhance various aspects of that autonomous team. Then we want to extend it to have more members, for example an amphibious ground vehicle that can carry both the boat and the aerial vehicle into a contaminated environment and deploy them, while also making measurements of its own. We also want the robots to be part of the remediation.

It's the different components working together. The same type of team can also be used for maintaining infrastructure, whether it's roads, rails, or bridges, and for other aspects of environmental quality — air quality, water quality. So really, this proof of concept was a prototype to show: “Hey, this really can be done, and now we would like to scale it to many different applications.”

Also, these can be prototypes for satellite missions. You can imagine a pipeline where you have a proof of concept on an aerial vehicle. Then it could transition to other platforms like a CubeSat, for example. This can also be part of the validation process, collecting data for satellite missions, as well as collecting data for any of the different purposes I’ve mentioned.

An edited version of this interview appeared in the June 2021 issue of Tech Briefs.