In mid-October, NASA-developed software called AEGIS was uploaded to the Mars Science Laboratory (MSL) rover. The AEGIS technology, winner of NASA’s 2011 Software of the Year award, will soon allow scientists on the ground to more easily identify interesting rocks and other terrain features on the Red Planet.

The above image shows a sample AEGIS target prioritization. Here, AEGIS was directed to select the top five targets based on rock size in an MSL navigation camera image (sol 59, FOV = 2.5m). (Image Credit: Jet Propulsion Laboratory)
Photonics & Imaging Technology spoke to two NASA software experts about how the Mars tool will help rovers become more autonomous.

Michael Burl of Jet Propulsion Laboratory assisted in the development of an AEGIS component known as ROCKSTER, a rock detection algorithm that analyzes images taken by the rover and identifies targets of interest. Tara Estlin, who supervises the Machine Learning and Instrument Autonomy group at the Jet Propulsion Laboratory, leads the AEGIS efforts.

Photonics & Imaging Technology: What is AEGIS?

Tara Estlin: With AEGIS, we analyze different types of images to pick out science targets with certain features or properties. Then, if we find science targets with those properties, we can automatically take measurements with the ChemCam Laser Induced Breakdown Spectrometer (LIBS), which identifies rock composition.

Michael Burl: The Laser Induced Breakdown Spectrometer (LIBS) is part of the MSL Curiosity rover’s ChemCam instrument package. The LIBS instrument shoots a laser at the rocks, vaporizes the material, and takes spectra of it; that [technology] could really benefit from this intelligent targeting that our software provides.

Estlin: The AEGIS software analyzes navigation camera images, which are wide-angle images showing a decently large area of the terrain. For these images, the software is mainly looking for rocks. We pull out information on their size, their shape, and their relative intensity (lightness or brightness in the image) — all properties that we can pull out of a greyscale image. Then, we use that property information to prioritize what rocks to target first.
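The prioritization Estlin describes can be sketched as a weighted score over per-rock properties. The sketch below is purely illustrative, not the flight software: the class fields, weight names, and numbers are all hypothetical, chosen only to show the shape of the idea.

```python
from dataclasses import dataclass

@dataclass
class RockTarget:
    size: float          # apparent area in pixels
    eccentricity: float  # shape: 0 = circular, 1 = elongated
    intensity: float     # mean greyscale brightness, 0-255

def priority(rock: RockTarget, weights: dict) -> float:
    """Weighted score; higher ranks earlier. In practice the science
    team would choose the weights for the terrain being explored."""
    return (weights["size"] * rock.size
            + weights["shape"] * (1.0 - rock.eccentricity)
            + weights["intensity"] * rock.intensity)

# Hypothetical preference: large, bright rocks first
weights = {"size": 0.5, "shape": 0.1, "intensity": 0.4}
rocks = [RockTarget(120, 0.3, 180),
         RockTarget(400, 0.7, 90),
         RockTarget(60, 0.2, 240)]
ranked = sorted(rocks, key=lambda r: priority(r, weights), reverse=True)
```

Ranking is then just a sort by score, with the top few targets handed to the pointing system.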

P&IT: What other kinds of images are possible?

Estlin: The software can also analyze what’s known as Remote Micro Imager (RMI) images. The RMI is another camera on the rover’s mast, actually part of the ChemCam instrument, which covers a much smaller area of terrain. For this imager, you can only get a couple of centimeters of the terrain in view, since it has a much smaller field of view. For those images, we’re looking for small features like veins, concretions, white spots, or pebbles.

P&IT: Why is it important to get this kind of terrain data?

Burl: With the software, the rocks get prioritized; the rover itself can then repoint its mast camera to look at, say, the highest-priority rock that it detected autonomously. That saves a huge amount of time; the other way of doing targeted observations like that is to send down images to the Earth, have scientists look at them, and then they’ll say, “We want to target this rock right here.” Then, they upload commands; that takes a couple days, with the communication cycles and accessibility. By doing all this target selection onboard — the rock detection, prioritization, retargeting — it saves a lot of time getting those targeted observations.

Estlin: The other benefit: When we’re analyzing RMI images and picking out these tiny targets, the ground [control] has actually seen the local area, and they see that there might be some interesting white veins in the area that they want to take ChemCam measurements on. But the veins are so small; they have a hard time accurately hitting them with ChemCam. Due to motor backlash and other pointing inaccuracies, it’s really hard to hit very small targets. AEGIS, using ROCKSTER, will pick out exactly where the veins are in the image and refine the pointing online. Now you can do the data acquisition all in one day. That saves several days of ground activities, which is very valuable time.

P&IT: How does ROCKSTER find its targets and prioritize them?

Burl: The software runs onboard the rover, autonomously analyzes the rover’s images, and finds rocks. Then, the software prioritizes the rocks according to some scientist-specified criteria; for example, the science team may be interested in large white rocks or angular rocks, or other attributes that can be measured from images of the detected rocks. ROCKSTER detects and segments rocks from the background, and other parts of AEGIS prioritize the rocks and do the repointing.

Estlin: ROCKSTER is basically our target finder. It was originally developed to find rocks in images, but it also works really well on these RMI images that have veins and concretions. ROCKSTER uses a set of edge-finding techniques. It’s looking for intensity variation in the image that would correspond to an edge, and then it’s looking for edges that it can fully enclose. It’s basically looking for the border of the rock, or targets where it can fully enclose that border. This is one of the reasons why it works not only on rocks in the image, but also on veins in an image.
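The edge-then-enclose idea can be illustrated with a toy sketch. This is not the flight ROCKSTER algorithm — just a minimal demonstration of the principle: mark sharp intensity changes as edges, then treat any non-edge region that a flood fill from the image border cannot reach as fully enclosed, and therefore a candidate target.

```python
import numpy as np
from collections import deque

def find_enclosed_targets(img, edge_thresh=30.0):
    """Toy illustration of edge-then-enclose target finding.
    Returns a boolean mask of pixels that sit inside a closed edge."""
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy) > edge_thresh  # crude edge map
    h, w = img.shape
    reached = np.zeros_like(edges, dtype=bool)
    # Seed the flood fill with every non-edge pixel on the image border.
    q = deque((r, c) for r in range(h) for c in range(w)
              if (r in (0, h - 1) or c in (0, w - 1)) and not edges[r, c])
    for r, c in q:
        reached[r, c] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not edges[nr, nc] and not reached[nr, nc]:
                reached[nr, nc] = True
                q.append((nr, nc))
    # Whatever the fill never reached sits inside a closed border.
    return ~reached & ~edges
```

A bright rock on darker ground produces a closed gradient ring, so its interior is flagged; a feature with only a partial border leaks and is rejected, which matches why this style of detector struggles with borderless features like layering.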

P&IT: Are there false alarms? What is challenging for the software to detect?

Estlin: There are definitely things in an image you might want to pick out that ROCKSTER was not designed for. If you wanted to look for layering where there’s not a clear border, for example, it’s not really designed for that. It’s designed for things where you can find a border around the target.

The properties that we pull out were preset based on conversations with scientists about what they look for in the image; it was also preset partly by what information we can pull out of a greyscale image. We’re hoping that future versions can also analyze color images. That adds a whole other set of information that you can use and prioritize.

P&IT: What are the scientists looking for in these images?

Estlin: Every scientist has different criteria that they look for. Often, they’re interested in distinguishing rocks from soil. Or veins from the rest of the rock. Depending on the area of terrain you’re in, there are other criteria. Sometimes the scientists want to do surveys of the bedrock or outcrop in the area; that’s usually more representative of that particular area’s history since it’s usually embedded in the ground. For that, you might look for criteria such as lightly colored pieces of rock with a smoother border, which more often corresponds to bedrock than to other types of loose rock.

Another sample of AEGIS target prioritization is shown here. AEGIS was directed to select the top five targets based on a combination of rock size and high intensity in an MSL Remote Micro Image (Mell target, sol 530, FOV = 15 cm). (Image Credit: Jet Propulsion Laboratory)
Conversely, if you’re near a crater, you might be more interested in crater ejecta; those are usually darker rocks. We sit down with the scientists, and they tell us the different types of rocks that they would like to find. Then, we come up with a set of properties that we think will work well for this system, to be able to zero in on those types of rocks.

P&IT: What are you working on now as it relates to the AEGIS software?

Estlin: One of the big future directions is working with the different 2020 instrument teams. For instance, ChemCam has a follow-on instrument in the 2020 [rover payload] called SuperCam. We’re going to expand our capabilities to better support the new instrument.

We’re also going to bring in a new way to find targets in the images: instead of just looking for edges, the software will look for statistical patterns of pixels. This can be used to pick out features like layering, which you can’t necessarily get with an edge-finder.

P&IT: What other capabilities can we expect?

Estlin: We’re also going to add an image registration capability. Right now, ground can tell the onboard system: “Find veins,” and AEGIS will find all the veins it can see. But it doesn’t know the exact one that they were originally interested in. The new system will be able to mark the vein in an image, and a summarized version of that will be sent up in the command. Then, [the ground team will] be able to zero in on the exact vein that they want.

The other expansion: We hope to be working with not only instruments like ChemCam, which is on the mast, but also with some of the arm-mounted instruments. There are several spectrometers on the arm, including Scanning Habitable Environments with Raman & Luminescence for Organics and Chemicals (SHERLOC) and Planetary Instrument for X-ray Lithochemistry (PIXL), that are pointed at very small targets. We hope to help them.

P&IT: What do you think is most exciting about this kind of software?

Estlin: AEGIS is intended to be a step towards helping the rovers to be more autonomous. People in my field have visions of one day getting to a new area and having the rover do a lot of exploration on its own. Right now, ground decides on almost every command and every activity. Instead of telling the rover to specifically drive 1.5 meters and place the arm on an exact xyz location, you could someday tell it to explore an area and create a scientific map, without providing specific instructions on what exact steps to take. It’s a step towards making the vehicles more intelligent.

Burl: The ability to run sophisticated algorithms onboard was never a consideration when the computing hardware for the current rovers was selected. Hence, the processing speed and memory are several decades behind what exists on your desktop computer or even your smartphone. Despite this severely resource-constrained environment, AEGIS has opened the eyes of the science community and the mission managers to the fact that autonomy technologies are ready and able to multiply the scientific return from a mission. With a better computing environment on future missions, even more possibilities open up.

For more information on AEGIS software and other Jet Propulsion Laboratory technology development efforts, visit www.jpl.nasa.gov.


NASA Tech Briefs Magazine

This article first appeared in the January 2016 issue of NASA Tech Briefs Magazine.
