In July 2015, NASA published NASA Technology Roadmaps — TA9: Entry, Descent, and Landing (EDL) Systems. In it, the agency laid out its EDL goals for the coming years: to develop new and innovative technology, not just for the moon, but also for future exploration throughout our solar system. Toward achieving these goals, NASA awarded a contract to the Charles Stark Draper Laboratory, or Draper for short, to develop and test its Draper Multi-Environment Navigator (DMEN), which uses vision-based navigation techniques to guide small craft to landings on the moon.
We interviewed Dr. Brett Streetman, Principal Member of the Technical Staff at Draper, to learn about the DMEN.
Tech Briefs: Why that name — DMEN?
Streetman: The reason for that name, which stands for Draper Multi-Environment Navigator, is that we were building on lots of work that Draper was already doing, not just for space navigation, but also on Earth and in orbit. A lot of this work stems from vision navigation on guided parafoils, where we’re trying to navigate the craft’s descent through the atmosphere relative to a landing spot. We’re taking that technology and developing it for space. Additionally, we’re using technology that we had developed for small drones flying near ground level, both indoors and outdoors. Further, we did some work tracking astronauts on the International Space Station; we built the DMEN for them to carry so it could track their location inside the station. We brought together all those technologies, from indoors, outdoors, on Earth, in the air, and in space, for navigating a small lunar lander.
Tech Briefs: Could you describe the device?
Streetman: We tested our DMEN by flying it aboard a World View Enterprises balloon over Arizona at an altitude of 108,000 feet. The device that flew on the balloon had two cameras; we were testing what viewpoints and lens sizes we were interested in having for future flights. It had a downward-facing camera and a slightly forward-facing camera. Their outputs go to an in-house-developed sensor board. The data from the cameras is combined with data from other sensors and sent to a flight computer running our algorithms.
The prototype weighs about 3 kg and is about 12" wide by 10" high by 10" deep. The camera lenses are mounted on the outside of the box but are included within those dimensions.
Tech Briefs: What is the basis of your navigation technology?
Streetman: The main thing we’re developing here is the software that processes the images in order to come up with an estimated position of where the camera is. We have a few different algorithms we’re working on — essentially one does visual odometry, where it’s tracking features from frame to frame to tell how you’re moving relative to the scene you’re viewing. We’ve also enhanced the performance by including an inertial measurement component. For our absolute position measurement, we take an image captured at high altitude and compare it to a database of satellite imagery to match features in the scene to their absolute locations.
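To illustrate the frame-to-frame feature-tracking step Streetman describes, here is a minimal visual-odometry sketch built from off-the-shelf OpenCV routines. The camera intrinsics, feature detector, and RANSAC settings are illustrative assumptions rather than details of Draper's DMEN software, and the absolute matching against satellite imagery is not shown.

# Minimal visual-odometry sketch (illustrative only, not Draper's DMEN code):
# track ORB features between consecutive frames and recover the relative camera motion.
import cv2
import numpy as np

# Assumed pinhole intrinsics for a 1280 x 720 camera.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_motion(prev_gray, curr_gray):
    """Estimate rotation R and translation direction t between two grayscale frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.eye(3), np.zeros((3, 1))          # no features found; report no motion
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects matches inconsistent with a single camera motion.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # t is a unit vector; absolute scale must come from other sensors

Chaining these per-frame motions gives a relative trajectory; the map-matching step then pins that trajectory to absolute coordinates.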
Tech Briefs: What is the role of inertial measurement?
Streetman: Our inertial measurement system uses standard 3-axis accelerometers and 3-axis gyroscopes. It adds robustness and a second stream of information that helps you predict what your next image will show, so by combining these two sorts of information, you can get a much more accurate measurement. In a space vehicle, you tend to be doing more than just sitting still — you’re moving and rotating around, so your view of the ground is changing based upon your forward and backward motion and your rotation. The inertial measurement allows you to keep track of those changes between image captures. You are then able to make predictions based on what happened between the last image and the current one. The system accuracy is improved by comparing what you expect to see in the next image with what that image actually shows.
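A toy predict/update loop makes this combination concrete: the accelerometer propagates the state between image captures, and a vision-derived position fix corrects the prediction. The single-axis state, sample period, and noise values below are simplifying assumptions for illustration, not Draper's estimator.

# Toy 1-D example (not Draper's estimator): the IMU propagates position and
# velocity between camera frames; a vision-derived position fix corrects the
# prediction, following the standard Kalman predict/update structure.
import numpy as np

dt = 0.01                                  # assumed IMU sample period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])        # how a measured acceleration enters the state
H = np.array([[1.0, 0.0]])                 # the vision fix measures position only
Q = 1e-4 * np.eye(2)                       # assumed process noise
R = np.array([[0.5]])                      # assumed vision measurement noise

x = np.zeros((2, 1))                       # state estimate [position; velocity]
P = np.eye(2)                              # estimate covariance

def imu_predict(accel):
    """Propagate the state with one accelerometer sample (between images)."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def vision_update(measured_position):
    """Correct the prediction with a position fix from the image pipeline."""
    global x, P
    innovation = np.array([[measured_position]]) - H @ x   # how the image differs from expectation
    S = H @ P @ H.T + R
    K_gain = P @ H.T @ np.linalg.inv(S)
    x = x + K_gain @ innovation
    P = (np.eye(2) - K_gain @ H) @ P

The innovation term is exactly the comparison Streetman mentions: the difference between what the filter expected the next image to show and what it actually showed.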
Tech Briefs: Is there anything special about the optics?
Streetman: For these demonstrations, there was nothing special about the optics. We bought off-the-shelf cameras and lenses in order to test our algorithms and software. We didn’t buy space-qualified optics or anything you might actually send into space. Cheaper, off-the-shelf optics were effective for the tests we were doing.
For these tests, we didn’t necessarily need a very high imaging rate, so we didn’t have to worry about the choice between global and rolling shutters. When we design for actual space operation, where we will need greater accuracy, that choice will have to be considered.
Tech Briefs: Can you sum up where your project stands now?
Streetman: Overall, what we’re trying to do here is to develop a small system for effectively guiding a lunar landing and similar operations. When using passive image and inertial sensing only, you can develop a much smaller system, but there are limitations in comparison with a technique that uses an active signal like lidar or radar. You can trade down to much smaller sizes and weights, but you do lose some capability, say operating in the dark or in heavy shadows. There’s a trade-off between passive and active sensors. But with a passive sensor, you can shrink down the size of what you need to accurately navigate somewhere like the moon. For example, on the last big NASA lunar push — Autonomous Landing and Hazard Avoidance Technology (ALHAT) — they developed a big sensor suite with a very large active flash lidar, which is about 40 times more massive than ours. Although it’s been flown on Earth, it hasn’t gone to the moon.
We predict that landers based on our DMEN system will have a very productive future in our coming space explorations.
This article was written by Ed Brown, Associate Editor of Photonics & Imaging Technology.