Landers targeting large planetary bodies such as Mars typically rely on a secondary reconnaissance spacecraft to generate high-fidelity 3D terrain maps that are subsequently used for landing site selection and for creating onboard maps for terrain-relative navigation systems. This luxury does not exist for small primitive bodies such as comets and asteroids. For these bodies, the landing spacecraft itself has to perform the 3D mapping and, with possible help from ground control, choose a feasible landing site. To enable this operation, the spacecraft needs to carry a 3D ranging sensor system such as a LiDAR. With the spacecraft placed in extended mapping orbits, the 3D range measurements are then used to create a shape model of the body. Terrain-based navigation schemes that employ cameras can then image, detect, match, and track features against the map database to provide a six-degree-of-freedom (6-DOF) navigation solution during descent. Camera-based systems, however, are not robust to lighting variations and do not provide direct 3D position/range feedback.

Recently, significant attention has been given to taking direct 3D range measurements with LiDARs and using them directly in estimating the navigation solution. This is done either by matching 3D point clouds obtained at different time instants, or by matching the current 3D point cloud to a 3D shape model of the body.
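
At the core of frame-to-frame matching is a rigid-alignment step that recovers the differential rotation and translation between two clouds. The sketch below (NumPy assumed; `rigid_align` is an illustrative name, not part of the tool) shows the standard SVD-based (Kabsch/Horn) least-squares solution for the case where point correspondences are already known:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (Kabsch/Horn) mapping Nx3 cloud P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)            # centroids
    H = (P - cP).T @ (Q - cQ)                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: rotate/translate a random cloud and recover the motion.
rng = np.random.default_rng(0)
P = rng.normal(size=(200, 3))
a = np.deg2rad(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.1, -0.2, 0.05])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_align(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```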

Typically, an iterative closest point (ICP) algorithm, or one of its robust variants, is used to estimate the differential position and attitude between the two point cloud datasets. While ICP provides a measure of the goodness of the match, it provides no information about how well or how accurately the 6-DOF solution can be estimated. For example, if the two point clouds originated from imaging flat terrain, the residual matching errors could be very small while the navigational uncertainty is extremely large: the range to the terrain is well estimated, but the lateral translations and the rotation about the terrain normal are not.
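
The flat-terrain degeneracy can be made concrete with a standard stability check from the registration literature (a sketch, not the developed tool's method): form the 6x6 information matrix of the linearized point-to-plane ICP problem and inspect its eigenvalues, since near-zero eigenvalues mark rigid motions that the data cannot constrain:

```python
import numpy as np

def icp_information(points, normals):
    """6x6 information matrix A = J^T J of linearized point-to-plane ICP,
    with Jacobian rows J_i = [(p_i x n_i)^T, n_i^T] over the state [rot; trans]."""
    J = np.hstack([np.cross(points, normals), normals])  # Nx6
    return J.T @ J

rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))

# Case 1: perfectly flat terrain (z = 0, all normals along +z).
flat_pts = np.column_stack([xy, np.zeros(len(xy))])
flat_nrm = np.tile([0.0, 0.0, 1.0], (len(xy), 1))

# Case 2: rough terrain (bumpy height field; normals from the analytic gradient).
z = 0.2 * np.sin(4 * xy[:, 0]) * np.cos(4 * xy[:, 1])
gx = 0.8 * np.cos(4 * xy[:, 0]) * np.cos(4 * xy[:, 1])
gy = -0.8 * np.sin(4 * xy[:, 0]) * np.sin(4 * xy[:, 1])
rough_pts = np.column_stack([xy, z])
rough_nrm = np.column_stack([-gx, -gy, np.ones(len(xy))])
rough_nrm /= np.linalg.norm(rough_nrm, axis=1, keepdims=True)

for name, p, n in [("flat", flat_pts, flat_nrm), ("rough", rough_pts, rough_nrm)]:
    eig = np.linalg.eigvalsh(icp_information(p, n))
    # Flat terrain: three eigenvalues ~0 (two lateral translations plus the
    # rotation about the normal); rough terrain: all six well away from zero.
    print(name, np.round(eig, 4))
```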

A new tool has been developed for estimating the terrain richness of a previously unmapped small body. The tool takes as input 3D range maps (point clouds) and, given the observation geometry (sensor boresight direction) and sensor parameters (field of view, number of measurements, angular resolution), estimates a metric that quantifies how rough or varied the terrain is. The software runs on a MATLAB platform and uses a 3D shape model as an input, along with a 3D range sensor model. Mission designers and systems engineers can use the software to evaluate various approaches for providing navigation solutions at small bodies, and to answer questions such as whether a navigation solution based only on LiDAR measurements will provide enough accuracy/fidelity.
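
For illustration, the following sketch shows the kind of 3D range sensor model involved; the function name, parameters, and flat-plane target are assumptions for this example, not the tool's actual MATLAB interface. It builds the ray grid of a fixed-field-of-view LiDAR from a boresight direction, field of view, and sample count, then casts the rays onto a plane to form a synthetic range map:

```python
import numpy as np

def lidar_rays(boresight, fov_deg, n_samples):
    """Unit ray directions on an n_samples x n_samples angular grid about boresight."""
    b = np.asarray(boresight, float)
    b /= np.linalg.norm(b)
    # Build an orthonormal frame (u, v, b) around the boresight.
    u = np.cross(b, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-9:           # boresight parallel to z: pick another axis
        u = np.cross(b, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(b, u)
    half = np.deg2rad(fov_deg) / 2.0
    ang = np.linspace(-half, half, n_samples)   # angular resolution = fov/(n-1)
    az, el = np.meshgrid(ang, ang)
    d = b[None, None, :] + np.tan(az)[..., None] * u + np.tan(el)[..., None] * v
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

# Range map of a flat plane z = -h seen from the origin, boresight straight down.
rays = lidar_rays(boresight=[0, 0, -1], fov_deg=20.0, n_samples=64)
h = 1000.0                                  # assumed standoff distance, meters
ranges = -h / rays[..., 2]                  # t such that origin + t*ray hits z = -h
points = ranges[..., None] * rays           # 64x64x3 point cloud
```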

A 3D range measurement matrix taken at an instantaneous spacecraft position appears to contain sufficient terrain "texture" information, i.e., how flat, bumpy, or cratered the terrain is. Bumpier, asymmetric terrain should provide better point-cloud matching reference points (the analog of features in 2D images). Better matching, combined with terrain richness, should in turn result in better position estimation and therefore smaller navigation solution errors. In the 3D vision research community, some tools and methods have been developed that attempt to quantify the richness of the texture being 3D-mapped. Analysis of the acquired point clouds provides an estimate of unconstrained directions, which correspond to degrees of freedom that cannot be accurately estimated. This analysis, however, is usually performed for controlled environments, such as reverse-engineering applications with fixed standoff distances wherein the range to the target does not change, and it is not intended to provide any value related to navigation observability.
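
One common texture measure from that community, shown here as a sketch rather than the tool's metric, is the per-point surface variation obtained from a principal component analysis of each point's local neighborhood (NumPy and SciPy assumed):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=16):
    """Per-point surface variation lam_min / (lam_0 + lam_1 + lam_2) from a PCA
    of each point's k-nearest-neighbor patch: ~0 on flat terrain, larger on relief."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)               # indices of k nearest neighbors
    var = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        patch = points[nbrs] - points[nbrs].mean(axis=0)
        lam = np.linalg.eigvalsh(patch.T @ patch)  # ascending eigenvalues
        var[i] = lam[0] / (lam.sum() + 1e-12)      # guard degenerate patches
    return var                                     # values lie in [0, 1/3]
```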

The developed tool provides a methodology to estimate the richness of the terrain and of the measured point clouds that will be used in estimating a navigation solution (the change in position and attitude between two successive frames of point cloud data). Additionally, the tool could be used to develop a global 3D body-shape metric that could be evaluated to estimate the fidelity of using only 3D range measurements for navigation about small bodies.
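
The tool's actual richness metric is not published; a plausible stand-in for illustration is the inverse condition number of the same 6x6 information matrix used in the earlier sketch, which tends toward zero whenever any of the six motion components is unobservable:

```python
import numpy as np

def richness(points, normals):
    """Illustrative stand-in metric: inverse condition number of the 6x6
    point-to-plane information matrix. ~0 if some rigid motion is unobservable
    from this cloud alone; larger for feature-rich, well-constrained terrain."""
    J = np.hstack([np.cross(points, normals), normals])
    eig = np.linalg.eigvalsh(J.T @ J)      # ascending eigenvalues
    return eig[0] / eig[-1]
```

Because the rotational lever arms grow with the footprint size, comparing such a value across standoff ranges requires a normalization of the kind described below.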

The primary unique feature of the developed tool is the ability to incorporate a variable resolution, or ground footprint, of the sensor, given its fixed field of view, into the estimation of the expected metric. This is done by normalizing against the conventional metric that has been calculated over the entire small body. A multiresolution, level-of-detail-based approach is used to store the locally estimated metrics.
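
A minimal sketch of how such normalization and multiresolution storage might look (the decimation scheme, the number of levels, and the dictionary layout are assumptions for this example, not the tool's implementation):

```python
def metric_by_lod(points, normals, metric, global_value, levels=4):
    """Evaluate `metric` on successively decimated samplings (emulating coarser
    ground footprints at a fixed field of view) and store one value per level
    of detail, normalized by the whole-body value of the same metric."""
    lod = {}
    for level in range(levels):
        step = 2 ** level                  # level 0 = full sensor resolution
        lod[level] = metric(points[::step], normals[::step]) / global_value
    return lod

# e.g., with the richness() stand-in defined earlier:
# lod = metric_by_lod(frame_pts, frame_nrms, richness,
#                     global_value=richness(body_pts, body_nrms))
```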

This software is available for commercial licensing. Please contact Dan Broderick. Refer to NPO-49795.