The purpose of Hazard Relative Navigation (HRN) is to provide measurements to the Navigation Filter so that it can limit errors on the position estimate after hazards have been detected. The hazards are detected by processing a hazard digital elevation map (HDEM). The HRN process takes lidar images as the spacecraft descends to the surface and matches these to the HDEM to compute relative position measurements. Since the HDEM has the hazards embedded in it, the position measurements are relative to the hazards, hence the name Hazard Relative Navigation.
HRN processing starts with an initial elevation map from the Hazard Detection and Avoidance (HDA) phase. This map is generated by mosaicking the lidar over the Hazard Map Area (HMA). A feature selector is applied to the map to find a reference surface point that is surrounded by significant terrain relief and is therefore easier to identify in subsequent lidar images. This reference point does not have to be the landing site, and it probably won’t be because the landing site should be free of terrain relief.
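The feature-selection step described above can be sketched as a simple relief score over the elevation map. The window size and the use of elevation standard deviation as the relief metric are illustrative assumptions, not details given in the text:

```python
import numpy as np

def select_reference_point(dem, window=5):
    """Pick the DEM cell with the greatest surrounding terrain relief.

    Relief is scored here as the elevation standard deviation in a
    (window x window) neighborhood -- an assumed metric for "significant
    terrain relief." Border cells are skipped so every candidate has a
    full neighborhood.
    """
    h = window // 2
    best_score, best_rc = -1.0, (h, h)
    for r in range(h, dem.shape[0] - h):
        for c in range(h, dem.shape[1] - h):
            score = dem[r - h:r + h + 1, c - h:c + h + 1].std()
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score
```

On a flat map with a single raised mound, the selector returns a cell on or near the mound, consistent with the goal of choosing a point that is easy to re-identify in later lidar images.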
Next, the gimbal points the lidar sensor at the reference point and a lidar image is taken. The lidar image is converted to 3D points, and these points are transformed into the local level coordinate frame using the current knowledge of the spacecraft position and attitude. The points are then regridded into an elevation map. This elevation map is spatially correlated with the HDEM to measure the shift, in the local level frame, between where the reference point is predicted to be given the current state estimate and where it was observed when the HDEM was constructed.
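The regridding and correlation steps can be sketched as follows. This is a minimal illustration assuming mean-elevation binning and an integer-shift search that minimizes mean absolute elevation difference; the actual flight algorithm's binning and correlation metric are not specified in the text:

```python
import numpy as np

def regrid_points(points, x0, y0, gsd, shape):
    """Bin 3D points (N x 3 array of local-level x, y, z) into an
    elevation grid. Each cell stores the mean z of the points that fall
    in it; empty cells are NaN. (x0, y0) is the grid origin and gsd the
    cell size (ground sample distance)."""
    grid = np.full(shape, np.nan)
    counts = np.zeros(shape)
    cols = ((points[:, 0] - x0) / gsd).astype(int)
    rows = ((points[:, 1] - y0) / gsd).astype(int)
    ok = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
    for r, c, z in zip(rows[ok], cols[ok], points[ok, 2]):
        grid[r, c] = z if np.isnan(grid[r, c]) else grid[r, c] + z
        counts[r, c] += 1
    grid[counts > 0] /= counts[counts > 0]
    return grid

def correlate_offset(basemap, patch, search=5):
    """Find the integer (row, col) shift of `patch` within `basemap`
    that minimizes mean absolute elevation difference over cells that
    are valid (non-NaN) in both maps. The patch is nominally centered
    at offset (search, search) in the base map."""
    best, best_cost = (0, 0), np.inf
    ph, pw = patch.shape
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r0, c0 = search + dr, search + dc
            window = basemap[r0:r0 + ph, c0:c0 + pw]
            valid = ~np.isnan(patch) & ~np.isnan(window)
            if valid.sum() == 0:
                continue
            cost = np.abs(window[valid] - patch[valid]).mean()
            if cost < best_cost:
                best_cost, best = cost, (dr, dc)
    return best
```

The recovered (row, col) shift, scaled by the GSD, is the position change of the reference point in the local level frame that is passed to the Navigation Filter.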
The reference point does not move in the local level frame, so this apparent change in position is actually a measurement of the growth in navigation state error since the HDEM was created. Since attitude errors are expected to be very small, the change in position of the reference point is most likely due to errors in the position of the spacecraft. This process is repeated with multiple new lidar images as the spacecraft descends.
During descent, correlation performance degrades due to the shrinking field of view, increasing resolution, and changing view angle. As a rule, the ground sample distance (GSD) of the base map should be no more than twice the GSD of the current lidar map. To prevent the correlation from failing, which would result in a loss of knowledge of the position error on the reference point, a new base map is generated for correlation. This new base map is created by mosaicking the lidar around the landing site, producing a new, higher-resolution elevation map. The feature selector is applied to the new base map to generate a new reference point. Lidar images are then taken of this new reference point and correlated with the new base map.
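The resolution rule above can be captured in a short check. The linear range-to-GSD model assumes a fixed lidar angular sample spacing (ifov) and near-nadir viewing; both are illustrative assumptions, not parameters given in the text:

```python
def lidar_gsd(slant_range_m, ifov_rad):
    """Approximate lidar ground sample distance for a fixed angular
    sample spacing (ifov): GSD shrinks linearly as range decreases.
    Near-nadir approximation; oblique viewing stretches the true GSD."""
    return slant_range_m * ifov_rad

def needs_new_basemap(basemap_gsd_m, current_lidar_gsd_m):
    """Resolution rule from the text: the base-map GSD should be no more
    than twice the GSD of the current lidar map. Once descent shrinks
    the lidar GSD past this bound, a new, higher-resolution base map
    must be mosaicked around the landing site."""
    return basemap_gsd_m > 2.0 * current_lidar_gsd_m
```

As the spacecraft descends and the lidar GSD shrinks, the check eventually trips, triggering generation of the next base map.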
The process of generating a new base map and correlating lidar images to it is repeated until the beginning of vertical descent (30 m). Each time the base map changes, it is correlated with the previous base map to tie its position to the original HDEM. This correlation introduces a fixed error into the estimate of the change in position of the original reference point. Fortunately, this fixed error scales with the resolution of the corresponding base map, so each successive contribution is smaller.
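The chaining of base maps back to the original HDEM can be sketched as a running sum of tie offsets plus the latest correlation, with a fixed-error budget that grows by an amount proportional to each base map's GSD. The proportionality constant `k` is an illustrative assumption, not a value from the text:

```python
def total_offset_and_error(tie_offsets, tie_gsds, current_offset, k=0.5):
    """Chain base-map-to-base-map ties back to the original HDEM.

    tie_offsets: list of (dx, dy) shifts in meters, each from correlating
        a new base map against its predecessor.
    tie_gsds: GSD (m) of each base map involved in a tie; each tie is
        assumed to add a fixed error of k * gsd (k is illustrative).
    current_offset: (dx, dy) from correlating the latest lidar image
        with the newest base map.
    Returns the total (dx, dy) relative to the original HDEM and the
    accumulated fixed-error bound.
    """
    dx = sum(o[0] for o in tie_offsets) + current_offset[0]
    dy = sum(o[1] for o in tie_offsets) + current_offset[1]
    fixed_error = sum(k * g for g in tie_gsds)
    return (dx, dy), fixed_error
```

Because each new base map has a finer GSD than the last, the per-tie error terms shrink and the accumulated bound grows ever more slowly, matching the observation that the fixed-error contribution decreases.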
The algorithm is related to other motion and velocity estimation algorithms, but differs in that the data processed are 3D points, not camera images. This difference in input data strongly affects how feature selection and correlation are implemented. The algorithm must also handle oblique viewing angles and relatively high sensor noise; both of these make HRN challenging. Finally, the HRN algorithm actively commands the lidar to collect the data during descent that is best suited to HRN. This “Active Vision” approach was not used in previous work.
This work was done by David M. Myers, Andrew E. Johnson, and Robert A. Werner of Caltech for NASA’s Jet Propulsion Laboratory.