Unsupervised hyperspectral image segmentation can reveal spatial trends that show an analyst the physical structure of the scene. Segmentations highlight borders and reveal areas of homogeneity and change. They are independently helpful for object recognition and assist with automated production of symbolic maps. Additionally, a good segmentation can dramatically reduce the number of effective spectra in an image, enabling analyses that would otherwise be computationally prohibitive. Specifically, using an oversegmentation of the image instead of individual pixels can reduce noise and potentially improve the results of statistical post-analysis.

In this innovation, a metric learning approach is presented to improve the performance of unsupervised hyperspectral image segmentation. The prototype demonstrations attempt a superpixel segmentation in which the image is conservatively over-segmented; that is, single surface features may be split into multiple segments, but each individual segment, or superpixel, is guaranteed to have homogeneous mineralogy.

A segmentation strategy based on the Felzenszwalb algorithm was tested for its simplicity and computational efficiency. This approach represents the hyperspectral image as an 8-connected grid of pixels, each of which begins as an independent segment. Edges connect neighboring pixels and are weighted by a measure of spectral distance. The algorithm iteratively joins neighboring pixels into larger segments, describing each segment by the minimum spanning tree of edges that joins all of the pixels in that segment.
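The graph-merging procedure above can be sketched as follows. This is a minimal illustration, not the prototype's implementation: the cube shape (rows, columns, bands), the plain Euclidean spectral distance, and the threshold parameter k are assumptions chosen for clarity.

```python
import numpy as np

def felzenszwalb_segments(cube, k=1.0):
    """Greedy Felzenszwalb-style merging on a (rows, cols, bands) cube."""
    rows, cols, _ = cube.shape
    n = rows * cols
    parent = np.arange(n)            # union-find forest over pixels
    size = np.ones(n, dtype=int)     # current segment sizes
    internal = np.zeros(n)           # largest MST edge inside each segment

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Build 8-connected edges weighted by spectral distance.
    edges = []
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < rows and 0 <= c2 < cols:
                    w = np.linalg.norm(cube[r, c] - cube[r2, c2])
                    edges.append((w, r * cols + c, r2 * cols + c2))
    edges.sort()

    # Merge in order of increasing weight; an edge joins two segments only
    # if it is no heavier than each segment's internal variation plus k/|C|.
    for w, a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        if w <= min(internal[ra] + k / size[ra],
                    internal[rb] + k / size[rb]):
            parent[rb] = ra
            size[ra] += size[rb]
            internal[ra] = w  # edges arrive sorted, so w is the new MST max

    return np.array([find(i) for i in range(n)]).reshape(rows, cols)
```

Because edges are processed in increasing order of weight, the edges that perform merges form the minimum spanning tree of each final segment, matching the description above.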

Hyperspectral segmentation algorithms partition images into spectrally homogeneous regions, but the exact definition of homogeneity depends on the chosen similarity metric. Here, the segmentation algorithm is augmented with a task-specific distance metric: a Mahalanobis distance learned from training data. By leveraging a small set of labeled pixels with known mineralogical interpretations, the metric suppresses uninformative spectral content. Multiclass linear discriminant analysis (LDA) is used to maximize the ratio of between-class to within-class separation, defined by the Rayleigh quotient computed over the labeled training data. Other distance metrics and segmentation strategies are possible, and can be substituted for these choices in modular fashion as different applications demand.
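A hedged sketch of this metric-learning step is shown below: multiclass LDA is fit to a small labeled set by solving the generalized eigenproblem behind the Rayleigh quotient, and the resulting projection defines a Mahalanobis-style distance (Euclidean distance after projection). The function names, the SciPy-based eigensolver, and the regularization constant are assumptions for illustration, not the prototype's exact code.

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, n_components=None):
    """Projection W maximizing between- vs. within-class scatter (Rayleigh quotient)."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    Sw += 1e-6 * np.eye(d)        # regularize for small labeled sets
    vals, vecs = eigh(Sb, Sw)     # solves Sb w = lambda Sw w
    order = np.argsort(vals)[::-1]
    k = n_components or len(classes) - 1
    return vecs[:, order[:k]]

def learned_distance(a, b, W):
    """Mahalanobis-style distance: Euclidean distance in the projected space."""
    return np.linalg.norm((a - b) @ W)
```

In the segmentation graph, `learned_distance` would simply replace the raw spectral distance when weighting edges, which is what makes the two components modular.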

This work was done by David R. Thompson and Rebecca Castano of Caltech, Brian Bue of Rice University, and Martha S. Gilmore of Wesleyan University for NASA’s Jet Propulsion Laboratory.

This software is available for commercial licensing. Please contact Daniel Broderick of the California Institute of Technology. NPO-48092

This article first appeared in the January, 2013 issue of NASA Tech Briefs Magazine.
