A method was developed for comparing the set of landmarks extracted from two images of the same location.
Traditional methods for change detection involve the pixel-wise registration of two images and a subsequent differencing operation that highlights any changes at the individual pixel level. These methods have two limitations: (1) they are computationally expensive, and (2) they are extremely sensitive to misregistration and any changes in illumination.
Building on the prior development of landmark-based image analysis, a method was developed for comparing the set of landmarks extracted from two images of the same location taken at different times. Each landmark is a closed contour surrounding an area of high visual salience (e.g., a crater, streak, or dust devil track) that is annotated with computed attributes (e.g., size, shape, intensity). Since the two images are not necessarily co-registered, the algorithm searches for a mapping that best aligns the two landmark sets automatically. The mapping is expressed as a bipartite graph matching between the two landmark sets using their adjacency relationships as well as the similarity in their attributes. The matching is constrained to be an affine transformation, thereby permitting only rotation, translation, scaling, and shear changes. Any landmarks that do not have a corresponding match are flagged as changes.
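A landmark as described above might be represented by a record like the following minimal sketch; the field names and the choice of eccentricity as the shape descriptor are illustrative assumptions, not details given in the text.

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    # Closed contour as a list of (x, y) boundary points (assumed representation).
    contour: list
    # Computed attributes mentioned in the text: size, shape, intensity.
    area: float            # size in pixels
    eccentricity: float    # an example shape descriptor (assumed)
    mean_intensity: float  # average pixel intensity inside the contour

# Example: a unit-square landmark.
lm = Landmark(contour=[(0, 0), (1, 0), (1, 1), (0, 1)],
              area=1.0, eccentricity=0.0, mean_intensity=128.0)
```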
First, landmarks are detected independently in each image, and the attributes described above are used for landmark classification. A relative landmark graph (RLG) is then constructed to represent the adjacency relationships between landmarks. The RLG is a graph G = (V, E) with one vertex in V for each landmark in the image and an unweighted edge in E from v1 to v2 if the landmark represented by v2 is one of v1's nearest neighbors. The RLG preserves relative position rather than the absolute x,y pixel coordinates of each landmark, which improves robustness by enabling landmarks to be matched between images whose fields of view are shifted or rotated relative to each other.
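RLG construction as described above can be sketched as a k-nearest-neighbor graph over landmark centroids; the value of k, the Euclidean distance metric, and the centroid representation are illustrative assumptions.

```python
import math

def build_rlg(landmarks, k=3):
    """Relative landmark graph sketch: vertex i gets an unweighted edge to
    each of its k nearest neighbors. `landmarks` is a list of (x, y)
    centroids (assumed representation)."""
    edges = {}
    for i, (xi, yi) in enumerate(landmarks):
        # Sort all other landmarks by Euclidean distance from landmark i.
        dists = sorted(
            (math.hypot(xj - xi, yj - yi), j)
            for j, (xj, yj) in enumerate(landmarks) if j != i
        )
        # Keep only the k nearest as adjacency; absolute positions are discarded,
        # so a global shift or rotation of the image leaves the RLG unchanged.
        edges[i] = [j for _, j in dists[:k]]
    return edges

# Example: three clustered landmarks plus one distant outlier.
rlg = build_rlg([(0, 0), (1, 0), (0, 1), (10, 10)], k=2)
```

Because only relative adjacency is stored, translating or rotating every centroid by the same transform produces an identical graph.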
Next, a mapping is identified between the two RLGs. Two new landmark sets, X and Y, are constructed such that X contains all landmarks from image 1, plus enough "dummy" landmarks to accommodate landmarks from image 2 that have no match in image 1; Y is constructed similarly from the landmarks in image 2. Edges between landmarks in X and Y are weighted according to the similarity between the two landmarks represented by the vertices that the edge connects. This similarity combines landmark-based similarity, computed from the landmark attributes, with RLG neighborhood similarity: the score obtained by recursively computing the best bipartite matching between the nodes in v1's and v2's neighborhoods. This score is the sum of all edge weights in the matching, divided by the size of the larger neighborhood. The final matching is obtained by computing the best bipartite matching between X and Y, subject to an affine constraint: the matching may only induce translation, rotation, scaling, or shear between the two images.
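The dummy padding and edge weighting described above can be sketched as follows; the attribute names, the blending factor alpha, the similarity formula, and the dummy-edge score are illustrative assumptions (the text specifies only that attribute similarity and neighborhood similarity are combined).

```python
def pad_with_dummies(set1, set2):
    """Return equal-length sets X and Y; None marks a 'dummy' landmark that
    lets an unmatched landmark in the other image pair with something."""
    n = max(len(set1), len(set2))
    X = list(set1) + [None] * (n - len(set1))
    Y = list(set2) + [None] * (n - len(set2))
    return X, Y

def edge_weight(l1, l2, neigh_sim, alpha=0.5, dummy_score=0.0):
    """Combined similarity for the bipartite edge between l1 and l2.
    `neigh_sim` is the (recursively computed) neighborhood-matching score;
    pairing with a dummy yields a fixed low score (assumed)."""
    if l1 is None or l2 is None:
        return dummy_score
    # Attribute similarity: inverse of summed attribute differences (assumed form).
    attr_sim = 1.0 / (1.0 + abs(l1['area'] - l2['area'])
                          + abs(l1['intensity'] - l2['intensity']))
    return alpha * attr_sim + (1 - alpha) * neigh_sim

X, Y = pad_with_dummies([{'area': 5.0, 'intensity': 10.0}], [])
w = edge_weight({'area': 5.0, 'intensity': 10.0},
                {'area': 5.0, 'intensity': 10.0}, neigh_sim=1.0)
```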
The Hungarian (Munkres) algorithm is used to compute a candidate matching, which is then refined by an iterative RANSAC (RANdom SAmple Consensus) process. Finally, each landmark is matched with its corresponding overlapping landmark in the other image, if one exists. If not, the landmark is added to the set Cv of vanished landmarks (if it appears only in image 1's landmark set, L1) or to the set Cn of new landmarks (if it appears only in image 2's landmark set, L2).
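The final classification step can be sketched as below, assuming a candidate matching and an affine transform (A, t) have already been estimated (e.g., by the Hungarian algorithm followed by RANSAC); the residual tolerance is an illustrative assumption.

```python
def classify(matches, pts1, pts2, A, t, tol=2.0):
    """Split landmarks into matched pairs, vanished (Cv), and new (Cn).
    `matches` is a list of (i, j) index pairs; pts1/pts2 are centroid lists;
    A is a 2x2 affine matrix and t a translation (assumed inputs)."""
    matched = []
    Cv = set(range(len(pts1)))  # start with every image-1 landmark as "vanished"
    Cn = set(range(len(pts2)))  # and every image-2 landmark as "new"
    for i, j in matches:
        x, y = pts1[i]
        # Apply the affine map: p' = A @ p + t.
        px = A[0][0] * x + A[0][1] * y + t[0]
        py = A[1][0] * x + A[1][1] * y + t[1]
        qx, qy = pts2[j]
        # A match within tolerance removes both landmarks from the change sets.
        if (px - qx) ** 2 + (py - qy) ** 2 <= tol ** 2:
            matched.append((i, j))
            Cv.discard(i)
            Cn.discard(j)
    return matched, Cv, Cn

# Example with the identity transform: one true match, one outlier pair.
matched, Cv, Cn = classify([(0, 0), (1, 1)],
                           [(0, 0), (5, 5)], [(0, 1), (100, 100)],
                           A=[[1, 0], [0, 1]], t=(0, 0))
```

Landmarks left in Cv appear only in the first image (vanished); those left in Cn appear only in the second (new).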