# Reactive Collision Avoidance Algorithm

### The algorithm is used for safe operation of autonomous, collaborative vehicle formations.

The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles.
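The closest-approach test at the heart of a reactive scheme of this kind can be sketched as follows. This is a simplified illustration under assumed names and thresholds: for each threat it predicts the point of closest approach, and if the predicted miss distance is unsafe it commands an avoidance acceleration clipped to the vehicle's limit. It does not reproduce the RCA algorithm's fuel-optimal trajectory computation.

```python
import numpy as np

def avoidance_acceleration(r, v, obstacles, safe_radius=10.0, a_max=1.0):
    """Toy reactive avoidance: for each obstacle on a predicted collision
    course, push away from the predicted closest-approach point, then clip
    the summed command to the acceleration limit a_max."""
    cmd = np.zeros(3)
    for r_o, v_o in obstacles:
        dr = np.asarray(r_o, float) - np.asarray(r, float)  # relative position
        dv = np.asarray(v_o, float) - np.asarray(v, float)  # relative velocity
        speed2 = dv @ dv
        if speed2 < 1e-12:
            continue
        t_ca = -(dr @ dv) / speed2          # time of closest approach
        if t_ca <= 0:
            continue                        # already receding; no threat
        miss = dr + dv * t_ca               # predicted miss vector at t_ca
        d = np.linalg.norm(miss)
        if d < safe_radius:
            # accelerate away from the obstacle's closest-approach point,
            # more strongly when the encounter is closer or sooner
            direction = -miss / d if d > 1e-9 else np.array([1.0, 0.0, 0.0])
            cmd += direction * (safe_radius - d) / max(t_ca, 1e-3)
    n = np.linalg.norm(cmd)
    if n > a_max:
        cmd *= a_max / n                    # respect the acceleration limit
    return cmd
```

Because the rule acts only on predicted conjunctions, it extends naturally to an arbitrary number of threats: each contributes one term to the summed command before clipping.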

# Modeling Common-Sense Decisions in Artificial Intelligence

### Common sense is implemented partly by feedback from mental to motor dynamics.

A methodology has been conceived for efficient synthesis of dynamical models that simulate common-sense decision-making processes. This methodology is intended to contribute to the design of artificial-intelligence systems that could imitate human common-sense decision making or assist humans in making correct decisions in unanticipated circumstances. It is a product of continuing research on mathematical models of the behaviors of single- and multi-agent systems known in biology, economics, and sociology, ranging from a single-cell organism at one extreme to the whole of human society at the other. Earlier results of this research were reported in several prior *NASA Tech Briefs* articles, the three most recent and relevant being “Characteristics of Dynamics of Intelligent Systems” (NPO-21037), *NASA Tech Briefs*, Vol. 26, No. 12 (December 2002), page 48; “Self-Supervised Dynamical Systems” (NPO-30634), *NASA Tech Briefs*, Vol. 27, No. 3 (March 2003), page 72; and “Complexity for Survival of Living Systems” (NPO-43302), *NASA Tech Briefs*, Vol. 33, No. 7 (July 2009), page 62.

# Fast Solution in Sparse LDA for Binary Classification

### Special properties of binary classification and greedy algorithms enable speedup.

An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (two classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bioinformatics. Because of their combinatorial nature, feature- and variable-selection problems are NP-hard, i.e., computationally intractable, in cases involving more than about 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms.
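The flavor of a greedy search for this problem can be sketched as follows: features are added one at a time, each time keeping the feature that most increases a two-class Fisher discriminant ratio. This is an illustrative stand-in (function names and the 30-feature toy setting are assumed), not the specialized binary Sparse-LDA algorithm itself.

```python
import numpy as np

def greedy_fisher_selection(X0, X1, k):
    """Greedy forward selection of k features for two classes.
    X0, X1: samples-by-features arrays for class 0 and class 1.
    Score of a feature subset: d' S^-1 d, the Fisher discriminant ratio,
    where d is the mean difference and S the summed class covariance."""
    n_feat = X0.shape[1]
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            idx = selected + [j]                       # candidate subset
            A, B = X0[:, idx], X1[:, idx]
            d = A.mean(axis=0) - B.mean(axis=0)        # class-mean difference
            S = np.cov(A, rowvar=False) + np.cov(B, rowvar=False)
            S = np.atleast_2d(S) + 1e-6 * np.eye(len(idx))  # regularize
            score = d @ np.linalg.solve(S, d)          # Fisher ratio
            if score > best_score:
                best, best_score = j, score
        selected.append(best)                          # keep the best feature
    return selected
```

Each outer iteration costs one small linear solve per remaining feature, which is what makes greedy search tractable where exhaustive subset search is not.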

# Efficient Bit-to-Symbol Likelihood Mappings

### A new algorithm that increases decoder speed contributes to the development of high-speed optical communications links.

This innovation is an efficient algorithm for the bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction-code decoder for high-order constellations. A recent hardware implementation of the algorithm yielded an 8-percent reduction in overall area relative to the prior design, a gain obtained by changing just two operations in a complex decoder. Larger gains are possible for the larger constellations of interest for deep-space optical communications. The algorithm structures the bit-to-symbol/symbol-to-bit operations as a tree resembling a portion of a fast Fourier transform (FFT). As in an FFT, the computation can be organized so that partial results are shared rather than recomputed, and a symmetry in the values reduces the bit-to-symbol mapping by a further factor of 2.
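The tree structure of the bit-to-symbol direction can be sketched as follows: starting from one partial product, each bit's log-likelihood ratio doubles the set of partial sums, so the log-likelihood of every one of the 2^m symbol labels is built while sharing all common prefixes. This is an illustrative sketch of the shared-computation idea (names, bit ordering, and LLR convention are assumed), not the flight implementation.

```python
import numpy as np

def bits_to_symbol_likelihoods(bit_llrs):
    """Map per-bit log-likelihood ratios (LLR = log P(bit=0)/P(bit=1))
    to log-likelihoods of all 2^m symbol labels.  Stage i doubles the
    array of partial log-products, so work common to labels sharing a
    bit prefix is computed once, FFT-butterfly style.
    Index convention: the first LLR is the label's least significant bit."""
    logp = np.zeros(1)
    for llr in bit_llrs:
        lp0 = -np.log1p(np.exp(-llr))   # log P(bit = 0)
        lp1 = -np.log1p(np.exp(llr))    # log P(bit = 1)
        logp = np.concatenate([logp + lp0, logp + lp1])  # doubling stage
    return logp                          # length 2^m
```

The symbol-to-bit direction traverses the same tree in reverse, marginalizing symbol likelihoods back down to per-bit LLRs with the same sharing of partial results.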

# Landmark Detection in Orbital Images Using Salience Histograms

NASA’s planetary missions have collected, and continue to collect, massive volumes of orbital imagery. The volume is such that it is difficult to manually review all of the data and determine its significance. As a result, images are indexed and searchable by location and date but generally not by their content. A new automated method analyzes images and identifies “landmarks,” or visually salient features such as gullies, craters, dust devil tracks, and the like. This technique uses a statistical measure of salience derived from information theory, so it is not associated with any specific landmark type. It identifies regions that are unusual or that stand out from their surroundings, so the resulting landmarks are context-sensitive areas that can be used to recognize the same area when it is encountered again.
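One simple information-theoretic salience measure in this spirit scores an image window by how much its local intensity histogram diverges from the image-wide histogram, so windows that "stand out from their surroundings" score high regardless of landmark type. The sketch below is illustrative only (window size, bin count, and the KL-divergence score are assumptions, not the method's published details).

```python
import numpy as np

def window_salience(image, win=8, bins=16):
    """Score each non-overlapping win x win window of a grayscale image
    by the KL divergence of its intensity histogram from the global
    histogram; statistically unusual windows get high scores."""
    eps = 1e-9
    edges = np.linspace(image.min(), image.max() + eps, bins + 1)
    gl, _ = np.histogram(image, bins=edges)
    gl = gl / gl.sum() + eps                     # smoothed global distribution
    H, W = image.shape
    scores = {}
    for i in range(0, H - win + 1, win):
        for j in range(0, W - win + 1, win):
            patch = image[i:i + win, j:j + win]
            h, _ = np.histogram(patch, bins=edges)
            h = h / h.sum() + eps                # smoothed local distribution
            scores[(i, j)] = float(np.sum(h * np.log(h / gl)))  # KL divergence
    return scores
```

Because the score depends only on statistical rarity, the same function flags a bright crater rim on dark plains and a dark dust-devil track on bright terrain without any class-specific tuning.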

# Capacity Maximizing Constellations

### Locations and bit labels of constellation points are optimized jointly.

Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. (As used here, “constellation” signifies, with respect to a signal-modulation scheme, discrete amplitude and/or phase points corresponding to symbols to be transmitted.) Theoretically, in comparison with traditional constellations, these constellations enable the communication systems in which they are used to more closely approach Shannon limits on channel capacities. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas.
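Comparing a candidate constellation against the Shannon limit requires evaluating the mutual information it achieves on the AWGN channel, which can be estimated by Monte Carlo as sketched below. This is an illustrative evaluation tool (function name, defaults, and equiprobable-symbol assumption are ours), not the joint point-location/bit-label optimizer described above.

```python
import numpy as np

def awgn_mutual_info(points, snr_db, n=20000, seed=0):
    """Monte-Carlo estimate, in bits per symbol, of the mutual information
    of a complex constellation with equiprobable symbols on the AWGN
    channel, using I = log2(M) + E[log(p(y|x)/sum_j p(y|x_j))]/log 2."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=complex)
    pts = pts / np.sqrt(np.mean(np.abs(pts) ** 2))   # unit average energy
    M = len(pts)
    sigma2 = 10.0 ** (-snr_db / 10.0)                # noise variance at this SNR
    x_idx = rng.integers(0, M, n)                    # transmitted symbols
    y = pts[x_idx] + (rng.normal(0, np.sqrt(sigma2 / 2), n)
                      + 1j * rng.normal(0, np.sqrt(sigma2 / 2), n))
    d2 = np.abs(y[:, None] - pts[None, :]) ** 2      # distance to every point
    num = -d2[np.arange(n), x_idx] / sigma2          # log-lik. of sent symbol
    den = np.logaddexp.reduce(-d2 / sigma2, axis=1)  # log-sum over all symbols
    return np.log2(M) + np.mean(num - den) / np.log(2.0)
```

Sweeping such an estimate over SNR for two constellations makes gains like the reported >1 dB visible as a horizontal shift between their mutual-information curves.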

# Processing Images of Craters for Spacecraft Navigation

A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized and processed by the algorithm.

# Software Tool Integrating Data Flow Diagrams and Petri Nets

Data Flow Diagram – Petri Net (DFPN) is a software tool for analyzing other software to be developed. The full name of this program reflects its design, which combines the benefit of data-flow diagrams (which are typically favored by software analysts) with the power and precision of Petri-net models, without requiring specialized Petri-net training. (A Petri net is a particular type of directed graph, a description of which would exceed the scope of this article.)
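For readers unfamiliar with the model, the essentials of a Petri net fit in a few lines: places hold tokens, and a transition fires when every input place has a token, consuming one from each input and depositing one in each output. The sketch below illustrates that firing rule only; it is a hypothetical minimal model, not DFPN's internal representation.

```python
class PetriNet:
    """Minimal Petri net: a directed graph of places and transitions.
    The marking maps each place to its token count."""

    def __init__(self, marking):
        self.marking = dict(marking)       # place -> token count
        self.transitions = {}              # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        """A transition is enabled when every input place holds a token."""
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        """Consume one token from each input place, add one to each output."""
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
```

The appeal for analyzing software is that questions such as "can these two activities ever run concurrently?" reduce to reachability questions about markings, which tools can check mechanically.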

# Reducing the Volume of NASA Earth-Science Data

A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data.
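The shape of such a reduction, raw records in, centroids and membership out, can be sketched with plain k-means. Note the hedge: k-means is only an illustrative stand-in (it implicitly favors compact, spherical clusters), whereas the NASA program avoids assuming a specific distribution type; all names and defaults below are ours.

```python
import numpy as np

def reduce_to_clusters(data, k, iters=50, seed=0):
    """Reduce a dataset to k representative clusters, returning the
    centroids and each point's cluster membership (Lloyd's k-means)."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    centroids = data[rng.choice(len(data), k, replace=False)]  # random init
    for _ in range(iters):
        # assign every point to its nearest centroid
        d = np.linalg.norm(data[:, None] - centroids[None, :], axis=2)
        member = d.argmin(axis=1)
        # move each centroid to the mean of its members
        for c in range(k):
            if np.any(member == c):
                centroids[c] = data[member == c].mean(axis=0)
    return centroids, member
```

Downstream analysis then works on the k centroids plus membership counts rather than the full record set, which is the data-volume reduction the program provides.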

# AutoGen Version 5.0

Version 5.0 of the AutoGen software has been released. Previous versions, variously denoted “Autogen” and “autogen,” were reported in two articles: “Automated Sequence Generation Process and Software” (NPO-30746), Software Tech Briefs (Special Supplement to NASA Tech Briefs), September 2007, page 30, and “Autogen Version 2.0” (NPO-41501), NASA Tech Briefs, Vol. 31, No. 10 (October 2007), page 58.