Information Technology & Software

Mission Reliability Estimation for Repairable Robot Teams

An analytical model supports autonomous and intelligent control systems capable of operating multiple distributed planetary surface vehicles for scouting or construction.

A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules, in order to configure a multi-robot team with high reliability at minimal cost. To build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created, including a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for generating legitimate module combinations based on mission specifications and selecting the best of the resulting combinations from a cost-reliability standpoint.
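The brief does not specify how module characteristics are combined; the Python sketch below assumes a series-reliability, additive-cost model and hypothetical module data, and simply enumerates module combinations to select the cheapest one meeting a mission reliability target.

```python
from itertools import product

# Hypothetical reliability-cost characteristics for candidate modules
# (values are illustrative only, not taken from the brief).
MODULE_OPTIONS = {
    "mobility": [{"name": "wheeled", "reliability": 0.95, "cost": 10},
                 {"name": "tracked", "reliability": 0.98, "cost": 18}],
    "sensing":  [{"name": "basic_cam", "reliability": 0.97, "cost": 5},
                 {"name": "lidar",     "reliability": 0.99, "cost": 14}],
    "comms":    [{"name": "uhf",       "reliability": 0.96, "cost": 4},
                 {"name": "x_band",    "reliability": 0.995, "cost": 9}],
}

def combine(modules):
    """Assumed series-reliability model: the robot works only if every module
    works; costs are assumed additive."""
    reliability, cost = 1.0, 0.0
    for m in modules:
        reliability *= m["reliability"]
        cost += m["cost"]
    return reliability, cost

def best_configuration(target_reliability):
    """Enumerate module combinations and return the cheapest one that meets
    the mission reliability target."""
    best = None
    for combo in product(*MODULE_OPTIONS.values()):
        reliability, cost = combine(combo)
        if reliability >= target_reliability and (best is None or cost < best[2]):
            best = (combo, reliability, cost)
    return best

if __name__ == "__main__":
    config, rel, cost = best_configuration(target_reliability=0.90)
    print([m["name"] for m in config], round(rel, 4), cost)
```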

Posted in: Briefs, TSP, Information Sciences, Mathematical analysis, Fleet management, Cost analysis, Robotics, Reliability

Algorithm for Stabilizing a POD-Based Dynamical System

This algorithm provides a new way to improve the accuracy and asymptotic behavior of a low-dimensional system based on the proper orthogonal decomposition (POD). Given a data set representing the evolution of a system of partial differential equations (PDEs), such as the Navier-Stokes equations for incompressible flow, one may obtain a low-dimensional model in the form of ordinary differential equations (ODEs) intended to model the dynamics of the flow. Temporal sampling of the direct numerical simulation of the PDEs produces a spatial time series. The POD extracts the temporal and spatial eigenfunctions of this data set. Truncating to retain only the most energetic modes, followed by Galerkin projection of these modes onto the PDEs, yields a dynamical system of ordinary differential equations for the time-dependent behavior of the flow.
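As a rough illustration of the snapshot-based POD step described above, the sketch below computes spatial modes and temporal coefficients from a snapshot matrix via the singular value decomposition; the random snapshot data and 99% energy cutoff are placeholders, and the Galerkin projection itself is not shown.

```python
import numpy as np

# Columns of `snapshots` are spatial samples of the flow field at successive
# times (a placeholder random field stands in for the DNS output).
n_space, n_time = 1000, 200
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((n_space, n_time))

# POD via singular value decomposition: left singular vectors are the spatial
# modes, singular values measure each mode's energy.
U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(axis=1, keepdims=True),
                         full_matrices=False)

# Truncate to the most energetic modes (here, enough to capture 99% of energy).
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
modes = U[:, :r]                          # spatial eigenfunctions
amplitudes = np.diag(s[:r]) @ Vt[:r, :]   # temporal coefficients

# Galerkin projection of these modes onto the governing PDEs (not shown here)
# would then yield the low-dimensional ODE system for the mode amplitudes.
print(f"retained {r} of {n_time} modes")
```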

Posted in: Briefs, Information Sciences, Computational fluid dynamics, Mathematical analysis, Mathematical models

Parameterizing Coefficients of a POD-Based Dynamical System

This parameterization enables accurate prediction of temporal evolution of certain flow dynamics.

A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers.
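The brief does not describe the form of the parameterization; one plausible reading, sketched below with invented coefficient values, is to fit each coefficient of the dynamical system as a smooth function of Reynolds number from models built at a few training values of Re, then evaluate that fit at an intermediate Re.

```python
import numpy as np

# Reynolds numbers at which low-dimensional models were (hypothetically) built,
# and the value of one ODE coefficient in each of those models.
re_train = np.array([100.0, 150.0, 200.0, 250.0])
coeff_train = np.array([0.42, 0.35, 0.31, 0.29])   # illustrative values only

# Parameterize the coefficient as a low-order polynomial in Reynolds number.
poly = np.polynomial.Polynomial.fit(re_train, coeff_train, deg=2)

# Evaluate the parameterized coefficient at an untrained Reynolds number to
# assemble a dynamical system there without redoing the POD/Galerkin step.
re_new = 175.0
print(f"coefficient at Re={re_new}: {poly(re_new):.4f}")
```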

Posted in: Briefs, Information Sciences, Computational fluid dynamics, Mathematical models

Confidence-Based Feature Acquisition

Selective acquisition of data values enables higher classification performance at lower cost.

Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when data are missing at both training (learning) and test (deployment) time. To train a machine-learning classifier, data are encoded with a set of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is Mars rover exploration, in which the features are obtained from a variety of different instruments with different power-consumption and integration-time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost).
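A minimal sketch of such a decision rule is shown below: classify with the features already in hand, and pay to acquire an additional feature only when the classifier's confidence falls below a threshold. The threshold, costs, toy data, and use of scikit-learn are illustrative assumptions, not the published CFA algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Toy data: two cheap features already measured, one costly feature that can
# be acquired on demand (all values invented for illustration).
X_cheap = rng.standard_normal((200, 2))
x_costly = (X_cheap[:, 0] > 0).astype(float) + 0.1 * rng.standard_normal(200)
y = (X_cheap[:, 0] + x_costly > 0.5).astype(int)

clf = RandomForestClassifier(random_state=0).fit(X_cheap[:150], y[:150])

ACQUISITION_COST = 1.0       # e.g., instrument power / integration time
CONFIDENCE_THRESHOLD = 0.75  # acquire only when the classifier is unsure

total_cost = 0.0
for i in range(150, 200):
    proba = clf.predict_proba(X_cheap[i:i + 1])[0]
    if proba.max() < CONFIDENCE_THRESHOLD:
        # Low confidence: pay the cost and "measure" the additional feature.
        # (A full implementation would then reclassify with a second model
        #  trained on the richer feature set.)
        total_cost += ACQUISITION_COST

print(f"features acquired for {int(total_cost)} of 50 test items")
```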

Posted in: Briefs, Information Sciences, Artificial intelligence, Cost analysis

Universal Decoder for PPM of any Order

Complexity can be reduced and flexibility increased, at small cost in performance.

A recently developed algorithm for demodulation and decoding of a pulse-position-modulation (PPM) signal is suitable as a basis for designing a single hardware decoding apparatus capable of handling any PPM order. Hence, this algorithm offers greater flexibility and lower cost in comparison with prior such algorithms, which necessitate a distinct hardware implementation for each PPM order. In addition, in comparison with the prior algorithms, the present algorithm entails less decoding complexity at large orders.
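For orientation, the sketch below shows order-agnostic hard-decision PPM demodulation: within each frame, the symbol is taken to be the slot with the largest detected count, for whatever order is configured. This is a simplification and does not reproduce the decoding algorithm described in the brief; it only illustrates how a single routine can handle any PPM order.

```python
import numpy as np

def ppm_demodulate(slot_counts, order):
    """Hard-decision demodulation of a PPM signal of any order.

    `slot_counts` is a 1-D sequence of detected photon (or energy) counts,
    interpreted as consecutive frames of `order` slots; the transmitted
    symbol in each frame is taken to be the slot with the largest count.
    """
    counts = np.asarray(slot_counts, dtype=float)
    n_frames = counts.size // order
    frames = counts[: n_frames * order].reshape(n_frames, order)
    return np.argmax(frames, axis=1)   # one symbol (0..order-1) per frame

# The same routine handles any PPM order; only the `order` argument changes.
signal = [0, 5, 1, 0,   1, 0, 6, 0]     # two noisy 4-PPM frames (invented data)
print(ppm_demodulate(signal, order=4))  # -> [1 2]
```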

Posted in: Briefs, Information Sciences, Mathematical models, Computer software and hardware, Cryptography

Metal Vapor Arcing Risk Assessment Tool

The Tin Whisker Metal Vapor Arcing Risk Assessment Tool has been designed to evaluate the risk of metal vapor arcing and to facilitate an informed risk-disposition decision. Users can evaluate a system without having to open up the hardware. This process allows for investigating the components actually at risk rather than spending time and money analyzing every component. The tool indicates a risk level and provides direction for appropriate action and documentation.

Posted in: Briefs, Information Sciences, CAD / CAM / CAE, Metals, Refractory materials, Reliability, Risk assessments

Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery

A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of the past values of the difference between this initial prediction and the actual sample value. In the second stage, these counts are combined with an adaptively updated weight function, intended to capture information about data regularities introduced by the calibration process, to form the final predicted value. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding.
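A minimal sketch of the first prediction stage and the residual tabulation might look like the following; the particular neighbour-averaging predictor is an assumption, and the calibration-aware second stage and the adaptive arithmetic coder are not reproduced.

```python
import numpy as np
from collections import Counter

def first_stage_residuals(cube):
    """First-stage prediction for a calibrated hyperspectral cube shaped
    (bands, rows, cols): predict each sample from spatial and spectral
    neighbours with a simple (assumed) linear predictor, and tabulate counts
    of the resulting prediction residuals."""
    bands, rows, cols = cube.shape
    residual_counts = Counter()
    residuals = np.zeros_like(cube)
    for b in range(1, bands):
        for r in range(1, rows):
            for c in range(1, cols):
                prediction = (cube[b, r, c - 1] + cube[b, r - 1, c]
                              + cube[b - 1, r, c]) // 3
                residual = cube[b, r, c] - prediction
                residuals[b, r, c] = residual
                residual_counts[int(residual)] += 1
    # The second stage would combine these counts with an adaptively updated
    # weight function before the residuals are arithmetic-coded.
    return residuals, residual_counts

if __name__ == "__main__":
    demo = np.arange(2 * 4 * 4, dtype=np.int32).reshape(2, 4, 4)
    _, counts = first_stage_residuals(demo)
    print(counts.most_common(3))
```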

Posted in: Briefs, Information Sciences, Mathematical models, Imaging and visualization, Data management

ICER-3D Hyperspectral Image Compression Software

Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume.
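The progressive behavior can be pictured with a much-simplified sketch: take a 3D wavelet transform of an image cube, keep only the largest coefficients (a crude stand-in for truncating the compressed stream), and reconstruct. PyWavelets is used here for convenience; ICER-3D's actual transform, bit-plane coding, and error containment are not reproduced.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
cube = rng.standard_normal((32, 64, 64))   # bands x rows x cols (placeholder)

# 3D wavelet decomposition of the hyperspectral cube.
coeffs = pywt.wavedecn(cube, wavelet="haar", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

# Crude stand-in for truncating the compressed stream: keep only the largest
# 5% of coefficients and reconstruct at the corresponding reduced fidelity.
threshold = np.quantile(np.abs(arr), 0.95)
arr_truncated = np.where(np.abs(arr) >= threshold, arr, 0.0)

reconstructed = pywt.waverecn(
    pywt.array_to_coeffs(arr_truncated, slices, output_format="wavedecn"),
    wavelet="haar")
print("RMS error:", float(np.sqrt(np.mean((cube - reconstructed) ** 2))))
```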

Posted in: Briefs, Information Sciences, Mathematical models, Computer software and hardware, Imaging and visualization

Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

A context-modeling subalgorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible.
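As a rough illustration of context modeling for entropy coding, the sketch below groups each binary coding decision by how many neighbouring coefficients are already significant and keeps per-context counts from which probability estimates are drawn; this context definition is an assumption, not the actual ICER-3D context modeler.

```python
from collections import defaultdict

class ContextModeler:
    """Adaptive per-context probability estimates for binary decisions
    (e.g., 'is this wavelet coefficient significant?')."""

    def __init__(self):
        # Laplace-style initial counts of (zeros, ones) for each context.
        self.counts = defaultdict(lambda: [1, 1])

    def probability_of_one(self, context):
        zeros, ones = self.counts[context]
        return ones / (zeros + ones)

    def update(self, context, bit):
        self.counts[context][bit] += 1

def context_of(neighbour_significance):
    """Context = number of significant neighbours (clamped); in practice the
    spatial and spectral neighbours of the coefficient would be consulted."""
    return min(sum(neighbour_significance), 3)

# Usage: the entropy coder asks the modeler for P(bit=1) in the current
# context, codes the bit, then updates the modeler with the observed value.
modeler = ContextModeler()
ctx = context_of([1, 0, 1])
p1 = modeler.probability_of_one(ctx)
modeler.update(ctx, 1)
print(ctx, p1)
```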

Posted in: Briefs, Information Sciences, Mathematical models, Imaging and visualization

Methodology and Software for Designing Data-Processing ICs

A main goal is to reduce labor and errors in the design process.

A methodology and software to implement the methodology are under development in an effort to automate at least part of the process of designing integrated-circuit (IC) chips that perform complex data-processing functions. An important element of the methodology is reuse of prior designs, with modifications as required to optimize for a specific application. This minimizes a labor-intensive, error-prone part of the design process. The prior designs include what are known in the art as intellectual-property (IP) cores — that is, designs of functional blocks [e.g., random-access memories (RAMs), communications circuits, processors] that are incorporated into larger designs. Circuits may be optimized with respect to design goals, such as reducing chip size, reducing power consumption, and/or increasing radiation hardness.

Posted in: Briefs, TSP, Information Sciences, Design processes, Computer software and hardware, Integrated circuits, Automation
