Information Technology & Software

Automated CFD for Generation of Airfoil Performance Tables

A method of automated computational fluid dynamics (CFD) has been invented for the generation of performance tables for an object subject to fluid flow. The method is applicable to the generation of tables that summarize the effects of two-dimensional flows about airfoils and that are in a format known in the art as “C81.” (A C81 airfoil performance table is a text file that lists coefficients of lift, drag, and pitching moment of an airfoil as functions of angle of attack for a range of Mach numbers.) The method makes it possible to efficiently generate and tabulate data from simulations of flows for parameter values spanning all operational ranges of actual or potential interest. In so doing, the method also enables filling of gaps and resolution of inconsistencies in C81 tables generated previously from incomplete experimental data or from theoretical calculations that involved questionable assumptions.
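
A minimal sketch of the sweep-and-tabulate loop described above follows; the CFD solver wrapper, parameter ranges, and data layout are placeholders rather than the actual tool, and a real C81 file additionally uses a specific fixed-width text format.

# Minimal sketch of the table-generation loop, assuming a hypothetical solver
# wrapper (run_airfoil_cfd) that returns lift, drag, and pitching-moment
# coefficients for one flow condition. Not the actual NASA tool.
import itertools

def run_airfoil_cfd(mach, alpha_deg):
    """Hypothetical wrapper around a 2-D CFD solver; returns (cl, cd, cm)."""
    raise NotImplementedError("replace with a call to the CFD code of choice")

def build_c81_like_table(machs, alphas_deg):
    """Sweep Mach number and angle of attack, collecting the three
    coefficients into a nested dictionary keyed by Mach, then angle."""
    table = {}
    for mach, alpha in itertools.product(machs, alphas_deg):
        cl, cd, cm = run_airfoil_cfd(mach, alpha)
        table.setdefault(mach, {})[alpha] = (cl, cd, cm)
    return table

if __name__ == "__main__":
    machs = [0.3, 0.5, 0.7]            # example Mach sweep
    alphas = list(range(-10, 21, 2))   # example angle-of-attack sweep, deg
    # table = build_c81_like_table(machs, alphas)  # enable once a solver is wired in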

Posted in: Briefs, TSP, Information Sciences, Airframes, Computational fluid dynamics, Mathematical models, Documentation

A Data Matrix Method for Improving the Quantification of Element Percentages of SEM/EDX Analysis

A simple two-dimensional M×N data matrix, combined with appropriate sample preparation, enables the microanalyst to see below the noise floor of the element percentages reported by SEM/EDX (scanning electron microscopy/energy-dispersive x-ray) analysis, thus yielding more meaningful data.
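
The brief does not spell out how the data matrix is used, so the sketch below only illustrates one plausible reading: repeated EDX element-percentage readings are arranged as an M×N matrix (M spectra by N elements) and averaged column-wise, which suppresses statistical noise roughly as the square root of M.

# Illustrative sketch only: assumes an M x N matrix of repeated EDX readings
# (rows = repeated spectra, columns = elements) and uses column averaging to
# suppress statistical noise in the reported element percentages.
import numpy as np

def average_element_percentages(readings):
    """Return per-element mean and standard error of the mean for an M x N
    matrix of repeated element-percentage readings."""
    readings = np.asarray(readings, dtype=float)
    mean = readings.mean(axis=0)
    sem = readings.std(axis=0, ddof=1) / np.sqrt(readings.shape[0])
    return mean, sem

# Example: 5 repeated spectra of 3 elements (wt %)
demo = [[1.2, 30.1, 68.7],
        [0.9, 30.4, 68.7],
        [1.1, 29.8, 69.1],
        [1.3, 30.2, 68.5],
        [1.0, 30.0, 69.0]]
mean, sem = average_element_percentages(demo)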

Posted in: Briefs, Information Sciences, Mathematical models, Imaging and visualization, Microscopy, Noise, Test equipment and instrumentation

Improved Model of a Mercury Ring Damper

A short document discusses the general problem of mathematical modeling of the three-dimensional rotational dynamics of rigid bodies and the use of Euler parameters to eliminate the singularities occasioned by the use of Euler angles in such modeling. The document goes on to characterize a Hamiltonian model, developed by the authors, that utilizes the Euler parameters and, hence, is suitable for use in computational simulations that involve arbitrary rotational motion. In this formulation, unlike in prior Euler-parameter-based formulations, there are no algebraic constraints. This formulation includes a general potential-energy function, incorporates a minimum set of momentum variables, and takes an explicit state-space form convenient for numerical implementation.
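
As a point of reference for the notation, the sketch below shows the standard conversion from a 3-2-1 (yaw-pitch-roll) Euler-angle sequence to the four Euler parameters (a unit quaternion). This is textbook kinematics, not the authors' Hamiltonian formulation, and the rotation sequence is assumed for illustration.

# Standard 3-2-1 Euler angles -> Euler parameters (unit quaternion).
# Illustrates why the four-parameter set has no gimbal-lock singularity.
from math import cos, sin

def euler_321_to_euler_parameters(yaw, pitch, roll):
    """Angles in radians; returns (q0, q1, q2, q3) with q0 the scalar part."""
    cy, sy = cos(yaw / 2), sin(yaw / 2)
    cp, sp = cos(pitch / 2), sin(pitch / 2)
    cr, sr = cos(roll / 2), sin(roll / 2)
    q0 = cr * cp * cy + sr * sp * sy
    q1 = sr * cp * cy - cr * sp * sy
    q2 = cr * sp * cy + sr * cp * sy
    q3 = cr * cp * sy - sr * sp * cy
    return q0, q1, q2, q3   # unit-norm by construction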

Posted in: Briefs, TSP, Information Sciences, Computer simulation, Design processes, Mathematical models

Progressive Classification Using Support Vector Machines

An approximate classification is generated rapidly, then iteratively refined over time.

An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis.
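
As a conceptual illustration (not the flight implementation), the sketch below evaluates an RBF-kernel SVM decision function one support vector at a time, so a coarse label is available after a few terms and the estimate refines as the compute budget grows; the kernel choice, coefficients, and ordering are assumptions made for the example.

# Progressive evaluation of an SVM decision function, assuming an RBF kernel
# and precomputed support vectors with their dual coefficients (alpha_i * y_i).
import numpy as np

def rbf_kernel(x, sv, gamma=1.0):
    return np.exp(-gamma * np.sum((x - sv) ** 2))

def progressive_svm_decision(x, support_vectors, dual_coefs, bias, budget):
    """Evaluate at most `budget` support vectors; sign(value) is the
    (approximate) class label, which sharpens as the budget grows."""
    value = bias
    for sv, alpha_y in zip(support_vectors[:budget], dual_coefs[:budget]):
        value += alpha_y * rbf_kernel(x, sv)
    return value

# Usage idea: sort the support vectors by |dual coefficient| so the most
# influential ones are evaluated first, then call progressive_svm_decision
# again with a larger budget whenever more CPU time becomes available.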

Posted in: Briefs, TSP, Information Sciences, Mathematical models, Data management

Active Learning With Irrelevant Examples

Classification algorithms can be trained to recognize and reject irrelevant data.

An improved active-learning method has been devised for training data classifiers. One example of a data classifier is the algorithm used by the United States Postal Service since the 1960s to recognize scans of handwritten digits for processing zip codes. Active-learning algorithms enable rapid training while minimizing the time human experts must spend supplying training examples, that is, correctly classified (labeled) input data. They function by identifying the examples that would be most profitable for a human expert to label. The goal is to maximize classifier accuracy while minimizing the number of examples the expert must label.
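
The sketch below shows a generic pool-based active-learning loop using uncertainty sampling; it assumes a scikit-learn-style classifier with fit/predict_proba and a human "oracle" function, and it omits the brief's key extension of recognizing and rejecting irrelevant examples.

# Generic uncertainty-sampling active-learning loop (illustration only).
# Assumes the seed labels already contain at least two classes.
import numpy as np

def active_learning_loop(classifier, labeled_X, labeled_y, pool_X, oracle, rounds=10):
    """Each round: train, find the pool example the model is least sure about,
    ask the human oracle to label it, and add it to the training set."""
    labeled_X, labeled_y = list(labeled_X), list(labeled_y)
    pool = list(pool_X)
    for _ in range(rounds):
        if not pool:
            break
        classifier.fit(np.array(labeled_X), np.array(labeled_y))
        probs = classifier.predict_proba(np.array(pool))
        # margin between the top two class probabilities; small = ambiguous
        margins = probs.max(axis=1) - np.sort(probs, axis=1)[:, -2]
        i = int(np.argmin(margins))
        x = pool.pop(i)
        labeled_X.append(x)
        labeled_y.append(oracle(x))   # human expert supplies the label
    return classifier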

Posted in: Briefs, Information Sciences, Mathematical models, Artificial intelligence, Reliability

Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm

Fast analysis rejects frames at the first sign of unacceptable quality instead of waiting until the full analysis is complete.

A Shack-Hartmann sensor (SHS) is an optical instrument consisting of a lenslet array and a camera. It is widely used for wavefront sensing in optical testing and astronomical adaptive optics. The camera is placed at the focal plane of the lenslet array and points at a star or any other point source. The captured image is an array of spot images. When the wavefront error at the lenslet array changes, each spot shifts measurably from its original position. Determining the shifts of the spot images from their reference points reveals the extent of the wavefront error.
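
The sketch below illustrates the basic spot-shift measurement under simplifying assumptions: each lenslet's subimage is reduced to an intensity-weighted centroid and compared with a stored reference position. The extended-scene correlation step and the frame-rejection logic of the actual algorithm are not shown.

# Spot-shift measurement for a Shack-Hartmann sensor (simplified sketch).
# Shifts of the spot centroids are proportional to local wavefront slopes.
import numpy as np

def spot_centroid(subimage):
    """Intensity-weighted centroid (row, col) of one lenslet's spot image."""
    subimage = np.asarray(subimage, dtype=float)
    total = subimage.sum()
    rows, cols = np.indices(subimage.shape)
    return (rows * subimage).sum() / total, (cols * subimage).sum() / total

def spot_shifts(subimages, reference_centroids):
    """Return the (d_row, d_col) shift of each spot from its reference point."""
    return [tuple(np.subtract(spot_centroid(img), ref))
            for img, ref in zip(subimages, reference_centroids)]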

Posted in: Briefs, Information Sciences, Mathematical models, Optics, Sensors and actuators

Integrated Risk and Knowledge Management Program — IRKM-P

Program helps people do work more effectively.

The NASA Exploration Systems Mission Directorate (ESMD) IRKM-P tightly couples risk management and knowledge management processes and tools to produce an effective “modern” work environment. IRKM-P objectives include: (1) to learn lessons from past and current programs (Apollo, Space Shuttle, and the International Space Station); (2) to generate and share new engineering design, operations, and management best practices through pre-existing Continuous Risk Management (CRM) procedures and knowledge-management practices; and (3) to infuse those lessons and best practices into current activities. The conceptual framework of the IRKM-P is based on the assumption that risks highlight potential knowledge gaps that might be mitigated through one or more knowledge management practices or artifacts. These same risks also serve as cues for collection of knowledge — particularly, knowledge of technical or programmatic challenges that might recur.

Posted in: Briefs, TSP, Information Sciences, Data management, Risk management, Technical review

Constructing LDPC Codes From Loop-Free Encoding Modules

High-speed iterative decoders can readily be implemented in hardware.

A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in “Accumulate-Repeat-Accumulate-Accumulate Codes” (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity-check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbo-like codes that have projected-graph (protograph) representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.
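
As an illustration of the building blocks named above (not the actual construction, which also involves interleaving, puncturing, and protograph design), the sketch below implements two of the simple loop-free modules over GF(2), an accumulator and a repetition code, and chains them into a toy repeat-accumulate encoder.

# Two simple loop-free encoding modules over GF(2): an accumulator and a
# repetition code. Chaining such modules is the kind of serial/parallel
# concatenation the construction uses; this is only an illustration.
def accumulate(bits):
    """Accumulator: output bit i is the XOR (mod-2 running sum) of inputs 0..i."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def repeat(bits, q=3):
    """Repetition code: repeat each input bit q times."""
    return [b for b in bits for _ in range(q)]

# Example: a toy repeat-accumulate encoding of a 4-bit message
message = [1, 0, 1, 1]
codeword = accumulate(repeat(message, q=3))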

Posted in: Briefs, Information Sciences

LDPC Codes With Minimum Distance Proportional to Block Size

These codes offer both low decoding thresholds and low error floors.

Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.
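
To make the key property concrete, the sketch below computes the minimum Hamming distance of a small binary linear code by brute force; the generator matrix is an arbitrary toy example (a (7,4) Hamming code), not one of the LDPC codes described, and the exhaustive search is practical only at toy scale.

# Minimum Hamming distance of a binary linear code = smallest weight of any
# nonzero codeword. Brute-force illustration for a toy generator matrix.
from itertools import product

def min_distance(generator_rows):
    """Minimum Hamming distance of the binary linear code spanned (mod 2) by
    the given generator-matrix rows."""
    k = len(generator_rows)
    n = len(generator_rows[0])
    best = n
    for message in product([0, 1], repeat=k):
        if not any(message):
            continue
        codeword = [0] * n
        for bit, row in zip(message, generator_rows):
            if bit:
                codeword = [c ^ r for c, r in zip(codeword, row)]
        best = min(best, sum(codeword))
    return best

# Toy example: a (7,4) Hamming code generator matrix, minimum distance 3
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
assert min_distance(G) == 3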

Posted in: Briefs, Information Sciences
