Special Coverage

Mechanoresponsive Healing Polymers
Variable Permeability Magnetometer Systems and Methods for Aerospace Applications
Evaluation Standard for Robotic Research
Small Robot Has Outstanding Vertical Agility
Smart Optical Material Characterization System and Method
Lightweight, Flexible Thermal Protection System for Fire Protection
High-Precision Electric Gate for Time-of-Flight Ion Mass Spectrometers
Polyimide Wire Insulation Repair System
Distributed Propulsion Concepts and Superparamagnetic Energy Harvesting Hummingbird Engine

AMMOS-PDS Pipeline Service (APPS) — Label Design Tool (LDT)

NASA’s Jet Propulsion Laboratory, Pasadena, California

A software program builds PDS4 science product labels (metadata) and automatically generates their descriptions as part of the software interface specification (SIS) document. The software lets a mission system engineer interact programmatically with the PDS4 information model and retrieve science product metadata through graphical user interfaces (GUIs). Because PDS4 is a newly defined standard, most of the work this software suite simplifies is currently done manually; the tool therefore greatly streamlines the creation of SIS documents for science instruments and supports the definition and design of PDS4 science data archive models for generating PDS4-compliant labels.
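The workflow described above — building a label and deriving its documentation from the same model — can be sketched as follows. This is a hypothetical illustration, not the actual LDT code or the full PDS4 schema; the element names below are simplified stand-ins.

```python
import xml.etree.ElementTree as ET

def build_label(product_id, attributes):
    """Build a minimal PDS4-style XML label (illustrative only)."""
    root = ET.Element("Product_Observational")
    ident = ET.SubElement(root, "Identification_Area")
    ET.SubElement(ident, "logical_identifier").text = product_id
    obs = ET.SubElement(root, "Observation_Area")
    for name, value in attributes.items():
        ET.SubElement(obs, name).text = str(value)
    return root

def describe_label(root):
    """Generate a plain-text description of each populated label field,
    the way an SIS document section might enumerate them."""
    lines = []
    for elem in root.iter():
        if elem.text:
            lines.append(f"{elem.tag}: {elem.text}")
    return "\n".join(lines)

label = build_label("urn:nasa:pds:example:product_1",
                    {"target_name": "Mars", "instrument": "camera"})
print(describe_label(label))
```

The key idea the brief describes is that label and documentation come from one model, so the two can never drift apart.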



Ontological System for Context Artifacts and Resources (OSCAR)

NASA’s Jet Propulsion Laboratory, Pasadena, California

Current data systems catalog and link data using a synthetic modeling approach that requires considerable domain knowledge to interact with the system, including which keywords to search for and how data artifacts are linked. OSCAR offers a semantic solution to data management using ontology and reasoning: information is automatically linked according to an internal ontology, an internal ontological reasoning engine handles information inference, and artifacts are linked by information mined from the input metadata and reasoned over according to the internal ontology.
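The inference step can be illustrated with a toy rule engine. This is an assumed sketch of ontology-driven linking in general, not OSCAR's actual data model or rule language; the relations and artifact names are invented for illustration.

```python
# Composition rule: if A "produces" B and B "supports" C,
# infer that A "contributes_to" C.
RULES = {
    ("produces", "supports"): "contributes_to",
}

facts = {
    ("MastCam", "produces", "image_set_7"),
    ("image_set_7", "supports", "geology_report"),
}

def infer(facts, rules):
    """Apply composition rules to the fact set until a fixed point:
    no interaction with the engine requires knowing the link chains,
    because new links are derived automatically."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(derived):
            for (b2, r2, c) in list(derived):
                if b == b2 and (r1, r2) in rules:
                    new = (a, rules[(r1, r2)], c)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

links = infer(facts, RULES)
```

After inference, the instrument is linked to the report even though no input metadata stated that relationship directly.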




SPSCGR

NASA’s Jet Propulsion Laboratory, Pasadena, California

SPSCGR generates a contact graph suitable for use by the ION (Interplanetary Overlay Network) DTN (Delay/Disruption Tolerant Network) implementation from data provided by the JPL SPS (Service Preparation System) Portal. Prior to SPSCGR, there was no way for a mission or other entity to route DTN traffic across the DSN without manually constructing a contact graph; SPSCGR automates that construction.
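The automation the brief describes can be sketched as a schedule-to-plan translation. This is a hypothetical illustration of the general idea, not the real SPS Portal data format or SPSCGR's output; the directive strings below follow ION's contact-plan style, but node numbers and rates are invented.

```python
def passes_to_contact_plan(passes):
    """Each pass is (start_s, end_s, ground_node, craft_node, rate_bps).
    Emit contact and range directives in both directions, since a
    contact graph needs entries for each transmission direction."""
    lines = []
    for start, end, gs, sc, rate in passes:
        for a, b in ((gs, sc), (sc, gs)):
            lines.append(f"a contact +{start} +{end} {a} {b} {rate}")
            lines.append(f"a range +{start} +{end} {a} {b} 1")
    return lines

# One hour-long pass between ground node 10 and spacecraft node 42.
plan = passes_to_contact_plan([(0, 3600, 10, 42, 256000)])
for line in plan:
    print(line)
```

Doing this by hand for a full DSN schedule is exactly the tedious, error-prone step the tool removes.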



Computer Aided Design of Suspension Mechanisms

Automobile suspension mechanisms have to date been designed using two-dimensional, graphics-oriented methods. Computer-aided design has allowed many two-dimensional mechanisms to be designed far more accurately, but this has not carried over to suspension mechanisms, which are not two-dimensional but three-dimensional.



Software Framework for Control and Observation in Distributed Environments (CODE)

Ames Research Center, Moffett Field, California

CODE is a framework for control and observation in distributed environments. The framework enables the observation of resources (computer systems, storage systems, networks, and so on), services (database servers, application execution servers, file transfer servers, and so on), and applications, and provides support for the secure and scalable transmission of this observed information to programs that are interested in it. The framework also supports the secure execution of actions on remote computer systems so that a management program can respond to the observed data it receives. To assist in writing management programs, the framework interfaces to an existing expert system so that a user can define a set of rules for the expert system to reason on, instead of writing a large amount of code. The framework is modular and can be easily extended to incorporate new sensors to make observations, new actuators to perform actions, new communication protocols, and new security mechanisms. The software also includes several applications that show how the framework can be used.
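The modular shape described above — pluggable sensors, rule-driven decisions, pluggable actuators — can be sketched in miniature. This is an assumed illustration of the architecture, not the actual CODE API; a real deployment would add the secure transport and a full expert system in place of the simple rule table.

```python
class Framework:
    """Toy observation/control loop: sensors observe, rules decide,
    actuators act. Each part registers independently, mirroring the
    extensibility the framework is described as providing."""

    def __init__(self):
        self.sensors, self.actuators, self.rules = {}, {}, []

    def add_sensor(self, name, fn):        # new sensors plug in here
        self.sensors[name] = fn

    def add_actuator(self, name, fn):      # new actuators plug in here
        self.actuators[name] = fn

    def add_rule(self, sensor, predicate, actuator):
        self.rules.append((sensor, predicate, actuator))

    def step(self):
        """Observe every sensor, then fire each actuator whose rule
        matches the observation. Returns what was seen and done."""
        fired = []
        obs = {name: fn() for name, fn in self.sensors.items()}
        for sensor, predicate, actuator in self.rules:
            if predicate(obs[sensor]):
                self.actuators[actuator](obs[sensor])
                fired.append(actuator)
        return obs, fired

fw = Framework()
fw.add_sensor("disk_free_mb", lambda: 120)   # a resource observation
alerts = []
fw.add_actuator("alert", alerts.append)      # a management response
fw.add_rule("disk_free_mb", lambda v: v < 500, "alert")
observations, fired = fw.step()
```

The rule table stands in for the expert system: the management logic is data the user supplies, not code the user writes.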



Simple RunTime eXecutive (SRTX)

Marshall Space Flight Center, Alabama

Simple RunTime eXecutive (SRTX) software provides scheduling and publish/subscribe data-transfer services. The scheduler allows dynamic allocation of real-time periodic and asynchronous tasks across homogeneous multicore/multiprocessor systems. Most real-time systems assign tasks to specific cores on an a priori basis, and letting the operating system scheduler determine the best allocation of threads is not by itself a unique innovation. In SRTX, however, dynamic scheduling is coupled with a deterministic publish/subscribe data-transfer system that guarantees tasks process data deterministically, regardless of the number of processor cores in the system.
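One common way to make publish/subscribe deterministic is frame-boundary double-buffering: data published during one frame only becomes visible in the next, so results do not depend on which core ran which task first. The sketch below shows that mechanism as an assumption about how such a guarantee can be achieved, not as SRTX's actual implementation.

```python
class Topic:
    """Double-buffered pub/sub topic: readers always see the value
    committed at the last frame boundary, never a mid-frame write."""

    def __init__(self, initial=None):
        self._current = initial   # what subscribers read this frame
        self._pending = initial   # what publishers wrote this frame

    def publish(self, value):
        self._pending = value

    def read(self):
        return self._current

    def swap(self):
        """Called once per frame boundary by the executive."""
        self._current = self._pending

temp = Topic(initial=20.0)
temp.publish(25.0)
print(temp.read())   # same frame: subscribers still see 20.0
temp.swap()          # frame boundary
print(temp.read())   # next frame: subscribers see 25.0
```

Because visibility changes only at the swap, the outcome is identical whether the publisher and subscriber ran on one core or many.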



v-Anomica: A Fast Support Vector-Based Novelty Detection Technique

Ames Research Center, Moffett Field, California

Outlier or anomaly detection refers to the task of identifying abnormal or inconsistent patterns in a dataset. While they may seem to be undesirable entities, identifying them has many potential applications in fraud and intrusion detection, medical research, and safety-critical vehicle health management. Outliers can be detected using supervised, semi-supervised, or unsupervised techniques. Unsupervised techniques do not require labeled instances for detecting outliers. Supervised techniques require labeled instances of both normal and abnormal operation data for first building a model (e.g., a classifier), and then testing whether an unknown data point is normal or an outlier. The model can be probabilistic, such as Bayesian inference, or deterministic, such as decision trees, Support Vector Machines (SVMs), and neural networks. Semi-supervised techniques only require labeled instances of normal data, and hence are more widely applicable than fully supervised ones. These techniques build models of normal data and then flag outliers that do not fit the model.
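The semi-supervised pattern — fit a model to normal data only, then flag points that fall outside it — can be shown with a deliberately simple stand-in model. This toy uses a mean and standard deviation with a z-score threshold purely to illustrate the workflow; it is not the support-vector formulation that v-Anomica itself uses.

```python
import math

def fit_normal_model(normal_data):
    """'Training' on labeled-normal data only: summarize it as a
    mean and standard deviation."""
    n = len(normal_data)
    mean = sum(normal_data) / n
    var = sum((x - mean) ** 2 for x in normal_data) / n
    return mean, math.sqrt(var)

def is_outlier(x, model, threshold=3.0):
    """Flag a point that does not fit the normal model."""
    mean, std = model
    return abs(x - mean) > threshold * std

# Only normal sensor readings are needed to build the model.
model = fit_normal_model([9.8, 10.1, 10.0, 9.9, 10.2])
print(is_outlier(10.05, model), is_outlier(14.0, model))
```

The advantage the brief highlights carries over directly: no labeled anomalies are ever required, which matters when failures are rare or unseen.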


