
Ontological System for Context Artifacts and Resources (OSCAR)

NASA’s Jet Propulsion Laboratory, Pasadena, California

Current data systems catalog and link data using a syntactic modeling approach that requires substantial domain knowledge to interact with the system, including which keywords to search for and how data artifacts are linked. OSCAR offers a semantic solution to data management based on ontology and reasoning: information is automatically linked according to the system's internal ontology, and an internal ontological reasoning engine handles inference. Artifacts are linked using information mined from the input metadata and reasoned over according to the internal ontology.
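
The brief does not publish OSCAR's ontology or code, but the pattern it describes (metadata becomes triples, a reasoner infers additional links) can be sketched with a standard RDF toolkit. The class and property names below (Artifact, derivedFrom) and the transitive-closure rule are illustrative assumptions, not OSCAR's actual ontology.

```python
# Hypothetical sketch of ontology-driven artifact linking; names are invented.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/oscar#")
g = Graph()

# Metadata mined from input artifacts becomes triples in the graph.
g.add((EX.rawTelemetry, RDF.type, EX.Artifact))
g.add((EX.calibratedProduct, RDF.type, EX.Artifact))
g.add((EX.summaryReport, RDF.type, EX.Artifact))
g.add((EX.calibratedProduct, EX.derivedFrom, EX.rawTelemetry))
g.add((EX.summaryReport, EX.derivedFrom, EX.calibratedProduct))

# Toy reasoning step: close derivedFrom under transitivity so links between
# indirectly related artifacts are inferred automatically.
changed = True
while changed:
    changed = False
    for a, _, b in list(g.triples((None, EX.derivedFrom, None))):
        for _, _, c in list(g.triples((b, EX.derivedFrom, None))):
            if (a, EX.derivedFrom, c) not in g:
                g.add((a, EX.derivedFrom, c))
                changed = True

# Query: everything the summary report ultimately depends on.
for row in g.query(
    "SELECT ?src WHERE { ?r <http://example.org/oscar#derivedFrom> ?src }",
    initBindings={"r": EX.summaryReport},
):
    print(row.src)
```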

Posted in: Briefs, TSP


SPSCGR

NASA’s Jet Propulsion Laboratory, Pasadena, California

SPSCGR generates a contact graph suitable for use by the ION (Interplanetary Overlay Network) implementation of DTN (Delay/Disruption Tolerant Networking) from data provided by the JPL SPS (Service Preparation System) Portal. Prior to SPSCGR, there was no way for a mission or other entity to route DTN traffic across the DSN (Deep Space Network) without manually constructing a contact graph; SPSCGR automates that construction.
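
The SPS Portal data schema and SPSCGR's internals are not given in the brief, so the sketch below only illustrates the kind of transformation involved: turning assumed view-period records (station, spacecraft, times, rate, light time) into "a contact" / "a range" lines of the sort consumed by ION's ionadmin. The ViewPeriod fields and node numbers are assumptions for illustration.

```python
# Hedged sketch of view-period-to-contact-plan conversion (not SPSCGR itself).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ViewPeriod:
    start: datetime          # start of the tracking pass (UTC)
    stop: datetime           # end of the tracking pass (UTC)
    ground_node: int         # DTN node number assumed for the DSN station
    spacecraft_node: int     # DTN node number assumed for the spacecraft
    rate_bps: int            # planned data rate during the pass
    owlt_sec: int            # one-way light time, rounded to whole seconds

def to_ionadmin(passes, epoch):
    """Emit ionadmin-style contact-plan commands with times relative to an epoch."""
    lines = []
    for p in passes:
        t0 = int((p.start - epoch).total_seconds())
        t1 = int((p.stop - epoch).total_seconds())
        # One contact per transmission direction during the pass.
        lines.append(f"a contact +{t0} +{t1} {p.ground_node} {p.spacecraft_node} {p.rate_bps}")
        lines.append(f"a contact +{t0} +{t1} {p.spacecraft_node} {p.ground_node} {p.rate_bps}")
        # One range entry giving the one-way light time between the two nodes.
        lines.append(f"a range +{t0} +{t1} {p.ground_node} {p.spacecraft_node} {p.owlt_sec}")
    return "\n".join(lines)

epoch = datetime(2024, 1, 1)
passes = [ViewPeriod(datetime(2024, 1, 1, 2), datetime(2024, 1, 1, 3), 8, 31, 256000, 1200)]
print(to_ionadmin(passes, epoch))
```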

Posted in: Briefs, TSP


Retools: Restriping Tools for Lustre

Ames Research Center, Moffett Field, California

Modern parallel file systems achieve high performance by distributing (“striping”) the contents of a single file across multiple physical disks to overcome single-disk I/O bandwidth limitations. A file's striping characteristics determine how many disks it is striped across and how large each stripe is. These characteristics can only be set at the time a file is created and cannot be changed later. Standard open-source tools do not typically take striping into account when creating files, so files created by those tools receive the default striping characteristics. The default stripe count is typically set to a small number to favor the more numerous small files. A small default stripe count, however, penalizes large files that use the default settings: they are striped over fewer disks, so access to them achieves only a fraction of the performance possible with a larger stripe count. A large default stripe count, on the other hand, causes small files to be striped over too many disks, which increases contention and reduces the performance of the file system as a whole.
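
Because a Lustre file's layout is fixed at creation, "restriping" generally means creating a new file with the desired layout, copying the data, and swapping it into place. The sketch below illustrates that recipe with the standard `lfs setstripe` command; it is a minimal illustration, not the Retools implementation, and the path and stripe settings are examples only.

```python
# Illustrative restriping recipe for a Lustre file (not the Retools code).
import os
import shutil
import subprocess

def restripe(path, stripe_count, stripe_size="4M"):
    tmp = path + ".restripe.tmp"
    # Create an empty file with the requested layout (-c stripe count, -S stripe size).
    subprocess.run(["lfs", "setstripe", "-c", str(stripe_count),
                    "-S", stripe_size, tmp], check=True)
    # Copy the contents into the newly striped file, carry over metadata, then swap.
    shutil.copyfile(path, tmp)
    shutil.copystat(path, tmp)
    os.replace(tmp, path)

# Example: stripe a large file across 16 targets instead of the small default.
# restripe("/lustre/project/big_dataset.bin", stripe_count=16)
```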

Posted in: Briefs


Method and Program Code for Improving Machine Efficiency in the Computation of Nearly-Singular Integrals

Lyndon B. Johnson Space Center, Houston, Texas

There is a need for the computational handling of near-singularities that arise in many branches of physics, particularly near-strong singularities such as those presented by gradients of Newton-type and modified Newton-type potentials. Practitioners currently resort to a variety of methods that do not work well, suffer from accuracy issues, or apply only to very specialized cases. Accuracy issues yield results that cannot be trusted, while codes that handle only specialized cases lead to misapplication of the code (and hence reduced accuracy), failed solution attempts, or infrequent and expensive code modifications to handle new cases.
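
The brief does not describe the new method itself, but the difficulty it addresses is easy to demonstrate: a gradient-of-Newton-potential-type kernel evaluated at a target point a small distance d from the integration domain becomes nearly singular, and a fixed quadrature rule loses accuracy as d shrinks. The one-dimensional kernel and tolerances below are assumptions chosen purely to illustrate that behavior.

```python
# Small illustration of the nearly-singular integration problem (not the method
# from this brief): compare a fixed Gauss rule with adaptive quadrature as the
# target point approaches the integration interval.
import numpy as np
from scipy.integrate import quad

def kernel(x, x0, d):
    # "Near-strong" singularity: grows like 1/d^2 as the target approaches x0.
    return 1.0 / ((x - x0) ** 2 + d ** 2) ** 1.5

x0 = 0.5
for d in (1e-1, 1e-2, 1e-3):
    # Reference value from adaptive quadrature with a hint at the near-peak location.
    ref, _ = quad(kernel, 0.0, 1.0, args=(x0, d),
                  epsabs=1e-12, limit=500, points=(x0,))

    # Fixed 16-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1].
    nodes, weights = np.polynomial.legendre.leggauss(16)
    x = 0.5 * (nodes + 1.0)
    gauss = 0.5 * np.sum(weights * kernel(x, x0, d))

    print(f"d={d:g}  adaptive={ref:.6e}  fixed-Gauss rel.err={abs(gauss - ref) / ref:.2e}")
```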

Posted in: Briefs, TSP


Software Framework for Control and Observation in Distributed Environments (CODE)

Ames Research Center, Moffett Field, California

CODE is a framework for control and observation in distributed environments. The framework enables the observation of resources (computer systems, storage systems, networks, and so on), services (database servers, application execution servers, file transfer servers, and so on), and applications. Further, the framework provides support for the secure and scalable transmission of this observed information to programs that are interested in it. The framework also supports the secure execution of actions on remote computer systems so that a management program can respond to the observed data it receives. To assist in writing management programs, the framework interfaces to an existing expert system so that a user can define a set of rules for the expert system to reason on, instead of writing a large amount of code. The framework is modular and can easily be extended to incorporate new sensors for making observations, new actuators for performing actions, new communication protocols, and new security mechanisms. The software also includes several applications that show how the framework can be used.
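
The sensor/actuator/management pattern described above can be sketched in a few lines. The class names, the disk-space observation, and the low-water-mark rule below are invented for illustration and are not the CODE API; in the real framework the observations would travel over secure, scalable channels and the rules would live in the attached expert system.

```python
# Minimal sketch of the observe-decide-act pattern described in the brief.
import shutil
import time

class DiskSensor:
    """Observes free space on a filesystem (a 'resource' in the brief's terms)."""
    def __init__(self, path):
        self.path = path
    def observe(self):
        usage = shutil.disk_usage(self.path)
        return {"resource": self.path, "free_fraction": usage.free / usage.total}

class CleanupActuator:
    """Stands in for a secure remote action; here it only reports what it would do."""
    def act(self, resource):
        print(f"[actuator] would trigger cleanup on {resource}")

def manage(sensors, actuator, low_water=0.10, cycles=3, period=1.0):
    """A tiny management rule: if free space drops below the low-water mark, act."""
    for _ in range(cycles):
        for sensor in sensors:
            obs = sensor.observe()
            print(f"[observation] {obs}")
            if obs["free_fraction"] < low_water:
                actuator.act(obs["resource"])
        time.sleep(period)

manage([DiskSensor("/")], CleanupActuator())
```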

Posted in: Briefs, Electronics & Computers


Simple RunTime eXecutive (SRTX)

Marshall Space Flight Center, Alabama

The Simple RunTime eXecutive (SRTX) software provides scheduling and publish/subscribe data transfer services. The scheduler allows dynamic allocation of real-time periodic and asynchronous tasks across homogeneous multicore/multiprocessor systems. Most real-time systems assign tasks to specific cores on an a priori basis, and letting the operating system scheduler determine the best allocation of threads is not, by itself, a unique innovation. In SRTX, however, this is coupled with a deterministic publish/subscribe data transfer system that guarantees tasks process data deterministically, regardless of the number of processor cores in the system.
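
One common way to get that determinism, and a reasonable reading of the description above, is a double-buffered publish/subscribe store: everything published during one frame only becomes visible to readers in the next frame, so results do not depend on which core runs a task first. The sketch below is a minimal illustration of that idea under those assumptions, not SRTX itself; Topic, run_frame, and the task names are invented.

```python
# Hedged sketch: frame-based tasks over a double-buffered publish/subscribe store.
import threading

class Topic:
    def __init__(self, initial=None):
        self._buffers = [initial, initial]   # [readable, writable]
        self._lock = threading.Lock()
    def publish(self, value):
        with self._lock:
            self._buffers[1] = value         # writes go to the back buffer
    def read(self):
        with self._lock:
            return self._buffers[0]          # reads always see last frame's value
    def swap(self):
        with self._lock:
            self._buffers[0] = self._buffers[1]

def run_frame(tasks, topics):
    """Run one frame: tasks may execute concurrently, then buffers swap."""
    threads = [threading.Thread(target=t) for t in tasks]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    for topic in topics.values():
        topic.swap()    # data published this frame becomes visible next frame

topics = {"imu": Topic(initial=0.0)}
def sensor_task():  topics["imu"].publish(42.0)
def control_task(): print("control sees:", topics["imu"].read())

for frame in range(3):
    run_frame([sensor_task, control_task], topics)
```

Regardless of thread scheduling, the control task always reads the value published in the previous frame, which is the deterministic behavior the brief attributes to SRTX's data transfer service.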

Posted in: Briefs, Electronics & Computers


v-Anomica: A Fast Support Vector-Based Novelty Detection Technique

Ames Research Center, Moffett Field, California

Outlier or anomaly detection refers to the task of identifying abnormal or inconsistent patterns in a dataset. While outliers may seem to be undesirable entities, identifying them has many potential applications in fraud and intrusion detection, medical research, and safety-critical vehicle health management. Outliers can be detected using supervised, semi-supervised, or unsupervised techniques. Unsupervised techniques do not require labeled instances for detecting outliers. Supervised techniques require labeled instances of both normal and abnormal operation to first build a model (e.g., a classifier) and then test whether an unknown data point is normal or an outlier; the model can be probabilistic, such as Bayesian inference, or deterministic, such as decision trees, Support Vector Machines (SVMs), and neural networks. Semi-supervised techniques require labeled instances of normal data only, and hence are more widely applicable than fully supervised ones; they build models of normal data and then flag as outliers points that do not fit the model.
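
The v-Anomica algorithm itself is not reproduced here; as a point of reference for the semi-supervised, support-vector-based setting it belongs to, the sketch below trains a standard one-class SVM (scikit-learn) on nominal data only and flags points outside the learned support of the normal class. The synthetic data and parameter values are assumptions for illustration.

```python
# Baseline one-class SVM novelty detection (illustrative, not v-Anomica).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))    # nominal operation only
test = np.vstack([rng.normal(0.0, 1.0, size=(5, 2)),            # more nominal points
                  np.array([[6.0, 6.0], [-7.0, 5.0]])])         # obvious outliers

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_train)
labels = model.predict(test)    # +1 = consistent with normal data, -1 = outlier
print(labels)
```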

Posted in: Briefs, Electronics & Computers

