Flight Processor Virtualization for Size, Weight, and Power Reduction

A flight software system that was originally deployed on six separate physical processors is modeled using a single processor.

Goddard Space Flight Center, Greenbelt, Maryland

This work demonstrated the cost-saving and fault-tolerance benefits of virtualization technology by consolidating the flight software from multiple flight processors into a single virtualized system. In this study, a flight software system that was originally deployed on six separate physical processors was modeled using a single processor and a real-time embedded hypervisor.
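
As an illustration only (the brief does not publish the partition layout), the following is a minimal sketch of how six formerly separate flight processors might be expressed as time-partitioned guests under a real-time hypervisor. The partition names, time slices, and memory sizes are hypothetical.

```python
# Hypothetical sketch: six former flight processors expressed as
# time-partitioned guests on one virtualized processor. Names and
# slice durations are illustrative, not taken from the brief.
from dataclasses import dataclass

@dataclass
class Partition:
    name: str          # flight software image that formerly had its own CPU
    slice_ms: int      # time slice within the major frame
    memory_kb: int     # private memory region for the guest

# One partition per original processor; the hypervisor cycles through
# them every major frame, preserving isolation between the images.
PARTITIONS = [
    Partition("command_and_data_handling", 20, 4096),
    Partition("attitude_control",          20, 2048),
    Partition("power_management",          10, 1024),
    Partition("thermal_control",           10, 1024),
    Partition("payload_interface",         20, 4096),
    Partition("communications",            20, 2048),
]

MAJOR_FRAME_MS = sum(p.slice_ms for p in PARTITIONS)

def schedule(frames: int) -> None:
    """Print the static round-robin schedule for a few major frames."""
    t = 0
    for _ in range(frames):
        for p in PARTITIONS:
            print(f"t={t:4d} ms  run {p.name} for {p.slice_ms} ms")
            t += p.slice_ms

if __name__ == "__main__":
    schedule(frames=1)   # one 100 ms major frame in this sketch
```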

Posted in: Briefs, TSP, Electronics & Computers

Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling

Future decision support tools may make use of the model with commercial off-the-shelf software and hardware.

Ames Research Center, Moffett Field, California

Traffic flow management (TFM) of the National Airspace System (NAS) endeavors to deliver flights from their origins to their destinations while minimizing delays and respecting all capacities. There are several models for solving this problem: some aggregate flights into flows, while others apply controls to individual flights. The latter, aircraft-level models are typically computationally difficult to solve for large-scale, high-fidelity scenarios. One of the more heavily studied aircraft-level models, presented by Bertsimas and Stock-Patterson (BSP), has runtime concerns that should not be overlooked, but it neatly captures the issues associated with TFM: respecting the capacities of airspace resources and the schedules of individual aircraft.
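
To make the aircraft-level flavor of such models concrete, here is a small, hypothetical sketch (a greedy heuristic, not the BSP integer program or the Dantzig-Wolfe decomposition itself) that assigns ground delays to individual flights so that no airspace sector exceeds its per-period capacity. The flight identifiers, sector names, and capacities are invented for illustration.

```python
# Hypothetical sketch of aircraft-level traffic flow management:
# delay individual flights (in whole time periods) so that no sector
# exceeds its capacity in any period. Greedy heuristic for illustration.
from collections import defaultdict

SECTOR_CAPACITY = {"ZOB48": 2, "ZNY34": 1}   # flights per period (invented)

# Each flight: scheduled entry period for each sector it crosses.
FLIGHTS = {
    "AAL100": {"ZOB48": 0, "ZNY34": 1},
    "UAL200": {"ZOB48": 0, "ZNY34": 1},
    "DAL300": {"ZOB48": 0, "ZNY34": 2},
}

def assign_delays(flights, capacity, max_delay=10):
    """Return a ground delay per flight that respects sector capacities."""
    occupancy = defaultdict(int)            # (sector, period) -> flight count
    delays = {}
    for flight, crossings in flights.items():
        for delay in range(max_delay + 1):
            shifted = {(s, t + delay) for s, t in crossings.items()}
            if all(occupancy[key] < capacity[key[0]] for key in shifted):
                for key in shifted:
                    occupancy[key] += 1
                delays[flight] = delay
                break
        else:
            raise ValueError(f"no feasible delay for {flight}")
    return delays

if __name__ == "__main__":
    print(assign_delays(FLIGHTS, SECTOR_CAPACITY))
    # -> {'AAL100': 0, 'UAL200': 1, 'DAL300': 1} with these toy numbers
```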

Posted in: Briefs, Electronics & Computers

Mission Control Technologies (MCT)

Ames Research Center, Moffett Field, California

MCT enables end users to compose software from objects that can be assembled to create integrated functionality. Applications are eliminated in favor of compositions of “live objects” that can be combined in different ways for different users and missions as required, in contrast to the more traditional software development method of pre-determining functionality and building a monolithic application.
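
As a rough sketch of the composition idea only (the class names and methods are hypothetical, not the MCT API), the pattern is that small, self-describing objects expose data and views, and an end user assembles them into a working display rather than launching a monolithic application.

```python
# Hypothetical sketch of composing "live objects" instead of building a
# monolithic application. Class names and methods are illustrative only.
class LiveObject:
    """A small, self-describing object that can be composed with others."""
    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, obj):
        self.children.append(obj)
        return self            # allow chained composition

    def render(self, indent=0):
        print(" " * indent + self.name)
        for child in self.children:
            child.render(indent + 2)

class Telemetry(LiveObject):
    """A leaf object wrapping a single named value."""
    def __init__(self, name, value):
        super().__init__(f"{name} = {value}")

# An end user composes a mission display from reusable pieces at runtime,
# rather than waiting for a purpose-built application.
display = (LiveObject("Power Overview")
           .add(Telemetry("bus_voltage", "28.1 V"))
           .add(Telemetry("battery_soc", "87 %"))
           .add(LiveObject("Alarms").add(Telemetry("undervoltage", "clear"))))
display.render()
```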

Posted in: Briefs, Electronics & Computers

Interface Validation for Distributed Software Systems

Goddard Space Flight Center, Greenbelt, Maryland

Experience performing IV&V (Independent Verification and Validation) on Space Station software showed that a number of interface faults were found during integrated testing or actual software deployment. Faults found at this late phase of the software development lifecycle are very expensive to fix. Other research indicates that significant cost savings can be realized if these types of faults are discovered at earlier lifecycle phases, such as specification or coding. A need was therefore identified for processes, procedures, and tools that reliably identify interface faults during these earlier phases; performing interface validation earlier reduces the costly fixes caused by interface faults discovered late in development.
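
A minimal sketch of the kind of early, specification-level interface check the brief calls for: messages are validated against their interface definition at coding time rather than during integrated testing. The message layout, field names, and sample data are invented for illustration and are not the project's actual interface definitions.

```python
# Hypothetical sketch: validate a message against its interface
# specification before integration. Spec and sample message are invented.
INTERFACE_SPEC = {
    "msg_id":      int,
    "timestamp":   float,
    "subsystem":   str,
    "temperature": float,
}

def validate(message: dict, spec: dict) -> list:
    """Return a list of interface faults found in the message."""
    faults = []
    for field, expected_type in spec.items():
        if field not in message:
            faults.append(f"missing field: {field}")
        elif not isinstance(message[field], expected_type):
            faults.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(message[field]).__name__}")
    for field in message:
        if field not in spec:
            faults.append(f"unexpected field: {field}")
    return faults

sample = {"msg_id": 42, "timestamp": "2011-07-01T00:00:00", "subsystem": "EPS"}
for fault in validate(sample, INTERFACE_SPEC):
    print("interface fault:", fault)
```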

Posted in: Briefs, TSP, Electronics & Computers

Simple, Scalable, Script-Based Science Processing Archive

The system provides data access control, data subscription, metadata publication, and data recovery.

Goddard Space Flight Center, Greenbelt, Maryland

Simple, Scalable, Script-based, Science Processing (S4P) Archive (S4PA) is a disk-based data-archiving system for remote sensing data. It is based on the data-driven framework of S4P. The system is used for new data transfer, data preprocessing, metadata generation, and data archival, and it provides services such as data access control, data subscription, metadata publication, and data recovery. The data are archived on readily available disk drives, with FTP (File Transfer Protocol) and HTTP (Hypertext Transfer Protocol) being the primary modes of data access. S4PA includes a graphical user interface for monitoring and reconfiguring system operation, a tool for deploying the system, and various other tools that help manage the data ingest and archiving process, such as data replication, auxiliary file backup, database merging, storage of dataset README documents in CVS (Concurrent Versions System), an interface for machine search, and deployment of S4PA instances from configurations stored in CVS.
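
The data-driven station pattern underlying S4P/S4PA can be sketched roughly as follows: a station polls its input directory and runs a handler for each arriving file. The directory layout and handler below are assumptions for illustration; the actual system is far richer and is implemented as a collection of scripts and services.

```python
# Hypothetical sketch of a data-driven "station": poll an input directory,
# archive each arriving data file to disk, and write a metadata stub.
# Directory names and the handler are illustrative only.
import shutil
import time
from pathlib import Path

STATION_DIR = Path("stations/ingest")   # incoming work orders (assumed layout)
ARCHIVE_DIR = Path("archive")           # disk-based archive root

def handle(work_order: Path) -> None:
    """Archive the data file named by the work order and write metadata."""
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    destination = ARCHIVE_DIR / work_order.name
    shutil.move(str(work_order), str(destination))
    destination.with_suffix(".met").write_text(
        f"granule: {destination.name}\narchived_at: {time.time()}\n")

def run_station(poll_seconds: float = 5.0, max_cycles: int = 3) -> None:
    """Poll the station directory and process whatever work appears."""
    for _ in range(max_cycles):
        for work_order in sorted(STATION_DIR.glob("*")):
            handle(work_order)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    STATION_DIR.mkdir(parents=True, exist_ok=True)
    run_station()
```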

Posted in: Briefs, TSP, Electronics & Computers

A Model-Driven Science Data Product Registration Service

The Registry Service will provide functionality for tracking, auditing, locating, and maintaining artifacts within the system.

NASA’s Jet Propulsion Laboratory, Pasadena, California

The Planetary Data System (PDS) has undertaken an effort to overhaul the PDS data architecture (e.g., data model, data structures, data dictionary) and to deploy a software system (online data services, a distributed data catalog, etc.) that fully embraces the PDS federation as an integrated system while leveraging modern information technology. A core component of this new system is the Registry Service, which will provide functionality for tracking, auditing, locating, and maintaining artifacts within the system. These artifacts range from data files and label files to schemas, dictionary definitions for objects and elements, documents, and services.
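
A hypothetical sketch of the registry idea: artifacts of various kinds are registered with identifying metadata, logged for auditing, and can later be located by attribute. The artifact model, field names, and methods below are assumptions for illustration, not the PDS Registry Service interface.

```python
# Hypothetical sketch of an artifact registry: register, locate, and audit
# artifacts (data files, schemas, documents, services). Illustrative only.
import uuid
from dataclasses import dataclass, field

@dataclass
class Artifact:
    object_type: str                 # e.g. "data_file", "schema", "document"
    name: str
    version: str
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))

class Registry:
    def __init__(self):
        self._artifacts = {}         # guid -> Artifact
        self._audit_log = []         # (action, guid) tuples for auditing

    def register(self, artifact: Artifact) -> str:
        self._artifacts[artifact.guid] = artifact
        self._audit_log.append(("register", artifact.guid))
        return artifact.guid

    def locate(self, object_type=None, name=None):
        """Find artifacts matching the given attributes."""
        return [a for a in self._artifacts.values()
                if (object_type is None or a.object_type == object_type)
                and (name is None or a.name == name)]

registry = Registry()
registry.register(Artifact("schema", "pds4_label_schema", "1.0"))
registry.register(Artifact("data_file", "image_001.img", "1.0"))
print([a.name for a in registry.locate(object_type="schema")])
```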

Posted in: Briefs, TSP, Electronics & Computers

Design of Rate-Compatible Protograph LDPC Codes

This method can be applied in wireless cellular, satellite, and Internet communications.

NASA’s Jet Propulsion Laboratory, Pasadena, California

The most common way to generate a rate-compatible family of codes is puncturing. In this method, one starts with a low-rate mother code and then selectively discards some of the coded bits to arrive at higher-rate codes. This approach is simple, but it is not free of problems. Specifically, the mother code is optimally designed for low rates, so the higher-rate punctured codes have a wider gap to capacity, and the optimal low-rate code structure and the puncturing patterns are designed separately, which is suboptimal. Even though it has been shown that puncturing can theoretically achieve the same gap to capacity as the mother code, in existing codes puncturing has increased the gap significantly.
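
To make the puncturing step itself concrete, here is a small sketch of turning a low-rate mother code into a higher-rate code by discarding coded bits according to a puncturing pattern. The mother code here is a trivial repetition-style placeholder, not a protograph LDPC code, and the pattern is invented for illustration.

```python
# Hypothetical sketch of rate-compatible puncturing: start from a low-rate
# mother code and discard selected coded bits to obtain higher rates.
# The "mother code" below is a toy placeholder, not a protograph LDPC code.

def mother_encode(info_bits):
    """Toy rate-1/3 encoder: each information bit is sent three times."""
    return [b for bit in info_bits for b in (bit, bit, bit)]

def puncture(coded_bits, pattern):
    """Keep only the positions where the repeating pattern has a 1."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

info = [1, 0, 1, 1]
coded = mother_encode(info)                  # 12 coded bits, rate 1/3

# Puncturing pattern (1 = transmit, 0 = discard). Keeping 2 of every 3
# coded bits raises the rate from 1/3 to 1/2.
pattern = [1, 1, 0]
punctured = puncture(coded, pattern)

rate_mother = len(info) / len(coded)
rate_punctured = len(info) / len(punctured)
print(f"mother rate {rate_mother:.2f} -> punctured rate {rate_punctured:.2f}")
```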

Posted in: Briefs, Electronics & Computers
