Integrated Hardware and Software for No-Loss Computing

Computations on parallel processors can continue even if one processor fails. When an algorithm is distributed across multiple threads executing on many distinct processors, the loss of a single thread or processor can result in the total loss of all incremental results computed up to that point. When an implementation is massively distributed in hardware, the probability of a hardware failure during a long execution is correspondingly high. Traditionally, this problem has been addressed by establishing checkpoints at which some or all of the current execution state is saved; in the event of a failure, the saved state is used to restore the execution and resume the computation from the most recent checkpoint.
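
As a rough illustration of the traditional checkpointing approach described above (not of the integrated no-loss method this brief introduces), the following Python sketch saves a worker's partial state at intervals so that the computation can resume after a failure. The file name, pickle format, and toy workload are all assumptions made for this example.

```python
import os
import pickle

CHECKPOINT = "partial_sums.ckpt"  # hypothetical checkpoint file

def load_checkpoint():
    """Return (next_index, partial_sum) from the last checkpoint, if any."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return 0, 0.0  # no checkpoint: start from the beginning

def save_checkpoint(next_index, partial_sum):
    """Persist the execution state reached so far."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump((next_index, partial_sum), f)
    os.replace(tmp, CHECKPOINT)  # atomic rename avoids a torn checkpoint

def run(data, checkpoint_every=1000):
    i, total = load_checkpoint()  # resume from the last saved state
    while i < len(data):
        total += data[i] ** 2     # stand-in for the real incremental work
        i += 1
        if i % checkpoint_every == 0:
            save_checkpoint(i, total)
    return total

if __name__ == "__main__":
    print(run(list(range(100_000))))
```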

Posted in: Briefs, TSP

Software for Allocating Resources in the Deep Space Network

TIGRAS 2.0 is a computer program designed to satisfy the need for improved means of analyzing the tracking demands that interplanetary space-flight missions place on the ground antenna resources of the Deep Space Network (DSN) and of allocating those resources. Written in Microsoft Visual C++, TIGRAS 2.0 provides a single, rich graphical analysis environment for use by diverse DSN personnel, connecting to various data sources (relational databases or files) according to the stage of the analysis being performed. Notable among the algorithms implemented by TIGRAS 2.0 are a DSN antenna-load-forecasting algorithm and a conflict-aware DSN schedule-generating algorithm. Computers running TIGRAS 2.0 can also be connected via SOAP/XML to a Web services server that provides analysis services over the World Wide Web. TIGRAS 2.0 supports multiple windows, with multiple panes in each window, for viewing and using information in a single environment, eliminating repeated switching among application programs and Web pages. The windows can display such items as mission tracking requirements, trajectory-based time intervals during which spacecraft are viewable, ground resources, forecasts, and schedules. Each window includes a time navigation pane, a selection pane, a graphical display pane, a list pane, and a statistics pane.
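
The brief does not describe the conflict-aware schedule-generating algorithm in detail; the following Python sketch shows one simple greedy scheme of that general kind, booking tracking requests on antennas and rejecting time overlaps. The `Request` and `Schedule` structures and the greedy ordering are illustrative assumptions, not TIGRAS 2.0 internals.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    mission: str
    antenna: str   # requested DSN antenna, e.g. "DSS-14" (illustrative)
    start: float   # viewability window start, hours
    end: float     # viewability window end, hours

@dataclass
class Schedule:
    # per-antenna list of (start, end) intervals already booked
    booked: dict = field(default_factory=dict)

    def conflicts(self, r: Request) -> bool:
        """True if the request overlaps an existing booking on its antenna."""
        return any(r.start < e and s < r.end
                   for s, e in self.booked.get(r.antenna, []))

    def add(self, r: Request) -> bool:
        """Book the request unless it conflicts; return success."""
        if self.conflicts(r):
            return False
        self.booked.setdefault(r.antenna, []).append((r.start, r.end))
        return True

def build_schedule(requests):
    """Greedy conflict-aware pass: earlier-starting requests first."""
    schedule, rejected = Schedule(), []
    for r in sorted(requests, key=lambda r: r.start):
        if not schedule.add(r):
            rejected.append(r)  # flagged as a conflict for the analyst
    return schedule, rejected

reqs = [Request("Voyager", "DSS-14", 2.0, 5.0),
        Request("MRO", "DSS-14", 4.0, 6.0)]   # overlaps Voyager's slot
sched, rejected = build_schedule(reqs)
print([r.mission for r in rejected])          # ['MRO']
```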

Posted in: Briefs, TSP

Expert Seeker

Expert Seeker is a computer program of the knowledge-management-system (KMS) type that falls within the category of expertise-locator systems. The main goal of the KMS implemented by Expert Seeker is to organize and distribute knowledge of who the domain experts are within and outside a given institution, company, or other organization. The intent in developing this KMS was to enable the reuse of organizational knowledge and to provide a methodology for querying existing information (including structured, semistructured, and unstructured information) in a way that could help identify organizational experts. More specifically, Expert Seeker was developed to make it possible, by use of an intranet, to do any or all of the following:

Posted in: Briefs

Automated Monitoring With a BSP Fault-Detection Test

This test is sensitive to subtle statistical changes in monitored signals. The figure schematically illustrates a method and procedure for automated monitoring of an asset, as well as a hardware-and-software system that implements the method and procedure. As used here, "asset" could signify an industrial process, power plant, medical instrument, aircraft, or any of a variety of other systems that generate electronic signals (e.g., sensor outputs). In automated monitoring, the signals are digitized and then processed in order to detect faults and otherwise monitor the operational status and integrity of the monitored asset. The major distinguishing feature of the present method is that the fault-detection function is implemented by use of a Bayesian sequential probability (BSP) technique. This technique is superior to other techniques for automated monitoring because it affords sensitivity not only to disturbances in the mean values but also to very subtle changes in the statistical characteristics (variance, skewness, and bias) of the monitored signals.
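
The brief gives no equations for the BSP technique. As a sketch of the sequential-testing family it belongs to, the following Python fragment implements a standard sequential probability ratio test for a shift in the mean of a Gaussian signal; the hypothesized means, noise level, and error rates are assumed parameters, and the actual BSP method may differ. A change in variance or another statistic can be tested the same way by substituting the appropriate likelihood ratio.

```python
import math
import random

def sprt_mean_shift(samples, mu0=0.0, mu1=1.0, sigma=1.0,
                    alpha=0.01, beta=0.01):
    """Sequential test of H0 (mean mu0) vs H1 (mean mu1), Gaussian noise.

    Returns ("fault" | "nominal" | "undecided", samples consumed).
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 (fault) above this
    lower = math.log(beta / (1 - alpha))   # accept H0 (nominal) below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood-ratio increment for one Gaussian observation
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma**2
        if llr >= upper:
            return "fault", n
        if llr <= lower:
            return "nominal", n
    return "undecided", len(samples)

noisy = [random.gauss(1.0, 1.0) for _ in range(200)]  # shifted mean
print(sprt_mean_shift(noisy))  # expected: ("fault", <small n>)
```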

Posted in: Briefs

Automated Monitoring With a BCP Fault-Decision Test

Fault-detection events are evaluated to reduce the incidence of false alarms. The Bayesian conditional probability (BCP) technique is a statistical fault-decision technique suitable as the mathematical basis of the fault-manager module in the automated-monitoring system and method described in the immediately preceding article. Within the automated-monitoring system, the fault-manager module operates in conjunction with the fault-detector module, which can be based on any of several fault-detection techniques; examples include a threshold-limit-comparison technique and the BSP and sequential probability ratio test (SPRT) techniques mentioned in the preceding article. The BCP technique evaluates a series of one or more fault-detection events in order to filter out the occasional false alarms produced by many types of statistical fault-detection procedures, thereby increasing the probability that the automated monitoring system reaches a correct decision regarding the presence or absence of a fault.
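
The brief likewise omits the BCP mathematics. The sketch below applies ordinary recursive Bayesian updating to a series of alarm/no-alarm events from a fault detector, assuming a prior fault probability, a detector sensitivity, and a false-alarm rate; all values are illustrative, and events are treated as conditionally independent.

```python
def bcp_posterior(events, prior=0.01, p_detect=0.95, p_false=0.05):
    """Posterior probability of a fault after a series of detector events.

    events   : iterable of booleans (True = detector raised an alarm)
    prior    : assumed prior probability that a fault is present
    p_detect : P(alarm | fault)     -- assumed detector sensitivity
    p_false  : P(alarm | no fault)  -- assumed false-alarm rate
    """
    p = prior
    for alarm in events:
        if alarm:
            like_fault, like_ok = p_detect, p_false
        else:
            like_fault, like_ok = 1 - p_detect, 1 - p_false
        # Bayes' rule: fold this event into the running posterior
        p = (like_fault * p) / (like_fault * p + like_ok * (1 - p))
    return p

# A lone alarm barely moves the posterior; a run of alarms drives it
# toward 1, which is how occasional false alarms get filtered out.
print(bcp_posterior([True]))                    # ~0.16: likely false alarm
print(bcp_posterior([True, True, True, True]))  # ~0.999: declare a fault
```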

Posted in: Briefs

Vector-Ordering Filter Procedure for Data Reduction

The essential characteristics of large original sets of data are preserved. The vector-ordering filter (VOF) technique is a procedure for sampling a large population of data vectors to select a subset that fully characterizes the state space of the population. The VOF technique greatly reduces the volume of data that must be handled in the automated-monitoring system and method discussed in the two immediately preceding articles. In so doing, it enables the development of data-driven mathematical models of a monitored asset from data sets that would otherwise exceed the memory capacities of conventional engineering computers.
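
The brief does not specify the vector-ordering criterion. One common way to select a small subset that spans the state space of a large vector population is greedy farthest-point sampling, sketched below with NumPy; the selection rule and the parameters are assumptions made for illustration, not necessarily the VOF procedure itself.

```python
import numpy as np

def select_representatives(X, k):
    """Greedy farthest-point sampling: pick k vectors from X that spread
    over the state space, as a stand-in for the ordering/selection step.

    X : (n, d) array of observation vectors
    k : size of the reduced, model-training subset
    """
    n = len(X)
    # seed with the vector farthest from the population mean
    chosen = [int(np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1)))]
    # distance from every vector to the nearest already-chosen vector
    d = np.linalg.norm(X - X[chosen[0]], axis=1)
    while len(chosen) < min(k, n):
        nxt = int(np.argmax(d))  # farthest from the current subset
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return X[chosen]

# Example: reduce 20,000 sensor-state vectors to 500 representatives
# suitable for training a data-driven model on a modest computer.
rng = np.random.default_rng(0)
subset = select_representatives(rng.normal(size=(20_000, 8)), 500)
print(subset.shape)  # (500, 8)
```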

Posted in: Briefs

Remote Sensing and Information Technology for Large Farms

Timely data on spatial and temporal variations in fields help farmers manage crops. A method of applying remote sensing (RS) and information-management technology to help large farms produce at maximum efficiency is undergoing development. The novelty of the method does not lie in the concept of "precision agriculture," which involves varying seeding, application of chemicals, and irrigation according to spatially and temporally local variations in the growth stages and health of crops and in the chemical and physical conditions of soils. Nor does the novelty lie in the use of RS data registered with other data in a geographic information system (GIS) to guide the use of precise agricultural techniques. Instead, the novelty lies in a systematic approach to overcoming the obstacles that have heretofore impeded the timely distribution of reliable, relevant, and sufficient GIS data to support day-to-day, acre-to-acre decisions about applying precise agricultural techniques to increase production and decrease cost.

Posted in: Briefs
