The reconfigurable auditory-visual display system and method creates a multi-mode communications environment with the express intent of increasing situational awareness for the operator (controller) and reducing operator fatigue. Situational awareness is increased by several innovations, such as spatially separating each voice communication channel and allowing a single voice channel to be prioritized while the other channels are still monitored. The controller can see real-time video from each of the controlled individuals, and sensor data from these individuals can be collected electronically rather than being relayed over a voice channel. The system also provides the controlling individual an interface for recording and transmitting event data.
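The prioritize-while-monitoring behavior can be illustrated with a minimal mixing sketch. All names here (`mix_channels`, `monitor_gain`) are hypothetical, not from the patented system: the focused channel plays at full gain while the remaining channels are attenuated rather than muted, so background traffic stays audible.

```python
def mix_channels(samples_by_channel, focus_channel, monitor_gain=0.25):
    """Mix per-channel audio into one buffer, prioritizing one channel.

    samples_by_channel: dict mapping channel id -> list of float samples
    focus_channel: id of the prioritized channel (played at gain 1.0)
    monitor_gain: attenuation applied to every non-focused channel,
        kept above zero so other channels can still be monitored
    """
    length = max(len(s) for s in samples_by_channel.values())
    mixed = [0.0] * length
    for channel, samples in samples_by_channel.items():
        gain = 1.0 if channel == focus_channel else monitor_gain
        for i, sample in enumerate(samples):
            mixed[i] += gain * sample
    return mixed
```

A real implementation would additionally pan each channel to a distinct spatial location; this sketch shows only the gain-based priority step.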
Each communications channel is equipped with a video indicator that allows the controller to determine who is speaking and from which communication channel the signal is being received. The components of this system include a command module (auditory and visual displays, and computer processing equipment), an event tracking database, and multiple rescuer systems (helmet, light, camera, throat microphone, ear speaker, and health monitoring sensors).
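A speaking indicator of this kind is commonly driven by short-term signal energy. The following sketch is an assumption about how such an indicator could work, not the system's actual detector: a channel is flagged as active when the mean energy of its current audio frame exceeds a threshold.

```python
def speaking_indicator(frame, threshold=0.01):
    """Return True if the audio frame's mean energy exceeds the threshold.

    frame: list of float samples for one channel's current time window
    """
    energy = sum(s * s for s in frame) / len(frame)
    return energy > threshold

def active_channels(frames_by_channel, threshold=0.01):
    """Return the set of channel ids whose current frame indicates speech."""
    return {ch for ch, frame in frames_by_channel.items()
            if speaking_indicator(frame, threshold)}
```

In practice a detector would also smooth over several frames to avoid flicker; the threshold value here is illustrative.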
The reconfigurable auditory-visual display device analyzes and displays signals representing the location and angular orientation of a human body, thereby increasing situational awareness. It is a signal analysis and communication system that accepts communication signals from multiple sources simultaneously and permits a signal recipient to assign priority to, or to focus on, a selected audio signal source. The reconfigurable auditory-visual display for a multi-channel control center and rescue communications system is expandable to accommodate from two to eight rescuer channels.
Individual rescuer systems can be linked to one another to communicate between rescue teams. The video portion of the system (helmet camera) connected to the rescuer can produce still images in addition to its normal video feed; this allows the rescuer to continue working while the attendant analyzes the images. In addition, the video stream can be recorded and played back for analysis. The system may also track events for each rescuer. The software system includes a reconfigurable sound path and spatialization algorithm through the use of a software signal-processing plug-in architecture. This feature is used to select among different auditory display configurations, create tailored sound path routings for specific applications, add signal conditioning and/or analysis to the sound path, and provide upgrades to systems in the field.
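The plug-in architecture described above can be sketched as an ordered chain of processing stages that can be reconfigured at runtime. The class and method names below are hypothetical illustrations, not the system's actual API: each plug-in transforms a buffer of samples, and the sound path applies its plug-ins in sequence.

```python
class SoundPathPlugin:
    """Interface for one signal-processing stage in the sound path."""
    def process(self, samples):
        raise NotImplementedError

class GainPlugin(SoundPathPlugin):
    """Example stage: scale every sample by a fixed gain."""
    def __init__(self, gain):
        self.gain = gain
    def process(self, samples):
        return [self.gain * s for s in samples]

class SoundPath:
    """Reconfigurable chain of plug-ins applied in order."""
    def __init__(self):
        self.plugins = []
    def add(self, plugin):
        self.plugins.append(plugin)
        return self  # allow chained configuration
    def process(self, samples):
        for plugin in self.plugins:
            samples = plugin.process(samples)
        return samples
```

Because stages share one interface, a field upgrade or a new auditory display configuration amounts to inserting, removing, or reordering plug-ins rather than rebuilding the sound path.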
This work was done by Durand R. Begault of Ames Research Center; Mark R. Anderson of A.S.R.C. Inc.; Bryan U. McClain of Metric Lab, Redwood City, CA; and Joel D. Miller of San Jose State University Foundation. NASA invites companies to inquire about partnering opportunities and licensing this patented technology. Contact the Ames Technology Partnerships Office at 1-855-627-2249 or