The system offers superior performance over the prior version and has a number of commercial applications.
John H. Glenn Research Center, Cleveland, Ohio
Astronauts suffer from poor hand dexterity due to the clumsy spacesuit gloves worn during Extravehicular Activity (EVA) operations, and NASA has a widely recognized but unmet need for novel human-machine interface technologies to facilitate data entry, communications, and the control of robots and intelligent systems. A speech interface driven by an astronaut’s own voice is ideal for EVA operations, since speech is the most natural, flexible, efficient, and economical form of human communication and information exchange.
The current solution is a Communication Carrier Assembly (CCA)-based audio system. While the close-talking, noise-canceling microphone used in the CCA system can deliver speech signals with high intelligibility, its performance is sensitive to the microphone’s distance and orientation relative to the suited subject’s mouth. An integrated audio (IA) system is therefore being pursued. To match the performance of the CCA, the IA system will consist of multiple microphones that form an array to reduce noise and enhance speech intelligibility.
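The core idea behind a microphone array is that time-aligning the channels toward the talker makes the speech add coherently while uncorrelated noise partially cancels. The sketch below is a generic delay-and-sum beamformer for illustration only, assuming integer sample delays; it is not the implementation used in this system.

```python
# Illustrative delay-and-sum beamformer (a generic sketch, not the
# system's actual algorithm). Each channel is advanced by its known lag
# relative to the talker, then the aligned channels are averaged: speech
# sums coherently while uncorrelated noise is attenuated.

def delay_and_sum(channels, lags):
    """channels: list of equal-length sample lists, one per microphone.
    lags: integer sample lag of each channel relative to the source,
    so channel i is read at index t + lags[i] to align it."""
    n = len(channels[0])
    out = []
    for t in range(n):
        acc = 0.0
        for ch, lag in zip(channels, lags):
            idx = t + lag
            if 0 <= idx < n:  # samples shifted past the buffer are dropped
                acc += ch[idx]
        out.append(acc / len(channels))
    return out
```

For example, averaging a signal with a one-sample-delayed copy of itself, after compensating the lag, reconstructs the original signal. Real beamformers estimate fractional delays from the array geometry and operate in the frequency domain, but the alignment-then-average principle is the same.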
The developed speech human-machine interface improves both crewmember usability and operational efficiency. It offers a fast rate of data/text entry, a small overall size, and light weight. In addition, it frees not only the hands, but also the eyes, of a suited crewmember.
The system contains the following key technical components/steps: beamforming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, automatic speech recognition (ASR) hidden Markov model (HMM) training, and ASR decoding.
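As one illustration of the feature transformation and normalization step, a common technique in HMM-based ASR front ends is cepstral mean normalization, which subtracts the per-dimension mean from the feature vectors so that stationary channel effects (additive constants in the cepstral domain) are removed. The sketch below is a generic textbook version of that step, not the system's actual code.

```python
# Illustrative cepstral mean normalization (a generic sketch of one
# pipeline step, not this system's implementation). A fixed transmission
# channel multiplies the speech spectrum, which becomes an additive
# constant in the log/cepstral domain; subtracting the per-dimension
# mean over an utterance removes that constant.

def cepstral_mean_normalization(frames):
    """frames: list of feature vectors (lists of floats), one per frame.
    Returns the frames with the per-dimension mean subtracted."""
    num_frames = len(frames)
    dims = len(frames[0])
    means = [sum(f[d] for f in frames) / num_frames for d in range(dims)]
    return [[f[d] - means[d] for d in range(dims)] for f in frames]
```

After normalization, each feature dimension averages to zero over the utterance, which makes the recognizer less sensitive to the microphone and channel used at training versus deployment time.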
Potential applications include in-helmet voice communication for the design of new spacesuits; telecollaboration via multimedia telepresence; human-machine interface for intelligent systems; hands-free, in-car voice communication and processing; mobile phones; military voice communication and speech processing systems; telemedicine and telehealth; multi-party teleconferencing; and acoustic surveillance.
This work was done by Yiteng (Arden) Huang, Sherry Q. Ye, and Yao (Yaron) Zhou of WebVoice, Inc. for Glenn Research Center.
Inquiries concerning rights for the commercial use of this invention should be addressed to NASA Glenn Research Center, Innovative Partnerships Office, Attn: Steven Fedor, Mail Stop 4–8, 21000 Brookpark Road, Cleveland, Ohio 44135. Refer to LEW-18930-1.