
Chuck Jorgensen, Chief Scientist for Neuro Engineering, Ames Research Center, Moffett Field, CA

Chuck Jorgensen, Chief Scientist for the Neuro Engineering Lab at NASA Ames Research Center in Moffett Field, CA, currently studies bioelectrical interfacing, the detection of human emotion, and visualization. His research in subvocal speech was a 2006 finalist for the Saatchi & Saatchi international prize for world-changing ideas.

NASA Tech Briefs: What are some of the applications for bioelectrical interfacing?

Chuck Jorgensen: If you put someone in a constrained suit, like a space suit or a firefighter's or hazmat suit, the pressurization from the breathing apparatus, along with the limitations on finger movement in a pressurized suit, makes tasks like typing or fine joystick control very difficult, as does dealing with, say, an external robotic device that you might want to control with this system.

We began to ask several questions: Could we intercept the neurological signals prior to actually initiating a full movement and use those signals to send commands to devices? The first work that we did was to look at electromyographic signals, the surface measurement of the muscle innervation that occurs down the arm when, for example, you clench your fist. The electrical signals cause the muscles to contract; there's basically electrical activity that can be picked up with external sensors, and the magnitude of those signals, their timing, and how they behave can be intercepted, recorded, and turned into patterns that can be sent to a machine.
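As a rough illustration of the pipeline described here, surface EMG can be band-pass filtered, reduced to magnitude and timing features per window, and matched to command patterns with a simple classifier. The filter band, window features, gesture labels, and choice of classifier below are illustrative assumptions, not NASA's actual implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.neighbors import KNeighborsClassifier

def bandpass(emg, fs=1000.0, lo=20.0, hi=450.0):
    """Keep the 20-450 Hz band where most surface-EMG energy typically lies."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, emg, axis=0)

def features(window):
    """Magnitude/timing features per electrode channel:
    mean absolute value, RMS, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, rms, zc])

def train(windows, labels, fs=1000.0):
    """windows: list of (samples x channels) EMG arrays; labels: hypothetical
    command names such as "grip", "wrist_left", "rest"."""
    X = np.array([features(bandpass(w, fs)) for w in windows])
    return KNeighborsClassifier(n_neighbors=5).fit(X, labels)

def predict_command(clf, window, fs=1000.0):
    """Classify one new EMG window into a command pattern."""
    return clf.predict(features(bandpass(window, fs)).reshape(1, -1))[0]
```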

What we looked at first was: Could we intercept those neural commands without requiring something like a pilot's joystick for an airplane? The general idea would be that you reach into the air, grab an imaginary joystick, and fly a plane simply by clenching your fist and moving your wrist in different directions, as though you were magically touching real hardware. We demonstrated it to Administrator Bolden a number of years ago, landing a full Class-4 simulator at San Francisco's airport by actually reaching into the air and flying the plane.
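One hedged sketch of how recognized fist and wrist gestures might be mapped onto simulator inputs follows; the gesture names, control axes, and increments are assumptions for illustration only.

```python
# Hypothetical mapping from classified EMG gestures to simulator control nudges.
GESTURE_TO_CONTROL = {
    "wrist_up":    ("pitch", +0.2),
    "wrist_down":  ("pitch", -0.2),
    "wrist_left":  ("roll",  -0.2),
    "wrist_right": ("roll",  +0.2),
    "fist_clench": ("engage", 1.0),   # "grab" the imaginary joystick
    "rest":        (None, 0.0),
}

def gesture_to_command(gesture: str):
    """Translate a classified gesture into an (axis, increment) command."""
    return GESTURE_TO_CONTROL.get(gesture, (None, 0.0))
```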

The next question that arose was: If we could handle those fairly coarse muscle commands for something like grabbing a joystick, could we take it further? Can we intercept these electromyographic signals and type without a keyboard? We demonstrated that we could operate a numeric keypad by picking up the commands of individual fingers, reading the information off the outside of the arm, from the electromyographic signals, before they got to the hand. That was important because in certain kinds of tasks you might want to have gloves on, or the hand might take an impact against a surface, for an astronaut or, say, in a combat situation. So we wanted to pick up the signals before they got to the hand.
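Building on the classifier sketch above, a keypad driven by per-finger EMG classes might look something like the following; the finger-to-key mapping and the debounce rule are assumptions added for illustration.

```python
from collections import deque

# Hypothetical mapping from per-finger EMG classes to keypad digits.
FINGER_TO_KEY = {"thumb": "0", "index": "1", "middle": "2", "ring": "3", "little": "4"}

class VirtualKeypad:
    """Emit a keystroke only after the same finger class is seen in several
    consecutive EMG windows, to suppress spurious single-window detections."""
    def __init__(self, stable_windows=3):
        self.recent = deque(maxlen=stable_windows)

    def update(self, finger_class):
        self.recent.append(finger_class)
        if len(self.recent) == self.recent.maxlen and len(set(self.recent)) == 1:
            key = FINGER_TO_KEY.get(finger_class)
            self.recent.clear()
            return key
        return None
```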

That finally led to subvocal speech. If we can get signals that tiny on the arm, what about the tiny signals that are sent to the tongue and the larynx, the voice box? The implication is that we might be able to understand what somebody is going to say even if they didn't say it out loud.

We started developing a technology that let a person just move the mouth, or simulate the articulation of words without making any audible sound at all, and pick that speech up. We demonstrated it in a pressurized fire suit. We demonstrated it in pressurized underwater diving equipment. These are both environments analogous to a space suit, where you have a lot of respirator noise or background noise.
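As a simple illustration of matching silently articulated words against a small vocabulary, one could compare feature sequences from the throat sensors to prerecorded templates with dynamic time warping. This is a minimal sketch under that assumption, not the recognition method NASA actually used.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two feature sequences
    (frames x features), tolerant of differences in articulation speed."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize_word(utterance, templates):
    """templates: {word: feature sequence} recorded while silently articulating
    each vocabulary word. Returns the closest-matching word."""
    return min(templates, key=lambda w: dtw_distance(utterance, templates[w]))
```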

One of the big interests in subvocal speech was not only the ability to communicate silently, but also to communicate in the presence of extreme noise. An example of that would be someone at an airport near a large jet engine, where normally you wouldn't be able to talk on a cell phone or communicator. Normally you'd pick the speech up auditorily, as with a traditional microphone, but if that were overwhelmed by sudden noise, you'd still be able to pick it up through the actuation of the subvocal signals.