Chuck Jorgensen, Chief Scientist for Neuro Engineering, Ames Research Center, Moffett Field, CA

NTB: How limited is the vocabulary? What are the limitations of what you can communicate?

Jorgensen: The original work that we were doing had a fairly limited vocabulary, because the types of information you could extract from surface signals, without any kind of invasive procedure, meant we initially started with very small numbers of words: ten or fifteen words, for example, for things like police ten-codes. Later, we began to take a look at not just recognizing whole words, which is the way we originally started out: left, right, forward, backward, words that would control a robot platform, for example. We began to wonder: Can we pick up the vowels and the consonants, the building blocks of many words?

So there was some preliminary work done on that, and the answer was: Yes, we can pick up some of those vowels and consonants, but not all of them, because not everything that you’re doing with the muscles reflects what goes on with speech. An example of that would be what they call plosives, which are the popping type of sounds that you make by closing your lips and pressurizing your mouth (Peter, Paul, Pickled Peppers, etc.). Those types of plosive noises are not represented.
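As a rough sketch of the kind of small-vocabulary recognition described above, the example below classifies windows of surface-EMG data into a handful of command words using simple time-domain features and a small neural network. The word list, feature set, electrode count, and sampling details are illustrative assumptions, not details of the NASA system.

```python
# Minimal sketch: small-vocabulary word classification from surface EMG windows.
# All names and values here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

WORDS = ["left", "right", "forward", "backward", "stop"]  # illustrative command set

def emg_features(window: np.ndarray) -> np.ndarray:
    """Per-channel time-domain features: RMS, mean absolute value, zero crossings."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    zc = np.abs(np.diff(np.signbit(window).astype(int), axis=0)).sum(axis=0)
    return np.concatenate([rms, mav, zc])

# Synthetic stand-in data: 200 labeled windows of 512 samples x 4 electrode channels.
rng = np.random.default_rng(0)
X = np.stack([emg_features(rng.normal(size=(512, 4))) for _ in range(200)])
y = rng.integers(0, len(WORDS), size=200)

# A small neural network classifier over the feature vectors.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)
print(WORDS[clf.predict(X[:1])[0]])
```

With real recordings, the synthetic windows would be replaced by labeled EMG segments captured while the user mouths each word silently.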

We also did some work at Carnegie Mellon, connecting it to a classical speech recognition engine, except the front end of it was now a subvocal pickup. I believe that work got up into the hundreds, possibly a 1,000- to 2,000-word capability. That was probably the most advanced work using that specific approach to subvocal speech.
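A toy sketch of that architecture, in which a subvocal front end emits per-frame phoneme probabilities and a conventional decoder scores them against a pronunciation lexicon, might look like the following. The tiny lexicon and the crude alignment step are illustrative stand-ins, not the Carnegie Mellon pipeline.

```python
# Toy decoder: score lexicon words against per-frame phoneme posteriors
# produced by an (assumed) subvocal front end. Illustrative only.
import numpy as np

LEXICON = {"left": ["l", "eh", "f", "t"], "right": ["r", "ay", "t"]}
PHONES = sorted({p for prons in LEXICON.values() for p in prons})

def decode(frame_posteriors: np.ndarray) -> str:
    """Score each word by the mean log-probability of its phone sequence
    stretched across the frames (a crude stand-in for an HMM/DTW alignment)."""
    scores = {}
    for word, phones in LEXICON.items():
        idx = np.linspace(0, len(phones) - 1, num=len(frame_posteriors)).round().astype(int)
        cols = [PHONES.index(phones[i]) for i in idx]
        scores[word] = np.mean(np.log(frame_posteriors[np.arange(len(cols)), cols] + 1e-9))
    return max(scores, key=scores.get)

# Fake front-end output: 20 frames of posterior probabilities over the phone set.
rng = np.random.default_rng(2)
post = rng.dirichlet(np.ones(len(PHONES)), size=20)
print(decode(post))
```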

NTB: Where are we now? Is the technology in use currently?

Jorgensen: The NASA budget for that work was terminated, partly owing to the termination of a broader program, the Extension of the Human Senses. The idea has been picked up worldwide: there’s a very large group in Germany working on it now, and there have been a number of other activities around the world. I’m still getting calls from different people around the world who are pursuing it in their laboratories. Our ultimate goal on this, and I still think there’s work that can be done, was to develop a silent cell phone, so that we would be capable of communicating either audibly or silently on a cell phone using the same type of technology.

NTB: What does it look like, and is it a user-friendly technology?

Jorgensen: It’s mixed. It’s easier to implement with the coarser muscle movements, like, for example, the control-stick application of that technology; that’s very straightforward and can be a sleeve that is slid over your arm. Something like subvocal speech requires picking up signals at different areas around the mouth and the larynx. The reality of it is that you still have to place sensors on different areas of the face to pick it up.

We were originally doing our work with the classical type of wet electrode sensors that you would see if you were getting an electrocardiogram in a doctor’s office. They’re bulky. They’re patchy. We later did work on dry electrodes, which didn’t require that moisture, and the most advanced work currently out there, which we had also initiated, was capacitive sensors, which picked up the tiny electromagnetic fields without requiring direct contact with the skin. These sensors were brought down to about the size of a dime, and they’ve continued to shrink since then. That was an important part of the puzzle. We needed both the sensor technology to collect the signals in a non-obtrusive way and the processing algorithms to do something with them. We focused more on the processing algorithms. The Department of Defense has advanced the sensor side of it quite heavily; they, in fact, have entire helmets that have been populated with microsensors. The components are there, but so far it wouldn’t be a “drop it on” solution. There would have to be individual training and customization.
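On the processing-algorithm side, a typical first stage for signals from such electrodes or capacitive pickups is signal conditioning. The sketch below is a generic example of that step, with an assumed sampling rate and filter band rather than the laboratory’s actual parameters.

```python
# Generic EMG signal conditioning: band-pass for the surface-EMG band plus a
# notch for power-line interference. Sampling rate and cutoffs are assumptions.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 2000.0  # assumed sampling rate in Hz

def condition_emg(raw: np.ndarray) -> np.ndarray:
    """Band-pass 20-450 Hz, then notch out 60 Hz hum, channel by channel."""
    b, a = butter(4, [20.0, 450.0], btype="bandpass", fs=FS)
    x = filtfilt(b, a, raw, axis=0)
    bn, an = iirnotch(60.0, Q=30.0, fs=FS)
    return filtfilt(bn, an, x, axis=0)

# Example: condition one second of synthetic 4-channel data.
rng = np.random.default_rng(1)
clean = condition_emg(rng.normal(size=(int(FS), 4)))
print(clean.shape)
```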

NTB: What were your biggest technical challenges when you were designing this type of sensor technology?

Jorgensen: The sensor technology itself was not designed at NASA. We subcontracted it. It was based on an earlier technology initially developed by IBM called SQUID (Superconducting Quantum Interference Device). That patent was picked up by a company in southern California, Quasar Corp., which solved a number of problems that IBM was not able to solve. They’ve advanced that technology substantially, as have several other groups that have begun to do the same thing with nanosensors in gaming systems. So you’ll see that a lot of the children’s gaming systems are beginning to get pretty sophisticated in terms of the same kinds of signals they can pick up.

NTB: What is your day-to-day work? What are you working on currently?

Jorgensen: I’m a chief scientist at NASA Ames, and I started what is now referred to as the Neuro Engineering Laboratory. My current projects are actually focused in a slightly different area: they’re taking a look at the detection of human emotions. We’re looking at a number of ways to extract human emotional responses from various characteristics of the speech signal, particularly the set of characteristics called prosody.

We’ve been looking at the capability, for example, of using prosody as a way of detecting fatigue in pilot communications or air traffic controller communications, and also of detecting emotional states (fear, anger, happiness) by analyzing typical microphone acoustic signals. We’ve also been looking at the automation of various systems that monitor overall human behavior: things like pupil dilation, eye tracking, and other measures that all reflect emotional states.
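As an illustration of the kind of prosodic features involved, the sketch below summarizes pitch and energy contours from an audio clip. The feature set and the synthetic test signal are assumptions for the example, not the laboratory’s actual models.

```python
# Rough sketch of prosodic feature extraction (illustrative, not the lab's code).
import numpy as np
import librosa

def prosody_features(y: np.ndarray, sr: int) -> dict:
    """Summarize pitch (F0) and energy contours, two core prosodic cues."""
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]                      # keep voiced frames only
    energy = librosa.feature.rms(y=y)[0]
    return {
        "f0_mean": float(np.mean(f0)) if f0.size else 0.0,
        "f0_range": float(np.ptp(f0)) if f0.size else 0.0,  # flattened pitch is one fatigue cue
        "energy_var": float(np.var(energy)),
        "voiced_fraction": float(np.mean(voiced)),           # crude speaking-rate proxy
    }

# Synthetic placeholder audio: a two-second tone sweep standing in for speech.
sr = 16000
y = librosa.chirp(fmin=100, fmax=200, sr=sr, duration=2.0)
print(prosody_features(y, sr))
```

In practice, statistics like these would be computed over recorded pilot or controller speech and fed to a classifier trained on labeled examples of fatigue or emotional state.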