Chuck Jorgensen, Chief Scientist for Neuro Engineering, Ames Research Center, Moffett Field, CA
- Tuesday, 01 January 2013
NTB: Pupils and the eyes?
Jorgensen: There’s a large interest now in the commercial community, as well as developing interest at NASA, in determining what people’s emotional reactions are. For example, advertisers are very interested in how you sound on the phone. They want to determine whether somebody is getting unhappy with their service or reacting positively to a pitch they might be getting over the Internet, or to some kind of vocal communication. They want the kind of feedback that is sometimes missing in emails, where somebody has to put a little smiling icon in the message. They’d rather know automatically what somebody is really feeling when they say these things. Those human communication aspects, which are the focus I’m most involved in now, are broadcast on many channels. Those channels are things like your facial expression, the timbre of your voice, the dilation of your pupils, the rate of movement of the eyes, and the rate at which body position changes over time.
NTB: How do you respond to someone who might be skeptical saying that a machine couldn’t possibly detect emotion as well as a human?
Jorgensen: It’s certainly not at that state yet, but the interesting thing we’ve observed, for example, with actors attempting to show different emotions and having the machine detect them, is that the human raters of what emotion is being expressed don’t agree with one another at a much higher rate than what some of our machine evaluations report. So the humans themselves can’t always agree on what emotion is being expressed. The person can say, “I’m trying to express a happy emotion,” but the observer can be confused about whether they’re grimacing or laughing. It’s surprising. It’s hard to establish ground truth when you’re asking how well a machine is doing versus how well a person is doing.
NTB: What do you see as the easiest application for this type of technology?
Jorgensen: Within NASA, what I’m currently most interested in doing is something that would help identify pilot fatigue: cases where pilots may be reaching fatigue states without being consciously aware of it themselves. Fatigue begins to show up in various properties of their performance, in their voice, or in their emotional or neurological responses.
NTB: What are your biggest challenges there? Does your work involve constantly calibrating that technology?
Jorgensen: Parts of this are fairly cutting-edge. For example, in our current work we’re looking at over 988 variables extracted from the human voice alone, and the challenges there are formidable: determining which variables are actually the drivers for the different emotions, and how they have to be combined mathematically into different models, given the pattern-recognition questions involved.
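To make the variable-selection problem concrete, here is a minimal sketch of one common approach, ranking candidate voice features by a simple Fisher-style separation score between two emotion classes. This is purely illustrative; it is not NASA’s actual method, and the data shapes are assumptions.

```python
def fisher_score(xs_a, xs_b):
    """Separation of one feature between two classes:
    |difference of class means| / pooled standard deviation."""
    def mean(v):
        return sum(v) / len(v)
    def var(v, m):
        return sum((x - m) ** 2 for x in v) / len(v)
    ma, mb = mean(xs_a), mean(xs_b)
    pooled = ((var(xs_a, ma) + var(xs_b, mb)) / 2) ** 0.5
    return abs(ma - mb) / pooled if pooled else 0.0

def rank_features(samples_a, samples_b, top_k=3):
    """Rank feature indices by how well they separate class A from class B.
    samples_a / samples_b are lists of equal-length feature vectors
    (e.g. hundreds of acoustic measurements per utterance)."""
    n_features = len(samples_a[0])
    scores = []
    for j in range(n_features):
        col_a = [s[j] for s in samples_a]
        col_b = [s[j] for s in samples_b]
        scores.append((fisher_score(col_a, col_b), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:top_k]]
```

With hundreds of candidate variables, a scoring pass like this is typically only a first filter; the surviving features would still need to be combined in a multivariate model, which is the harder pattern-recognition question the interview alludes to.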
We’re actually looking at some other aspects of it as well, namely how to turn those patterns into visual images, having all those variables draw a picture. The picture can be recognized as the emotion anger, the emotion happiness, or something else, so the data themselves tell you the state of the system. This has applications beyond emotions; it can be used for system health monitoring, for example.
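The idea of having variables draw a picture resembles glyph-based visualization techniques such as Chernoff faces, where each data dimension controls one visual attribute of a small figure. A minimal sketch of that mapping step, with invented feature names and ranges (assumptions for illustration, not details from the interview):

```python
def features_to_glyph(features, ranges):
    """Normalize raw feature values into glyph parameters in [0, 1],
    so each multivariate sample can 'draw' its own picture.
    features: {name: value}; ranges: {name: (lo, hi)}."""
    glyph = {}
    for name, value in features.items():
        lo, hi = ranges[name]
        t = (value - lo) / (hi - lo)
        glyph[name] = max(0.0, min(1.0, t))  # clamp out-of-range values
    return glyph

def render_face(glyph):
    """Crude ASCII 'face': each parameter drives one visual attribute.
    A real system would drive a graphical glyph instead."""
    mouth = ")" if glyph.get("mouth_curve", 0.5) > 0.5 else "("
    eyes = "O O" if glyph.get("eye_size", 0.5) > 0.5 else "o o"
    return f"[{eyes}]\n[ {mouth} ]"
```

Because humans are good at recognizing whole shapes at a glance, the same mapping idea transfers naturally to the system-health-monitoring application mentioned above: an operator can spot an "unhealthy" glyph faster than an anomaly in hundreds of raw numbers.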
NTB: What would you say is your favorite part of the job?
Jorgensen: Definitely trying to do something that’s cutting-edge. My background is sort of a weird combination called mathematical psychology. What’s interesting to me is to try to take the soft sciences of psychology and social science and overlay a hard engineering mathematics basis on them. I find that a very fascinating combination because one side of it is rather intuitive, and the other side has to be very hard-nosed and analytical. Where the two meet makes for some interesting research challenges.