The head simulators are 3D printed as components and assembled, enabling customization at low cost. (Image: Augmented Listening Laboratory at the University of Illinois Urbana-Champaign)

The Augmented Listening Laboratory at the University of Illinois Urbana-Champaign wants to throw a cocktail party like you’ve never seen: one full of 3D-printed, humanoid robots listening and talking to each other.

While that may sound like nightmare fuel, realistic talking — and listening — heads are important for investigating how humans receive sound and developing audio technology.

“Simulating realistic scenarios for conversation enhancement often requires hours of recording with human subjects. The entire process can be exhausting for the subjects, and it is extremely hard for a subject to remain perfectly still in between and during recordings, which affects the measured acoustic pressures,” said Austin Lu, a student member of the team, which detailed its work last month at the 184th Meeting of the Acoustical Society of America. “Acoustic head simulators can overcome both drawbacks. They can be used to create large data sets with continuous recording and are guaranteed to remain still.”

The heads are 3D printed as components and assembled, enabling customization at low cost. The highly detailed ears are fitted with microphones at different points to simulate both human hearing and Bluetooth earpieces. The “talkbox,” or mouthlike loudspeaker, closely mimics the human voice. To facilitate motion, the researchers paid special attention to the neck. Because the 3D model of the head design is open source, other teams can download and modify it as needed. The diminishing cost of 3D printing means there is a relatively low barrier to fabricating these heads.

Acoustic head simulators are designed to have absorptive and structural properties similar to those of a real human head. (Image: illinois.edu)

“Our acoustic head project is the culmination of the work done by many students with highly varied technical backgrounds,” said Manan Mittal, a graduate researcher with the team. “Projects like this are due to interdisciplinary research that requires engineers to work with designers.”

The Augmented Listening Laboratory has also created wheeled and pulley-driven systems to simulate walking and more complex motion.

Here is a Tech Briefs interview, edited for length and clarity, with Mittal and Lu.

Tech Briefs: What inspired your research?

Mittal: As a group, the Augmented Listening Lab wants to work on, as you can imagine, augmenting human listening so that it’s beyond what you’re capable of today. The comparison our advisor tries to make is that when we want to observe something with our eyes that we can’t normally observe, we use a telescope or a microscope. In the same way, we want to be able to use microphone arrays in order to observe things that we normally wouldn’t be able to with our ears. But in order to do that, we need to understand how humans listen.
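To make the telescope analogy concrete: the standard way a microphone array “focuses” on a sound is delay-and-sum beamforming, in which each channel is time-shifted so that a chosen direction adds up in phase. Below is a minimal Python sketch of the idea; the geometry, function, and variable names are illustrative assumptions, not the lab’s code.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_direction, fs, c=343.0):
    """Align and sum array channels toward a far-field source.
    Illustrative sketch only; not the Augmented Listening Lab's code.

    signals:        (num_mics, num_samples) time-domain recordings
    mic_positions:  (num_mics, 3) microphone coordinates in meters
    look_direction: unit vector from the array toward the source
    fs:             sample rate in Hz; c is the speed of sound in m/s
    """
    num_mics, num_samples = signals.shape
    # A plane wave from look_direction reaches mics with a larger
    # projection onto that direction first; later mics get a larger
    # time advance so all channels line up before summing.
    proj = mic_positions @ look_direction              # meters
    advances = (proj.max() - proj) / c                 # seconds, >= 0
    out = np.zeros(num_samples)
    for m in range(num_mics):
        shift = int(round(advances[m] * fs))           # samples
        out[:num_samples - shift] += signals[m, shift:]
    return out / num_mics
```

Sound from the steered direction reinforces while off-axis sound partially cancels, which is what lets an array “hear” beyond a single ear.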

Typically, when you want to create data sets that involve human listeners, you have to get your research approved by the institutional review board. That’s time consuming, and the recordings contain a lot of private information that people don’t necessarily want to share. So, the idea was to create these reproducible human heads that take that burden away from the human talker or listener. Now, instead of having to get this approval and go through the whole process, we have a human replica head that can listen and talk for us.

The idea is to be able to create large data sets, because machine learning models these days require large data sets with realistic cues; humans use these cues to localize and separate sounds. So, we want the cues to be as accurate as possible.
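The cues Mittal refers to are chiefly the interaural time difference (ITD) and interaural level difference (ILD): the same sound arrives slightly earlier and slightly louder at the ear nearer the source. Here is a minimal sketch of estimating both from a binaural recording; the names and approach are illustrative assumptions, not the lab’s pipeline.

```python
import numpy as np

def binaural_cues(left, right, fs):
    """Estimate ITD via cross-correlation and ILD from RMS levels.
    Sketch for illustration; not the lab's actual processing.
    left, right: 1-D NumPy arrays of ear-microphone samples; fs in Hz.
    """
    # ITD: the lag (seconds) at which the two ear signals best align.
    # With this convention, a positive value means the left channel is
    # delayed relative to the right (source toward the listener's right).
    corr = np.correlate(left, right, mode="full")
    lags = np.arange(-len(right) + 1, len(left))
    itd = lags[np.argmax(corr)] / fs
    # ILD: level ratio between the ears, in decibels.
    eps = 1e-12  # avoid log(0) on silent channels
    rms_l = np.sqrt(np.mean(left ** 2)) + eps
    rms_r = np.sqrt(np.mean(right ** 2)) + eps
    ild = 20 * np.log10(rms_l / rms_r)
    return itd, ild
```

A data set recorded through the simulator’s ear microphones preserves these cues, which is what makes it useful for training localization and separation models.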

Tech Briefs: What was the biggest technical challenge you faced in your work?

Mittal: We broke the project up into many years of steps. The first one was to build a wooden head so that we could mimic the directivity patterns of human talkers. And that itself, I’d say, was the largest challenge to begin with: just getting that pattern right. Finding a loudspeaker driver that’s capable of producing the vocal range of a human is very hard. And then building a casing around it that can support that structure, look like a human, and listen like one is a challenging task. We actually had an industrial design student work on that. Then, obviously, a wood design is not convenient for anyone; it doesn’t propagate acoustics across it like a human head would. So, the idea was to make that 3D printable and reproducible so that anyone could print it.

Then the problem became making a model of this that was 3D printable and reproducible. And now our largest issue is replicating the acoustics of the human head: when sound hits the head from, let’s say, the left, does it propagate to the right the same way my head or your head would?
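One classical back-of-the-envelope check for that left-to-right propagation is the Woodworth rigid-sphere model, which approximates the interaural time difference produced by a head of radius a for a far-field source at azimuth θ as (a/c)(θ + sin θ). A quick sketch follows; the head radius is a commonly cited average, not a measurement from this project.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth rigid-sphere ITD approximation (far field).

    azimuth_deg: source angle from straight ahead, 0 to 90 degrees
    head_radius: sphere radius in meters (~8.75 cm average head)
    Returns the predicted interaural time difference in seconds.
    """
    theta = np.radians(azimuth_deg)
    # The far ear's wave wraps around the sphere, traveling an extra
    # a * (theta + sin(theta)) compared with the near ear.
    return head_radius * (theta + np.sin(theta)) / c

# A source 90 degrees to the side predicts roughly 0.66 ms of ITD.
print(f"{woodworth_itd(90) * 1e3:.2f} ms")
```

Comparing a printed head’s measured interaural delays against a simple model like this, and against real heads, is one way to judge whether the plastic carries sound realistically.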

Human listeners can discern where a particular sound is coming from based on the difference in volume and timing of sound at their left and right ears. (Image: illinois.edu)

Tech Briefs: Would you mind just explaining in simple terms how the technology works, please?

Lu: We want the head to talk realistically, to listen realistically, and to move realistically. Talking and listening realistically is where the 3D printing comes in. While there’s plenty of opportunity to use various types of 3D printing, we’re focusing purely on the shape and then filling in the head with materials. The challenge now is figuring out which materials will give us the acoustic properties we need.

Mittal: The whole thing really hinges on 3D printing, acoustics, and simulation. Essentially, the goal is to build these 3D-printed heads and then add other materials on top of that. When I was at the Acoustical Society of America conference, someone advised me to get a silicone paint that is soft to the touch and can coat the material to augment the surface of the head.

There are many things that we’re looking into in terms of manufacturing and construction of the head. Like Austin mentioned, the last goal was to have movement; for that, we ended up building a multi-axis turret that can make the head look left, right, up, or down, essentially like a human would during a conversation.

Tech Briefs: Your findings were presented a couple weeks ago at the 184th meeting of the Acoustical Society of America. Please talk about how that went.

Mittal: The idea of the Acoustical Society presentation was to present this work because we’ve not presented it before. We essentially went through the iterations of the head, starting with the wooden head and continuing through the 3D-printed head. Then we moved on to the acoustic simulations and the data collection. For the data collection and simulations part, we spoke about how we tried out multiple ear shapes because, obviously, different people have different ear shapes. So, we’d like to be able to swap out the ears that the head is wearing so that they match a particular talker or listener.

The other thing that we spoke about was the directivity pattern measurements. So, we showed a comparison between a real human talker, the plastic head that we 3D printed, the original wooden head, and one of the commercially available head devices.
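In its simplest form, a directivity pattern is just the radiated level as a function of angle, measured at a fixed distance around the talker. Here is a minimal sketch of how such a comparison could be computed from recordings; the layout and names are assumptions, as the team’s actual measurement code is not shown in this article.

```python
import numpy as np

def directivity_db(recordings, ref_index=0):
    """Normalized directivity pattern from equidistant recordings.

    recordings: (num_angles, num_samples) array, one take per angle
                around the source at a fixed distance
    ref_index:  which angle to use as the 0 dB reference (on-axis)
    Returns the level at each angle in dB relative to the reference.
    """
    rms = np.sqrt(np.mean(recordings ** 2, axis=1))
    return 20 * np.log10(rms / rms[ref_index])

# Example: 36 recordings taken every 10 degrees around the talker
# would yield a polar curve directly comparable across the human,
# wooden, plastic, and commercial heads.
```

Overlaying those curves for the human talker and each simulator makes the fidelity differences easy to see at a glance.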

Finally, one of the videos from our presentation showed that the head is actually part of the acoustical room system, which is essentially about robotizing acoustic experiments in order to make them reproducible and repeatable without human intervention.

Tech Briefs: What are your next steps? What’s the next move for the team?

Mittal: One, we’re trying to write an article on the acoustics of the heads, because we think it’s important to start getting people’s opinions and having them try out the head. We are going to be uploading the 3D design files for anyone to print and replicate as they like. Then we’re going to get the silicone paint and try it on the heads to see how it affects the propagation of the waves across them.

Obviously, we’d like to be able to test the talking heads from different angles. Since we have the turret, we’d like to be able to make the head point, say, to the northwest and then talk to someone who’s standing 180 degrees behind it. The idea would be to get to 360 degrees for full spatial coverage rather than the one plane that we’ve been restricted to.

Lastly, we’ve ordered commercial heads and simulator devices that are used in industry and research today, which can cost up to 1,000 times more than ours, in order to test our head simulators against them. We’d like to compare them and at least try to bring ours up to that standard at a fraction of the cost.

Tech Briefs: Do you have any advice for engineers aiming to bring their ideas to fruition?

Mittal: It’s OK to fail. This head has taken five years to become a paper that has gathered attention.

At no point during the whole process did any of us get demotivated or say it’s not worth doing, because it was for our research and we knew that there was a use for it. So, as an engineer coming up, don’t ever feel like you need to stop doing what you’re doing; one day maybe you will see the value, or you’ll see how people can use it and how you can improve things with it.