Interest in the simulation of acoustic environments has prompted a number of technology development efforts over the years for applications such as auralization of concert halls and listening rooms, spatial information displays in aviation, virtual reality, and improved sound effects for video games. Each of these applications imposes distinct task requirements, which in turn call for different approaches to the development of rendering software and hardware.
SLAB (Sound Lab) is a software-based, real-time virtual acoustic environment rendering system being developed as a tool for the study of spatial hearing. SLAB runs on the personal computer to take advantage of the low-cost PC platform while providing a flexible, maintainable, and extensible architecture that enables rapid development of experiments. The software provides an API (Application Programming Interface) for specifying the acoustic scenario, as well as an extensible architecture for exploring multiple rendering strategies. The SLAB Render API supports a number of parameters, including sound source specification (waveform and signal generation), source gain, source location, source trajectory, listener position, listener HRTF (Head-Related Transfer Function) database, surface location, surface material type, render plug-in specification, scripting, and low-level signal processing parameters.
This work was done by Joel Miller of San José State University Research Foundation (SJSURF) for Ames Research Center. This software is available for use; to request a copy, please visit here.