(Image: Institute of Science Tokyo)

Virtual reality (VR) technologies are rapidly advancing, allowing users to see and hear highly realistic virtual environments. But most VR systems rely only on visual and auditory experiences, leaving out one of the most powerful human senses: smell. Research shows that smell is strongly connected to memory, emotion, and environmental perception. However, incorporating multiple scents into VR experiences remains challenging.

Olfactory displays are devices that generate scents in response to digital content. Although promising, most of these devices are bulky and difficult to integrate into wearable VR systems. To overcome this, a team of researchers has developed a multi-channel wearable olfactory display capable of generating blended scents in real time. The team was led by Specially Appointed Professor Takamichi Nakamoto of the Laboratory for Future Interdisciplinary Research of Science and Technology (FIRST), Institute of Integrated Research, Institute of Science Tokyo (Science Tokyo), Japan, and included Doctoral Student Zhe Zou from the Department of Information and Communications Engineering, School of Engineering, Science Tokyo, and Kelvin Cheng, R&D Manager at Rakuten Mobile, Inc. and Rakuten Institute of Technology, Japan. Their findings were published in the IEEE Sensors Journal.

"We created a small-sized scent generation system that can be worn together with a VR device, so a user can experience scents that match the virtual environments as they explore, and a single user can use it at the same time," explained Nakamoto.

One of the key features of this device is its ability to blend multiple scents to match the VR display in real time. It can blend up to eight different fragrance components simultaneously, and by adjusting their mixing ratio, the system can reproduce a wide range of scents. The researchers achieved this by optimizing the methods for supplying and controlling fragrances while limiting the size of the driving circuit.
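As an illustration of this mixing-ratio approach, here is a minimal Python sketch. This is not the authors' code: the function name, channel labels, and normalization scheme are assumptions made for illustration; only the eight-channel limit comes from the article.

```python
NUM_CHANNELS = 8  # the display blends up to eight fragrance components


def blend_recipe(weights):
    """Normalize raw per-channel weights into mixing ratios.

    `weights` is a list of up to eight non-negative values, one per
    fragrance channel. The result sums to 1.0, so each entry could
    drive that channel's share of the dispensed scent.
    """
    if len(weights) > NUM_CHANNELS:
        raise ValueError(f"at most {NUM_CHANNELS} channels supported")
    if any(w < 0 for w in weights):
        raise ValueError("weights must be non-negative")
    total = sum(weights)
    if total == 0:
        raise ValueError("at least one weight must be positive")
    # Pad unused channels with zero so the output always has 8 entries.
    return [w / total for w in weights] + [0.0] * (NUM_CHANNELS - len(weights))


# Hypothetical "forest" scent: mostly channel 0, with hints of 1 and 2.
print(blend_recipe([6, 3, 1]))
```

Adjusting the weight vector shifts the reproduced scent continuously, which is what lets a small set of fragrance components cover a wide range of odors.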

"We wanted to develop a system that could reproduce complex scents quickly during immersive virtual experiences," noted Nakamoto.

To make this precise control possible, the team used several specialized components within the device. For scent generation, they used a microdispenser that releases extremely small droplets of liquid fragrance, along with a surface acoustic wave (SAW) atomizer that uses ultrasound to convert the droplets into a fine mist that can be readily perceived as scent. They also incorporated an electroosmotic pump (a device that moves liquid using electrical forces) to accurately control the amount of fragrance delivered to the microdispenser at any given time. Together, these components ensure stable scent generation with minimal delay.
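To show conceptually how a channel's mixing ratio could translate into dispensed mist, the following hypothetical Python sketch models one channel of the pipeline described above. Every name and number here (droplet volume, droplets per step, the `dispense` method) is invented for illustration and does not come from the paper.

```python
from dataclasses import dataclass


@dataclass
class ScentChannel:
    """One channel of the display, modeled as the three-stage pipeline
    the article describes: an electroosmotic pump feeds liquid fragrance
    to a microdispenser, and a SAW atomizer turns its droplets into mist.
    Droplet volume is an illustrative placeholder, not a measured value."""
    name: str
    droplet_volume_nl: float = 50.0  # hypothetical droplet size

    def dispense(self, ratio, droplets_per_step=100):
        # Scale the droplet count by this channel's mixing ratio,
        # so stronger components contribute proportionally more mist.
        n = round(droplets_per_step * ratio)
        return {"channel": self.name, "droplets": n,
                "volume_nl": n * self.droplet_volume_nl}


channels = [ScentChannel(f"ch{i}") for i in range(8)]
ratios = [0.6, 0.3, 0.1, 0, 0, 0, 0, 0]  # a sample eight-channel recipe
mist = [ch.dispense(r) for ch, r in zip(channels, ratios) if r > 0]
print(mist)
```

Running all active channels in the same step is one plausible way to realize the "blended in real time" behavior the article describes; the actual device's control scheme may differ.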

The researchers then tested the device by measuring how accurately it controlled the odor concentration and how quickly it generated scent. Through multiple experiments, they optimized the system to produce scent levels suitable for human perception in practical settings.

"We also created virtual travel content using these devices, so that users could visit various virtual locations and experience the scent at those places for a realistic travel experience," added Nakamoto.

In user tests, participants reported that adding scents significantly improved the sense of presence in the virtual environment. The device was also demonstrated at multiple international conferences and events, where many attendees experienced the technology first-hand. According to the authors, combining smell-based feedback with visual and auditory cues transforms the perception of the virtual environment, making it feel far more realistic and engaging.

Beyond entertainment, the technology has potential applications in simulation-based training, in therapeutic programs that stimulate memory and support rehabilitation, especially for elderly people, and in immersive demonstrations of fragrance products. As digital scent technology continues to evolve, this innovation marks a significant step in bringing multi-sensory virtual experiences closer to reality.

Here is an exclusive Tech Briefs interview, edited for length and clarity, with Nakamoto.

Tech Briefs: What was the biggest technical challenge you faced while developing this olfactory VR?

Nakamoto: Since the space available for its implementation is very limited, we cannot normally have many ingredients.

This work enables us to use a wearable olfactory display with many channels after developing the new circuits.

Tech Briefs: Can you explain, in simple terms, how it works?

Nakamoto: We use eight microdispensers to eject tiny droplets. Those droplets are instantaneously atomized by the surface acoustic wave (ultrasonic) device. Then, the airflow carries the scent to the user's nose. The device can blend the ingredients according to a specified recipe.

Tech Briefs: Do you have any set plans for further research/work/etc.? If not, what are your next steps?

Nakamoto: It can be applied to games, simulators, and advertisements if we have many channels. Moreover, it can be applied to medical uses; several such projects are ongoing. We plan to ask a company to manufacture it.

Tech Briefs: Is there anything else you’d like to add that I didn’t touch upon?

Nakamoto: We have a technique to reproduce an odor using a small set of odor components. It can be applied to odor reproduction even using the wearable olfactory display if the number of ingredients available increases.

Tech Briefs: Do you have any advice for researchers aiming to bring their ideas to fruition?

Nakamoto: Creators can make content with scents if an appropriate device is available. HCI (human-computer interaction) researchers can also use the wearable olfactory display.