Researchers from the University of Sussex are the first to combine two cutting-edge visualization technologies in a single system: a fog screen and a shape-shifting display. According to one of its creators, the resulting “MistForm” display enables interaction capabilities that improve upon today’s virtual- and augmented-reality offerings.

Breaking Down the Boundary

Virtual reality (VR) headsets immerse a user in a completely different environment, allowing a CAD designer, for example, to create a part and see it in three dimensions.

Although VR technology presents a potentially exciting new way to visualize parts and test design demonstrations, Dr. Diego Martinez Plasencia sees an obstacle to design collaboration: the headset.

“Something that really annoys me about 3D displays these days is that they are behind the boundary,” said Martinez, a Lecturer in the Interact Lab at the University of Sussex’s School of Engineering and Informatics. “It’s not like using your phone. Your phone – you take it out of your pocket and you’re using it.”

To use VR devices like the Oculus Rift and HTC Vive, a user must often load virtual prototypes and repeatedly put on and remove the headset, especially when working with others. Martinez, who co-developed MistForm with University of Sussex researchers Yutaka Tokuda, Mohd Adili Norasikin, and Sriram Subramanian, believes the shape-changing fog screen enables easier interaction between users.

“You can still see the 3D content without wearing anything on your head, and that way it’s more like a mixed reality environment,” said Martinez.

The MistForm fog screen bends to keep objects in focus. (Credit: University of Sussex)

The Form of MistForm

The MistForm mid-air display uses a flexible pipe to release fog particles, which are then stabilized by curtains of air blown by a series of fans. Actuators allow the pipe to change shape, pushing the 39-inch screen back and forth within a range of 7 inches. The display is projected from above using an off-the-shelf 3D projector.

Unlike a standard projection screen, which appears equally bright from anywhere in a room, fog scatters light unevenly and in various directions. With a stationary fog screen, a pixel appears bright to a viewer standing directly in line with the projector and dim to one viewing from the side.
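To get a sense of how steep that falloff can be, consider a rough back-of-the-envelope model (not one the researchers cite): fog is strongly forward-scattering, and its angular behavior is often approximated with the Henyey-Greenstein phase function. The Python sketch below assumes an illustrative anisotropy value of g = 0.9, not a measured MistForm figure.

```python
import numpy as np

def henyey_greenstein(cos_theta, g=0.9):
    """Relative scattered intensity at scattering angle theta.

    g near 1 models strongly forward-scattering media such as fog;
    g = 0.9 is an illustrative assumption, not a measured MistForm value.
    """
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * cos_theta) ** 1.5)

# Brightness seen in line with the projector (theta ~ 0) vs. 40 degrees off-axis:
head_on = henyey_greenstein(np.cos(np.radians(0.0)))
off_axis = henyey_greenstein(np.cos(np.radians(40.0)))
print(f"Off-axis pixel is roughly {off_axis / head_on:.1%} as bright as head-on.")
```

With these toy numbers, the off-axis pixel comes out at well under one percent of the head-on brightness, which is why a fixed fog screen can look washed out from the side.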

MistForm’s display changes shape based on the viewer’s position, keeping the image consistently bright and in focus.

The University of Sussex system uses the Microsoft Kinect motion-sensing platform to detect a user’s hand; the system then adjusts the screen accordingly, moving the fog inward or outward to achieve the best viewing experience. The display surface morphs so that both a 3D object and a user’s hand remain comfortably visible.
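In outline, that sense-and-adjust loop might look like the sketch below. The `kinect` and `screen` objects are hypothetical stand-ins for the Kinect hand-tracking interface and the pipe-actuator driver; the 7-inch travel and the roughly 6-centimeter follow distance Martinez mentions below are the figures reported for the prototype.

```python
import time

MAX_TRAVEL = 7.0        # inches of front-to-back travel reported for the prototype
FOLLOW_DISTANCE = 2.4   # about 6 cm, the gap the system reportedly maintains

def follow_hand(kinect, screen):
    """Keep the fog surface near the tracked hand.

    `kinect` and `screen` are hypothetical stand-ins for the Kinect
    hand-tracking call and the pipe-actuator driver.
    """
    while True:
        hand_z = kinect.hand_depth()                 # hand depth from the display front, inches
        target = min(max(hand_z, 0.0), MAX_TRAVEL)   # clamp to the actuators' range
        if abs(screen.depth() - target) > FOLLOW_DISTANCE:
            screen.move_to(target)                   # nudge the fog toward the hand
        time.sleep(1 / 30)                           # roughly once per Kinect frame
```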

“With other 3D display technologies, your eyes need to focus on the display surface, even if you see an object ‘popping out’ of the screen,” said Martinez. “If you then try to touch it, your eyes will need to focus either on your hand or on the display, which soon can lead to eye fatigue.”

To adjust the screen, the system relies on the 3D coordinates of the user’s fingers as well as those of the content being manipulated. A shape reconstruction model and a machine-learning algorithm then compute the display shape that best fits those points, actuating the pipe to minimize the distance between the fog screen and the finger or 3D object.

By reducing the distance between the fog surface and the 3D content, both remain in focus.
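The team’s actual reconstruction model and learning step are more sophisticated, but the core “fit the screen to the points” idea can be sketched with an ordinary least-squares curve fit. The polynomial form and the coordinates below are assumptions for illustration only.

```python
import numpy as np

def fit_screen_profile(points_xz, degree=2, max_travel=7.0):
    """Fit a smooth depth profile z(x) to fingertip/content coordinates.

    points_xz holds (x, z) pairs: x is the position along the 39-inch
    screen, z the depth. A plain polynomial least-squares fit stands in
    here for MistForm's shape reconstruction model.
    """
    xs, zs = zip(*points_xz)
    coeffs = np.polyfit(xs, zs, degree)  # minimizes squared distance to the points
    def profile(x):
        # Clamp to the 7-inch actuator range so the fit stays physically reachable.
        return np.clip(np.polyval(coeffs, x), 0.0, max_travel)
    return profile

# A fingertip at x=10", z=3" and virtual objects at x=25", z=5" and x=32", z=4":
profile = fit_screen_profile([(10.0, 3.0), (25.0, 5.0), (32.0, 4.0)])
print(profile(np.array([10.0, 25.0, 32.0])))
```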

"Our technology keeps the fog within 6 centimeters to your hand, and if you reach farther behind or closer to you, the fog can follow you so that in the end you can cover the whole interactive range," said Martinez.

The fog screen supports several possible configurations. For example, the screen can take on a triangular or curved shape to accommodate two collaborators and offer optimum visibility for both users.

To allow several users to interact simultaneously, developers can draw on the prototype’s software tools: the University of Sussex system includes a low-level API to control the screen’s shape, plus 2D/3D widgets that provide multi-user views.
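The class and method names below are hypothetical, not the actual Sussex API; they are meant only to suggest how a low-level shape layer and a higher-level multi-user widget might stack, in the spirit of the triangular two-user configuration described above.

```python
import numpy as np

class MistFormScreen:
    """Hypothetical low-level shape layer (illustrative names only)."""

    def __init__(self, n_actuators=8, max_travel=7.0):
        self.max_travel = max_travel
        self.depths = np.zeros(n_actuators)  # per-actuator depth, in inches

    def set_profile(self, depths):
        """Drive each actuator to a target depth, clamped to its travel range."""
        self.depths = np.clip(depths, 0.0, self.max_travel)

class TwoUserView:
    """Hypothetical widget: bend the screen so each collaborator faces their half."""

    def __init__(self, screen):
        self.screen = screen

    def update(self, depth_a, depth_b):
        # Ramp each half of the screen toward its user, meeting in the middle,
        # as a rough analogue of the triangular configuration described above.
        n = len(self.screen.depths)
        mid = (depth_a + depth_b) / 2
        half = n // 2
        profile = np.concatenate([
            np.linspace(depth_a, mid, half),
            np.linspace(mid, depth_b, n - half),
        ])
        self.screen.set_profile(profile)

screen = MistFormScreen()
TwoUserView(screen).update(depth_a=2.0, depth_b=6.0)
print(screen.depths)
```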

A Step-by-Step Process

Josh Kinne, a deputy project manager at NASA Langley Research Center in Hampton, VA, has helped to deploy virtual and augmented reality headsets throughout the facility. Kinne appreciates Langley’s current AR/VR hardware and is wary of any change that would require an infrastructure overhaul.

"One of the things we like about the current AR/VR hardware is that it’s cheap, relatively portable, and can be deployed easily just about anywhere in minutes," said Kinne.

Kinne suggested, however, that the MistForm display, as it matures, could eventually be a candidate for data visualization applications, such as examining Mars landing-site elevations or cloud-height data from an Earth-observing satellite.

Martinez sees the 3D display technology someday supporting drug designers looking for more exploratory ways to study electrical interactions between proteins, or to determine the shape of a drug and how it binds to cells.

The tool, however, is such a leap from standard visualization practices that it will have to be applied gradually over years, said Martinez.

“This is such a big change. You need to be bringing this into the light, step by step.”

RELATED CONTENT:

See who is making VR a NASA Reality.

Learn about NASA’s Head-Mounted Display Latency Measurement Rig.

Read more Imaging tech briefs.

What do you think? Will shape-shifting fog screens catch on? Place your comments in the form below.