Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software instead enables a person to intuitively and simultaneously control the position, scale, and orientation of synthetic objects in a 3D virtual environment using natural gestures.
Enabling the control of 3D objects with a commercial motion-capture system allows the many degrees of freedom of the human body to map naturally onto the manipulation of those objects. This reduces training time for such tasks and eliminates the need to build an expensive, special-purpose controller.
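To make that mapping concrete, the following Python sketch derives position, scale, and orientation updates from two tracked hand positions: the midpoint of the hands drives translation, the change in hand separation drives scale, and the rotation of the axis between the hands drives orientation. This is a minimal sketch of one plausible mapping, assuming a two-handed grab gesture; the function name and conventions are illustrative, not the specific scheme implemented in NPO-47893.

    import numpy as np

    def two_hand_transform(left, right, ref_left, ref_right):
        """Map two tracked hand positions (3-vectors) to a transform update.

        ref_left/ref_right are the hand positions captured when the grab
        gesture began; left/right are the current positions.
        """
        # Translation: displacement of the midpoint between the hands.
        translation = (left + right) / 2.0 - (ref_left + ref_right) / 2.0

        # Scale: ratio of current hand separation to the initial separation.
        scale = np.linalg.norm(right - left) / np.linalg.norm(ref_right - ref_left)

        # Orientation: shortest rotation (axis-angle) carrying the initial
        # inter-hand axis onto the current one.
        v0 = (ref_right - ref_left) / np.linalg.norm(ref_right - ref_left)
        v1 = (right - left) / np.linalg.norm(right - left)
        axis = np.cross(v0, v1)
        angle = float(np.arctan2(np.linalg.norm(axis), np.dot(v0, v1)))
        if np.linalg.norm(axis) > 1e-9:
            axis = axis / np.linalg.norm(axis)
        return translation, scale, axis, angle

Applied once per motion-capture frame, the returned translation, scale factor, and axis-angle rotation can be composed with the object's transform at the moment the grab began, so all three quantities are controlled simultaneously.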
This work was done by Jeffrey S. Norris, Victor Luo, Thomas M. Crockett, Khawaja S. Shams, and Mark W. Powell of Caltech; and Anthony Valderrama of MIT for NASA’s Jet Propulsion Laboratory.
This software is available for commercial licensing; for inquiries, contact Daniel Broderick of the California Institute of Technology.
This Brief includes a Technical Support Package (TSP). Motion-Capture-Enabled Software for Gestural Control of 3D Models (reference NPO-47893) is currently available for download from the TSP library.
Overview
The document outlines a NASA Jet Propulsion Laboratory (JPL) initiative to develop Motion-Capture-Enabled Software for Gestural Control of 3D Models, as detailed in NASA Tech Briefs NPO-47893. The project, led by Principal Investigator Jeff Norris, aims to leverage commercial motion-capture technology to address challenges in human-robot interaction, particularly in the context of space exploration.
The primary objectives of the project are to enhance navigation within virtual environments, enable gestural control of robotic systems, and facilitate the manipulation of virtual objects. Results from fiscal year 2010 demonstrate significant progress, including demonstration applications that allow users to manipulate 3D models through gestures, navigate lunar landscapes, and control a virtual avatar. These applications were tested with multiple users, and feedback was collected to refine the technology further.
One of the key benefits of this technology is its ability to control high-degree-of-freedom robotic systems, such as the Robonaut 2 humanoid robot aboard the International Space Station (ISS). Traditional control methods, which often rely on keyboards, mice, or specialized controllers, can be unintuitive and expensive. In contrast, the motion capture system provides a more natural mapping of human movements to robotic commands, eliminating the need for custom controllers for each robotic system.
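As a simplified illustration of that natural mapping, the sketch below retargets a single tracked human joint (the elbow) to a clamped command for a corresponding robot joint: the flexion angle is recovered from three tracked marker positions and clipped to the robot's range of motion. The joint limits, names, and single-joint scope are assumptions for illustration; the actual Robonaut 2 control interface is not described in the source.

    import numpy as np

    # Hypothetical range of motion for one robot elbow joint, in radians;
    # the real Robonaut 2 limits are not specified in this document.
    ELBOW_LIMITS = (0.0, 2.6)

    def interior_angle(a, b, c):
        """Angle at point b formed by segments b->a and b->c."""
        u = (a - b) / np.linalg.norm(a - b)
        v = (c - b) / np.linalg.norm(c - b)
        return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

    def retarget_elbow(shoulder, elbow, wrist):
        """Map tracked shoulder/elbow/wrist positions to a robot joint command."""
        # Flexion is zero for a straight arm (interior angle of pi radians).
        flexion = np.pi - interior_angle(shoulder, elbow, wrist)
        return float(np.clip(flexion, *ELBOW_LIMITS))

Repeating this per joint across the tracked skeleton yields a full command vector for a high-degree-of-freedom robot, with each command clamped to the hardware's limits.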
The document also describes the immersive environment created by JPL's Stage, a 12-foot-diameter, 270-degree display that enhances the operator's sense of immersion. Users can wear augmented reality glasses to interact with physical objects from the ISS while their movements are tracked and translated into commands for Robonaut 2. This approach not only improves the usability of control systems in immersive displays but also opens new avenues for human-robot collaboration in space.
Overall, the project signifies a substantial step forward in the integration of motion capture technology into aerospace applications, with potential implications for both scientific research and commercial technology. The document emphasizes the importance of continued research and development in this area, highlighting the potential for broader technological, scientific, and commercial applications stemming from these advancements.

