Gesture-controlled interfaces are software-driven systems that facilitate device control by translating visual hand and body signals into commands. Such interfaces could be especially attractive for controlling self-service machines (SSMs) — for example, public information kiosks, ticket dispensers, gasoline pumps, and automated teller machines (see figure).

A user would control an automated teller machine through gestures. Panels on the sides of the machine would depict static and dynamic hand and arm signals recognized by the system.

A gesture-controlled interface would include a vision subsystem comprising one or more charge-coupled-device video cameras (at least two would be needed to acquire three-dimensional images of gestures). The output of the vision subsystem would be processed by a gesture-recognition subsystem implemented entirely in software. A translator subsystem would then convert a sequence of recognized gestures into commands for the SSM to be controlled; these could include, for example, commands to display requested information, change control settings, or actuate a ticket- or cash-dispensing mechanism.
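The brief does not describe an implementation, but the division of labor among the three subsystems might be sketched roughly as follows. All of the names here (Gesture, SSMCommand, GestureRecognizer, Translator, and the gesture-to-command table) are illustrative assumptions, not part of the described system.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Sequence


class Gesture(Enum):
    """Hypothetical vocabulary of gestures the recognizer could report."""
    OPEN_PALM = auto()
    POINT_UP = auto()
    POINT_DOWN = auto()
    WAVE = auto()


class SSMCommand(Enum):
    """Hypothetical commands issued to the self-service machine."""
    SHOW_INFO = auto()
    SCROLL_UP = auto()
    SCROLL_DOWN = auto()
    DISPENSE = auto()


@dataclass
class Frame:
    """One image (or stereo pair) delivered by the vision subsystem."""
    pixels: bytes  # placeholder for raw camera data


class GestureRecognizer:
    """Software-only subsystem: turns a stream of frames into gestures."""

    def recognize(self, frames: Sequence[Frame]) -> List[Gesture]:
        # A real recognizer would segment the hand or arm, extract its
        # shape (and motion, for dynamic gestures), and match templates.
        raise NotImplementedError


class Translator:
    """Maps sequences of recognized gestures onto SSM commands."""

    # Illustrative mapping only; actual gesture-to-command assignments
    # would depend on the SSM being controlled.
    _TABLE = {
        (Gesture.OPEN_PALM,): SSMCommand.SHOW_INFO,
        (Gesture.POINT_UP,): SSMCommand.SCROLL_UP,
        (Gesture.POINT_DOWN,): SSMCommand.SCROLL_DOWN,
        (Gesture.WAVE, Gesture.OPEN_PALM): SSMCommand.DISPENSE,
    }

    def translate(self, gestures: Sequence[Gesture]) -> List[SSMCommand]:
        key = tuple(gestures)
        return [self._TABLE[key]] if key in self._TABLE else []
```

In this sketch, translation is a simple table lookup over short gesture sequences; the brief requires only that some such mapping exist between recognized gestures and machine commands.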

Depending on the design and operational requirements of the SSM to be controlled, the gesture-controlled interface could be designed to respond to specific static gestures, dynamic gestures, or both. These gestures could include stationary or moving hand signals, arm poses or motions, and whole-body postures or motions. Static gestures would be recognized on the basis of their shapes; dynamic gestures would be recognized on the basis of both their shapes and their motions. Because dynamic gestures include temporal as well as spatial content, the interface could extract more information from dynamic gestures than from static ones.
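As a rough illustration of the static/dynamic distinction, the fragment below decides how to match a tracked hand gesture by how much the hand moves over the observation window. The motion threshold, the descriptor format, and the match_* helpers are assumptions for illustration only, not the system's actual method.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]   # hand-centroid position in image coordinates
MOTION_THRESHOLD = 20.0       # assumed pixel displacement separating static from dynamic


def path_length(trajectory: List[Point]) -> float:
    """Total distance traveled by the hand centroid across the frames."""
    return sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))


def classify_gesture(shape_descriptor: List[float],
                     trajectory: List[Point]) -> str:
    """Static gestures are matched on shape alone; dynamic gestures on
    shape plus motion, so they carry temporal as well as spatial content."""
    if path_length(trajectory) < MOTION_THRESHOLD:
        return match_static_shape(shape_descriptor)
    return match_dynamic_pattern(shape_descriptor, trajectory)


def match_static_shape(shape_descriptor: List[float]) -> str:
    # Placeholder: compare the hand-shape descriptor against stored
    # templates (e.g., open palm vs. pointing finger).
    return "static:unknown"


def match_dynamic_pattern(shape_descriptor: List[float],
                          trajectory: List[Point]) -> str:
    # Placeholder: compare both the shape and the motion path against
    # stored templates (e.g., a wave or a circular motion).
    return "dynamic:unknown"
```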

Gesture-controlled interfaces offer several advantages over other input devices commonly used in SSMs:

  • There would be no mechanical wear because, unlike a keyboard, push-button switch, or computer mouse, a gesture-controlled interface would contain no moving parts.
  • Inasmuch as there would be no direct contact with users, there would be no problem of hygiene as there is with a touch screen.
  • Unlike a speech-recognition system, a gesture-controlled interface could operate in a noisy location because it would not respond to sound.
  • The safety of users of automated teller machines could be increased because the translator subsystems of gesture-controlled interfaces could be made to recognize poses and motions associated with crimes committed at such machines.
  • Systems could be designed to recognize gestures that are natural to users, thereby decreasing the time required to learn how to operate SSMs. The area of an SSM surrounding its display screen could contain pictures of the hand signals or other gestures recognized by the system.
  • The use of gestures as a communication medium may help to overcome language barriers to the use of SSMs in communities with diverse populations.

This work was done by Charles J. Cohen and Glenn Beach of Cybernet Systems Corp. for Johnson Space Center. In accordance with Public Law 96-517, the contractor has elected to retain title to this invention. Inquiries concerning rights for its commercial use should be addressed to:

Cybernet Systems Corporation
727 Airport Boulevard
Ann Arbor, MI 48108
Phone: (734) 668-2567
Fax: (734) 668-8780
Web: www.cybernet.com

Refer to MSC-23002, volume and number of this NASA Tech Briefs issue, and the page number.




This article first appeared in the December 2006 issue of NASA Tech Briefs Magazine (Vol. 30, No. 12).
