An operator interface is undergoing development along with other subsystems of a robot that would assist in caring for elderly and disabled patients. This development has been initiated in anticipation of an increasing need for long-term care, in both home and hospital environments, with decreasing ratios of medical personnel to patients. Robots of the type envisioned would be controllable by both medical personnel and patients with some disabilities. Typically, the robots would be used to perform routine, mostly nonmedical tasks like delivering food and retrieving articles of clothing or other objects for patients.

The prototype robot includes a battery-powered mobile platform equipped with a color camera mounted on a pan-and-tilt head, and an array of ultrasound, infrared, and contact sensors for autonomous navigation. A fully developed robot would also include a manipulator arm. Some control functions are performed by a computer mounted on the platform; other control functions are performed by a computer at a control station, where the operator interface resides (see figure). In a fully developed robotic system of this type, the control station could be located at a patient's bedside or in a nurse's office, for example.

The Operator Interface resides in a computer at a control station that communicates with a computer in the robot.

To accommodate the needs of patients and medical personnel, and the limited dexterity of some patients, in a variety of situations, the operator interface accepts inputs and generates outputs in a variety of media. In particular, it provides an integrated combination of voice, video, and computer graphical inputs and outputs, plus mouse and joystick inputs. The voice mode is the main command mode and is included primarily because many elderly and disabled patients would find it difficult or impossible to use the mouse and/or keyboard. The voice mode is implemented by a voice-recognition engine running commercial speech-recognition software that listens for voice input from the operator and seeks to match the operator's utterances with any of fourteen commands (e.g., stop, turn left, move forward) stored in active voice menus. When the robot receives and interprets a voice command, a speech synthesizer in the robot restates the command for confirmation.
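The voice-command loop described above can be sketched roughly as follows. This is an illustrative assumption, not the actual JPL implementation: the menu contents (beyond the three commands the article names), the function names, and the exact-match rule are all hypothetical.

```python
# Hypothetical sketch of matching a recognized utterance against an
# active voice menu and producing the confirmation the robot's speech
# synthesizer would restate. Only "stop", "turn left", and "move
# forward" come from the article; the rest is assumed for illustration.

ACTIVE_VOICE_MENU = {
    "stop", "turn left", "turn right", "move forward", "move backward",
    # ... the article cites fourteen commands in all
}

def interpret_utterance(utterance: str):
    """Match a recognized utterance against the active voice menu."""
    phrase = utterance.strip().lower()
    return phrase if phrase in ACTIVE_VOICE_MENU else None

def handle_voice_input(utterance: str) -> str:
    """Return the phrase restated to the operator for confirmation."""
    command = interpret_utterance(utterance)
    if command is None:
        return "command not recognized"
    return f"executing: {command}"
```

In a real system the matching would be done by the commercial recognition engine itself; the point here is only that recognition is constrained to a small active menu, which keeps the vocabulary tractable for elderly and disabled users.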

The operator interface includes a graphical user interface that can present visual feedback in the forms of images from the video camera on the robot, images from a remote video camera (e.g., during a video conference with a physician), a graphical display of the robot on a map of the environment, or a graphical display of sonar readings of distances and directions from the robot to nearby objects. The data-entry portion of the graphical user interface includes interactive displays for using the mouse to select the voice mode, to set translational and rotational speeds of the robot, or to select joystick control of motion. The mouse can be used to stop all motions of the robot in an emergency. The mouse can also be used to move a cursor to a target position indicated on the map graphical display and to command the robot to move to that position; when the mouse button is clicked at the target position, the robot moves toward the target, automatically avoiding obstacles along the way.

This work was done by Paolo Fiorini, Homayoun Seraji, and Khaled Ali of Caltech and K. G. Engelhardt of KG Robotics for NASA's Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.techbriefs.com under the Electronic Systems category, or circle no. 187 on the TSP Order Card in this issue to receive a copy by mail ($5 charge).

NPO-20072

This article first appeared in the February, 1998 issue of Motion Control Tech Briefs Magazine.