A system of electronic hardware and software has been developed as an experimental prototype of a visual interface between a human operator and a possibly remote one-arm anthropomorphic robotic manipulator. The system is denoted, more specifically, as a vision-only operator interface to emphasize that unlike some other operator interfaces, it does not include joysticks, force-feedback devices, or other mechanical devices that could encumber the operator. The operator commands the robot by moving one of his or her arms; the operator receives feedback in the form of a live video image of the work space of the robot overlaid with a graphical model of the robot plus icons that warn of robot poses that should be avoided.
The figure is a simplified schematic depiction of the flow of data within the system and the sequence of actions performed by the system. The flow of data begins with the acquisition of images of the operator's arm by four video cameras that surround the operator. To facilitate tracking, the operator's space is darkened and the main joints (shoulder, elbow, and wrist) of the operator's arm are marked with small light bulbs. The image data from the video cameras are processed into three-dimensional Cartesian coordinates of the main joints at a video frame rate of 60 Hz, with an accuracy of 10 mm.
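The article does not specify how the marker images from the four cameras are combined into 3-D coordinates. A minimal sketch of one common approach, under the assumption that each camera contributes a calibrated viewing ray toward a marker: the marker position is recovered as the midpoint of the shortest segment between two such rays (all function names here are illustrative, not from the article).

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def add3(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + t1*d1 and c2 + t2*d2
    (camera centers c1, c2; viewing directions d1, d2)."""
    w0 = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / denom   # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom   # parameter of closest point on ray 2
    p1 = add3(c1, scale(d1, t1))
    p2 = add3(c2, scale(d2, t2))
    return scale(add3(p1, p2), 0.5)
```

For example, two cameras at (0, 0, 0) and (10, 0, 0) whose rays both pass through a marker at (1, 2, 3) recover exactly that point; with noisy rays the midpoint averages the residual error.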
The coordinates of the main joints of the operator’s arm are converted to commanded angles for joints of the robot arm. These commanded angles are used to construct the graphical model of the robot to be overlaid on the live video image of the robot. The model data are analyzed to detect self-collisions, which are defined here as situations in which two links of the manipulator come too close to each other. The links in danger of colliding with each other can be highlighted in the graphical display to help the operator avoid self-collisions; alternatively, lines indicating distances of closest approach can be drawn in the display.
In this system, the commanded joint angles are generated by use of the configuration-control formalism, which has been described in a number of prior NASA Tech Briefs articles. The configuration-control formalism can resolve mathematical singularities associated with kinematic redundancies, but it introduces algorithmic singularities at robot poses that the operator would not easily recognize as singular. Therefore, in the graphical display, the work space in the vicinity of the wrist of the robot arm is discretized into cubes, and each such cube is marked in red when the quantitative singularity measure, evaluated at the center of that cube, crosses a specified threshold, warning of the risk of a kinematic or algorithmic singularity there. The measure is the determinant of J_A·J_A^T, where J_A is the augmented Jacobian matrix of the system, computed for the current orientation and the current manipulator arm angle (defined as the angle between the shoulder/elbow/wrist plane and the vertical plane that contains both the wrist and the base of the robot); this determinant approaches zero as the arm nears a singular pose.
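The article's measure is computed from the full augmented Jacobian of the redundant arm, which is not reproduced here. As a simplified, self-contained analogue (the planar two-link arm, its link lengths, and the threshold below are assumptions for illustration only), the same idea can be sketched with a 2×2 Jacobian: det(J·J^T) goes to zero as the arm straightens into a singular pose, so a cube is flagged when the determinant falls below a threshold.

```python
import math

def planar_jacobian(theta1, theta2, l1=1.0, l2=1.0):
    """2x2 Jacobian of a planar two-link arm (wrist position vs. joint angles)."""
    s1, c1 = math.sin(theta1), math.cos(theta1)
    s12, c12 = math.sin(theta1 + theta2), math.cos(theta1 + theta2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def singularity_measure(J):
    """det(J J^T); for a square J this equals (det J)^2 and -> 0 near a singularity."""
    det_j = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return det_j * det_j

def mark_cube_red(J, threshold=0.1):
    """True when the pose at the cube's center is dangerously close to singular."""
    return singularity_measure(J) < threshold
```

For this arm det J = l1·l2·sin(theta2), so a fully extended elbow (theta2 = 0) is flagged, while a right-angle elbow with unit links gives a measure of 1 and is not.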
This work was done by Paolo Fiorini, Eugene Chalfant, Pietro Perona, Enrico DiBernardo, and Yuichi Tsumaki of Caltech for NASA’s Jet Propulsion Laboratory. NPO-20912
This Brief includes a Technical Support Package (TSP).