The traditional interface for remotely operating robots employs a computer screen and mouse to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task. But for someone who isn’t an expert, the ring-and-arrow system is cumbersome and error-prone. It’s not ideal, for example, for older people trying to control assistive robots at home.
A new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient, and doesn’t require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work.
The traditional ring-and-arrow system is a split-screen method. The first screen shows the robot and the scene; the second is a 3D, interactive view where the user adjusts the virtual gripper and tells the robot exactly where to go and grab. The new point-and-click format doesn't include 3D mapping. It provides only the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an item, the robot's perception algorithm analyzes the object's 3D surface geometry to determine where the gripper should be placed.
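The researchers' actual perception code isn't described in detail here, but the general idea — back-project the clicked pixel into 3D using a depth camera, estimate the local surface normal, and place the gripper along that normal — can be sketched as follows. This is a minimal illustration under assumed conditions (a pinhole camera model with known intrinsics and a depth image), not the team's implementation; the function names and the fixed standoff distance are hypothetical.

```python
import numpy as np

def pixel_to_point(depth, u, v, fx, fy, cx, cy):
    """Back-project a clicked pixel (u, v) to a 3D point in the
    camera frame, using pinhole intrinsics (fx, fy, cx, cy)."""
    z = depth[v, u]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def estimate_normal(depth, u, v, fx, fy, cx, cy, k=5):
    """Estimate the local surface normal around the click by fitting
    a plane (via SVD) to a patch of back-projected neighbors."""
    pts = []
    for dv in range(-k, k + 1):
        for du in range(-k, k + 1):
            uu, vv = u + du, v + dv
            if (0 <= vv < depth.shape[0] and 0 <= uu < depth.shape[1]
                    and depth[vv, uu] > 0):  # skip invalid depth readings
                pts.append(pixel_to_point(depth, uu, vv, fx, fy, cx, cy))
    pts = np.asarray(pts)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the singular vector with the smallest
    # singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    # Orient the normal toward the camera (negative z in camera frame).
    return n if n[2] < 0 else -n

def propose_grasp(depth, u, v, fx, fy, cx, cy, standoff=0.10):
    """Place the gripper `standoff` meters off the clicked surface
    point, approaching along the inward surface normal."""
    p = pixel_to_point(depth, u, v, fx, fy, cx, cy)
    n = estimate_normal(depth, u, v, fx, fy, cx, cy)
    # n points toward the camera, so p + standoff * n backs the
    # gripper away from the surface; -n is the approach direction.
    return p + standoff * n, -n
```

On a flat surface one meter from the camera, clicking the principal point yields a gripper pose 0.9 m out along the optical axis, approaching straight into the surface. Real systems typically rank several candidate grasps from this geometry and let the user pick one, matching the "chooses a grasp" step described above.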