Researchers at the Georgia Institute of Technology have developed a way for robots to project their next action into the 3D world and onto any moving object, such as car parts on an assembly line. The achievement will help to improve human and robot safety in manufacturing scenarios.

The team first created algorithms that allow a robot to detect and track 3D objects. The engineers then developed a second set of algorithms that displays information on a 3D object in a geometrically correct way. Tying the two pieces together lets the robot recognize an object, identify where on that object to project information, and act. The projected information continuously follows the object as it moves, rotates, or otherwise changes.
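In practice, that second step amounts to projection mapping: once the tracker supplies the object's 6-DoF pose for each frame, annotation points defined on the object's surface can be transformed into the world frame and re-projected through a projector that has been calibrated like a pinhole camera. The sketch below illustrates that per-frame step using OpenCV; the intrinsic values, the tracker, and the renderer names are hypothetical stand-ins, not the Georgia Tech implementation.

```python
# Minimal sketch of the per-frame projection step, assuming a tracker
# already supplies the object's 6-DoF pose and the projector has been
# calibrated like a camera. All numbers and helper names are hypothetical.
import numpy as np
import cv2

# Hypothetical projector intrinsics (pixels) from a camera-style calibration.
K_proj = np.array([[1400.0,    0.0, 640.0],
                   [   0.0, 1400.0, 360.0],
                   [   0.0,    0.0,   1.0]])
dist_proj = np.zeros(5)  # assume negligible lens distortion

def project_annotation(points_obj, rvec_obj, tvec_obj, rvec_w2p, tvec_w2p):
    """Map points defined on the object's surface (object frame) to
    projector pixels, so the drawn cue stays anchored to the object.

    points_obj         : (N, 3) points in the object's own frame
    rvec_obj, tvec_obj : tracked object-to-world pose for this frame
    rvec_w2p, tvec_w2p : world-to-projector extrinsics from calibration
    """
    # Object frame -> world frame, using the tracked pose.
    R_obj, _ = cv2.Rodrigues(np.asarray(rvec_obj, dtype=np.float64))
    points_world = points_obj @ R_obj.T + np.asarray(tvec_obj).reshape(1, 3)

    # World frame -> projector pixels. The projector is modeled as a
    # pinhole camera, so cv2.projectPoints applies its extrinsics and
    # intrinsics in one call.
    pixels, _ = cv2.projectPoints(points_world.astype(np.float64),
                                  rvec_w2p, tvec_w2p, K_proj, dist_proj)
    return pixels.reshape(-1, 2)

# Per-frame loop: re-query the tracker and redraw, so the projected cue
# follows the part as it moves or rotates.
# while True:
#     rvec_obj, tvec_obj = tracker.get_pose()        # hypothetical tracker
#     px = project_annotation(target_patch, rvec_obj, tvec_obj,
#                             rvec_w2p, tvec_w2p)
#     renderer.draw_polygon(px)                      # hypothetical renderer
```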

“We can now use any item in our world as the ‘display screen’ instead of a projection screen or monitor,” said Heni Ben Amor, a research scientist in Georgia Tech’s School of Interactive Computing. “The robot’s intention is projected onto something in the 3D world, and its intended action continues to follow the object wherever [it] moves.”

The discovery, born from two algorithms and a spare car door, is ideal for manufacturing scenarios in which humans and robots work on assembly together. Instead of controlling the robot with a tablet or from a distant computer monitor, the human worker can safely stand at the robot’s side to inspect its precision, quickly adjust its work, or move out of the way as the robot and human take turns assembling an object.

Knowing exactly where a robot will move and what task it will perform next can help workers avoid injury.

