Figure 5. Handle objectified using thresholding, marked in green.
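The thresholding step in the caption can be sketched in a few lines: pixels brighter than a cutoff are treated as part of the object (the handle). This is a minimal illustration only; the threshold value and the tiny sample image below are invented, and a real system would use a calibrated threshold and a full camera image.

```python
# Minimal sketch of thresholding: classify each pixel as object/background.
# The threshold of 128 and the 3x3 "image" are arbitrary assumptions.

def threshold_mask(gray, thresh=128):
    """Return a boolean mask that is True where a pixel exceeds the threshold."""
    return [[px > thresh for px in row] for row in gray]

gray = [[10, 200, 30],
        [220, 240, 15],
        [40, 210, 25]]

mask = threshold_mask(gray)  # True cells correspond to the "green" object pixels
```

From a mask like this, the vision system can go on to compute the object's position and orientation (e.g., via blob analysis).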

#5 This information is sent to the robot controller: Interfaces can be visualized as having “layers,” each of which must match between the two systems. The bottom layers are the familiar general transport types (typically Ethernet or RS-232). The top layers are the format and sequencing protocol for the data itself and its transfer. It is still common for one side of the link (the robot) to define this as a rigid proprietary protocol. When this is the case, the protocol should be stable and well documented; the machine vision supplier then often creates a custom translator to that “language.”
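The top protocol layer described above can be sketched as an encoder/decoder pair for a pose message. The field layout here (a "P" tag followed by x, y in millimeters and an angle in degrees, terminated by CR/LF) is purely an assumption for illustration; a real robot controller dictates its own message format, and the vision side's "translator" would be written to match it.

```python
# Sketch of a hypothetical ASCII pose message for a vision-to-robot link.
# The "P,x,y,theta\r\n" layout is an invented example, not a real protocol.

def format_pose_message(x_mm: float, y_mm: float, theta_deg: float) -> bytes:
    """Encode a detected grasp pose as one ASCII line (the top protocol layer)."""
    return f"P,{x_mm:.2f},{y_mm:.2f},{theta_deg:.2f}\r\n".encode("ascii")

def parse_pose_message(msg: bytes) -> tuple[float, float, float]:
    """Decode the same line on the receiving side (the 'translator' role)."""
    kind, x, y, theta = msg.decode("ascii").strip().split(",")
    if kind != "P":
        raise ValueError("unexpected message type")
    return float(x), float(y), float(theta)
```

Over the bottom layers (e.g., Ethernet), such a line would simply be written to a TCP socket, e.g. `sock.sendall(format_pose_message(123.4, 56.7, 90.0))`.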

#6 The robot uses this information to move to the correct position and orientation to grasp the object: The vision system tells the robot (specifically, the robot controller) where to go, not how to get there. In other configurations (especially with robot-mounted cameras), the vision system may continue to operate during the move, providing feedback for higher accuracy.
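Before the robot can use a vision result, the detection must be expressed in the robot's own coordinate frame. A minimal sketch of that mapping, assuming a simple 2-D rigid transform: the scale, rotation, and offset values below are invented stand-ins for what a real camera-to-robot calibration would provide.

```python
import math

# Sketch: map a pixel detection (u, v) into robot base coordinates (mm)
# with a 2-D rigid transform. All calibration constants are assumptions.

MM_PER_PIXEL = 0.5                         # assumed camera scale
CAM_TO_ROBOT_ANGLE = math.radians(90.0)    # assumed camera rotation in the robot frame
CAM_ORIGIN_IN_ROBOT = (200.0, -150.0)      # assumed camera origin offset, mm

def pixel_to_robot(u: float, v: float) -> tuple[float, float]:
    """Convert an image detection (u, v) in pixels to robot-frame mm."""
    x_cam, y_cam = u * MM_PER_PIXEL, v * MM_PER_PIXEL
    c, s = math.cos(CAM_TO_ROBOT_ANGLE), math.sin(CAM_TO_ROBOT_ANGLE)
    x = c * x_cam - s * y_cam + CAM_ORIGIN_IN_ROBOT[0]
    y = s * x_cam + c * y_cam + CAM_ORIGIN_IN_ROBOT[1]
    return x, y
```

The controller then plans its own path to the returned (x, y) target; the vision system never dictates the trajectory.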


Using a simplified example application, we have seen the basic steps of how machine vision guides robots. Most real applications add complexity in one or more areas. Many of these complexities (such as a part moving on a conveyor, or a camera mounted on the robot itself) are common and are addressed by additional technologies, tools, and methods that are available today.

This article was written by Fred D. Turek, COO at FSI Technologies, Inc. (Lombard, IL).