Machine vision can quickly and accurately determine the location of parts so they can be inspected, measured, or manipulated by a robot. One example is using machine vision to guide a robot that unpacks one-gallon cans from a large pallet. The machine vision components — cameras, vision processors, and software — were provided by DALSA, and Faber Industrial Technologies developed the can-picking robot and integrated it with the machine vision system.
The cans arrive on a large wooden pallet stacked in six layers of 56 cans each. A brown paper “slip sheet” separates the layers, and the stack is topped with a wood rectangle called a “picture frame.” Three workers originally de-palletized the cans: one moved the pallet into the factory and cut the retaining straps, while two unpacked the cans and placed them on a conveyer belt for filling and sealing. It was difficult for the workers to keep up with the demand from the fill and sealing line — about 1.5 cans per second — so Faber was asked to develop a vision-guided robot system to take over de-palletizing the cans and placing them on the fill line conveyer belt.
Viewing the Work Area
A machine vision system works best with an undistorted view of properly lighted and well-controlled parts. The first tasks in developing a machine vision application are to fix the view geometry, control the lighting, and limit the variation of the presented parts. That’s the theory, but the practice was quite different in this application.
There was no practical way of controlling the lighting in this situation, as the area had to be open to allow pallets of cans to be presented to the unpacking robot and to allow the robot to place cans on the fill line conveyer belt. Bright, high-frequency fluorescent lights were used so that changes in ambient illumination would have a relatively small effect on images of the parts.
The best view geometry would put the camera directly above the pallet, looking down on the layers of cans. A layer of 6"-diameter cans is about 48 by 40", and, to allow for variation in pallet position, a field of view (FOV) of 5.3 by 4' was used. To get this large FOV with low optical distortion, the camera would have to be mounted far above the top layer of cans, perhaps 16' up. The factory ceilings were only 12' high, and it was not feasible to knock a hole in the roof.
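The required camera distance can be estimated from the pinhole relation: distance = focal length × FOV width / sensor width. A minimal sketch, assuming an illustrative lens focal length and sensor size (neither is given in the article):

```python
def working_distance_mm(focal_length_mm, sensor_width_mm, fov_width_mm):
    """Pinhole estimate: distance = f * FOV / sensor width (similar triangles)."""
    return focal_length_mm * fov_width_mm / sensor_width_mm

# Illustrative values (assumed): a 25 mm lens on a 6.4 mm wide sensor,
# covering the article's 5.3 ft (~1615 mm) wide field of view.
d_mm = working_distance_mm(25.0, 6.4, 5.3 * 304.8)
print(round(d_mm / 304.8, 1))  # working distance in feet -> 20.7
```

A shorter focal length brings the camera closer at the cost of optical distortion, which is exactly the trade-off this installation ran into.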
To get the required FOV within the available height, a camera with a short focal length lens was mounted above and at an angle to the can layers. Due to optical distortion from the short focal length lens, the can openings do not appear circular, so the vision system corrects the image for some of this distortion. Due to perspective, the cans appear smaller in the image as each layer is removed, because the remaining layers are farther from the camera. The vision system compensates for these changes in scale and in the plane of focus by adjusting the lens’ zoom and focus for each layer of cans.
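Under a simple pinhole model, the apparent size of the top layer scales inversely with its distance from the camera, which is what the per-layer zoom adjustment compensates for. A sketch with assumed geometry (the camera height and can height below are illustrative, not from the article):

```python
def layer_scale(camera_height_mm, can_height_mm, layers_removed):
    """Apparent-size scale of the current top layer relative to a full
    pallet, under a pinhole model: scale = d0 / (d0 + n * can height)."""
    d0 = camera_height_mm  # distance to the top of a full pallet (assumed)
    return d0 / (d0 + layers_removed * can_height_mm)

# Illustrative: camera 3 m above a full pallet, 180 mm tall cans (assumed).
for n in range(6):
    print(n, round(layer_scale(3000.0, 180.0, n), 3))
```

The zoom is adjusted by the reciprocal of this factor to keep the cans at a roughly constant size in the image.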
Integrating the Robot
The robot’s end effectors, or “hands,” are designed to remove the picture frame — the wood frame at the top of the pallet that helps hold the pallet together — and slip sheets, and to remove half a layer of cans at once and load them onto the fill line. These multiple functions made the design and fabrication of the effectors the most difficult task in this application.
A new pallet of cans is first examined by the vision system to find the picture frame. The corners of this frame give the position and angle of the pallet, with respect to the camera, and hence the robot. The robot then uses its vacuum suction-cup end effectors to remove the picture frame and top slip sheet for recycling.
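The pose computation can be sketched as a 2-D rigid fit from the frame’s known reference corners to the corners found in the image. A minimal version, assuming the corners are (x, y) pairs already mapped into a common measurement plane (function and variable names are illustrative):

```python
import math

def pallet_pose(ref_corners, obs_corners):
    """Estimate (dx, dy, theta) mapping reference picture-frame corners to
    the observed corners; a minimal 2-D rigid-fit sketch."""
    # Rotation: compare the angle of the frame's first edge in each set.
    def edge_angle(corners):
        return math.atan2(corners[1][1] - corners[0][1],
                          corners[1][0] - corners[0][0])
    theta = edge_angle(obs_corners) - edge_angle(ref_corners)
    # Translation: rotate the reference centroid, compare with the observed.
    rcx = sum(p[0] for p in ref_corners) / len(ref_corners)
    rcy = sum(p[1] for p in ref_corners) / len(ref_corners)
    ocx = sum(p[0] for p in obs_corners) / len(obs_corners)
    ocy = sum(p[1] for p in obs_corners) / len(obs_corners)
    c, s = math.cos(theta), math.sin(theta)
    dx = ocx - (c * rcx - s * rcy)
    dy = ocy - (s * rcx + c * rcy)
    return dx, dy, theta
```

A production system would fit all four corners in a least-squares sense to average out measurement noise; this sketch uses one edge for the angle and the centroids for the offset.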
The machine vision system finds the can rims so that the robot’s end effectors can be inserted into the can openings and then expanded to lift each can by its rim. The vision system lighting was designed to give a bright reflection off the can rims to make them easy to find.
The vision system is trained with the predicted location of each can opening in a layer. This location map is rotated and translated by the pallet position found from the picture frame corners, giving an approximate location for each can opening. For each can, the vision system starts at the approximate opening location and probes outward until a bright can rim is found. These rim points form an outline of the rim, from which the center of each can opening is computed. These center points are used to guide the robot’s end effectors into the can openings.
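The probing step can be sketched as follows: march outward along evenly spaced rays from the predicted opening location until a bright pixel (the specular rim reflection) is found, then average the hits to estimate the center. A simplified sketch; the ray count, search radius, and brightness threshold are illustrative, not the system’s actual parameters:

```python
import math

def find_rim_center(image, cx, cy, threshold=200, rays=16, max_r=60):
    """Probe outward along rays from the predicted opening location (cx, cy)
    until a bright rim pixel is found; average the hits to estimate the
    center of the can opening. `image` is a 2-D grayscale array."""
    hits = []
    for k in range(rays):
        angle = 2 * math.pi * k / rays
        for r in range(1, max_r):
            x = int(round(cx + r * math.cos(angle)))
            y = int(round(cy + r * math.sin(angle)))
            if 0 <= y < len(image) and 0 <= x < len(image[0]):
                if image[y][x] >= threshold:
                    hits.append((x, y))
                    break
    if not hits:
        return None  # no rim found near the predicted location
    return (sum(p[0] for p in hits) / len(hits),
            sum(p[1] for p in hits) / len(hits))
```

In practice, the probe can be repeated from the refined center to reduce the small bias introduced when the starting guess is off-center, and a circle fit to the rim points is more robust than a plain average.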
If any can’s center point is more than 30 mm off from its map position, the process is stopped until the operator corrects the can position. If this is not done, the robot’s end effectors could come down and crush the out-of-place can.
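The tolerance check itself is simple: compare each measured center with its map position and flag any can that is out of bounds. A sketch using the article’s 30 mm tolerance (names are illustrative):

```python
import math

def out_of_tolerance(map_centers, found_centers, tol_mm=30.0):
    """Return indices of cans whose measured center is more than tol_mm
    (30 mm per the article) from the map position, so the operator can
    correct them before the robot descends."""
    bad = []
    for i, ((mx, my), (fx, fy)) in enumerate(zip(map_centers, found_centers)):
        if math.hypot(fx - mx, fy - my) > tol_mm:
            bad.append(i)
    return bad
```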
When all cans are within tolerance, the robot end effectors come down, pick up half (28) of the cans, and place them on the fill line conveyer. The robot then picks up and places the other half. The next slip sheet is then removed to expose the next layer of cans, which is unloaded in the same way. This process repeats until the pallet is exposed. The robot then uses gripper end effectors to remove the pallet and stack it for recycling.
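Putting the steps together, the whole cycle can be sketched at a high level; every robot and vision call below is a hypothetical placeholder for an operation described in the article, not a real API:

```python
# High-level sketch of the de-palletizing cycle; all method names are
# hypothetical placeholders for the robot and vision operations described.
LAYERS, CANS_PER_LAYER = 6, 56

def depalletize(robot, vision):
    pose = vision.locate_picture_frame()           # pallet position and angle
    robot.remove_picture_frame_and_slip_sheet(pose)
    for layer in range(LAYERS):
        vision.adjust_zoom_and_focus(layer)        # per-layer scale and focus
        centers = vision.find_can_centers(pose)    # halts if a can is off
        robot.pick_and_place(centers[:CANS_PER_LAYER // 2])
        robot.pick_and_place(centers[CANS_PER_LAYER // 2:])
        if layer < LAYERS - 1:
            robot.remove_slip_sheet()              # expose the next layer
    robot.stack_pallet_for_recycling()
```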
The vision system’s processor is a small computer with embedded image acquisition and process control hardware. Machine vision software runs on the computer to locate cans, correct for image distortion, adjust the lens zoom and focus for each layer of cans, and compute and communicate the can opening locations to the de-palletizing robot.
Using this system, the customer could eliminate two workers. The remaining worker clips the pallet’s retaining straps, monitors the system, and corrects out-of-tolerance can positions. This last task reduces throughput, as the robot system has to be safety-stopped while the worker enters the robot cage and corrects the can position. The solution is to require the can supplier to align the cans on the pallet to within tolerance.
This article was written by Ben Dawson, Director of Strategic Development at DALSA, Waterloo, ON, Canada. For more information, visit: http://info.hotims.com/22918-165.