Figure 1. A close-up of the smart camera used in the first application. The camera provides visual inspection and accurate measurement of the valve bodies. The lens of the camera points away from the reader, at a mirror that reflects the image of the valve body sitting below the camera. (Image: Festo)

Designing solutions for machine vision and motion control, two of the principal building blocks of industrial automation, requires very different skill sets. As a result, selecting and integrating these systems is often done through separate efforts with different design teams, which can lead to missed opportunities for system simplification, cost reduction, and reduced maintenance requirements.

In this article, we look at two real-world applications in which machine vision and motion control work in harmony to solve different manufacturing challenges. By looking at these applications in detail we identify the benefits of approaching the two different automation systems with a single, comprehensive design process, as opposed to solving the vision and motion control challenges separately. Both applications are for automated assembly, the first for pneumatic valve bodies and the second for electronic circuits in the automotive industry, but the benefits of this approach can be realized in any industry.

Fixed Camera Position, Parts in Motion

In the first application, a global manufacturer of pneumatic valves requires high-quality, defect-free parts and minimal waste in their assembly process. They need a precise yet economical solution, and they prefer to purchase the solution from a single supplier.

The automated assembly of pneumatic valves begins with an aluminum valve body. Trays of several valve bodies are fed into the assembly cell, where a pick-and-place mechanism starts by transferring valve bodies one by one into the cell. At the first station, the valve body is inspected by the vision system for defects. The vision system used at this station is a smart camera with integrated on-board image processing, as shown in Figure 1. The camera is rigidly mounted in a stationary position, and each valve body is presented into its field of view.

Inspecting each valve body at this point is important because a defective body can be rejected early, before any additional components have been installed, which minimizes waste. During the inspection, the vision system counts the number of holes in the valve body and measures their dimensions and locations.
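To make this kind of pass/fail check concrete, the following is a minimal sketch written in Python purely for illustration; the article does not describe the smart camera's on-board programming environment, and the hole count, tolerances, and data structures below are placeholder assumptions rather than values from the application.

```python
from dataclasses import dataclass

# Hypothetical inspection criteria; in practice these come from the
# valve body drawing, not from the article.
EXPECTED_HOLE_COUNT = 6
DIAMETER_TOL_MM = 0.05
POSITION_TOL_MM = 0.10

@dataclass
class Hole:
    x_mm: float          # measured hole center position
    y_mm: float
    diameter_mm: float

def inspect_valve_body(measured: list[Hole], nominal: list[Hole]) -> bool:
    """Return True (OK) if hole count, sizes, and locations match the nominal pattern."""
    if len(measured) != EXPECTED_HOLE_COUNT or len(nominal) != EXPECTED_HOLE_COUNT:
        return False
    # Pair measured holes with nominal holes by position ordering.
    key = lambda h: (h.x_mm, h.y_mm)
    for m, n in zip(sorted(measured, key=key), sorted(nominal, key=key)):
        if abs(m.diameter_mm - n.diameter_mm) > DIAMETER_TOL_MM:
            return False
        if abs(m.x_mm - n.x_mm) > POSITION_TOL_MM or abs(m.y_mm - n.y_mm) > POSITION_TOL_MM:
            return False
    return True
```

The result of a check like this is what the smart camera reduces to the simple OK/NOK signal described later in the article.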

After the vision inspection, each accepted valve body is indexed to the next station where pneumatic fittings are threaded into pre-drilled and tapped holes in the valve body and a small printed circuit board is pressed into a slot on the side of the valve body. Finally, each fully assembled valve body is indexed to an exit position where another dedicated handling system transfers it to an outbound conveyor.

Figure 2. Some of the electric and pneumatic handling systems used in the automated assembly cell are shown here. The vision inspection station is visible in the upper-left corner. (Image: Festo)

The handling and threading of the pneumatic fittings, the insertion of the printed circuit boards, and the handling of the assembled valve bodies through the cell are accomplished by numerous handling systems, as partially shown in Figure 2. The various handling systems comprise both pneumatic and electric variants of rotary actuators, linear actuators, and grippers.

With this assembly cell, the manufacturer of the pneumatic valves meets its need to produce high-quality, defect-free parts while minimizing waste. From the design perspective of the vision inspection station, rigidly mounting the smart camera in place and presenting parts into the field of view with precision fixturing enables reliable vision inspections with accurate measurements. This kind of vision system setup works well in many applications because the position of the camera relative to the part under inspection remains constant over time. As we’ll see in the next application, however, it is sometimes better to move the camera into position for each visual inspection task.

Moving Camera, Fixed Part Position

Quality control requirements during the assembly of fuel pump control boards in the automotive industry dictate that critical steps of the assembly process be well documented and traceable. This second application is for the pressing of fuses into printed circuit boards. The assembly process requires accurate motion control with multiple points of visual inspection using a machine vision system. The manufacturer of the fuel pumps must not only determine whether a fuse has been properly pressed into the control circuit board and provide a simple OK/NOK (Not OK) result to downstream steps in the process, but must also document the verification of all parts and the results of each assembly step to comply with traceability requirements.

The assembly process starts with a handling system that picks a PC board and pre-fitted fuse from an inbound supply. Before picking the PC board, the vision system confirms that the correct fuse has been provided. In this application the vision system includes a smart camera with a C-type lens mount, which enables the selection of an optimal lens from the wide range of commercially available lenses. The fuse is confirmed visually by scanning the data matrix code imprinted on each fuse, which encodes lot code and part number information.
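The verification step amounts to decoding the data matrix payload and comparing it against the expected part. The sketch below, in Python for illustration, assumes a hypothetical payload layout in which the part number and lot code are separated by a delimiter; real codes follow the fuse manufacturer's own scheme.

```python
def verify_fuse(decoded_payload: str, expected_part_number: str) -> bool:
    """Return True (OK) if the scanned fuse matches the part number for the current order."""
    try:
        # Assumed payload layout for illustration: "<part number>|<lot code>"
        part_number, lot_code = decoded_payload.split("|", maxsplit=1)
    except ValueError:
        return False  # unreadable or malformed code -> NOK
    # The lot code is carried forward for the traceability record; only the
    # part number is checked here.
    return part_number == expected_part_number

# Example: verify_fuse("FUSE-10A-250V|LOT20240317", "FUSE-10A-250V") -> True
```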

After picking the PC board with a pneumatic gripper, the handling system moves the PC board to a fixture where the pre-fitted fuse is pressed into its completely assembled position. To perform the pressing operation, the handling system moves slightly to present a servo-driven ballscrew actuator, which is also mounted on the Z-axis, over the fuse. An analog force transducer mounted at the working end of the actuator provides force data to the machine controller, which uses that data, combined with position data from the servo motor on the actuator, to verify that the pressing operation was successful. After pressing, a final vision inspection of the assembled fuse and PC board determines whether the process was successful.
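One simple way to combine force and position data into a pass/fail judgment is to check that the peak press force falls within a window and that the actuator reaches the expected seated depth. The sketch below is illustrative Python, not the controller's actual logic; the thresholds and the sample format are assumptions.

```python
# Illustrative thresholds; real values depend on the fuse and board design.
PRESS_FORCE_MIN_N = 40.0     # too little force suggests the fuse is not seated
PRESS_FORCE_MAX_N = 120.0    # too much force risks damaging the fuse or board
SEATED_POS_MM = 4.0          # nominal press depth reported by the servo axis
SEATED_POS_TOL_MM = 0.2

def press_ok(samples: list[tuple[float, float]]) -> bool:
    """samples: (position_mm, force_N) pairs recorded during the press stroke."""
    if not samples:
        return False
    peak_force = max(force for _, force in samples)
    final_pos, _ = samples[-1]
    force_in_window = PRESS_FORCE_MIN_N <= peak_force <= PRESS_FORCE_MAX_N
    position_in_window = abs(final_pos - SEATED_POS_MM) <= SEATED_POS_TOL_MM
    return force_in_window and position_in_window
```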

Because the vision system is required to perform multiple vision inspection tasks at different locations in the machine, the solution provider chose to mount the camera to the Z-axis of the handling system that picks and places the PC boards. With stationary camera positions they would have had to install two separate cameras to perform the required vision inspection tasks, which would have increased the overall cost of the machine and unnecessarily complicated the machine design. By choosing to mount the camera to the Z-axis of the handling system, a single camera can be used. The one-camera solution costs less without sacrificing performance or results.

Figure 3. The handling system used for picking and placing each fuse into a waiting PC board is shown. Multiple functions reside on the Z-axis: pneumatic gripper and actuator for picking the fuses, camera with lens and ring light for visual inspection tasks, and ballscrew-driven servo-press actuator. (Image: Festo)

An interesting aspect of the handling system for this application is that there are three distinct functions combined on the Z-axis: a pneumatic actuator and gripper for picking and placing each PC board, the single camera used for all vision inspection tasks, and the servo-press actuator. These different functions, along with the X and Y axes of the handling system, are shown in Figure 3.

As described above, all results from the different vision inspection tasks and the pressing operation must be documented. This is accomplished by the machine controller, which captures the vision inspection results from the camera and transmits the results to an external server used for data storage and archiving. This solution meets the customer’s need for traceability. The serialized pump control boards can be traced back to the exact time and date of manufacture, along with each control board’s unique assembly data and image processing results acquired during the assembly process.
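The traceability record itself can be as simple as a serialized structure that ties the board serial number to the inspection and press results and a timestamp. The following Python sketch shows one hypothetical record format; the article does not specify the fields or the transport used between the machine controller and the data server.

```python
import json
import datetime

def build_trace_record(board_serial: str, fuse_lot: str,
                       press_result_ok: bool, vision_result_ok: bool) -> str:
    """Assemble one per-board traceability record as a JSON string (illustrative format)."""
    record = {
        "board_serial": board_serial,
        "fuse_lot": fuse_lot,
        "press_result": "OK" if press_result_ok else "NOK",
        "final_vision_result": "OK" if vision_result_ok else "NOK",
        "timestamp_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return json.dumps(record)  # e.g., sent to the external server for archiving
```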

Moving Part vs. Moving Camera

As can be seen from the two examples above, it is preferable in some applications to move parts into the field of view of a stationary vision system. This approach is best suited for applications where the vision system is performing inspection of a single part. It can also be used in applications where a handling system presents multiple parts to a stationary camera, but the key here is that the camera does not move. In other applications it is preferable to move the vision system to the locations of stationary parts. In the second application, we see how doing this allows a single vision system to perform multiple inspection tasks on different parts located in different areas of the assembly cell.

In both applications a precision handling system, consisting of various electric and pneumatic actuators and grippers, performs accurate positioning tasks. In the case of the stationary camera, parts are presented to the camera in defined positions with rigid fixtures that provide precise, repeatable part locations over time. In the case where the camera is moving, it is essential that the handling system carrying the camera has very good repeatability, so that the parts fall within the camera’s field of view every time it is repositioned. In some applications with a moving camera, it is also necessary to calibrate the position of objects in the camera’s field of view to the external X-Y coordinates of the handling system. This enables the vision system to provide positional information to the handling system, a capability often referred to as robotic guidance in machine vision systems.
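A common way to establish this calibration is to image a target at several known axis positions and fit an affine transform from pixel coordinates to the handling system's X-Y coordinates. The Python/NumPy sketch below illustrates the idea under that assumption; the article does not state which calibration method the applications use.

```python
import numpy as np

def fit_affine(pixel_pts: np.ndarray, world_pts: np.ndarray) -> np.ndarray:
    """Least-squares affine map so that [x_world, y_world] ~ A @ [u, v, 1].

    pixel_pts: (N, 2) pixel coordinates of calibration features.
    world_pts: (N, 2) corresponding handling-system X-Y coordinates.
    Returns A as a 2x3 matrix.
    """
    ones = np.ones((pixel_pts.shape[0], 1))
    A, *_ = np.linalg.lstsq(np.hstack([pixel_pts, ones]), world_pts, rcond=None)
    return A.T

def pixel_to_world(A: np.ndarray, u: float, v: float) -> np.ndarray:
    """Convert one pixel coordinate to handling-system X-Y coordinates."""
    return A @ np.array([u, v, 1.0])
```

With such a mapping in place, a feature located by the vision system can be handed to the motion controller directly as a target position, which is the essence of robotic guidance.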

Combined Controller for Motion and Vision

Figure 4. Smart cameras provide image processing on-board and send OK/NOK results to an external controller. (Image: Festo)

A notable aspect of both applications is that each solution includes a single controller that handles all machine control logic, motion control tasks, and vision inspection tasks. In the first application, a smart camera performs the vision inspection of the valve bodies. The smart camera processes each image on-board and provides a simple OK/NOK result to the machine controller (Figure 4). The machine controller uses this OK/NOK result to determine its next movement: either transfer a good part to the next assembly station or transfer a bad part to a reject pile. In the second application, the vision system sends all inspection results to the single machine controller, which acquires the results, analyzes them in combination, and transfers them to the data server after each cycle. The same machine controller handles all control logic of the assembly process and performs the numerous motion control tasks of the XYZ handling system. One example of such a controller is shown in Figure 5.

Figure 5. Controllers can handle all machine control logic, vision system control, and motion control tasks of the machine or assembly cell. (Image: Festo)
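The controller-side branching on the camera's OK/NOK result in the first application is straightforward. The sketch below uses Python for illustration only; the station names and the `move_to` motion command are hypothetical stand-ins for the vendor-supplied motion function blocks mentioned later.

```python
from enum import Enum

class Result(Enum):
    OK = "OK"
    NOK = "NOK"

def handle_inspection_result(result: Result, move_to) -> None:
    """Route the part based on the smart camera's OK/NOK result (illustrative only)."""
    if result is Result.OK:
        move_to("fitting_station")   # good part continues through assembly
    else:
        move_to("reject_chute")      # defective body is removed before more parts are added

# Example usage with a stub motion command:
# handle_inspection_result(Result.NOK, move_to=lambda station: print("move:", station))
```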

In both applications, the single machine controller provides greater efficiency for both the solution provider (machine builder) and the manufacturer (end user). For the solution provider, only one programming environment is needed to create the control code, using vendor-supplied function blocks for vision inspection and motion control tasks. This cuts down on development time and simplifies the design process. The manufacturer gains a shorter bill of materials and a lower solution cost, along with reduced upfront training costs and less maintenance effort as the machine ages, thanks to the streamlined control architecture.

This article was written by Eric Rice, Product Market Manager – Electric Automation, Festo (Islandia, NY). For more information, visit here.