Cameras remain an essential technology in manufacturing and inspection applications, but they are also increasingly valuable in non-traditional sectors. As part of June’s OEM Camera Directory & Guide, we look at three factors driving today’s imaging tools: speed, resolution, and software.

Figure 1. The SLAM-algorithm system was tested on a PR2 robot, shown above. A map and its details, such as wall edges, were used as a reference for the robotic navigation. (Image courtesy of Hordur Johannsson, CSAIL/MIT)
Industrial cameras still serve the usual inspection tasks, such as verifying that labels are correctly positioned on a bottle or confirming that automotive parts are free of cracks or defects. Machine-vision-quality cameras, however, are becoming a key component in nonmanufacturing areas where high-quality images are still required, including medical applications, transportation, and surveillance.

Microscope cameras, for example, now have the capability to measure the size and shape of cells. Surveillance devices offer 3D renderings of targets, and have the technology to distinguish human faces from photographs. Other camera systems spot braking vehicles, collect speeds, and analyze traffic flow rates.

Vision is also an essential feature in the development of more advanced and autonomous robot control. Rather than being told where to go, today’s robots understand their environment and look for objects on their own.

A system being developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), for example, uses Simultaneous Localization and Mapping (SLAM) technology. A robotic algorithm builds three-dimensional maps with no human input. As the robot travels through an unexplored area, a low-cost video camera and an infrared depth sensor scan the surroundings and update the map as the robot learns more of its environment (see Figure 1).
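For readers who want the flavor of how such a map is built, the sketch below folds a single simulated depth scan into a two-dimensional occupancy grid, assuming the robot's pose is already known. It is a deliberately simplified illustration; the CSAIL system works in three dimensions and estimates the pose and the map simultaneously, and every grid size and update weight here is a placeholder.

```python
# Minimal 2D occupancy-grid sketch of the map-building half of SLAM.
# Illustrative only -- the CSAIL system builds 3D maps and also estimates
# the robot's pose; here the pose is assumed known for simplicity.
import numpy as np

GRID_SIZE = 200          # cells per side (hypothetical)
CELL = 0.05              # meters per cell (hypothetical)

occupancy = np.zeros((GRID_SIZE, GRID_SIZE))   # evidence that each cell is occupied

def world_to_cell(x, y):
    """Convert world coordinates (meters) to grid indices."""
    return int(x / CELL) + GRID_SIZE // 2, int(y / CELL) + GRID_SIZE // 2

def update_map(pose, ranges, angles, max_range=4.0):
    """Fold one depth scan into the map, given the robot pose (x, y, heading)."""
    px, py, heading = pose
    for r, a in zip(ranges, angles):
        if r >= max_range:
            continue                      # no return -- ignore this ray
        # End point of the ray in world coordinates
        wx = px + r * np.cos(heading + a)
        wy = py + r * np.sin(heading + a)
        i, j = world_to_cell(wx, wy)
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            occupancy[i, j] += 0.9        # accumulate evidence of an obstacle

# Example: one scan taken from the origin, looking straight ahead at a flat wall 2 m away
angles = np.linspace(-0.5, 0.5, 60)       # radians across the sensor's field of view
ranges = np.full_like(angles, 2.0)
update_map((0.0, 0.0, 0.0), ranges, angles)
print("occupied cells:", int((occupancy > 0).sum()))
```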

Whether a vision customer is a manufacturer seeking inspection help, a military agency looking to map out an area, or a city council trying to understand highway traffic patterns, three technology areas are critical to today’s customer needs: high speed, high resolution, and customizable imaging software.

Speed

Figure 2. A camera system consisting of seven GigE devices inspects for surface blemishes, contour flaws, and thickness. The system has 13 axes of motion.
With imaging technology, high speed is important not only in production and manufacturing, but also in other nontraditional activities: monitoring intruders, recognizing features within a room, and analyzing traffic, to name a few.

The quality of Complementary Metal-Oxide Semiconductor, or CMOS, chips has increased dramatically, according to John Morse, senior market analyst at the Wellingborough, England-based IMS Research, and they contribute to improved speed — especially compared to the more traditionally used CCD chips. Although CCD sensors still have their place, he says, new camera products increasingly incorporate CMOS sensors. The growth in CMOS chip quality is valuable for non-traditional applications, like the monitoring of a busy traffic area, where speedy processing is essential.

“You need to be able to create an image and access a license plate number on a car quickly, get that information back to wherever you need it, and get it ready for the next one that’s whizzing by,” said Morse.

Quality sensors have become available at lower prices, and major manufacturers like CMOSIS, Sony, and Aptina continue to push for higher speeds.

Another speed innovation lies in the cameras’ communication features, which can now move data very quickly to wherever a user wants it. Gigabit Ethernet, a technology that transmits Ethernet frames at a rate of a gigabit per second, lets users pull an image off the chip rapidly and transfer it to wherever it will be processed, often a standard computer.
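To get a rough sense of that link budget, the short calculation below estimates how many 8-bit frames from a roughly 4-megapixel sensor a single GigE link could move per second. The efficiency figure and sensor size are assumptions for illustration, not vendor specifications.

```python
# Back-of-the-envelope GigE throughput estimate (assumed numbers, not vendor specs).
LINK_GBPS = 1.0            # nominal Gigabit Ethernet rate
EFFICIENCY = 0.8           # rough allowance for protocol overhead (assumed)
WIDTH, HEIGHT = 2048, 2048 # ~4.2-megapixel sensor, similar to the cameras described
BITS_PER_PIXEL = 8         # 8-bit monochrome

frame_bits = WIDTH * HEIGHT * BITS_PER_PIXEL
usable_bps = LINK_GBPS * 1e9 * EFFICIENCY
frames_per_second = usable_bps / frame_bits

print(f"Frame size:   {frame_bits / 8 / 1e6:.1f} MB")
print(f"Approx. rate: {frames_per_second:.0f} frames/s over one GigE link")
```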

Many companies offer GigE cameras, and the options appeal to customers like Richard Schwarzbach, president of the Sebastian, FL-based systems integrator InoSys, Inc. The InoSys founder, who builds and integrates between 10 and 20 systems per year, frequently installs PPT Vision multicamera systems, which have the hardware, I/O, and drivers necessary to hook up GigE vision cameras. Schwarzbach recently finished building a large system with seven GigE, high-resolution (3.5 megapixel and 4.2 megapixel) cameras (see Figure 2).

“With GigE, you plug an Ethernet cable into it, you plug it into a port on a PC, and you have communication with a camera,” he said.

The GigE cameras have provided the necessary speed — Schwarzbach recently finished a job that processed 600 parts per minute — along with other improvements. On certain jobs, Schwarzbach cannot always place the camera in the right orientation for a customer who, for example, may need to see a bottle standing up rather than lying on its side.

“Normally you’d have to take that image and mirror it inside of the computer, but these cameras are smart enough that you can mirror right in the camera and it takes no time. You’re talking microseconds.”

In the past, says Ben Dawson, Director of Strategic Development for Machine Vision at Teledyne DALSA, a machine-vision manufacturer headquartered in Billerica, MA, there was not enough bandwidth on either the interface link or the computer to handle high-speed demands. “Now customers routinely use four cameras, effectively 40 megabytes per second, with no problem at all,” he said. “In some cases the cameras generate 50 to 60 megabytes per second. A camera has to feed that need.”

Resolution

Figure 3. Software finds the center of each cookie, using the centroid of each of the line-generator patterns. It then calculates the angle of each cookie to determine quantity and quality. The high-intensity, structured light is controlled through the software.
A high-quality picture has always been important to machine vision customers, but consumers have high expectations, too. The latest iPad, for example, features a 5-megapixel iSight camera.

Some manufacturers, however, require superior color resolution for tasks like edge-to-edge measurements of colored objects or verification of the uniformity of a colored part. One technology that has improved color resolution comes from a specialist end of the market: 3-chip technology.

Most machine vision color cameras have a standard lens along with one sensor, which picks up the light. These one-chip cameras use a Bayer pattern of alternating red/green and blue/green filters over individual pixels.

Three-chip technology uses three separate charge-coupled device (CCD) sensors and a prism. Each CCD is covered by a different color filter and handles one of the three primary colors: red, green, or blue. The prism splits the light coming through the lens and directs the different wavelengths to the appropriate CCD.

In a 3-chip camera, color (red, green, and blue) is fully sampled at every pixel in the image. These cameras, however, are usually only used where accurate color recognition is critical, such as providing high-quality images for the food or print industries. Because each of the three monochrome sensors is dedicated to a different color (rather than one sensor covering all of the colors), there are fewer resolution problems and no bleeding, fringes, or edge defects. The precise alignment of the prism used to send light to the three CCD chips, however, contributes to the high cost of these cameras, as does the cost of two additional sensors. The light must pass through the lens and prism and hit the same corresponding pixels on each of the carefully placed sensors.
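The sampling difference can be sketched in a few lines. The snippet below, which assumes a common RGGB filter layout, shows that a one-chip Bayer sensor records a single color value per pixel and must interpolate the other two (the source of fringes and edge artifacts), while a 3-chip camera records all three at every pixel. It is purely illustrative.

```python
# Sketch of why a single-chip Bayer camera sub-samples color while a 3-chip
# camera measures R, G, and B at every pixel. Purely illustrative.
import numpy as np

def bayer_mosaic(rgb):
    """Simulate a one-chip sensor: keep only one color sample per pixel (assumed RGGB layout)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red pixels
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green pixels
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green pixels
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue pixels
    return mosaic

rgb = np.random.rand(4, 4, 3)       # stand-in for the light reaching the sensor
one_chip = bayer_mosaic(rgb)        # 1 value per pixel; the other 2 must be interpolated later
three_chip = rgb                    # 3 values per pixel, no interpolation needed

print("one-chip samples per pixel:  ", one_chip.size / (4 * 4))
print("three-chip samples per pixel:", three_chip.size / (4 * 4))
```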

“What I would like to see, and I don’t see anything on the horizon, is a way to inexpensively get full-resolution color,” said Dawson. “I get complaints from customers who use a Bayer Pattern color camera, but can’t afford a full-resolution camera. The typical customer that has problems is looking at manufactured objects with fine color details.”

Although 3-chip technology has been around for 30-plus years, the nature of the cameras is changing to some degree, according to Morse. The three chips are not always dedicated to red, green, and blue. Some chips instead cover UV, visible light, or IR, and each is configured to sense a particular material.

To find important components of an image, like an edge, Schwarzbach relies on the analysis of grayscale values. In most of today’s cameras, there are 256 shades of gray between black and white. Each shade of gray represents a specific intensity of light at a given pixel, and the differences can help determine the location of an edge, a defect, or an artifact. One way to improve the technology, according to Schwarzbach, is to increase the grayscale depth from 8-bit to 10-bit, going from 256 shades of gray to 1,024 — improving the effective measurement resolution without adding physical pixels.
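A minimal sketch of that idea follows: the code locates an edge as the largest step along a line of grayscale values and shows how 10-bit quantization preserves more distinct intensity levels across the same soft transition than 8-bit does. It is illustrative only, not a production edge-finding tool.

```python
# Minimal sketch of edge finding from grayscale values, and of what extra bit
# depth buys. Illustrative only; real vision tools use sub-pixel interpolation.
import numpy as np

def find_edge(profile):
    """Return the index of the largest intensity step along a 1-D line of pixels."""
    gradient = np.abs(np.diff(profile.astype(float)))
    return int(np.argmax(gradient))

# A soft dark-to-bright transition, sampled along one row of pixels
x = np.linspace(0, 1, 50)
ideal = 1.0 / (1.0 + np.exp(-(x - 0.5) * 20))      # true (continuous) intensity

eight_bit = np.round(ideal * 255)    # 256 gray levels
ten_bit = np.round(ideal * 1023)     # 1,024 gray levels -- finer intensity steps

print("edge at pixel", find_edge(eight_bit), "(8-bit)")
print("edge at pixel", find_edge(ten_bit), "(10-bit)")
print("distinct gray levels across the edge:",
      len(np.unique(eight_bit)), "vs", len(np.unique(ten_bit)))
```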

With better resolution, he says, a camera can capture more information and perform a more detailed inspection. “If I was looking at something that was supposed to be plus-or-minus one thousandth of an inch, and I had 1,000 pixels, that’s plus-or-minus one pixel. You really can’t depend on that,” said Schwarzbach. “But if I had 3,000 pixels, now I have 3 pixels to play with.”
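His arithmetic is easy to write out. Assuming a one-inch field of view (a figure chosen to match his numbers, not one he stated), the sketch below shows how many pixels fall within a plus-or-minus 0.001-inch tolerance at two sensor resolutions.

```python
# Schwarzbach's resolution arithmetic, written out with an assumed field of view.
FIELD_OF_VIEW_IN = 1.0    # inches imaged across the sensor (assumption)
TOLERANCE_IN = 0.001      # +/- one thousandth of an inch, from his example

for pixels_across in (1000, 3000):
    inches_per_pixel = FIELD_OF_VIEW_IN / pixels_across
    pixels_in_tolerance = TOLERANCE_IN / inches_per_pixel
    print(f"{pixels_across} pixels -> {pixels_in_tolerance:.1f} pixel(s) per 0.001 inch")
```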

Software

Imaging software is another evolving component of camera technology; it has shifted from code dedicated to a particular function toward more customizable options. Instead of selling piecemeal library modules that customers must hook together themselves, companies now provide flexible system software that can be tailored to business-specific applications.

“Customers will pay a system house, asking for a specific imaging system that performs functions X, Y, and Z, and the system house will take the software package, set it all up, and switch it on,” said Morse.

As the software part of the market has developed, there are now companies that specialize in writing software for machine vision applications. “It’s not a huge sector of the industry, but it’s growing, and it’s becoming more and more important, because the processing of the data is as important very often as creating the image in the first place,” he said.

Software also offers integrators like Schwarzbach improved control over imaging settings. Using PPT's Control Panel Manager, for example, buttons on a control panel or HMI — the same human-machine interfaces that display totals of passes and failures — can be linked to any control inside a vision system. With software, a user can adjust intrinsic features of a camera, including gain, shutter speed, and lighting, from within the program. Specific settings can be saved for different jobs.

Schwarzbach can click a button, for example, and find the brightness level that is most appropriate, and then have the program remember the particular adjustments. “You couldn’t do that five years ago,” he said.
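The pattern behind this is essentially a set of per-job "recipes" that software pushes down to the camera. The sketch below illustrates the idea using invented names (CameraSettings, apply_job, FakeCamera); it is a hypothetical illustration of the concept, not PPT's Control Panel Manager API.

```python
# Hypothetical sketch of per-job camera settings recalled from software.
# All class and function names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class CameraSettings:
    gain: float            # sensor gain
    exposure_us: int       # shutter/exposure time in microseconds
    light_intensity: int   # strobe or ring-light level, 0-100

# One saved "recipe" per inspection job (values are placeholders)
JOBS = {
    "clear_bottle": CameraSettings(gain=1.0, exposure_us=200, light_intensity=80),
    "amber_bottle": CameraSettings(gain=2.5, exposure_us=500, light_intensity=100),
}

class FakeCamera:
    """Stand-in for a real camera driver, so the sketch runs on its own."""
    def set_gain(self, g): print(f"  gain -> {g}")
    def set_exposure(self, us): print(f"  exposure -> {us} us")
    def set_light(self, level): print(f"  light -> {level}%")

def apply_job(camera, job_name):
    """Push the stored settings for a job down to the camera."""
    s = JOBS[job_name]
    camera.set_gain(s.gain)
    camera.set_exposure(s.exposure_us)
    camera.set_light(s.light_intensity)
    print(f"Applied settings for {job_name}")

apply_job(FakeCamera(), "amber_bottle")
```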

The use of laser light and 3D is also a major development — not a giant sector of the market, according to Morse, but an important technical one. Software has developed in a way that makes 3D image processing possible.

An imaging system developed by the Burlington, MA-based Visidyne, Inc., for example, measures recurring modulated light at three phases in time to produce three images. By analyzing the subtle changes in light reflectance, distance, and brightness across those three images, the 3D shape of an object can be determined and rendered.
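A generic version of that three-image approach is the textbook three-step phase-shift calculation sketched below. It assumes the modulated illumination is shifted by 0, 120, and 240 degrees between exposures and recovers the phase at each pixel, which maps to distance once a system is calibrated; it illustrates the general principle rather than Visidyne's specific algorithm.

```python
# Textbook three-step phase-shift reconstruction -- a generic illustration of
# recovering depth-related phase from three modulated-light images; not a
# description of Visidyne's specific system.
import numpy as np

def phase_from_three_images(i1, i2, i3):
    """Recover the phase of the modulated light at each pixel.

    Assumes the three images were captured with the illumination shifted by
    0, 120, and 240 degrees.
    """
    return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# Simulate a tilted surface: the true phase grows steadily across the image
true_phase = np.linspace(0, np.pi, 100).reshape(1, -1).repeat(50, axis=0)
shifts = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
images = [0.5 + 0.5 * np.cos(true_phase + s) for s in shifts]

recovered = phase_from_three_images(*images)
print("max phase error:", float(np.abs(recovered - true_phase).max()))
```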

“What’s really made it all happen,” said Morse, referring to the growth in 3D vision technology, “is the development of software to handle those images and to assimilate the data and create a 3D image.”

The specific software capabilities have improved as well. “Software has gotten better. Pattern-finding tools and edge-finding tools have gotten faster and more accurate,” said Schwarzbach. A pattern tool, for example, can locate the center of a part and check the part’s edges for surface blemishes and pinholes (see Figure 3). A contour tool, similarly, traces the outside form of a product, ready to warn the user if there are malformations, flash, or short shots to the left or right of an edge.

Companies, in turn, are providing the whole package: cameras, wiring, input/output, and the mini-computer. The trend has shifted from a machine that performed the full operation to a PC-based or processor-based system where users can easily take advantage of the new software.

“Now many are running a machine vision processor on a PC platform, so that you can run both vision software and any statistical software in the same box,” said Schwarzbach.

Dawson sees a steady migration of intelligence into cameras, and predicts their future use in applications beyond industry and manufacturing. “They’re going to be going into non-traditional machine vision kinds of applications, such as collision detection in cars or gaming,” he said, noting the Microsoft Kinect, a motion-sensing input device for the Xbox 360 video game console and Windows PCs. The technology, based around two cameras and a pattern of projected infrared light, enables players to interact without touching a game controller.

“We’ve seen it in the Microsoft Kinect. It’s a smart box with cameras and other sensors in it. You’re moving the intelligence downstream so the Xbox can concentrate on the game play rather than processing pixels,” he said.

Meeting Customer Expectations

Customers have heightened their expectations. Integrators like Schwarzbach run into scenarios where customers want more and more from their systems.

“They don’t expect a vision system to be able to just tell you that there’s a nut on the bolt,” said Schwarzbach. “They want to be able to make sure that there aren’t any cracks, and there aren’t any chips, and to measure something that’s a couple of thousandths of an inch, located anywhere in a 6- to 8-inch field of view.”

As customers become more demanding in what they want and expect, camera manufacturers have been tuning their speed, resolution, and software to accommodate the needs of their traditional manufacturing customers and new non-traditional users alike.

This article was written by Billy Hurley, Associate Editor, NASA Tech Briefs. Contact the author for questions or more information.