Machine vision requirements for better performance and higher resolution continue driving developers to incorporate digital cameras into their solutions. This trend will likely accelerate as the price and performance of digital cameras improves. This article will provide you with information on digital camera technology and key factors to consider when choosing a digital camera and associated frame grabber — assuming that all upfront analysis has been performed and that a digital imaging solution is required.

Why Use a Digital Camera?

Analog cameras continue to be the dominant choice in most machine vision applications for several reasons:

  • Huge installed base,
  • Mature technology with well-known standards,
  • Performance is often adequate for the application, and
  • Inexpensive, readily available cabling.
Figure 1. Minimum Resolution Imaging
Figure 2. Higher Camera Resolutions

Until recently, there were several significant disadvantages to using a digital camera. Digital cameras typically were priced higher than similarly performing analog models. And, in addition to being expensive and bulky, digital camera cabling often was not easily interchangeable between different types of cameras and frame grabbers. However, with the introduction of the Camera Link™ and GigE Vision standards, digital camera cabling issues have largely been eliminated, while their performance continues to steadily improve, in many cases far exceeding that of any analog camera.

So why use a digital camera? Digital cameras can deliver higher data rates, higher resolution, and higher bit depths than analog cameras. Digital transmission is also inherently less susceptible to noise than is analog — a key consideration for plant environments.

One of the ways that digital cameras achieve these high data rates is by providing data on multiple taps, or channels. While multi-tap output has been done with analog cameras, synchronizing the taps is more difficult in analog, because the frame grabber must lock to multiple sources with subpixel accuracy. Since a digital camera aligns the data itself before transmission, synchronization across multiple taps is assured.
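
To make the tap concept concrete, here is a minimal Python sketch. It assumes a two-tap camera in which each tap delivers half of each image line; actual tap geometries vary by camera, and the names are illustrative:

```python
# Minimal sketch of multi-tap readout. Assumes a two-tap camera in
# which tap 1 carries the left half of each line and tap 2 the right
# half; actual tap geometries vary by camera.

def merge_two_taps(tap1, tap2):
    # With a digital camera the two streams arrive already aligned,
    # so rebuilding the full line is a simple concatenation.
    return tap1 + tap2

tap1 = [12, 15, 14, 13]      # pixel values from the left-half tap
tap2 = [201, 198, 200, 199]  # pixel values from the right-half tap
line = merge_two_taps(tap1, tap2)
print(line)  # the full 8-pixel line in correct spatial order
```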

Digital Camera Standards

When they were first introduced, commercial digital cameras used parallel TTL (Transistor-Transistor Logic) level signal outputs. These single-ended, 5V signals were prone to noise interference and could not be transmitted over long distances. As frame rates and resolutions increased, it became necessary to transition to a signaling standard that could accommodate the higher data rates.

RS-422 was the next step in the evolution of commercial digital camera standards. RS-422 parallel-output cameras offer potentially higher data rates than TTL by using lower voltages and higher clock frequencies, and their differential signaling provides better noise immunity. Because each signal is differential, however, the number of data lines doubles compared with single-ended TTL, and cabling becomes an issue. Cables for RS-422 digital cameras are typically unique to the camera/frame grabber pair, so cable costs are high.

Parallel signal standards next moved to LVDS (Low-Voltage Differential Signaling), which has a lower signal voltage than RS-422. LVDS also accommodates higher transmission frequencies (>40 MHz) and consumes less power than TTL or RS-422. However, the same issues with cable costs and pin counts remain.

With the availability of serial digital standards like Camera Link, many of the problems inherent in parallel signaling were alleviated. Because it transmits data serially, the number of required signals per tap is greatly reduced. And, since Camera Link is a standard, cables are interchangeable between cameras and frame grabbers. Camera Link allows up to four independent control signals for events such as reset and integration, as well as integrated serial communication for camera configuration and status, so that Camera Link cameras incorporate many of the control features required for machine vision applications. The ability to dynamically reconfigure and query the status of the camera under software control, while sometimes available, is rare with analog cameras.
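
As a hypothetical illustration of software-controlled configuration over the Camera Link serial channel, the Python sketch below assumes the channel is exposed as an ordinary serial port (via pyserial) and uses invented, vendor-style ASCII commands. Real command sets are vendor-specific, and access typically goes through the frame grabber's Camera Link serial API:

```python
# Hypothetical sketch of configuring a camera over the Camera Link
# serial channel. The command strings below are invented for
# illustration: real command sets are vendor-specific, and access is
# usually through the frame grabber's Camera Link serial API rather
# than a plain COM port as assumed here.
import serial  # pyserial

port = serial.Serial("COM3", baudrate=9600, timeout=1.0)

def send_command(cmd: str) -> str:
    port.write((cmd + "\r").encode("ascii"))
    return port.readline().decode("ascii").strip()

print(send_command("get status"))        # query camera status
print(send_command("set exposure 500"))  # reconfigure exposure time
```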

What to Consider When Selecting a Digital Camera

The initial step in any imaging application is determining what to acquire and what features within that image should be highlighted. As an example, let’s consider an application in electronics manufacturing that defines an object to be acquired, such as a populated circuit board. The overall image will contain many features of interest for inspection. The electronic component features — such as size, identity, quantity, location, and orientation — are all critical for the correct and complete assembly of a circuit board. Other features may include markings on the board and characteristics of the bare board itself, such as material density and routing/continuity of the circuit board traces. The features of interest within the object’s image become the inspection points, and the characteristics of those features are the inspection criteria.

Figure 3. GigE Vision-compliant digital cameras are designed specifically for high-speed industrial imaging applications. GigE Vision is the first standard to allow image transfer using low-cost, standard cables over very long lengths.

The resolution of the image required for inspection is determined by two factors: the field of view and the smallest dimension that the imaging system must resolve. As a basic example, if a beverage packaging system must verify that a case is full prior to sealing, the camera must image the contents from above and verify that 24 bottle caps are present. Since the bottles and caps fit within the case, the caps are the smallest feature in the scene that must be resolved.

Once the parameters and smallest features have been determined, the required camera resolution can be roughly defined. It is anticipated that, when the case is imaged, the bottle caps will stand out as light objects within a dark background. With the bottle caps being round, the image will appear as circles bounded by two edges with a span between the edges. The edges are defined as points where the image makes a transition from dark to light or light to dark. The span is the diametrical distance between the edges.

At this point, it is necessary to define the number of pixels that will represent each of these points. In this application, it would be sufficient to allow three pixels to define each of the two edges and four pixels to define the span. Therefore, a minimum of 10 pixels should be used to define the 25-mm bottle cap in the image. From this, we can determine that one pixel will represent 2.5 mm of the object itself.

Now we can determine the overall camera resolution. Choosing a horizontal field of view of 400 mm, the camera needs a minimum of 400/2.5 = 160 pixels of horizontal resolution. With a 250-mm vertical field of view, it needs 250/2.5 = 100 pixels of vertical resolution. Adding a further 10% to each dimension to account for variation in the object's location within the field of view gives an absolute minimum camera resolution of approximately 176 (H) × 110 (V).
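
This sizing arithmetic is easy to parameterize. A minimal Python sketch of the calculation above (variable names are illustrative):

```python
# Sizing calculation from the bottle-cap example. The pixel budget
# (3 per edge, 4 for the span) and dimensions follow the text.
cap_diameter_mm = 25.0
pixels_per_cap = 3 + 4 + 3                        # edge + span + edge
mm_per_pixel = cap_diameter_mm / pixels_per_cap   # 2.5 mm per pixel

fov_h_mm, fov_v_mm = 400.0, 250.0   # field of view covering the case
margin = 1.10                       # +10% for object-position variation

h_res = fov_h_mm / mm_per_pixel * margin   # 176 pixels
v_res = fov_v_mm / mm_per_pixel * margin   # 110 pixels
print(f"minimum resolution: {h_res:.0f} (H) x {v_res:.0f} (V)")
```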

If a camera with this resolution were available, the images acquired would appear as seen in Figure 1. Notice that each bottle cap appears as a cluster of relatively light pixels against the dark background. In this image, it is possible to discern that 24 bottle caps are contained within the case, but little else. Choosing a camera with a higher resolution, such as the commonly available 640 × 480, yields a much-improved image that can be used to extract more detail (such as print), as shown in Figure 2.

Pros and Cons of Increasing Resolution

While a higher-resolution camera will help increase accuracy by yielding a clearer, more precise image for analysis, the downside is slower speed. Digital cameras transmit image data as a series of digital numbers that represent pixel values. A camera with a resolution of 200 × 100 pixels will have a total of 20,000 pixels, and, therefore, 20,000 digital values must be sent to the acquisition system. If the camera is operating at a data rate of 25 MHz, it takes 40 nanoseconds to send each value. This results in a total time of approximately 0.0008 seconds, which equates to 1,250 frames per second.

Increasing the camera resolution to 640 × 480 results in a total of 307,200 pixels, approximately 15 times more. At the same 25-MHz data rate, each frame takes 0.012288 seconds, for a rate of about 81.4 frames per second. These values are approximations, but it is apparent that an increase in camera resolution results in a proportional decrease in camera frame rate.
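
The same arithmetic in a short Python sketch. This is an idealized model that ignores per-line and per-frame overhead, so real cameras will be somewhat slower:

```python
# Idealized frame time and frame rate at a fixed pixel clock, ignoring
# per-line and per-frame overhead (real cameras are somewhat slower).
PIXEL_CLOCK_HZ = 25e6          # 25 MHz -> 40 ns per pixel value

def max_frame_rate(width, height, clock_hz=PIXEL_CLOCK_HZ):
    frame_time_s = (width * height) / clock_hz
    return 1.0 / frame_time_s

print(f"200 x 100: {max_frame_rate(200, 100):.0f} frames/s")   # ~1250
print(f"640 x 480: {max_frame_rate(640, 480):.1f} frames/s")   # ~81.4
```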

Speed and Exposure

When selecting a digital camera, the speed of the object being imaged must be considered. In the previous example, it was assumed that the object was not moving during exposure; therefore, a relatively simple and inexpensive camera could be used. This scenario is not always the case. Objects move continuously in many applications and, in others, they may be stationary only for very short periods of time.

Area array cameras are well suited to imaging objects that are stationary or slow-moving. Because the entire area array is exposed at once, any movement during the exposure time will blur the image. Motion blur can be controlled by reducing the exposure time or by using strobe lights. When using an area array camera for objects in motion, the amount of movement must be considered relative to the camera's exposure time and the object resolution, defined here as the smallest feature of the object represented by one pixel. A rule of thumb when acquiring images of a moving object is that the exposure must complete in less time than the object takes to move across one pixel.

If you are grabbing images of an object moving steadily at 1 cm/second and the object resolution is set at 1 pixel/mm, then the absolute maximum exposure time is 1/10 of a second. Some blur will occur at that maximum, since the object will have moved by a full pixel on the camera sensor during the exposure. In this case, it is preferable to set the exposure time faster than the maximum, perhaps 1/20 of a second, to keep the motion within half a pixel.
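
A minimal Python sketch of this rule of thumb, using the figures above:

```python
# Rule-of-thumb exposure limit for a moving object: the exposure must
# finish before the object moves across one pixel. Figures follow the
# example in the text.
object_speed_mm_s = 10.0       # 1 cm/second
object_res_mm_per_px = 1.0     # object resolution: 1 pixel per mm

max_exposure_s = object_res_mm_per_px / object_speed_mm_s  # 0.1 s (1/10)
safer_exposure_s = max_exposure_s / 2                      # 0.05 s (1/20)

print(max_exposure_s, safer_exposure_s)  # one-pixel vs. half-pixel blur
```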

The frame rate of a camera is the number of complete frames that a camera can send to an acquisition system within a predefined time period, which is usually stated as a specific number of frames per second. As an example, a camera with a sensor resolution of 640 × 480 is specified with a maximum frame rate of 50 frames per second. Therefore, the camera needs 20 milliseconds to send one frame following an exposure.

Some cameras are unable to take a subsequent exposure while the current exposure is being read, so they will require a fixed amount of time between exposures when no imaging takes place. Other types of cameras are capable of reading one image while concurrently taking the next exposure. Therefore, the readout time and method of the camera must be considered when imaging moving objects.
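
A simple timing model makes the difference concrete. The sketch below reuses the 640 × 480, 50-frames-per-second example and assumes a 5-ms exposure (an illustrative value, not from the article):

```python
# Timing model for the two readout behaviors, using the 640 x 480 camera
# rated at 50 frames/s (20 ms readout) and an assumed 5 ms exposure.
readout_s = 1.0 / 50    # 20 ms to send one frame to the acquisition system
exposure_s = 0.005      # 5 ms exposure (example value)

# Camera that cannot expose while the previous frame is being read out:
sequential_rate = 1.0 / (exposure_s + readout_s)    # 40 frames/s

# Camera that exposes the next frame during readout (overlapped):
overlapped_rate = 1.0 / max(exposure_s, readout_s)  # 50 frames/s

print(f"{sequential_rate:.0f} vs. {overlapped_rate:.0f} frames/s")
```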

Spectral Response

All digital cameras that employ electronic sensors are sensitive to light energy, typically at wavelengths from approximately 400 nanometers to a little beyond 1,000 nanometers. Some variants are also sensitive below 400 nanometers, into the ultraviolet, while others are sensitive above 1,000 nanometers, into the infrared. In some applications it is desirable to isolate certain wavelengths of light emanating from an object, in which case the camera's behavior at those wavelengths must be characterized. Filters may be incorporated to tune out unwanted wavelengths, but it is still necessary to know how well the camera responds to the wavelength of interest.

The responsiveness of a camera defines how sensitive it is to a fixed amount of exposure. Responsiveness can be expressed in lux or in DN/(nJ/cm²). Lux is a photometric unit commonly used by imaging engineers to define sensitivity over the range of visible light, while DN/(nJ/cm²) is a radiometric expression that does not limit the response to visible light. In general, both state how the camera will respond to light. A radiometric rating of x DN/(nJ/cm²) indicates that, for a known exposure of 1 nJ/cm², the camera will output pixel data of x DN (digital numbers, also known as grayscale values).

Gain is another feature available in some cameras that can provide various levels of responsiveness. The responsiveness of a camera should be stated at a defined gain setting. Be aware, however, that a camera quoted as highly responsive at a high gain setting will also amplify noise, which reduces dynamic range.
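
A small Python sketch of the radiometric relationship, with invented example numbers, including the effect of gain on both signal and noise:

```python
# Radiometric responsiveness with gain: invented example numbers.
responsivity = 30.0      # DN per nJ/cm^2 at unity gain (assumed rating)
exposure = 4.0           # nJ/cm^2 of light energy during the exposure
gain = 2.0               # in-camera gain setting

signal_dn = responsivity * exposure * gain   # 240 DN of an 8-bit range

# Gain amplifies noise along with signal, shrinking dynamic range:
noise_dn = 2.0 * gain                        # assumed noise floor, scaled
dynamic_range = 255 / noise_dn               # falls as gain rises
print(signal_dn, dynamic_range)
```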

Digital cameras produce digital data, or pixel values. Being digital, this data has a specific number of bits per pixel, known as the pixel bit depth, which typically ranges from 8 to 16 bits. In monochrome cameras, the bit depth defines the number of gray levels from dark to light: a pixel value of 0 is 100% dark and 255 (for an 8-bit camera) is 100% white. Values between 0 and 255 are shades of gray, with values near 0 dark gray and values near 255 almost white. Ten-bit data produces 1,024 distinct gray levels, while 12-bit data produces 4,096.

Each application should be considered carefully to determine whether fine or coarse steps in grayscale are necessary. Machine vision systems commonly use 8-bit pixels, and going to 10 or 12 bits instantly doubles the data quantity, since a second byte is required to transmit each pixel even though not all of its bits are significant; system speed decreases accordingly. Higher bit depths can also increase the complexity of system integration, since they require larger cables, especially if a camera has multiple outputs.
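
A short Python sketch of the trade-off; the two-byte packing assumption for pixels deeper than 8 bits follows the text:

```python
# Gray levels and transmission cost per pixel at common bit depths.
# Assumes pixels deeper than 8 bits are packed into two bytes, as
# described above, so not every transmitted bit is significant.
for bits in (8, 10, 12, 16):
    gray_levels = 2 ** bits
    bytes_per_pixel = (bits + 7) // 8
    print(f"{bits:>2}-bit: {gray_levels:>6} gray levels, "
          f"{bytes_per_pixel} byte(s) per pixel")
```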

This article was written by Chris Brais, Applications Manager at DALSA Corporation in Waterloo, Ontario, Canada.