With increased digitalization in the field of industrial image processing, the industry sometimes rashly writes off conventional technologies. In actuality, many users cling to their analog image processing systems. Those who take the needs of these users seriously have come to realize that the reason for this does not lie in analog transfer technology as such, but in the advantages of interlaced sensors, which are widely used in analog systems and previously were not available in cameras with a digital interface.

Analog vs Digital

Digital cameras, digital camcorders, MP3 players, and video on demand — when it comes to consumer electronics, digital technology pervades our households. Like the consumer market, the market for industrial image processing is characterized by increasing digitalization. Even so, digital interfaces are not catching on as easily and quickly in industry as they are in the consumer market. More than 50% of all systems still work with analog data transfer technology.

Figure 1. In the interlaced frame integration operation mode, there is no binning. In the first field, only the odd lines, and in the second field, only the even lines are read out. Therefore, while one line is read out, the next line can already be scanned.
Yet digital interfaces, such as the popular FireWire standard (IEEE 1394), offer considerable advantages for industrial image processing. They permit faster, more reliable image data transfer, without loss of quality, to a computer-based system that can process or archive the images directly, without a frame grabber. Furthermore, a digital interface permits multi-camera operation and convenient parameterization of the cameras.

Given all these advantages, why are many users of analog systems still so hesitant about switching to digital? Due to the life expectancy of machine vision systems, the migration to a new digital technology can obviously not be as swift as in the fast-moving consumer electronics sector. Furthermore, many users shy away from the costs involved in conversion, especially in the case of simpler analog systems where price is a major factor.

A more decisive reason is that almost half of analog systems work with interlaced cameras. For users of such systems, the switch to a digital interface previously translated into a twofold technological migration, because there simply were no cameras with interlaced sensors and a digital interface. Therefore, these users had to replace not just the interface, but the sensor technology as well. This is a complex and costly undertaking, because such a switch can have effects throughout the system. For example, new lenses have to be found, the image processing software has to be adapted to the new image format, and the mechanics have to be adjusted as well (new mountings, new working distances).

What is “Interlaced”?

Interlacing is an invention from the 1930s, when the development of television technology was taking its first steps. Many regard it as more of an art than a technology because, for a given frame refresh frequency, it either saves bandwidth or increases the effective resolution. At the time, the task was to achieve a frame refresh rate of 50 Hz (in Europe) or 60 Hz (in America). No one spoke of 100-Hz televisions at the time.

The goal was to keep the picture from flickering in the eye of the viewer, with a maximum channel bandwidth of 6 MHz, which therefore can represent only 200,000 (blurry) pixels. The solution was to transmit each frame consecutively in two halves by having the scanning beam of the tube camera of the time and the corresponding electron beam of the tube TV skip every other line.

Applied to today’s technology with CCD image sensors and digital displays, this means that in the first image field, the odd (red) lines are read and drawn, while in the second field, the even (gray) lines are scanned and displayed. This explains the feat accomplished by the line skip: spatial resolution is traded for temporal resolution. Interlacing the two fields achieves the same resolution as a system with twice as many lines, at least for still scenes, and the image does not flicker thanks to the fields’ high refresh rate of 50 Hz or 60 Hz, respectively.

Typically, these two fields are displayed sequentially on a monitor or, alternatively, de-interlaced in a PC and then combined into a single full frame. However, the two fields are recorded consecutively; that is, at different times. This can result in artifacts, especially when the frame contains moving objects. Certain tricks are necessary to make sure that the interlaced camera delivers good pictures in the intended application; for instance, adjusting the timing and exposure settings, the lighting, and the image processing software.
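The field readout and the "weave" style of de-interlacing described above can be sketched in a few lines of NumPy. The frame dimensions and array names here are hypothetical, chosen purely for illustration; a real camera driver would deliver the two fields as separate buffers at different points in time.

```python
import numpy as np

# Hypothetical 6x8 full frame (6 lines, 8 pixels per line).
frame = np.arange(6 * 8).reshape(6, 8)

# An interlaced sensor reads the scene as two fields at different times:
# first the odd lines (1, 3, 5 in 1-based counting, i.e. row indices 0, 2, 4),
# then the even lines (row indices 1, 3, 5).
odd_field = frame[0::2]   # first field: half the vertical resolution
even_field = frame[1::2]  # second field, captured one field period later

# Weave de-interlacing: re-interleave the two fields into one full frame.
woven = np.empty_like(frame)
woven[0::2] = odd_field
woven[1::2] = even_field

# For a still scene the weave is exact; for moving objects the two fields
# differ in time, which produces the familiar "combing" artifacts.
assert np.array_equal(woven, frame)
```

For moving scenes, practical de-interlacers do more than weaving, e.g. interpolating one field from the other ("bob") or blending, trading vertical resolution for freedom from motion artifacts.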

