Throughout the history of the electronics industry, the old refrain that systems will continuously become faster, simpler, and cheaper has held true. In the early days of computer vision, a frame grabber capable of capturing a single 640 × 480 × 8-bit image consisted of multiple boards in a rack and cost tens of thousands of dollars. Back then, few people — aside from Dick Tracy — could imagine a portable device no bigger than a deck of cards that could capture images and video, store gigabytes of data, and act as a radio, GPS receiver, and telephone — all for less than $500.

The cell phone is the most common example of combining multiple technologies into optimized, highly compact modules, often referred to as “embedded systems.”

When machine vision is added to the mix, the module becomes an embedded vision system. Embedded vision incorporates a small camera or image sensor, a powerful processor, and often I/O and display capability into an application-specific system that is low in per-unit cost and energy consumption. Examples in the machine vision, medical, automotive, and consumer markets include smart cameras, ultrasound scanners, and autonomous vehicles.

The Evolution of Embedded Platforms

The introduction of the PC and the increasing functionality of integrated circuits created a new market for PC-based single-board computers, frame grabbers, I/O peripherals, graphics, and communications boards. This allowed systems integrators to build custom systems tailored for specific applications such as data acquisition, communications, computer graphics, and vision systems.

Just as frame grabbers and smart cameras may incorporate FPGAs, designers of systems using board-level products such as off-the-shelf CPUs, frame grabbers, and I/O peripherals are faced with an even wider range of products from which to choose. Numerous board-level products based on standards ranging from OpenVPX, VME, CompactPCI, and cPCI Express to PC/104, PC/104-Plus, EPIC, EBX, and COM Express can be used to build vision systems with different camera interfaces and I/O options. Standards organizations such as VITA, PICMG, and the PC/104 Consortium detail these open standards and many of the products available to build a machine vision or image processing system.

Embedded vision design can take one of two tracks: open small-form-factor vision and image processing boards and peripherals based on these computing platforms and standards, or custom designs that combine cameras, processors, frame grabbers, I/O peripherals, and software. While the hardware of open-standard embedded vision systems may be relatively easy to reverse engineer, custom embedded vision designs are more complex, highly proprietary, and may use custom-designed CMOS imagers and custom Verilog hardware description language (HDL) code embedded in FPGAs and ASICs.

Intellectual Property

In embedded vision design, many of the image processing functions that lend themselves to a parallel dataflow are implemented in FPGAs. Altera (now part of Intel) and Xilinx offer libraries that can be used with their FPGAs to speed these functions. Intel's FPGA Video and Image Processing Suite, for example, is a collection of Intel FPGA intellectual property (IP) core functions for the development of custom video and image processing (VIP) designs that range from simple building blocks, such as color space conversion, to video scaling functions. Likewise, Xilinx offers many IP core functions for image processing such as color filter interpolation, gamma correction, and color space conversion. Both Intel and Xilinx offer third-party IP cores as part of their partnership programs. Through its Alliance Program, Xilinx includes products from companies such as Crucial IP, iWave Systems Technologies, and Xylon that offer IP for noise reduction, video encoding, and video-to-RGB conversion.
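
The arithmetic inside one of these building blocks is simple enough to sketch in software. The Python/NumPy snippet below performs a BT.601 RGB-to-YCbCr color space conversion — the same matrix-multiply-plus-offset that a color space conversion IP core hardwires as pipelined fixed-point logic. It is an illustrative sketch only, not code from the Intel or Xilinx libraries.

```python
import numpy as np

# BT.601 RGB -> YCbCr matrix (studio swing): the kind of fixed
# arithmetic a color-space-conversion IP core bakes into logic.
# On an FPGA these would be fixed-point constants and the
# multiply-accumulate would be pipelined, one pixel per clock.
CSC = np.array([[ 0.257,  0.504,  0.098],
                [-0.148, -0.291,  0.439],
                [ 0.439, -0.368, -0.071]])
OFFSET = np.array([16.0, 128.0, 128.0])

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 RGB image to YCbCr (BT.601)."""
    ycbcr = rgb.astype(np.float64) @ CSC.T + OFFSET
    return np.clip(ycbcr, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    print(rgb_to_ycbcr(frame).shape)  # (480, 640, 3)
```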

Figure 2. iFixit iPhone X Teardown Front Camera(s).

Camera companies have been quick to recognize the need for FPGA-powered peripherals that can be used in open embedded systems. Indeed, companies such as Allied Vision and Basler have already introduced camera modules to meet such demands.

“Many of today's embedded systems rely on a sensor module connected to a processor board via the MIPI Camera Serial Interface 2 (MIPI CSI-2) used in mobile devices and automotive applications,” says Francis Obidimalor, Marketing Manager at Allied Vision, in his video, Sensor Module vs. Camera Module. “However, these sensor modules have limited processing capability. Functions such as noise reduction and color debayering, as well as application-specific software such as facial recognition, must be performed on the host processor.”
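
Color debayering is a good example of the load such sensor modules push onto the host. The sketch below, in Python with NumPy and SciPy, implements plain bilinear color filter interpolation and assumes an RGGB mosaic layout; it is a minimal illustration of the operation, not any vendor's implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def debayer_bilinear(raw: np.ndarray) -> np.ndarray:
    """Bilinear demosaic of an H x W RGGB Bayer mosaic into RGB."""
    h, w = raw.shape
    raw = raw.astype(np.float64)

    # Boolean masks marking where each color is actually sampled (RGGB).
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1.0 - r_mask - b_mask

    # Interpolation kernels: each output pixel averages its sampled
    # neighbors; dividing by the convolved mask normalizes the weights.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float)
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)

    def interp(mask, kernel):
        num = convolve(raw * mask, kernel, mode="mirror")
        den = convolve(mask, kernel, mode="mirror")
        return num / den

    rgb = np.dstack([interp(r_mask, k_rb),
                     interp(g_mask, k_g),
                     interp(b_mask, k_rb)])
    return np.clip(rgb, 0, 255).astype(np.uint8)
```

Even this naive version touches every pixel several times per frame, which is why moving it onto an on-camera FPGA or ISP frees meaningful host cycles for the application itself.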

To reduce the required host processing, camera modules with on-board processing capability can be used to off-load functions such as noise reduction and color debayering, allowing the developer to concentrate on the application software. “Using such modules employed in the Allied Vision ‘1’ platform, camera vendors can also provide the necessary drivers, alleviating the need for designers to write new camera drivers should a system need to be upgraded with, for example, a camera with a higher-performance image sensor,” Obidimalor says.

For this reason, Basler offers a board-level camera, the dart, which measures 27 × 27 mm, weighs 15 g, and offers two interfaces: USB 3.0 and BCON, Basler's proprietary interface based on low-voltage differential signaling (LVDS). Basler will also offer an extension module that lets users operate the camera via a MIPI CSI-2 camera interface. “The result is that instead of using a sensor module, the designer can integrate a finished camera module with much less effort,” says Matthew Breit, Senior Consulting Engineer & Market Analyst at Basler.

Embedded vision components are being incorporated into a myriad of applications. Even so, a handful of industrial sectors are receiving most of the attention, largely due to economies of scale. These include automotive, medical, security, and consumer applications. Taken together, they spotlight key trends: developers are working to drive down cost and reduce system size, while offering enhanced flexibility.

Automotive and Security

Advanced driver assistance system (ADAS) capabilities such as mirror-replacement cameras, driver drowsiness detection, and pedestrian protection systems are pushing the need for enhanced image processing within automobiles. According to the research firm Strategy Analytics, most high-end mass-market vehicles are expected to contain up to 12 cameras within the next few years.

Figure 3. iFixit iPhone X Teardown Rear Camera(s).

“In automotive applications, high-speed computing with low energy consumption is important,” says Ingo Lewerendt, Strategic Business Development Manager at Basler. For now, Basler intends to focus on embedded vision systems installed inside the vehicle. However, custom solutions seem almost inevitable as automakers offer up their own branded cabin configurations of entertainment and information systems.

FLIR Systems is also targeting the automotive market with its Automotive Development Kit (ADK) based on the company's Boson thermal imaging camera core. Designed for developers of automotive thermal vision and ADAS, the uncooled VOx microbolometer detector-based camera cores are already employed on vehicles offered by GM, Mercedes-Benz, Audi, and BMW.

Systems built around such camera modules must quickly process and analyze images under the most extreme conditions and do so in the face of stringent automotive safety standards. To address these challenges, Arm has developed the Mali-C71, a custom image signal processor (ISP) capable of processing data from up to four cameras and handling 24 stops of dynamic range to capture detail from images taken in bright sunlight or shadows. Reference software controls the ISP, sensor, auto-white balance, and auto-exposure. To further position the device for the automotive market, the company plans to develop Automotive Safety Integrity Level (ASIL)-compliant automotive software.
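
For scale, 24 stops is a 2^24:1 intensity ratio, roughly 144 dB. The control loops that such reference software closes can be illustrated in a few lines, even though a production ISP runs them in hardware against sensor registers. The Python sketch below shows a gray-world auto-white-balance step and a mean-luminance auto-exposure gain; both are generic textbook versions, not Arm's Mali-C71 code.

```python
import numpy as np

def gray_world_awb(rgb: np.ndarray) -> np.ndarray:
    """Gray-world auto-white-balance: scale each channel so its mean
    matches the overall mean, assuming the scene averages to gray.
    A toy stand-in for the AWB loop an ISP's reference software runs."""
    img = rgb.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # equalizing channel gains
    return np.clip(img * gains, 0, 255).astype(np.uint8)

def auto_exposure_gain(rgb: np.ndarray, target: float = 118.0) -> float:
    """Return a global gain nudging mean luminance toward `target`
    (mid-gray). A real ISP would feed this back to sensor exposure
    time and analog gain rather than scaling pixels digitally."""
    luma = rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])
    return target / max(luma.mean(), 1e-6)
```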

Embedded vision systems are finding their way not only into automobiles but also into the automatic number plate recognition (ANPR) systems that monitor them. While such systems may in the past have used low-cost Internet-enabled cameras based on lossy compression standards such as H.264/H.265, these are gradually being replaced by digital systems that need no such compression. Systems such as Optasia Systems’ IMPS ANPR Model AIO incorporate a GigE Vision camera from Basler interfaced to an off-the-shelf embedded computer housed in a single unit. These types of cameras are especially suited for low-light applications such as ANPR since they have a high dynamic range and are somewhat tolerant of exposure variations.
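
That exposure tolerance matters because plates pass through headlight glare, deep shadow, and nighttime IR illumination. The OpenCV sketch below shows the kind of contrast normalization and adaptive binarization an ANPR front end typically applies before character recognition; it is a generic illustration, not Optasia's IMPS pipeline.

```python
import cv2

def preprocess_plate_region(bgr):
    """Generic ANPR-style preprocessing: normalize local contrast so
    plates stay readable across a wide exposure range, then binarize
    for character segmentation. (A sketch, not Optasia's pipeline.)"""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # CLAHE evens out glare and shadow within the frame.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    norm = clahe.apply(gray)
    # Otsu picks the binarization threshold per image, which keeps
    # this step tolerant of global exposure shifts.
    _, binary = cv2.threshold(norm, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```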