Throughout the history of the electronics industry, the old refrain that systems will continuously become faster, simpler, and cheaper has remained true. In the early days of computer vision, a frame grabber capable of capturing a single 640 × 480 × 8-bit image consisted of multiple boards in a rack and cost tens of thousands of dollars. Back then, few people — aside from Dick Tracy — could imagine a portable device no bigger than a deck of cards having the ability to capture images and video, store gigabytes of data, and act as a radio, GPS device, and telephone — all costing less than $500.

The cell phone is the most common example of combining multiple technologies into optimized, highly compact modules, often referred to as “embedded systems.”

When machine vision is added to the mix, the module becomes an embedded vision system. Embedded vision incorporates a small camera or image sensor, a powerful processor, and often I/O and display capability into an application-specific system with low per-unit cost and energy consumption. Examples in the machine vision, medical, automotive, and consumer markets include smart cameras, ultrasound scanners, and autonomous vehicles.

The Evolution of Embedded Platforms

The introduction of the PC and the increasing functionality of integrated circuits created a new market for PC-based single-board computers, frame grabbers, I/O peripherals, graphics, and communications boards. This allowed systems integrators to build custom systems tailored for specific applications such as data acquisition, communications, computer graphics, and vision systems.

Just as frame grabbers and smart cameras may incorporate FPGAs, designers of systems built from board-level products such as off-the-shelf CPUs, frame grabbers, and I/O peripherals face an even wider range of products from which to choose. Numerous board-level products based on standards such as OpenVPX, VME, CompactPCI, CompactPCI Express, PC/104, PC/104-Plus, EPIC, EBX, and COM Express can be used to build vision systems with different camera interfaces and I/O options. Standards organizations such as VITA, PICMG, and the PC/104 Consortium detail these open standards and many of the products available to build a machine vision or image processing system.

Embedded vision can take one of two tracks: open designs that combine small-form-factor vision and image processing boards and peripherals based on these computing platforms and standards, or custom designs that use cameras, processors, frame grabbers, I/O peripherals, and software. While the hardware of open, standards-based embedded vision systems may be relatively easy to reverse engineer, custom embedded vision designs are more complex, highly proprietary, and may use custom-designed CMOS imagers and custom Verilog hardware description language (HDL) designs embedded in FPGAs and ASICs.

Intellectual Property

In embedded vision design, many of the image processing functions that lend themselves to a parallel dataflow are implemented in FPGAs. Altera (now part of Intel) and Xilinx offer libraries that can be used with their FPGAs to speed these functions. Intel's FPGA Video and Image Processing Suite, for example, is a collection of Intel FPGA intellectual property (IP) core functions for the development of custom video and image processing (VIP) designs that range from simple building blocks, such as color space conversion, to video scaling functions. Likewise, Xilinx offers many IP core functions for image processing such as color filter interpolation, gamma correction, and color space conversion. Both Intel and Xilinx offer third-party IP cores as part of their partnership programs. In its Xilinx Alliance Program, Xilinx includes products from companies such as Crucial IP, iWave Systems Technologies, and Xylon that offer IP to perform noise reduction, video encoding, and video-to-RGB conversion.
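
To make the role of such IP cores concrete, the following is a minimal C sketch of the kind of fixed-point, per-pixel arithmetic a color space conversion core pipelines (here, full-range BT.601 RGB to YCbCr). The coefficients, function name, and 8.8 fixed-point scaling are illustrative and are not taken from Intel's or Xilinx's libraries.

```c
#include <stdint.h>

/* Minimal sketch of full-range BT.601 RGB-to-YCbCr conversion using
 * 8.8 fixed-point coefficients -- the kind of per-pixel arithmetic a
 * color-space-conversion IP core pipelines. Names are illustrative. */
static inline void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                                uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    /* Coefficients scaled by 256 (8.8 fixed point). */
    int32_t yt  = ( 77 * r + 150 * g +  29 * b) >> 8;          /* 0.299R + 0.587G + 0.114B */
    int32_t cbt = ((-43 * r -  85 * g + 128 * b) >> 8) + 128;  /* 0.564(B - Y) + 128 */
    int32_t crt = ((128 * r - 107 * g -  21 * b) >> 8) + 128;  /* 0.713(R - Y) + 128 */

    /* Clamp to the 8-bit range, as a hardware core would saturate. */
    *y  = (uint8_t)(yt  < 0 ? 0 : yt  > 255 ? 255 : yt);
    *cb = (uint8_t)(cbt < 0 ? 0 : cbt > 255 ? 255 : cbt);
    *cr = (uint8_t)(crt < 0 ? 0 : crt > 255 ? 255 : crt);
}
```

In an FPGA, the three multiply-accumulate rows map naturally onto DSP slices, so a pipelined core can produce one converted pixel per clock cycle.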

Figure 2. iFixit iPhone X Teardown: Front Camera(s).

Camera companies have been quick to recognize the need for FPGA-powered peripherals that can be used in open embedded systems. Indeed, companies such as Allied Vision and Basler have already introduced camera modules to meet such demands.

“Many of today's embedded systems rely on a sensor module connected to a processor board via the MIPI Camera Serial Interface 2 (MIPI CSI-2) used in mobile devices and automotive applications,” says Francis Obidimalor, Marketing Manager at Allied Vision, in his video, Sensor Module vs. Camera Module. “However, these sensor modules have limited processing capability. Functions such as noise reduction and color debayering, as well as application-specific software such as facial recognition, must be performed on the host processor.”

To reduce the required host processing, camera modules with on-board processing capability can be used to off-load functions such as noise reduction and color debayering, allowing the developer to concentrate on the application software. “With modules such as those used in the Allied Vision ‘1’ platform, camera vendors can also provide the necessary drivers, alleviating the need for designers to write new camera drivers should a system need to be upgraded with, for example, a camera with a higher-performance image sensor,” Obidimalor says.
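
To see why off-loading debayering matters, consider a minimal C sketch of bilinear demosaicing for an RGGB Bayer mosaic, the kind of per-pixel work a camera module with on-board processing can remove from the host CPU. The function is illustrative and skips border handling for brevity; a real pipeline would mirror or replicate edge pixels.

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal sketch of bilinear demosaicing for an RGGB Bayer mosaic.
 * Each raw pixel carries one color sample; the two missing channels
 * are interpolated from neighbors. Border pixels are skipped. */
static void debayer_rggb_bilinear(const uint8_t *raw, uint8_t *rgb,
                                  size_t w, size_t h)
{
    for (size_t y = 1; y + 1 < h; y++) {
        for (size_t x = 1; x + 1 < w; x++) {
            const uint8_t *p = raw + y * w + x;
            uint8_t r, g, b;
            int even_row = (y % 2 == 0), even_col = (x % 2 == 0);

            if (even_row && even_col) {            /* on a red site */
                r = p[0];
                g = (p[-1] + p[1] + p[-(int)w] + p[w]) / 4;
                b = (p[-(int)w - 1] + p[-(int)w + 1] + p[w - 1] + p[w + 1]) / 4;
            } else if (!even_row && !even_col) {   /* on a blue site */
                b = p[0];
                g = (p[-1] + p[1] + p[-(int)w] + p[w]) / 4;
                r = (p[-(int)w - 1] + p[-(int)w + 1] + p[w - 1] + p[w + 1]) / 4;
            } else {                               /* on a green site */
                g = p[0];
                if (even_row) {                    /* red left/right, blue above/below */
                    r = (p[-1] + p[1]) / 2;
                    b = (p[-(int)w] + p[w]) / 2;
                } else {                           /* blue left/right, red above/below */
                    b = (p[-1] + p[1]) / 2;
                    r = (p[-(int)w] + p[w]) / 2;
                }
            }
            uint8_t *out = rgb + 3 * (y * w + x);
            out[0] = r; out[1] = g; out[2] = b;
        }
    }
}
```

Even this simplest interpolation touches every pixel several times per frame; at 1080p and 30 frames per second that is well over 60 million pixel operations per second, which is why moving the work into the camera module frees meaningful host capacity.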

For this reason, Basler offers a board-level camera, the dart, which measures 27 × 27 mm, weighs 15 g, and offers two interfaces: USB 3.0 and BCON, Basler's proprietary interface based on low-voltage differential signaling (LVDS). Basler will also offer an extension module that lets users operate the camera via a MIPI CSI-2 camera interface. “The result is that instead of using a sensor module, the designer can integrate a finished camera module with much less effort,” says Matthew Breit, Senior Consulting Engineer & Market Analyst at Basler.

Embedded vision components are being incorporated into a myriad of applications. Even so, a handful of industrial sectors are receiving most of the attention, largely due to economies of scale. These include automotive, medical, security, and consumer applications. Taken together, they spotlight key trends: developers are working to drive down cost and reduce system size, while offering enhanced flexibility.

Automotive and Security

Advanced driver assistance systems (ADAS) capabilities such as mirror-replacement cameras, driver drowsiness detection, and pedestrian protection systems are pushing the need for enhanced image processing within automobiles. According to the research firm Strategy Analytics, most high-end mass-market vehicles are expected to contain as many as 12 cameras within the next few years.

Figure 3. iFixit iPhone X Teardown: Rear Camera(s).

“In automotive applications, high-speed computing with low energy consumption is important,” says Ingo Lewerendt, Strategic Business Development Manager at Basler. For now, Basler intends to focus on embedded vision systems installed inside the vehicle. However, custom solutions seem almost inevitable as automakers offer up their own branded cabin configurations of entertainment and information systems.

FLIR Systems is also targeting the automotive market with its Automotive Development Kit (ADK) based on the company's Boson thermal imaging camera core. Designed for developers of automotive thermal vision and ADAS, the uncooled VOx microbolometer detector-based camera cores are already employed on vehicles offered by GM, Mercedes-Benz, Audi, and BMW.

Systems built around such camera modules must quickly process and analyze images under the most extreme conditions and do so in the face of stringent automotive safety standards. To address these challenges, Arm has developed the Mali-C71, a custom image signal processor (ISP) capable of processing data from up to four cameras and handling 24 stops of dynamic range (a contrast ratio of 2^24:1, or roughly 144 dB) to capture detail from images taken in bright sunlight or shadow. Reference software controls the ISP, sensor, auto-white balance, and autoexposure. To further position the device for the automotive market, the company plans to develop Automotive Safety Integrity Level (ASIL)-compliant automotive software.
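
As an illustration of what the auto-white-balance stage of such an ISP computes each frame, here is a minimal C sketch of the classic gray-world algorithm. The algorithm choice, names, and fixed-point scaling are illustrative assumptions, not Arm's implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal sketch of gray-world auto-white balance on interleaved
 * 8-bit RGB. The gray-world assumption: an average scene is neutral
 * gray, so red and blue are scaled to match the green average. */
static void awb_gray_world(uint8_t *rgb, size_t n_pixels)
{
    uint64_t sum_r = 0, sum_g = 0, sum_b = 0;

    /* Measure the per-channel averages across the frame. */
    for (size_t i = 0; i < n_pixels; i++) {
        sum_r += rgb[3 * i + 0];
        sum_g += rgb[3 * i + 1];
        sum_b += rgb[3 * i + 2];
    }
    if (sum_r == 0 || sum_b == 0)
        return; /* degenerate frame; leave it untouched */

    /* Channel gains in 8.8 fixed point, as an ISP datapath might hold them. */
    uint32_t gain_r = (uint32_t)((sum_g << 8) / sum_r);
    uint32_t gain_b = (uint32_t)((sum_g << 8) / sum_b);

    /* Apply the gains with saturation to the 8-bit range. */
    for (size_t i = 0; i < n_pixels; i++) {
        uint32_t r = (rgb[3 * i + 0] * gain_r) >> 8;
        uint32_t b = (rgb[3 * i + 2] * gain_b) >> 8;
        rgb[3 * i + 0] = (uint8_t)(r > 255 ? 255 : r);
        rgb[3 * i + 2] = (uint8_t)(b > 255 ? 255 : b);
    }
}
```

A production ISP refines this basic idea with scene statistics, illuminant models, and temporal smoothing, and runs it in hardware so four camera streams can be balanced without loading the host.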

Embedded vision systems are finding themselves not only in automobiles but also in the automatic number plate recognition (ANPR) systems that monitor them. While the cameras used in such systems may in the past have been low-cost Internet-enabled cameras based on lossy compression standards such as H.264/H.265, these are gradually being replaced by digital systems that need no such compression. Systems such as Optasia Systems’ IMPS ANPR Model AIO incorporate a GigE Vision camera from Basler interfaced to an off-the-shelf embedded computer housed in a single unit. These cameras are especially suited to low-light applications such as ANPR because they have a high dynamic range and are relatively tolerant of exposure variations.

Medical Imaging

Two major applications of embedded vision in medicine are endoscopy and X-ray imaging, both of which enhance diagnosis and treatment. Use of embedded vision within the medical imaging market is growing rapidly, driven by demand for minimally invasive diagnostic and therapeutic procedures, the need to accommodate aging populations, and rising medical costs.

To develop portable products for this market, developers often turn to third-party companies for help. Zibra Corp. turned to NET USA for assistance in the design of its coreVIEW series of borescopes and endoscopes. NET developed a remote camera with a 250 × 250-pixel NanEye imager from AWAIBA and a camera main board that incorporates an FPGA to perform color adjustment and dead pixel correction. An HDMI output on the controller board allows images captured by the camera to be displayed at distances of up to 25 feet.
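
As an example of what such a correction stage involves, here is a minimal C sketch of static dead pixel correction driven by a factory defect map, one common approach: each flagged pixel is replaced with the median of its four nearest same-color neighbors. The names and the two-pixel Bayer stride are illustrative assumptions, not NET's actual implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Coordinates of known defective pixels, from a factory defect map. */
typedef struct { size_t x, y; } defect_t;

/* Median of four values: sort, then average the two middle ones. */
static uint8_t median4(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
    uint8_t v[4] = { a, b, c, d };
    for (int i = 1; i < 4; i++)
        for (int j = i; j > 0 && v[j] < v[j - 1]; j--) {
            uint8_t t = v[j]; v[j] = v[j - 1]; v[j - 1] = t;
        }
    return (uint8_t)((v[1] + v[2]) / 2);
}

/* Minimal sketch of static dead pixel correction on a raw 8-bit image.
 * Same-color neighbors sit two pixels away in a Bayer mosaic. */
static void correct_dead_pixels(uint8_t *raw, size_t w, size_t h,
                                const defect_t *map, size_t n_defects)
{
    for (size_t i = 0; i < n_defects; i++) {
        size_t x = map[i].x, y = map[i].y;
        if (x < 2 || y < 2 || x + 2 >= w || y + 2 >= h)
            continue; /* skip defects too close to the border */
        raw[y * w + x] = median4(raw[y * w + (x - 2)],
                                 raw[y * w + (x + 2)],
                                 raw[(y - 2) * w + x],
                                 raw[(y + 2) * w + x]);
    }
}
```

Because the defect map is fixed at manufacture, this correction is a natural fit for an FPGA: the lookup and median logic run in-line with the pixel stream at full frame rate.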

Studying the skeletal changes of lizards posed an interesting problem for Yoel Stuart, then a graduate student at Harvard University. Stuart needed a portable X-ray system to use in the field, so he worked with Rad-icon Imaging (now part of Teledyne DALSA) and Kodex to produce the final system. Rad-icon developed the Remote RadEye200, a 14-bit Shad-o-Box camera module with a GigE Vision adapter and an Ethernet interface connected to a portable host PC, and Kodex integrated this X-ray camera with a 50 kVp portable X-ray source from Source-Ray.

Figure 4. BIKI: First Bionic Wireless Underwater Fish Drone. (Image: BIKI by Robosea.)

Consumer Demands

“While machine vision integrators can pay $10,000 for a system designed for industrial machine vision applications, to break into consumer markets, the vision system can't cost more than $500,” Basler's Lewerendt says. “New markets want vision without the PC, the GPU, or a hard drive. They want the system reduced to the minimum.”

Reducing the system cost, however, poses a conundrum for those companies traditionally involved in the machine vision market, where high-resolution, high-speed cameras can cost thousands of dollars. In a teardown of the iPhone X by iFixit, researchers estimated that the TrueDepth sensor cluster used in the device costs Apple $16.70. Apple declined to comment on the price of these components, but such low costs are not unusual in high-volume consumer products.

While traditional machine vision camera vendors might not want to compete in the consumer market, there are other opportunities for vendors of smart camera modules. These include prosumer drones that can be used for industrial applications such as thermography to analyze the heat loss of buildings. Then there's BIKI from Robosea, an underwater drone created in the form of a fish, which employs a 3840 × 2160-pixel camera, 32 GB memory, and on-board features such as automated balance and obstacle avoidance.

As embedded vision proliferates in automobiles, medical imaging, remote inspection, and consumer electronics, opportunities will continue to increase for vision vendors, both traditional and nontraditional in scope.

Looking to the Future

Several underlying trends are prevalent in today's vision and camera markets. Embedded vision is changing the game in many ways. As we move forward, keep in mind:

  • Embedded vision systems are appearing in diverse products such as drones, automobiles, portable dental scanners, consumer robots, and virtual reality systems. These demand low-cost components to reach prosumer and consumer markets.

  • Companies that offer camera, lighting, and PC-based camera interface products for machine vision systems will find it difficult to compete in these price-sensitive markets. While lower-cost camera modules and camera interface/processing modules can be used for such applications, vendor margins will be substantially reduced, making it likely that only a handful of established machine vision vendors will enter the market.

  • Established software vendors will need to lower the cost of their products to compete in these markets due to the proliferation of easy-to-configure (although unsupported) open source code.

  • In applications that do not demand deterministic, low-latency response, such as automatic number plate recognition, cloud-based computing will reduce the need for the local processing hardware currently used in embedded systems.

In the future, we can expect even more trends to emerge. One thing is certain: embedded technologies are making machine vision systems smart, intuitive, and connected to other parts of the factory, which is critical to maintaining success in manufacturing today.

This article was written by Alex Shikany, Vice President, and Winn Hardin, Contributing Editor, Association for Advancing Automation (AIA).