As warfare moves into a new era, military strategists tool up with unmanned aircraft systems (UAS), or drones, to provide the visual surveillance the new combat environment requires. In urban warfare, where counterinsurgency and counterterrorism missions typically occur, troops rely on intelligence, surveillance, and reconnaissance (ISR) forces for persistent air surveillance, precision air strikes, and swift airlift support. ISR forces can sweep wide areas, detect activity, stare at key locations for hours or days at a time, and complete a targeting cycle in minutes.

The forward camera on the Predator and Reaper, the electro-optical/infrared "MTS Ball" (Multi-Spectral Targeting System), is reported to be able to read a driver's license from more than 20,000 feet.

Robotics, and in particular UASs, are changing the nature of war-fighting. Unmanned aircraft systems are also referred to as drones or unmanned aerial vehicles (UAVs); the U.S. Air Force calls them remotely piloted aircraft (RPAs). UASs can transmit a direct feed to a nearby ground control station or broadcast via satellite to command centers around the globe.

The key enabling technology responsible for the breakthrough capabilities of the UAS and the subsequent transformation of warfare is Full-Motion Video (FMV). FMV provides an on-demand, close-up view of the combat zone that would not otherwise be possible. It enables commanders to make decisions and execute missions from a safe distance without endangering the lives of their troops. Practically speaking, without FMV from the aircraft’s onboard cameras, pilots would not be able to navigate the drone remotely from the ground.

Full-motion video adds a fourth dimension to imagery: the ability to track activity over time. FMV provides outstanding event fidelity, seamless event progression, and a full context regarding the nature of the location and activities being viewed.

Exploiting the full benefits that FMV has to offer requires overcoming some challenges. First, the bandwidth requirements for broadcasting FMV, especially in high definition (HD), are tremendous. Second, the amount of digital storage space needed to retain the sheer volume of FMV from missions is staggering. Third, the human effort involved in viewing, analyzing, and disseminating the footage is overwhelming. Fourth, and most important, the captured images are often of such poor quality and clarity that targets cannot be identified.

Current FMV Image Quality Inconsistent

Producing high-quality imagery on a mobile platform such as the UAS poses some interesting challenges due to its motion and the resulting image perspectives. The quality of the video imagery can be compromised by narrow camera field-of-view, datalink degradations, poor environmental conditions (e.g., dawn/dusk/night, adverse weather, variable clouds), bandwidth limitations, or a highly cluttered visual scene (e.g., in urban areas or mountainous terrain).

Fortunately, recent technology advances make it possible to significantly enhance the image quality of FMV in real time and address these pressing problems. A variety of sophisticated image enhancement algorithms that have been used successfully on still images can now be applied to FMV. These algorithms are computationally intensive, especially for FMV: multiple video streams must be processed at 30 frames per second, in real time, with zero latency. The only practical way to do this is with an advanced platform capable of high-performance parallel processing. A commercial off-the-shelf (COTS) platform that uses open-architecture algorithms to enhance FMV image quality in real time is now available. This platform is designed to meet the requirements of applications such as the UAS, accommodating multiple video input streams and routing them to any combination of attached displays or network connections.


Image processing algorithms can (clockwise, from top left) brighten or darken for dusk and dawn viewing; dehaze images for fog, smoke, and sand storm environments; clarify images with contrast enhancement that continuously adapts to changing brightness and contrast; and identify anomalous shapes and highlight details.
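The adaptive contrast enhancement mentioned above can be illustrated with a minimal sketch (not the vendor's implementation): each frame's observed pixel range is stretched to the full 8-bit range, so the mapping continuously adapts as scene brightness and contrast change. Frames are modeled here as flat lists of grayscale values.

```python
# Illustrative sketch of per-frame adaptive contrast stretching.
# Assumption: frames are flat lists of 8-bit grayscale pixel values.

def stretch_contrast(frame, out_min=0, out_max=255):
    """Linearly remap the frame's observed range onto [out_min, out_max]."""
    lo, hi = min(frame), max(frame)
    if hi == lo:                      # flat frame: nothing to stretch
        return list(frame)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in frame]

# A dim, low-contrast frame (values clustered between 96 and 160)
dim_frame = [96, 112, 128, 144, 160]
print(stretch_contrast(dim_frame))    # [0, 64, 128, 191, 255]
```

Because the low and high points are recomputed for every frame, the same code brightens dusk imagery and darkens over-exposed imagery without any manual tuning.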

A new concept, the Any-Image-Anywhere (AIA) System, combines image-enhancing algorithms, a fast field-programmable gate array (FPGA) parallel processing platform, and a high-speed video switching matrix in a flexible, expandable, open architecture. The AIA System accommodates multiple video input streams and routes them to any combination of attached displays or network connections. Multiple image streams can be presented on an individual LCD display simultaneously: for example, four cameras could be supported, with one as the primary image in full screen and the other three streams as picture-in-picture (PIP) windows. Operators can turn image functions on or off, or swap the primary and PIP windows, using a touchscreen user interface. The AIA System executes image enhancement algorithms in real time on live video surveillance feeds such as those from UASs, and offers the performance, flexibility, and ruggedness necessary for mission-critical applications.

For example, in the field, the AIA System could be installed at a UAS Ground Control Station (GCS) to apply image enhancement and edge detection algorithms to incoming video streams. The image enhancement algorithms bring out detail in images degraded by poor visibility or atmospheric interference, while the edge detection algorithm identifies anomalous shapes and highlights details for surveillance and bomb damage assessment (BDA).
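The edge detection step can be sketched with the classic Sobel operator, a standard choice for this kind of algorithm (the article does not specify which operator the AIA System uses, so this is only an illustration): horizontal and vertical gradients are estimated at each pixel, and pixels where the gradient magnitude exceeds a threshold are flagged as edges.

```python
# Illustrative Sobel edge detection on a small grayscale image
# (a 2-D list of pixel values); not the AIA System's actual algorithm.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_edges(img, threshold=100):
    """Return a binary map: 1 where the gradient magnitude exceeds threshold."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# A dark region next to a bright region: the vertical boundary is flagged.
img = [[10, 10, 200, 200]] * 4
for row in sobel_edges(img):
    print(row)
```

On an FPGA the same stencil computation is replicated across many pixels at once, which is what makes per-frame edge maps feasible at video rates.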

The Origins of Image Enhancement

While real-time enhanced video capability is new, many of the algorithms involved have been routinely used for still image processing in software applications such as VITec Electronic Light Table (ELT) or Adobe Photoshop. These algorithms are computationally intensive, and applications like Photoshop often push the limits of the general-purpose platforms they run on just to post-process a single image. The ability to process FMV in real time requires an extraordinarily fast processor capable of processing multiple input streams simultaneously.

New-generation FPGAs make it possible to meet the demanding performance requirements of enhanced FMV. The advantage of FPGAs over other processors is their unmatched capacity for parallel processing: an FPGA can implement an arbitrary number of data paths and operations, up to the limit of the device's capacity. Larger FPGA devices can perform hundreds or even thousands of operations simultaneously on data of varying widths. Even at lower internal clock frequencies than dedicated processors, FPGAs deliver better performance because of the high degree of parallelization they achieve.

As fast as FPGAs are, achieving the zero latency required for real-time FMV takes some ingenuity. The trick is to calculate the necessary adjustment from one frame but apply it to the following frame, and so on: calculations for the next frame run in parallel with processing of the current frame, so no latency is introduced. Because the differences between successive frames are small, the adjustments remain applicable.
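The frame-delay trick above can be sketched as follows. In this minimal sketch (with an invented brightness-normalizing gain standing in for whatever adjustment an algorithm computes), the parameters derived from each frame are applied to the next frame, so the apply step never waits on analysis.

```python
# Sketch of the one-frame-delay pipeline: analyze frame N, apply to frame N+1.
# The "gain" here is a hypothetical brightness adjustment for illustration.

TARGET_MEAN = 128.0

def compute_gain(frame):
    """Analysis step: derive a brightness gain from this frame."""
    mean = sum(frame) / len(frame)
    return TARGET_MEAN / mean if mean else 1.0

def apply_gain(frame, gain):
    """Zero-latency step: apply a gain computed from the previous frame."""
    return [min(255, round(p * gain)) for p in frame]

def enhance_stream(frames):
    gain = 1.0                       # no adjustment for the very first frame
    for frame in frames:
        yield apply_gain(frame, gain)
        gain = compute_gain(frame)   # in hardware, runs in parallel with apply

# Successive frames differ only slightly, so the previous gain still fits.
stream = [[60, 64, 68], [62, 66, 70], [64, 68, 72]]
for out in enhance_stream(stream):
    print(out)
```

The first frame passes through unadjusted; every later frame is corrected using parameters that are at most one frame old, which is visually indistinguishable when consecutive frames are nearly identical.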

Image processing algorithms are the key technology for improving FMV image clarity and usefulness. The AIA System hosts a collection of image-enhancing algorithms, and a large number of algorithms that run on other platforms can be ported to it as well. Because the system is an open architecture, all interfaces necessary for creating and installing algorithms are published, and new algorithms can be installed without disrupting the system. In addition to image enhancement algorithms, there are rectification algorithms that correct for camera angles; mosaic, or stitching, algorithms that combine multiple images into a single unified picture; and encoding/decoding and compression/decompression algorithms that make image data transmission and storage more efficient.

Inside the AIA System

Designed to meet the needs of deployed military applications, the AIA System brings a new level of performance and capability to real-time FMV. Sophisticated parallel processing of image streams, on-demand video matrix switching, and the ability to present and manipulate multiple streams on a single display provide the high quality and reliable imagery necessary for mission success.

The AIA System consists of separate hot-swappable modules for video input (VIP), video output (VOP), and algorithm processing that plug into a high-speed switching fabric contained in a compact and lightweight docking bay. The docking bay, with a less than 4U form factor, can accommodate combinations of up to 18 modules, and provides dedicated slots for two hot-swappable and redundant load-sharing power supplies. All modules are front-loading, and the entire system, including the fabric switch, can be configured for failover.

Each video input and output module has three DVI-D (HDMI) connectors and supports three input or output image streams, respectively. Each algorithm module supports three input streams and three output streams, which are connected through the switching fabric.

The switching fabric serves as a video matrix switch, allowing any video input to be routed to any video output, including multiple outputs at once. Video inputs can also be routed to any algorithm module, or through any combination of algorithms. Additionally, each algorithm module has two external Ethernet connections for sending video streams to networked viewers, and USB connections are associated with each video connection to enable KVM and touch-panel support.
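The routing behavior described above amounts to a many-to-many connection table. The sketch below models it in software for illustration; the module names (`vip1`, `alg1`, `vop1`) and the API are invented, and the real system implements this switching in hardware.

```python
# Hypothetical model of a video matrix switch: each source (input module
# stream or algorithm output) can fan out to any set of destinations.

class VideoMatrix:
    def __init__(self):
        self.routes = {}                      # source -> set of destinations

    def connect(self, source, dest):
        self.routes.setdefault(source, set()).add(dest)

    def destinations(self, source):
        return sorted(self.routes.get(source, set()))

matrix = VideoMatrix()
# One camera feed fans out two ways: through an enhancement algorithm
# module to a local display, and raw to networked viewers.
matrix.connect("vip1.stream0", "alg1.in0")    # raw feed into algorithm module
matrix.connect("alg1.out0", "vop1.display0")  # enhanced feed to a display
matrix.connect("vip1.stream0", "eth0")        # raw feed to network viewers

print(matrix.destinations("vip1.stream0"))    # ['alg1.in0', 'eth0']
```

Rerouting a stream on demand is then just an update to this table, which is why the displayed views can be reconfigured interactively without interrupting other routes.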

The AIA System is a flexible solution that, once deployed, can be configured and reconfigured to meet changing needs from minute to minute. The views on the various displays can be customized interactively at any time by an individual viewer, and video streams can be rerouted on demand. Once the algorithms are installed in the system, they are, for all practical purposes, part of the hardware. "Virtual buttons" on the displays let operators control and reconfigure the system through direct interaction with that hardware, with no additional software required. Much of the system's flexibility results from its FPGA design, which, in addition to programmability, allows a very high degree of hardware integration.

Live, full-motion video from unmanned aircraft systems has become indispensable in military and intelligence operations, and quality imagery is essential to mission success. Today, inconsistent quality and image degradation remain a problem that demands immediate action.

This article was written by Jack Wade and Pauline Shulman of Z Microsystems, San Diego, CA.


This article first appeared in the September, 2010 issue of Imaging Technology Magazine.
