The rapidly expanding worldwide use of unmanned aerial vehicles (UAVs), or drones, is driving a growing market for specialized imaging technology. According to Chris Yiu, General Manager, High Performance BU at SmartSens Technology (Santa Clara, CA; Shanghai, China), imaging technology aboard drones serves two basic functions: one is optical flow and collision-avoidance sensing; the other is video capture of objects on the ground. With the recent emphasis on self-flying drones, collision avoidance is of major importance.

Figure 1. Left: Turning propeller photographed with a SmartSens global shutter; Right: Turning propeller photographed with a rolling shutter. (Image courtesy of SmartSens Technology)

Collision-Avoidance Imaging on Self-Flying Drones

Collision avoidance for self-flying drones requires very precise and accurate data, and delivering it is complicated for several reasons. The drone has to adapt to its environment to avoid trees or a mountainside, as well as other stationary and flying objects. At one moment it might be flying directly toward the sun; at another, it may be passing in and out of sunlight.

The cameras are typically mounted on the drone's legs, six to eight of them arranged in a circle to protect the craft. They send raw image data to an application processor (AP) aboard the drone, which analyzes the scene and decides how to react. Since this must be done literally "on the fly," the image capture rate is critical for providing real-time information. Even though the processor is mounted on the drone, there is a time delay for the data to travel to it from the cameras, so sensor speed is essential. If the drone is flying at 40 mph toward a mountain or a wall, the system had better respond fast enough to avert disaster.
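To put that in perspective, here is a back-of-the-envelope timing budget. The speeds, ranges, and margins in this sketch are illustrative assumptions, not SmartSens figures; it simply shows how tightly capture, processing, and actuation must fit together at 40 mph.

```python
# Back-of-the-envelope reaction-time budget for a drone closing on an
# obstacle. All numbers here are illustrative assumptions, not vendor specs.

MPH_TO_MPS = 0.44704

speed_mps = 40 * MPH_TO_MPS          # ~17.9 m/s closing speed
detection_range_m = 20.0             # assumed range at which the obstacle is resolved
stopping_margin_m = 5.0              # assumed distance needed to brake or swerve

time_budget_s = (detection_range_m - stopping_margin_m) / speed_mps
print(f"Time budget: {time_budget_s * 1000:.0f} ms")   # ~840 ms

# At 240 fps, each frame takes ~4.2 ms, so image capture is a small slice
# of the budget; data transfer, processing, and actuation consume the rest.
frame_time_ms = 1000 / 240
print(f"Frame time at 240 fps: {frame_time_ms:.1f} ms")
```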

Speed of Response — Global vs Rolling Shutter

One technique for maximizing speed of response is to use a global shutter instead of a rolling shutter. A rolling shutter scans line by line, either vertically or horizontally, so not all parts of the image are recorded at exactly the same instant; there is a time delay from the first pixel exposed to the last. This produces distortion when imaging a moving object, such as a propeller, or when the camera itself is in motion: the propeller blades appear to change shape. A global shutter, by contrast, captures the entire image at the same instant, enabling the sensor to record very clean shapes (Figure 1).
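The geometry behind that distortion is easy to simulate. The short sketch below, using an assumed propeller speed and per-row readout delay (neither taken from any real sensor), renders a spinning blade twice: once with every row sampled at the same instant, and once row by row.

```python
# Minimal sketch of why a rolling shutter bends a spinning propeller while a
# global shutter does not. We sample a rotating blade either all at once
# (global) or row by row with a per-row time offset (rolling). All parameters
# are illustrative assumptions.

import numpy as np

H = W = 200                      # image size in pixels
cx, cy = W // 2, H // 2          # rotation center
rpm = 6000                       # assumed propeller speed
omega = rpm / 60 * 2 * np.pi     # angular velocity in rad/s
row_time = 30e-6                 # assumed per-row readout delay (rolling shutter)

def render(rolling: bool) -> np.ndarray:
    img = np.zeros((H, W), dtype=np.uint8)
    for y in range(H):
        t = y * row_time if rolling else 0.0    # exposure time offset of this row
        angle = omega * t                       # blade orientation at that instant
        for x in range(W):
            # A point lies on the blade if its polar angle, taken modulo pi
            # (the blade has two tips), is close to the blade angle at time t.
            theta = np.arctan2(y - cy, x - cx)
            if abs(((theta - angle + np.pi / 2) % np.pi) - np.pi / 2) < 0.05:
                img[y, x] = 255
    return img

global_img = render(rolling=False)    # straight blade
rolling_img = render(rolling=True)    # blade appears bent into a curve
print("pixels that differ:", int((global_img != rolling_img).sum()))
```

In the rolling case the blade sweeps several radians between the first and last row, so the straight blade is recorded as a curve, just as in the right half of Figure 1.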

Dynamic Range

Drones need clear, unambiguous imaging of objects in order to provide good data for the machine vision algorithm. High dynamic range (HDR) is an important factor in achieving that.

Dynamic range is a measure of a sensor's ability to deal with extremes of contrast. A drone might fly through an extremely dark area, come out of it, and then fly directly toward the sun. A rapid transition like that calls for high dynamic range, typically measured in dB, to accurately capture the natural scene. In the U.S., drones are not allowed to fly in the dark, but in Asia, where drones are used in large numbers, they are. So sensitivity in low-light conditions, such as flying at twilight, is important.
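For reference, dynamic range in dB follows from the ratio of the largest signal a pixel can hold to its noise floor. The sketch below works through the arithmetic with assumed electron counts; the values are not SC031GS specifications.

```python
# Dynamic range in dB, computed from full-well capacity and read noise.
# The 20*log10 convention treats the pixel signal as an amplitude quantity.
# The electron counts below are illustrative assumptions.

import math

full_well_e = 10000   # assumed maximum charge a pixel can hold (electrons)
read_noise_e = 2.0    # assumed noise floor (electrons, RMS)

dr_db = 20 * math.log10(full_well_e / read_noise_e)
print(f"Dynamic range: {dr_db:.1f} dB")   # ~74 dB
```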

Figure 2a. SmartSens SC031GS sensor with HDR off.

Typical HDR technology uses multiple frames to correctly expose extreme dark and bright regions, which the system processor then combines into a single image. The problem with that technique is that it produces distortion, especially when imaging high-speed objects. The time delay between the multiple exposures causes a phase shift, which produces ghosting: overlapping double images. Ghosting introduces errors that make it hard for the machine-vision algorithm to correctly map the object; edges become difficult to detect, and a wall can seem bigger or smaller, or farther or closer, than it actually is. Because of these errors, a drone can make misjudgments leading to a collision. For AI-based image recognition, it is therefore important that the pattern of an object be free of artifacts and ghosting, correctly representing the true edges and shapes of objects (Figures 2a and 2b).

Figure 2b. SmartSens SC031GS sensor with HDR on. (Images courtesy of SmartSens Technology)
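Where the ghost comes from can be seen in a toy one-dimensional case. In the sketch below, which uses arbitrary positions and a naive averaging fusion, an object edge shifts between the long and short exposures, leaving half-intensity echoes on either side of the true edge.

```python
# Toy 1-D illustration of multi-exposure ghosting: an object edge moves
# between the long and short exposures, so the fused HDR line shows the
# edge twice. Positions and the fusion rule are arbitrary assumptions.

import numpy as np

width = 20
long_exp = np.zeros(width);  long_exp[5:10] = 1.0    # object at pixels 5..9
short_exp = np.zeros(width); short_exp[8:13] = 1.0   # object has moved by 3 px

# Naive fusion: average the two exposure-normalized frames.
fused = 0.5 * (long_exp + short_exp)
print(fused)
# Pixels 5..7 and 10..12 come out at half intensity: the "ghost" edges that
# confuse an edge detector. Only pixels 8..9 reflect the object in both frames.
```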

Signal to Noise Ratio

Signal-to-noise ratio (SNR) is the key parameter for evaluating how well a sensor can turn photons into electrons and then generate a detailed image. With a high SNR, a sensor can emphasize signals that carry meaningful data and diminish its response to the noise that generates random errors in the image, producing a more accurate representation of the object being imaged. However, the high sensitivity that enables a sensor to respond in low-light conditions also increases its sensitivity to noise. Unfortunately, the very process by which the pixels convert light into voltage can itself introduce noise.

If a sensor has a good SNR, it can adapt well to different scenes. For example, a dark scene requires the sensor to apply high gain. Gain amplifies the noise along with the signal, so if the SNR is low to begin with, the amplified image is dominated by noise; a sensor with a good SNR has little noise to amplify, and the signal dominates the output.
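A small numerical experiment makes the point. In the sketch below, which uses assumed signal and noise levels, gain scales signal and noise by the same factor, so the SNR coming out of the amplifier is whatever the sensor delivered before it.

```python
# Sketch of why high gain in a dark scene punishes a low-SNR sensor: gain
# scales signal and noise together, so only the pre-gain SNR matters.
# Electron counts and noise levels are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
signal_e = 50.0                       # weak low-light signal (electrons)

for read_noise_e in (2.0, 20.0):      # "good" vs "poor" sensor noise floor
    for gain in (1.0, 16.0):
        out = gain * (signal_e + rng.normal(0.0, read_noise_e, 10000))
        snr_db = 20 * np.log10(out.mean() / out.std())
        print(f"noise={read_noise_e:>4} e-, gain={gain:>4}: SNR = {snr_db:.1f} dB")

# The SNR is (statistically) identical at gain 1 and gain 16: amplification
# brightens the image but cannot recover a signal already buried in noise.
```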

Power Consumption

Reducing power consumption is particularly important for drones, since they are battery-powered. A drone should ideally be able to fly for 45 minutes to an hour on a single charge.

SmartSens Global Shutter CMOS Image Sensor

SmartSens Technology has introduced the SC031GS, a commercial-grade 300,000-pixel global-shutter CMOS image sensor based on back-illuminated (BSI) pixel technology, with an image transfer rate of 240 fps. It is designed to provide the imaging capabilities needed for drone collision-avoidance systems as well as optical flow, 3D depth sensing, gesture recognition, and face detection.

Single Frame HDR

Rather than using time-phased multiple exposures to respond to light and dark areas, the SC031GS uses a single-frame exposure, eliminating the time delays introduced when combining exposures from light and dark regions. “We allow the pixel to have two readings depending on the incoming light level,” said Yiu.

The two-level light sensing system uses dual conversion gain (DCG) switch technology. In high-light conditions, the DCG switch is turned on, connecting a physical capacitor and enabling a low conversion gain mode that can handle a large amount of signal charge. In low-light conditions, the DCG switch is turned off, disconnecting the capacitor and enabling a high conversion gain mode. The lower capacitance results in higher conversion gain, higher sensitivity, and reduced read noise, at the expense of lower maximum charge-handling capacity.
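The trade-off follows directly from the physics of the sense node: conversion gain goes as q/C, while full-well capacity goes as C·V/q. The sketch below plugs in assumed capacitance and voltage-swing values (not SmartSens figures) to show the two modes.

```python
# Sketch of the dual-conversion-gain (DCG) trade-off described above: the
# switchable capacitor changes the sense-node capacitance, which sets both
# conversion gain (q/C) and full-well capacity (C*Vswing/q). Capacitance and
# voltage-swing values are illustrative assumptions, not SmartSens figures.

Q_E = 1.602e-19        # electron charge (coulombs)
V_SWING = 1.0          # assumed usable voltage swing at the sense node (volts)

def pixel_mode(cap_farads: float, label: str) -> None:
    conv_gain_uv_per_e = Q_E / cap_farads * 1e6     # microvolts per electron
    full_well_e = cap_farads * V_SWING / Q_E        # electrons before saturation
    print(f"{label}: {conv_gain_uv_per_e:.0f} uV/e-, "
          f"full well ~{full_well_e:,.0f} e-")

# Bright scene: DCG switch ON adds the extra capacitor -> low conversion
# gain, large charge capacity.
pixel_mode(5e-15, "low-gain  (DCG on) ")

# Dark scene: DCG switch OFF disconnects it -> high conversion gain, higher
# sensitivity and lower effective read noise, but a smaller full well.
pixel_mode(1e-15, "high-gain (DCG off)")
```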

SNR

The sensor uses active pixel sensor (APS) technology with two strategies to maximize the SNR: the physical design of the sensor pixels themselves and the design of the embedded signal-processing circuitry.

The high-sensitivity pixel structure uses backside illumination (BSI) and 3.75 μm pixels. The signal-processing circuitry combines a custom analog design for the basic pixel with digital circuitry for timing, logic, and data transfer.

Power Consumption

The power consumption is held to a maximum of 120 mW by reducing the amount of circuitry used to drive the pixels.
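At that level, the sensor is a small slice of the flight-energy budget. The sketch below compares 120 mW over a 45-minute flight against an assumed 50 Wh battery pack; the battery and sensor-count figures are illustrative.

```python
# Rough energy budget: what a 120 mW sensor costs over a 45-minute flight,
# compared with an assumed drone battery. Battery capacity is illustrative.

sensor_power_w = 0.120
flight_time_h = 0.75

sensor_energy_wh = sensor_power_w * flight_time_h    # 0.09 Wh per sensor
battery_wh = 50.0                                    # assumed pack capacity

for n_sensors in (1, 8):    # e.g., one camera per leg on an eight-leg drone
    share = n_sensors * sensor_energy_wh / battery_wh * 100
    print(f"{n_sensors} sensor(s): {n_sensors * sensor_energy_wh:.2f} Wh "
          f"({share:.2f}% of battery)")
```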

Cost

Although BSI technology is more costly than other approaches, it provides significantly improved performance, which is critical for drone applications.

Advanced Vision for UAVs

To sum it up, key imaging parameters for UAVs include:

  • Response time quick enough for real-time processing.

  • Accurate imaging despite relative motion between the camera and the subject. This can be addressed by high image capture rate, global shutter, and single-frame HDR.

  • Imaging unaffected by rapid and extreme changes in the lighting environment. This requires high dynamic range and a high signal-to-noise ratio.

  • Low power requirement achieved through circuit design that minimizes current drain.

This article was written by Ed Brown, Associate Editor of Photonics & Imaging Technology.