Camera technology keeps improving while costs keep falling. Sophisticated new sensors developed for surveillance applications perform better than ever under unpredictable, dynamic lighting conditions. At the same time, these higher-performance sensors have become inexpensive enough to incorporate into a wide range of consumer devices.

Typical resolution for security video was initially 720 lines (roughly 1 MP), and by 2019 it had slowly graduated to 1080 lines (2 MP). Because 720P and 1080P rely on the same family of backend SoCs for compression and memory, that step did not change the system cost. The next jump in resolution, to four and five megapixels (MP), is targeted at high-quality video applications. SmartSens Technology (Shanghai, China) has developed two new CMOS sensors to fill the gap between these two performance levels. At 3 MP, they provide 50% more pixels than 1080P, in addition to the low-light performance needed for surveillance, and the system cost does not rise as steeply as it would for a 4 or 5 MP sensor. The total cost of ownership stays essentially constant while the resolution increases. SmartSens calls the two new sensors Full HD Pro, distinguishing them from 1080P (HD) to account for the additional 50% of resolution.
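The pixel arithmetic behind the "50% more resolution" claim can be checked directly. A minimal sketch follows; the 2304 x 1296 frame geometry is an assumption (a common 16:9 3 MP mode), not a published SC3320 specification.

```python
def megapixels(width, height):
    """Pixel count of a frame, in megapixels."""
    return width * height / 1e6

mp_1080p = megapixels(1920, 1080)   # ~2.07 MP (actual Full HD pixel count)
mp_3mp = megapixels(2304, 1296)     # ~2.99 MP (assumed 16:9 3 MP mode)

# Nominal claim: 3 MP vs. 2 MP is 50% more pixels.
nominal_gain = 3.0 / 2.0 - 1.0
# Against the true 1080p pixel count, the gain is slightly lower.
actual_gain = mp_3mp / mp_1080p - 1.0
print(f"nominal gain: {nominal_gain:.0%}, actual gain: {actual_gain:.0%}")
```

Measured against the true 1080p pixel count (about 2.07 MP rather than a round 2 MP), the increase works out to roughly 44%, which is why the 50% figure should be read as nominal.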

“One of the benefits of this development is that some of our customers are pairing these sensors with larger field-of-view lenses, so the coverage is wider. Instead of just one person in the camera, you can see two. That additional resolution, when paired with the right lens, could give you the equivalent of 50% more coverage. So, that technology is enabling us to provide additional value to our customers by helping us to better match our technology to their different applications,” said SmartSens CMO Chris Yiu.

Yiu went on to explain the “dynamic line-based stagger timing mode” available in the more sophisticated of the two new 3 MP sensors, the SC3320. This is a means to achieve high dynamic range (HDR) imaging. The human eye can detect a larger range of brightness than an unaided camera. If a scene contains both very bright and very dark areas, a standard camera sensor cannot accurately capture the entire image: either the bright areas appear white and devoid of detail, or the dark areas appear black and devoid of detail. The bright areas require a short exposure time and the dark areas require a long exposure time. The SC3320 copes with this situation by allowing multiple exposure times within a single frame. Figure 1 shows how the technology works in a three-exposure scenario: each frame contains a long exposure, a middle exposure, and a short exposure, in order to respond to a wide brightness range.

Figure 2 - Stagger timing system diagram. (Credit: SmartSens Technology)

Figure 2 shows the interaction between the backend chip and the raw sensor that supports the staggered HDR timing. The scan is controlled by the image signal processor (ISP), which triggers the read points of the sensor to control the three different exposure times. Because this is a line-based control system, the ISP can evaluate each line to determine which of the three exposure times is best. It does that by comparing the lines taken at the three different exposure lengths. It can determine if any of those lines is overexposed, based on the amplitude of the signal in comparison with true white.
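The per-line decision the ISP makes can be sketched in a few lines of code. This is a hypothetical illustration of the selection logic, not SmartSens firmware: the 8-bit white level and the 2% clipping threshold are assumptions.

```python
# Hypothetical sketch of line-based exposure selection: for each scan
# line, keep the longest exposure that is not clipped against "true
# white" (the sensor's saturation value). Longer exposures are
# preferred because they carry more signal in dark regions.

WHITE_LEVEL = 255      # saturation value for an 8-bit pixel (assumed)
CLIP_FRACTION = 0.02   # "overexposed" if >2% of pixels clip (assumed)

def is_overexposed(line):
    """True if too many pixels in the line sit at the white level."""
    clipped = sum(1 for p in line if p >= WHITE_LEVEL)
    return clipped / len(line) > CLIP_FRACTION

def select_exposure(long_line, mid_line, short_line):
    """Return (name, line) for the longest non-clipped exposure."""
    for name, line in (("long", long_line), ("mid", mid_line),
                       ("short", short_line)):
        if not is_overexposed(line):
            return name, line
    return "short", short_line   # everything clips: least-bad option

# Bright scene: the long exposure clips, so the middle one is chosen.
long_line = [255] * 50 + [200] * 50
mid_line = [180] * 100
short_line = [60] * 100
print(select_exposure(long_line, mid_line, short_line)[0])  # prints "mid"
```

In a dark scene, where even the long exposure stays below the white level, the same logic keeps the long exposure for its better signal-to-noise ratio.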

An I2C bus carries the control signals to the sensor, and the sensor outputs image data over a MIPI interface. DDR RAM stores the data for processing.
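A typical pattern for such an I2C control path is to split each multi-byte exposure value across 8-bit registers. The sketch below is a hedged illustration of that pattern only: the register addresses are placeholders, not the SC3320's actual register map, which would come from its datasheet.

```python
# Hypothetical packing of three staggered exposure times (in row
# periods) into 8-bit I2C register writes. Addresses are placeholders,
# NOT real SC3320 registers.

EXPOSURE_REGS = {            # assumed (high-byte, low-byte) addresses
    "long": (0x3E00, 0x3E01),
    "mid": (0x3E04, 0x3E05),
    "short": (0x3E08, 0x3E09),
}

def exposure_writes(settings):
    """Turn {name: exposure_in_lines} into (register, byte) writes."""
    writes = []
    for name, lines in settings.items():
        hi_reg, lo_reg = EXPOSURE_REGS[name]
        writes.append((hi_reg, (lines >> 8) & 0xFF))   # high byte
        writes.append((lo_reg, lines & 0xFF))          # low byte
    return writes

# Program three exposure lengths for one staggered-HDR frame.
writes = exposure_writes({"long": 0x04D0, "mid": 0x0138, "short": 0x004E})
```

On real hardware, each `(register, byte)` pair would become one I2C write transaction issued by the backend chip.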

Figure 3 illustrates the effect of applying the stagger timing mode. It shows images taken with three different exposure times, along with the final corrected image. The short exposure produces a good image of the sky, while everything else is too dark; the middle and long exposures capture the bridge. The final image, on the right, is assembled with a virtual scissor: portions of the three left-hand images are cut out and stitched together.
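The "virtual scissor" step can be sketched as per-pixel selection followed by normalization. This is a minimal illustration, assuming an 8-bit clip level and made-up exposure times; a real ISP would also blend across seams rather than cut hard.

```python
# Minimal sketch of exposure stitching: for each pixel, take the
# longest exposure that is not clipped, then divide by its exposure
# time so all pixels land on a common relative-radiance scale.

CLIP = 250   # treat 8-bit values at/above this as blown out (assumed)

def merge_hdr(exposures):
    """exposures: list of (exposure_time, pixels), longest first."""
    merged = []
    for i in range(len(exposures[0][1])):
        for t, pixels in exposures:       # longest exposure first
            if pixels[i] < CLIP:          # usable: not clipped
                merged.append(pixels[i] / t)
                break
        else:                             # clipped in every exposure:
            t, pixels = exposures[-1]     # fall back to the shortest
            merged.append(pixels[i] / t)
    return merged

# Two pixels: a sky pixel that clips in the long and middle exposures,
# and a bridge pixel that is well exposed in the long one.
frames = [(8.0, [255, 160]),   # long exposure
          (2.0, [255, 40]),    # middle exposure
          (0.5, [200, 10])]    # short exposure
print(merge_hdr(frames))       # -> [400.0, 20.0]
```

The sky pixel is taken from the short exposure and the bridge pixel from the long one, mirroring how Figure 3's right-hand image combines regions from the three captures.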

Figure 3 - Assembling a corrected image. (Credit: SmartSens Technology)

The whole process of assembling the final image runs at 60 frames per second (fps), more than double the maximum rate at which flicker is detectable by the human eye (roughly 26 to 30 fps). The maximum frame rate is limited by the economics of the hardware expansion that would be needed to go beyond 60 fps. One limitation is the size of the RAM shared by the whole system, which caps how many frames can be stored temporarily: the memory has to be big enough to hold three to five times the data needed for a 30 fps stream. Outputting faster would also raise the cost, because the processing speed of the ISP would have to increase to handle that bandwidth.
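A back-of-envelope calculation shows why the buffer matters. The 12-bit raw depth below is an assumption; the three-to-five-frame range comes from the article.

```python
# Rough buffer sizing for staggered HDR at 3 MP. Bit depth is an
# assumed value; the point is that RAM must hold several full frames
# (the article cites 3 to 5) at once, and going past 60 fps would
# require draining that buffer proportionally faster.

PIXELS = 3_000_000        # 3 MP sensor
BITS_PER_PIXEL = 12       # assumed raw bit depth
FRAMES_BUFFERED = 5       # upper end of the 3-to-5-frame range

bytes_per_frame = PIXELS * BITS_PER_PIXEL // 8
buffer_bytes = bytes_per_frame * FRAMES_BUFFERED
print(f"{bytes_per_frame / 1e6:.1f} MB per frame, "
      f"{buffer_bytes / 1e6:.1f} MB buffered")   # 4.5 MB, 22.5 MB
```

Doubling the output rate to 120 fps would roughly double both the buffer-drain bandwidth and the ISP processing speed required, which is the cost barrier the article describes.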

Everything has a drawback, however. The HDR mode produces a somewhat washed-out effect: the image is not as crisp as it would be with a standard line-scan “linear” shutter. So the SC3320 incorporates a linear scan mode in addition to HDR. If a scene lacks both brightly lit and dark regions, that is, if it is a normal image, the ISP senses that and dynamically switches from stagger timing to linear mode. The major difference in performance can be understood by comparing dynamic range: 78 dB for a linear scan versus up to 100 dB for HDR.
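Those decibel figures translate into scene-contrast ratios. For image sensors, dynamic range in dB is conventionally 20·log10(max/min signal), so each extra 20 dB is another factor of ten in contrast:

```python
import math  # not strictly needed here, but handy for the inverse (20*log10)

def db_to_ratio(db):
    """Contrast ratio corresponding to a dynamic range in dB."""
    return 10 ** (db / 20)

linear_ratio = db_to_ratio(78)    # linear scan mode
hdr_ratio = db_to_ratio(100)      # staggered HDR mode
print(f"78 dB  ~ {linear_ratio:,.0f}:1")   # roughly 7,943:1
print(f"100 dB = {hdr_ratio:,.0f}:1")      # 100,000:1
```

In other words, the jump from 78 dB to 100 dB extends the capturable bright-to-dark ratio by more than a factor of twelve.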

What’s Ahead

The applications for HDR video cameras are multiplying beyond surveillance, and the SC3320 will support that trend by providing enhanced performance at costs compatible with the consumer camera market. People will want video cameras, video streaming, and video capture at higher performance levels than their smartphones can achieve. A phone can do the basics: capturing images or video chatting. But now people want streaming video: reporters are streaming, perhaps with a big light in front of them; people are using backdrop devices to mask their home’s unmade bed. “Then I’m thinking, we might have more devices that might have cameras on them. Maybe I want to have a device in the kitchen when I’m doing my cooking, and I want to share with people. The phone can only do that in a limited way. There are many different ideas inspired by the forced isolation during the COVID-19 pandemic. I’m seeing a lot more gardening, for example. There are devices that can monitor soil moisture or sunlight. The commercial side has been using drones to monitor soil state and nutrition. Maybe there can be a smaller device to do that for your home garden,” said Yiu. The availability of reasonably priced HDR sensors will not just enable more applications; it will inspire the development of new ones that haven’t been thought of yet.

This article was written by Ed Brown, Associate Editor of Photonics & Imaging Technology.

This article first appeared in the July 2020 issue of Photonics & Imaging Technology Magazine.
