There is a widely acknowledged need for metrics to quantify the performance of video systems. Existing metrics are either difficult to measure or largely theoretical; they do not reflect the full range of effects to which video may be subject, and they do not relate easily to video performance in real-world tasks. The empirical Video Acuity metric is simple to measure and relates directly to task performance. Video acuity is determined by the smallest letters that can be automatically identified using a video system, and it is expressed most conveniently in letters per degree of visual angle.
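Because the unit may be unfamiliar, a brief worked example may help: acuity in letters per degree is simply the reciprocal of the angular size of one letter. The following sketch (the function name and sample values are illustrative, not from the source) computes it from a letter's physical height and the viewing distance.

    import math

    def letters_per_degree(letter_height: float, viewing_distance: float) -> float:
        """Acuity in letters per degree of visual angle.

        letter_height and viewing_distance must share the same units
        (e.g., centimeters). One letter subtends
        theta = 2 * atan(h / (2d)) degrees; acuity is the number of
        such letters that fit in one degree, i.e., 1 / theta.
        """
        theta_deg = math.degrees(2.0 * math.atan(letter_height / (2.0 * viewing_distance)))
        return 1.0 / theta_deg

    # Example: a 0.873 cm letter viewed from 600 cm subtends about
    # 5 arcminutes (the size of a standard 20/20 Snellen letter),
    # giving roughly 12 letters per degree.
    print(letters_per_degree(0.873, 600.0))  # ~ 12.0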

Video systems are used broadly in public safety and range from very simple, inexpensive systems to very complex, powerful, and expensive ones. They are deployed by fire departments, police departments, homeland security agencies, and a wide variety of commercial entities, and they are used for tasks including detection of smoke and fire, recognition of weapons, face identification, and event perception. In all of these contexts, the quality of the video system affects performance in the visual task. The Video Acuity metric makes it possible to match the quality of a system to the demands of its tasks.

The Video Acuity metric is designed to provide a unique and meaningful measurement of the quality of a video system. The automated system for measuring video acuity is based on a model of human letter recognition. The video system under measurement comprises a camera with its associated optics and sensor; processing elements, including digital compression; transmission over an electronic network; and an electronic display viewed by a human observer. The quality of such a system affects the ability of the viewer to perform public safety tasks, such as reading automobile license plates, recognizing faces, and recognizing handheld weapons.
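As an illustration of how such an automated measurement might proceed, the sketch below runs an adaptive 2-down/1-up staircase on letter size. It is only a schematic under stated assumptions: the trial function here simulates recognition with an assumed psychometric function, whereas an actual implementation would render a letter, pass it through the video system under test, and classify the displayed result with the letter-recognition model. All names and parameter values are hypothetical.

    import random

    LETTERS = "CDHKNORSVZ"  # the ten Sloan letters used on acuity charts

    def simulated_trial(letter_size_deg: float, threshold_deg: float = 0.2) -> bool:
        """Stand-in for one trial. Probability of correct identification
        falls from near 1 toward chance (1/10) as letters shrink below an
        assumed threshold size; returns True on a correct response."""
        chance = 1.0 / len(LETTERS)
        p = chance + (1.0 - chance) / (1.0 + (threshold_deg / letter_size_deg) ** 4)
        return random.random() < p

    def staircase_threshold(start_size_deg=1.0, step=0.8, trials=200):
        """2-down/1-up staircase: shrink the letter after two consecutive
        correct identifications, enlarge it after any error. The size at
        which it oscillates estimates the threshold letter size."""
        size, streak, reversals, last_dir = start_size_deg, 0, [], 0
        for _ in range(trials):
            if simulated_trial(size):
                streak += 1
                if streak == 2:
                    streak = 0
                    if last_dir == +1:      # direction flipped: record reversal
                        reversals.append(size)
                    size *= step
                    last_dir = -1
            else:
                streak = 0
                if last_dir == -1:
                    reversals.append(size)
                size /= step
                last_dir = +1
        tail = reversals[-6:] or [size]     # average the last few reversals
        return sum(tail) / len(tail)

    threshold = staircase_threshold()
    print(f"threshold letter size ~ {threshold:.3f} deg; "
          f"video acuity ~ {1.0 / threshold:.1f} letters/degree")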

The Video Acuity metric can accurately measure the effects of sampling, blur, noise, quantization, compression, geometric distortion, and other degradations. This is because it does not rely on any particular theoretical model of imaging, but simply measures performance in a task that incorporates essential aspects of human use of video, notably recognition of patterns and objects. Because the metric is structurally identical to human visual acuity, the numbers it yields have immediate and concrete meaning. Furthermore, they can be related directly to the human visual acuity needed to perform a given task.
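To illustrate that relationship, values in letters per degree can be converted to the familiar Snellen scale. The sketch below uses the standard optometric convention that a 20/20 letter subtends 5 arcminutes, so 20/X corresponds to 240/X letters per degree; the conversion factor comes from that convention, not from the source document.

    def snellen_to_letters_per_degree(denominator: float) -> float:
        """Snellen 20/X acuity: a letter subtends 5 * (X / 20) arcminutes,
        so acuity = 60 / (5 * X / 20) = 240 / X letters per degree."""
        return 240.0 / denominator

    def letters_per_degree_to_snellen(lpd: float) -> float:
        """Inverse conversion: returns X in the Snellen fraction 20/X."""
        return 240.0 / lpd

    print(snellen_to_letters_per_degree(20))   # 12.0 letters/degree at 20/20
    print(letters_per_degree_to_snellen(6.0))  # 40.0, i.e., 20/40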

This work was done by Andrew Watson of Ames Research Center. NASA invites companies to inquire about partnering opportunities and licensing this patented technology. Contact the Ames Technology Partnerships Office at 1-855-627-2249 or ARCTechTransfer@mail.nasa.gov. Refer to ARC-16661-1.