The use of geo-referenced imagery across the Internet is becoming prevalent thanks to the development of Web-based location servers such as Google Earth, TerraServer, and Yahoo Local. But users of these services are continually asking for more timely high-resolution data. Civil users such as firefighters, search and rescue teams, law enforcement, 911 emergency operations, border patrol, traffic monitoring operations, and geological survey crews, as well as the military, could benefit from a near-real-time, Web-based geospatial visualization capability.

To address this need for real-time geospatial awareness, NAVSYS Corp. — with funding from the National Geospatial-Intelligence Agency, Office of Naval Research, and the United States Marine Corps — has developed the GI-Eye product, which provides the capability to generate precision mensurated imagery directly on the aircraft collecting the data. The GI-Eye system integrates GPS (Global Positioning System), inertial, and digital camera data to provide autonomous registration capability for imagery without requiring access to any Ground Control Points (GCPs). This provides real-time, high-quality registered imagery at a 1-Hz rate.

Figure 1. The GI-Eye ARES UAV mechanical assembly. The camera and IMU can be either fixed in the aircraft, or mounted inside a gimbal to allow geo-pointing to targets of interest.

The GI-Eye system has been combined with an Enterprise Server, termed the GeoReferenced Information Manager (GRIM), which uses this imagery to auto-generate mosaics as the data is being collected. With this approach, a near-real-time geospatial view of the environment can be generated in a format that can be viewed using current Web-based geospatial visualization tools.

GI-Eye System Design

The GI-Eye product is offered to sensor manufacturers and systems integrators to provide an embedded precision auto-registration capability for electro-optic (EO), infrared (IR), or other focal plane array (FPA)-type sensors. It has been integrated with a variety of digital cameras and sensors.

GI-Eye provides the capability to precisely time-mark each camera image and uses NAVSYS’ proprietary InterNav kinematic alignment algorithm to measure the precise position and attitude of the camera from GPS and inertial sensor data. The GI-Eye auto-registration capability provides the location and pointing angle of the sensor with each image, along with sensor calibration data from which the coordinates of each pixel in the image can be derived. This information can be combined with a Digital Elevation Model (DEM) to extract the ground coordinates of individual pixels in each image. It can also be used to derive a 3D DEM from multiple images through photogrammetry.
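The pixel-to-ground mapping described above amounts to a ray intersection: the GPS/inertial solution gives the camera position and attitude, the calibration gives each pixel's line of sight, and the DEM supplies the terrain surface the ray strikes. The following sketch illustrates the idea under simplifying assumptions (a local East-North-Up frame, an ideal pinhole camera, and flat terrain at a constant elevation standing in for the DEM); the function and parameter names are illustrative, not part of the GI-Eye software.

```python
import numpy as np

def pixel_to_ground(pixel, cam_pos_enu, R_cam_to_enu, focal_px, principal_pt, ground_elev=0.0):
    """Project one image pixel onto flat terrain at a constant elevation.

    pixel        -- (u, v) pixel coordinates
    cam_pos_enu  -- camera position in a local East-North-Up frame, meters
    R_cam_to_enu -- 3x3 rotation from camera frame to ENU (from the GPS/INS attitude)
    focal_px     -- focal length expressed in pixels
    principal_pt -- (cx, cy) principal point in pixels
    ground_elev  -- terrain elevation (Up coordinate) of the flat "DEM"
    """
    u, v = pixel
    cx, cy = principal_pt
    # Line-of-sight ray in the camera frame (z is the boresight axis).
    ray_cam = np.array([u - cx, v - cy, focal_px], dtype=float)
    ray_enu = R_cam_to_enu @ ray_cam
    dz = ray_enu[2]
    if dz >= 0:
        raise ValueError("Ray does not point toward the ground")
    # Intersect the ray with the horizontal plane Up = ground_elev.
    t = (ground_elev - cam_pos_enu[2]) / dz
    return cam_pos_enu + t * ray_enu

# Example: nadir-looking camera 300 m above flat terrain.
R_nadir = np.array([[1, 0, 0],
                    [0, -1, 0],
                    [0, 0, -1]], dtype=float)   # camera boresight points straight down
ground_pt = pixel_to_ground(pixel=(640, 512),
                            cam_pos_enu=np.array([0.0, 0.0, 300.0]),
                            R_cam_to_enu=R_nadir,
                            focal_px=1500.0, principal_pt=(640, 512))
print(ground_pt)   # the principal-point pixel maps to the point directly below the camera
```

A production implementation would march the ray against the full DEM grid rather than a single plane, but the geometry is the same.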

A self-calibration capability has been designed into the GI-Eye system that allows for estimation of camera misalignment, focal length, and lens distortion parameters. This allows geo-registration accuracies of 1-2 meters to be provided when flying at an altitude of 1,000 feet without requiring the use of any ground truth.
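The article does not state which calibration model is used; a common choice covering the parameters listed (focal length, boresight misalignment, radial lens distortion) is the Brown distortion model. The sketch below shows, with hypothetical coefficient values, how estimated radial-distortion terms would be removed from a measured pixel before it is geolocated.

```python
import numpy as np

def undistort_pixel(u, v, focal_px, cx, cy, k1, k2):
    """Remove radial lens distortion (Brown model, two coefficients).

    (u, v)   -- measured pixel
    focal_px -- calibrated focal length in pixels
    (cx, cy) -- principal point
    k1, k2   -- estimated radial distortion coefficients
    """
    # Normalized image coordinates of the distorted pixel.
    xd = (u - cx) / focal_px
    yd = (v - cy) / focal_px
    # Approximate inverse of the distortion, evaluated at the distorted radius
    # (adequate for mild distortion; an exact inverse would iterate).
    r2 = xd * xd + yd * yd
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    xu, yu = xd / factor, yd / factor
    return cx + focal_px * xu, cy + focal_px * yu

# Hypothetical calibration values for illustration only.
print(undistort_pixel(1200.0, 950.0, focal_px=1500.0, cx=640.0, cy=512.0, k1=-0.12, k2=0.02))
```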

UAV Payload

The GI-Eye has also been configured for use in an unmanned air vehicle (UAV) payload. The approach is to use a modular design that enables sensor upgrades over time with only software configuration changes. The GI-Eye system comprises the digital camera, power converter board, IMU interface board, single-board computer (SBC), and the BAE Multisensor Inertial Measurement Unit (MIMU).

The hard drive is positioned under the single-board computer. Figure 1 shows these components assembled in the ARES UAV payload. ARES is a small UAV developed by the Research and Engineering Center for Unmanned Vehicles (RECUV) at the University of Colorado at Boulder for use in flight testing advanced unmanned air system concepts. The camera and IMU can be either fixed in the aircraft or mounted inside a gimbal to allow geo-pointing to targets of interest.

Figure 2. Users can access the GRIM viewing and targeting tools through a Web browser for real-time access to the geo-registered imagery.

The digital camera uses a high-resolution color CMOS video sensor with a maximum resolution of 1280 × 1024 at 15 frames per second. The standard USB 2.0 interface is capable of transferring data at 480 Mbps and includes a built-in frame buffer to prevent data loss. The camera is plug-and-play and supports Windows 2000/XP. It accepts a C-mount lens and has a ½" optical format. Flexibility in lens selection is a key aspect of the payload design because the lens characteristics define a number of performance attributes of the system. In operational scenarios where long-distance target identification is important, a lens with a narrow field of view is required. If significant overlap between images is important for image tracking or mosaic generation, a lens with a wider field of view may be required.
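The lens trade-off can be made concrete with the basic pinhole relations: field of view follows from focal length and sensor width, and ground sample distance from pixel pitch, focal length, and altitude. The sketch below compares a wide and a narrow lens; the pixel pitch, focal lengths, and altitude are assumed values for illustration, not GI-Eye specifications.

```python
import math

def fov_and_gsd(focal_mm, pixel_pitch_um, width_px, altitude_m):
    """Horizontal field of view and nadir ground sample distance for a pinhole camera."""
    sensor_width_mm = width_px * pixel_pitch_um / 1000.0
    fov_deg = 2.0 * math.degrees(math.atan(sensor_width_mm / (2.0 * focal_mm)))
    # At nadir, one pixel spans (pixel size / focal length) * altitude on the ground.
    gsd_m = (pixel_pitch_um * 1e-6 / (focal_mm * 1e-3)) * altitude_m
    return fov_deg, gsd_m

# Compare a wide lens (image overlap and mosaicking) with a narrow lens
# (long-range identification) for a 1280-pixel-wide sensor at an assumed 300 m altitude.
for focal in (8.0, 50.0):
    fov, gsd = fov_and_gsd(focal_mm=focal, pixel_pitch_um=5.0, width_px=1280, altitude_m=300.0)
    print(f"{focal:5.1f} mm lens: {fov:5.1f} deg FOV, {gsd * 100:.1f} cm GSD")
```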

The SBC is the core processing and control element of the payload. It is a Pentium-M CPU-based computer in a PC/104-Plus form factor, which provides a relatively high-performance processing platform that is lightweight and consumes little power.

The companion product to the GI-Eye is the GeoReferenced Information Manager. GRIM is based on an Oracle Application Server architecture. It provides tools to synchronize data between the UAV payload and the GRIM server, and to manage, intelligently search, and retrieve the image data. GRIM leverages the GI-Eye metadata, which provides the precise location and attitude of the sensor images, to simplify and streamline feature extraction from the images. The precise sensor metadata also eliminates the need for expensive and time-consuming image processing when generating products such as mosaics or digital elevation models.

GRIM adopts an Enterprise Architecture to manage the GI-Eye information. The Oracle database connection is used to synchronize the GI-Eye metadata and image thumbnails between the Oracle Express database in the GI-Eye and the GRIM database. This allows GRIM to use Web Services to prioritize the full image data for transfer across the downlink using FTP. Tools are included in the GRIM server to allow for location-based search and retrieval of the GI-Eye sensor data. Users can access the GRIM viewing and targeting tools through a Web browser for real-time access to the geo-registered imagery (see Figure 2). The user can also use GRIM to automatically select multiple images over a target. A Web-based user interface is provided to allow the user to select the target pixels in each image. The GRIM server will then automatically calculate the 3D target coordinates and error ellipse from this data and display it to the user.
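The article does not detail the triangulation itself; given the per-image camera positions and line-of-sight directions that the GI-Eye metadata provides, a standard approach is a least-squares intersection of the selected pixel rays. The sketch below shows that core step with made-up data; the error-ellipse (covariance) computation GRIM also reports is omitted, and all names are hypothetical.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares point closest to a set of 3D rays.

    origins    -- (N, 3) camera positions, one per image
    directions -- (N, 3) line-of-sight vectors toward the selected pixel in each image
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two cameras observing a target near (10, 20, 0) from different positions (made-up data).
origins = np.array([[0.0, 0.0, 300.0], [200.0, 0.0, 300.0]])
target = np.array([10.0, 20.0, 0.0])
directions = target - origins
print(intersect_rays(origins, directions))   # ~ [10, 20, 0]
```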

The GRIM product has been designed to optionally include an auto-mosaicking function that can take the down-linked GI-Eye images and create a mosaic in near real time. This allows the downlinked imagery to be displayed as a registered “overhead” image using existing tools such as FalconView or Google Earth.
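The mosaic delivery format is not specified in the article; the simplest way to drape a registered overhead image in Google Earth is a KML GroundOverlay whose bounding box comes from the geo-registration metadata. The sketch below writes such a file; the file names and coordinates are placeholders.

```python
def write_ground_overlay(kml_path, image_href, north, south, east, west, name="UAV mosaic"):
    """Write a minimal KML GroundOverlay that drapes a registered mosaic in Google Earth."""
    kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{name}</name>
    <Icon><href>{image_href}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>
"""
    with open(kml_path, "w") as f:
        f.write(kml)

# Placeholder bounding box (decimal degrees) for a mosaic image near Colorado Springs.
write_ground_overlay("mosaic.kml", "mosaic.png",
                     north=38.85, south=38.80, east=-104.75, west=-104.82)
```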

GRIM Features and Benefits

The GRIM system’s Oracle Application Server incorporates spatial processing in accordance with the open geospatial standards developed by the Open Geospatial Consortium (OGC). This allows the GRIM imagery products to be integrated with Web geospatial tools that are also OGC-compliant.
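Which OGC service interfaces GRIM exposes is not stated; the most widely used one for serving registered imagery to OGC-compliant clients is the Web Map Service (WMS). The sketch below assembles a standard WMS 1.1.1 GetMap request; the server URL and layer name are placeholders.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=1024, height=768):
    """Build a standard OGC WMS 1.1.1 GetMap request for a registered-imagery layer.

    bbox -- (min_lon, min_lat, max_lon, max_lat) in EPSG:4326
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return f"{base_url}?{urlencode(params)}"

# Placeholder server and layer; any OGC-compliant viewer could consume this URL.
print(wms_getmap_url("http://grim.example.com/wms", "uav_mosaic",
                     bbox=(-104.82, 38.80, -104.75, 38.85)))
```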

Using the GI-Eye and GRIM technology, it would be possible to generate, in near real time, a continually updated registered mosaic providing an overhead view of the area covered. A network of low-cost persistent UAVs with GI-Eye payloads could be used to collect real-time registered imagery, which is then down-linked to a central GRIM server. The imagery can be processed in near real time to create a registered overhead view in a format that location servers can use to provide a current view of the area covered by the UAVs.

Near-real-time dissemination of geo-referenced imagery by a location server can benefit a number of military and civilian applications. The military would benefit from having access to imagery for situational awareness in a format that ground troops can easily and readily access using existing Web-based viewing tools. For civil applications, the same type of situational awareness would benefit disaster recovery efforts. For example, in the event of a hurricane, the ability to provide responders with precise geo-registered positions of victims, along with accurate images depicting their condition and environment, could save lives. Forest fire hot zones could be accurately identified and quickly located for special treatment. Homeland security operations, police standoffs, and hostage situations could be managed more efficiently, and with better outcomes, because real-time precision imagery precisely locates all parties.

This article was written by Alison Brown, Chairman and Chief Visionary Officer; Heather Holland, Software Engineer; and Yan Lu, Research Engineer, at NAVSYS Corporation, Colorado Springs, CO.



This article first appeared in the September 2008 issue of Imaging Technology Magazine.
