Disaster relief workers, border patrol officers, wildfire fighters, and many others need up-to-date information about what they will see when they enter an area, not just sketchy topographic maps that may be years out of date. For them, things change in a hurry, and high-resolution details may be critical to their mission success.

Figure 1. PeARL point-cloud image of a new lava dome growing in the crater left by the 1980 Mount St. Helens volcanic eruption. (Urban Robotics)
Urban Robotics (UR) has taken advantage of advanced machine-vision camera technology to develop a system, PeARL, that allows customers to map terrain in 3D as it is, rather than as it was, and have a high-resolution, GIS-compatible digital data set in hand within 24 hours rather than days or weeks. The system uses an array of cameras mounted on an aircraft to take a set of 2D images from different viewpoints. Advanced computer-vision algorithms running across hundreds of computer processors then mathematically extract the height value of every pixel, creating a rich, dense 3D representation of the ground. The result is a realistic 3D map of the terrain that can be viewed through any of a number of GIS viewers. The system makes it easy for end users to navigate through the terrain rapidly and with foreknowledge of what will be encountered.
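PeARL's production algorithms are proprietary and work across many overlapping views at once, but the core geometric idea can be illustrated with the classic two-view stereo relationship: a point's apparent shift (parallax) between two images taken from different positions reveals its distance. The function and all numbers below are illustrative assumptions, not PeARL internals.

```python
# Minimal sketch of the stereo-photogrammetry geometry behind per-pixel
# height extraction. PeARL's production algorithms are proprietary and
# multi-view; this two-view relation and all numbers are illustrative.

def height_from_parallax(flying_height_m, baseline_m, focal_px, disparity_px):
    """Estimate terrain height for one pixel from two-view parallax.

    flying_height_m : aircraft altitude above the reference datum (m)
    baseline_m      : distance flown between the two exposures (m)
    focal_px        : camera focal length expressed in pixels
    disparity_px    : shift of the same ground point between the images
    """
    # Classic stereo relation: depth = baseline * focal / disparity.
    depth_m = baseline_m * focal_px / disparity_px
    # Terrain height is flying height minus the measured depth.
    return flying_height_m - depth_m

# Example: from 3,000 m altitude with a 150 m baseline and a 10,000 px
# focal length, a 510.2 px disparity implies ground ~60 m above the datum.
print(round(height_from_parallax(3000.0, 150.0, 10000.0, 510.2), 1))  # 60.0
```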

For example, the U.S. Border Patrol regularly maps patrol areas to provide pilots with accurate, high-resolution maps of terrain, especially helicopter landing zones (HLZs). Combined with onboard GPS, the maps allow a pilot in trouble to call up the 3D map, locate the nearest HLZ, and vector to it in real time, soon enough to turn a crash scenario into a controlled emergency landing.
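A hypothetical sketch of that lookup: given the helicopter's GPS fix and a list of pre-mapped HLZ coordinates, the nearest zone falls out of a standard great-circle distance calculation. The HLZ names, coordinates, and function names here are invented for illustration; only the haversine formula is standard.

```python
import math

# Hypothetical sketch of the nearest-HLZ lookup: given the helicopter's GPS
# fix, pick the closest pre-mapped landing zone by great-circle distance.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_hlz(fix, hlzs):
    """Return the HLZ record closest to the current GPS fix (lat, lon)."""
    lat, lon = fix
    return min(hlzs, key=lambda h: haversine_km(lat, lon, h[1], h[2]))

hlzs = [("HLZ-1", 31.77, -106.49), ("HLZ-2", 31.90, -106.60)]
print(nearest_hlz((31.80, -106.52), hlzs))  # ('HLZ-1', 31.77, -106.49)
```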

Reconfigurable depending on mission needs, the system starts with an array of up to twelve high-speed, high-resolution machine-vision cameras mounted on an aircraft. A fairly typical arrangement might be eight Imperx smart cameras, each with 29-megapixel resolution, with the array capturing 1-1.5 Gpx/sec of image data. The cameras might, for example, be arranged in a 2 ft × 3 ft array (Figure 2) consisting of two rows mounted in a small aircraft, such as a King Air. Such aircraft might have room for a dozen passengers in airliner configuration.

For a PeARL 3D mapping application, however, the passenger seats would be removed to make room for various kinds of sensing equipment: multiple dual six-core data servers mounted in computer blade cabinets, packing 12 cores per rack unit (U). These servers capture the incoming image data from the PeARL cameras and store it in large solid-state memories.
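A back-of-envelope check shows why that airborne storage must be substantial. The arithmetic below assumes an 8-bit pixel depth and the 2.4 fps maximum frame rate of the cameras described later; the values are assumptions, not UR specifications.

```python
# Back-of-envelope check (assumed values) on capture rate and storage:
# eight 29 Mpx cameras at 2.4 fps with an assumed 8-bit (1 byte) pixel depth.

cameras = 8
px_per_frame = 29e6
fps = 2.4
bytes_per_px = 1

rate_px = cameras * px_per_frame * fps   # pixels per second
rate_bytes = rate_px * bytes_per_px      # bytes per second

print(f"array rate: {rate_px / 1e9:.2f} Gpx/sec")             # ~0.56 Gpx/sec
print(f"4-hour take: {rate_bytes * 4 * 3600 / 1e12:.1f} TB")  # ~8 TB
# Larger or faster configurations push the rate toward the 1-1.5 Gpx/sec
# figure quoted above; either way, a day's take lands in the 1-10 TB range
# cited later for download.
```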

How the camera arrays are set up depends on mission goals. For missions where large-area coverage is paramount, the cameras would be set up with little overlap of their fields of view (FOV). Such an array could cover up to 600 square miles at 3-inch resolution in one data set!

When 3D information is paramount, however, the cameras are set up with much more overlap, as shown in Figure 3, reducing the swath covered in each pass and hence the total area covered. More typical of these missions is 200-300 square miles of coverage at 6-8 inch resolution.
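The trade-off between the two modes can be made concrete with a little arithmetic. The sketch below uses assumed values, with 7 inches taken as a representative mid-range ground sample distance (GSD) for the 3D mode.

```python
# Rough arithmetic (assumed values) relating coverage area and ground
# sample distance (GSD) to output pixels, for the two collection modes.

INCHES_PER_MILE = 63360

def output_gigapixels(area_sq_mi, gsd_inches):
    """Pixels required to cover an area at a given ground sample distance."""
    px_per_mile = INCHES_PER_MILE / gsd_inches
    return area_sq_mi * px_per_mile ** 2 / 1e9

# Wide-area mode: 600 sq mi at 3-inch GSD -> ~268 Gpx in the final mosaic.
print(f"{output_gigapixels(600, 3):.0f} Gpx")
# 3D mode: 250 sq mi at a representative 7-inch GSD -> ~20 Gpx of output,
# though the heavy frame overlap means each ground point is imaged many
# times, so the raw take is several times larger.
print(f"{output_gigapixels(250, 7):.0f} Gpx")
```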

Ideal cameras combine high resolution, moderate frame rate, and rapid image downloading over a network. One such camera is the Imperx B6620, which comes in Gigabit Ethernet (Gig-E) and Camera Link versions. Both have 29-Mpx image sensors with 6,576 × 4,384 resolution, a frame rate of up to 2.4 fps (not video rate, but certainly adequate for the application), 8-, 10-, 12-, or 24-bit video output formats, and a mechanical-shock rating of up to 10 g. The camera’s smart video control simplifies automatic control of multiple cameras.
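As a quick sanity check on the Gig-E version, the published resolution and frame rate fit within a Gigabit Ethernet link in 8-bit mode. The arithmetic below is a sketch under that assumption, not a vendor specification.

```python
# Sanity check (simple arithmetic, 8-bit mode assumed) that one B6620
# stream fits on a Gigabit Ethernet link.

width, height = 6576, 4384  # sensor resolution, px
fps = 2.4                   # maximum frame rate
bits_per_px = 8             # 8-bit output format

bits_per_sec = width * height * fps * bits_per_px
print(f"{bits_per_sec / 1e9:.2f} Gbit/sec")  # ~0.55 Gbit/sec, within Gig-E
```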

Post-Processing

Figure 2. Six-camera PeARL array mounted in an aircraft window. (Urban Robotics)
Once the aircraft is on the ground, there are several ways to download the image data, which can amount to 1-10 terabytes per day, from the airborne data servers to the ground-based supercomputers UR uses to process it.

Typically, UR is onsite with the customer when the data is downloaded, but the data can also be loaded onto CDs, uploaded via the Internet, or sent by any other means of transporting a large data set.

A full data set for a given mission may include tens to hundreds of terabytes. All that data must be processed to produce three-dimensional point-cloud data sets, which the customer can “observe” using any of several Geographic Information System (GIS) software packages, such as Google Earth.

Supercomputers used for processing these massive data sets consist of large numbers of processors networked into clusters, or of cloud-based computing systems. A cluster consists of many processors networked in a proprietary system. Supervisory software breaks the computing task up and assigns different parts of the job to different processors, which operate individually in a series/parallel mode to process it in the most efficient way.

In series (pipeline) mode, the processors operate sequentially on the same part of the data set: each performs a different task, partially processing the data and passing it to the next, until the programmed operations yield the finished output.

In parallel mode, different processors perform the same operations on different parts of the data set. The end result is, again, a fully processed data set, delivered in significantly less time than a single-processor system could manage. Cloud computing is similar to cluster computing, except that the problem runs not on a defined set of processors owned by the user, but on slices of computer time “rented” from Internet-based servers. Cloud computing is ideal for customers whose usage levels or other considerations (such as security issues) cannot justify the investment in proprietary supercomputing resources.
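The division of labor might look like the toy sketch below. This is not UR's proprietary software; the stage names (feature matching, triangulation) are representative photogrammetry steps chosen for illustration.

```python
from multiprocessing import Pool

# Toy sketch of the series/parallel scheme. Each tile passes through the
# same two-stage pipeline (series); many tiles are processed at once across
# the available cores (parallel).

def match_features(tile):
    """Stage 1: find corresponding points between overlapping frames."""
    return tile + "->matched"

def triangulate(tile):
    """Stage 2: convert pixel correspondences into x, y, z points."""
    return tile + "->triangulated"

def process_tile(tile):
    # Series (pipeline) mode: each stage further processes the prior result.
    return triangulate(match_features(tile))

if __name__ == "__main__":
    tiles = [f"tile_{i}" for i in range(8)]
    with Pool(processes=4) as pool:  # parallel mode across 4 cores
        results = pool.map(process_tile, tiles)
    print(results)
```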

The final result of the processing is typically a 3D point cloud, a 3D mesh model, and standard orthographic 2D image data sets. A point cloud is a dense array of points defining a three-dimensional surface. Each point has attribute data, such as x, y, and z coordinates, along with surface-optical information, such as color, transparency, and reflectance. Taken together, the points form a 3D map of the surface.
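In code, one such point record might look like the sketch below. The field layout is an assumption for illustration; real point-cloud formats such as LAS define their own record structures.

```python
import numpy as np

# Illustrative record layout for one point in a point cloud, following the
# attribute list described above.

point_dtype = np.dtype([
    ("x", np.float64), ("y", np.float64), ("z", np.float64),  # coordinates
    ("r", np.uint8), ("g", np.uint8), ("b", np.uint8),        # surface color
    ("alpha", np.uint8),          # transparency
    ("reflectance", np.float32),  # surface reflectance
])

cloud = np.zeros(3, dtype=point_dtype)  # a tiny 3-point cloud
cloud[0] = (500123.2, 4649342.7, 215.6, 128, 140, 96, 255, 0.42)
print(cloud[0])
```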

Software

Figure 3. Eight-camera PeARL array showing overlapping fields of view to provide parallax sensing of height information. (Urban Robotics)
Users employ GIS software to view the point cloud. One can view it as a static map, a display from different viewpoints, or even “fly” over it as an animated view. The fully processed data set can be delivered on a CD, over the Internet, or via a URL where the user can browse the full data set. Significantly, the user can view the data set as a complete unit — a complete map, as it were — rather than in bits and pieces.

What the customer ultimately buys is the data collection system (the camera arrays and airborne data servers) and a mapping service. The user then flies missions with the data collection system and delivers the raw image data to UR. The company processes the data and returns the completed map and 3D point cloud within twenty-four hours. The customer can then view the map via third-party GIS software, such as ArcView (Esri) [www.esri.com/software/arcview] or FalconView [www.falconview.org/trac/FalconView], a PC-based mapping application developed by the Georgia Tech Research Institute for the U.S. Department of Defense. Or the customer can obtain PeARL software from UR to run on its own supercomputer cluster or cloud.

The alternative to PeARL is combining lower-resolution video systems with light detection and ranging (LIDAR) systems to capture both lateral and vertical information. LIDAR systems operate much like RADAR, but use light pulses instead of microwave pulses. The LIDAR system sends out a series of laser pulses to illuminate targets on the ground, then measures the time each pulse takes to reach the ground and return. That time multiplied by the speed of light equals the total travel-path length, which is twice the range to the target.
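That relationship reduces to a one-line computation; the 20-microsecond pulse time in the example is illustrative.

```python
# The range relation above: round-trip time times the speed of light gives
# the total travel path, and half of that is the range to the target.

C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_s):
    """One-way range to the target from a pulse's round-trip time."""
    return C * round_trip_s / 2

print(f"{lidar_range_m(20e-6):.0f} m")  # a 20 us round trip -> ~2998 m
```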

LIDAR systems are quite accurate, but necessarily sample relatively few points on the ground, leading to a low-resolution map. In addition, processing of the collected data can take several days to several weeks.

A number of agencies, including the Department of Homeland Security and the U.S. military services, have found the PeARL system to be an ideal solution. They have applied it to disaster response (such as storms and other natural disasters), emergency response (such as industrial accidents), and battlefield reconnaissance missions. These applications share several characteristics: the need for up-to-date, high-resolution maps of extended areas; the need for 3D information; recent events that render traditional topographic maps obsolete; and the need for quick turnaround.

This article was written by Geoff Peters, CEO, Urban Robotics (Portland, OR). For more information, visit http://info.hotims.com/34460-201.