Disaster relief workers, border patrol officers, wildfire fighters, and many others need up-to-date information about what they will see when they enter an area, not just sketchy topographic maps that may be years out of date. For them, things change in a hurry, and high-resolution details may be critical to their mission success.
Urban Robotics (UR) has taken advantage of advanced machine-vision camera technology to develop a system that lets customers map terrain in 3D as it is, rather than as it was, and have a high-resolution, GIS-compatible digital data set in hand within 24 hours rather than days or weeks. The system uses an array of cameras mounted on an aircraft to capture a set of 2D images from different viewpoints. Computer-vision algorithms running across hundreds of processors then mathematically extract a height value for every pixel, creating a rich, dense 3D representation of the ground. The result is a realistic 3D map of the terrain that can be viewed in any of a number of GIS viewers, letting end users navigate the terrain rapidly and with foreknowledge of what they will encounter.
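UR's actual multi-view algorithms are proprietary, but the underlying principle can be sketched with classic stereo triangulation: the depth of a ground point follows from the disparity between its positions in two overlapping frames. All numbers below are hypothetical, chosen only to illustrate the geometry:

```python
# Illustrative stereo-triangulation sketch (not UR's proprietary pipeline).
# Depth from two overlapping aerial frames: Z = f * B / d, where f is the
# focal length in pixels, B the baseline between the two camera positions,
# and d the pixel disparity of a matched feature.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance from the camera to the ground point, in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def terrain_height(flight_altitude_m: float, focal_px: float,
                   baseline_m: float, disparity_px: float) -> float:
    """Height of the ground point above the datum the altitude is given in."""
    return flight_altitude_m - depth_from_disparity(focal_px, baseline_m, disparity_px)

# Hypothetical example: 10,000 px focal length, 50 m baseline, 55 px disparity
# gives a camera-to-ground range of about 9,091 m; from 10,000 m altitude
# the point therefore sits about 909 m above the datum.
```

Repeating this match-and-triangulate step for every pixel, across many overlapping views, is what turns a stack of 2D frames into a dense height field.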
For example, the U.S. Border Patrol regularly maps patrol areas to provide pilots with accurate, high-resolution maps of terrain, especially helicopter landing zones (HLZs). Combined with onboard GPS, the maps allow helicopters to locate the nearest HLZ in an emergency. In the event of a problem, the pilot can call up the 3D map, locate the nearest HLZ, and vector to it in real time — soon enough to turn a crash scenario into a controlled emergency landing.
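The "locate the nearest HLZ" step amounts to a nearest-neighbor search against the surveyed landing zones, keyed on the aircraft's GPS fix. A minimal sketch, using great-circle distance and entirely hypothetical coordinates:

```python
import math

# Great-circle (haversine) distance between two lat/lon points, in km.
def haversine_km(lat1, lon1, lat2, lon2):
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def nearest_hlz(aircraft_pos, hlzs):
    """Pick the landing zone closest to the aircraft's GPS fix."""
    lat, lon = aircraft_pos
    name = min(hlzs, key=lambda n: haversine_km(lat, lon, *hlzs[n]))
    return name, haversine_km(lat, lon, *hlzs[name])

# Hypothetical HLZ coordinates, for illustration only.
zones = {"HLZ-Alpha": (31.80, -106.50), "HLZ-Bravo": (31.60, -106.30)}
```

A real avionics implementation would also filter candidates by glide range and terrain clearance, but the core lookup is this simple.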
Reconfigurable depending on mission needs, the system starts with an array of up to twelve high-speed, high-resolution machine-vision cameras mounted on an aircraft. A fairly typical arrangement might be eight Imperx smart cameras, each with 29-Mpx resolution, together capable of capturing 1-1.5 Gpx/sec of image data. They might, for example, be arranged in a 2 × 3 ft array (Figure 2) consisting of two rows mounted in a small aircraft, such as a King Air. Such an aircraft might have room for a dozen passengers in airliner configuration.
For a PeARL 3D mapping application, however, the passenger seats would be removed to make room for various kinds of sensor equipment, including multiple dual six-core data servers mounted in blade cabinets packing 12 cores per rack unit (1U). These servers capture the incoming image data from the PeARL cameras and store it in large solid-state memories.
How the camera arrays are set up depends on mission goals. For missions where large-area coverage is paramount, the cameras are set up with little overlap between their fields of view (FOV). Such an array can cover up to 600 square miles at 3-inch resolution in a single data set.
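To get a feel for the data volume behind that figure, the pixel count follows directly from the ground sample distance (GSD): at 3 inches per pixel, each linear mile spans 5,280 × 12 / 3 = 21,120 pixels. A quick back-of-the-envelope calculation:

```python
def gigapixels(area_sq_miles: float, gsd_inches: float) -> float:
    """Total pixel count, in gigapixels, for an area mapped at a given GSD."""
    ft_per_mile = 5280
    px_per_mile = ft_per_mile * 12 / gsd_inches  # pixels along one mile
    return area_sq_miles * px_per_mile ** 2 / 1e9

# 600 sq mi at 3-inch GSD works out to roughly 268 gigapixels in one data set.
```

That scale is why the processing is spread across hundreds of processors rather than a single workstation.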
When 3D information is paramount, however, the cameras are set up with much more overlap, as shown in Figure 3, reducing the swath covered in each pass and therefore the total area mapped. These missions more typically cover 200-300 square miles at 6-8 inch resolution.
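The coverage penalty for overlap is easy to quantify: only the non-overlapping fraction of each swath adds new ground. A small sketch, using hypothetical swath and flight-line numbers (not taken from UR's specifications):

```python
def effective_swath(nominal_swath_mi: float, overlap_frac: float) -> float:
    """Ground newly covered per pass after overlapping adjacent swaths."""
    return nominal_swath_mi * (1.0 - overlap_frac)

def total_coverage(nominal_swath_mi: float, overlap_frac: float,
                   pass_length_mi: float, n_passes: int) -> float:
    """Square miles mapped by n parallel flight lines."""
    return effective_swath(nominal_swath_mi, overlap_frac) * pass_length_mi * n_passes

# Hypothetical mission: a 2-mile swath flown over 50-mile lines, 6 passes.
# At 20% overlap the mission maps 480 sq mi; raising overlap to 60% for
# richer 3D reconstruction cuts that to 240 sq mi over the same flight time.
```

This halving under heavy overlap is consistent with the drop from roughly 600 square miles in coverage-oriented missions to 200-300 in 3D-oriented ones.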
Ideal cameras combine high resolution, a moderate frame rate, and rapid image download over a network. One such camera is the Imperx B6620, which comes in Gigabit Ethernet (Gig-E) and Camera Link versions. Both have 29-Mpx image sensors with 6,576 × 4,384 resolution, frame rates up to 2.4 fps (not video rate, but adequate for this application), 8-, 10-, 12-, or 24-bit video output formats, and a mechanical-shock rating of up to 10 g. The camera's smart video control simplifies automatic control of multiple cameras.
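Those specifications can be turned into a raw data rate, which shows why the network interface matters. Assuming uncompressed output (a simplification; actual link overheads and packing vary):

```python
def mbytes_per_second(width_px: int, height_px: int, fps: float, bits_per_px: int) -> float:
    """Raw video data rate in MB/s (decimal megabytes), uncompressed."""
    return width_px * height_px * fps * bits_per_px / 8 / 1e6

# B6620 sensor: 6,576 x 4,384 px at 2.4 fps
rate_8bit = mbytes_per_second(6576, 4384, 2.4, 8)    # about 69 MB/s
rate_12bit = mbytes_per_second(6576, 4384, 2.4, 12)  # about 104 MB/s
# Gigabit Ethernet's raw line rate is 125 MB/s, so a single Gig-E link
# can just carry one camera's 12-bit stream before protocol overhead.
```

This is also why each camera feeds a dedicated server link rather than sharing one network segment across the whole array.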
Once the aircraft is on the ground, the image data, which can amount to 1-10 terabytes per day, can be downloaded from the data servers in any of several ways to the ground-based supercomputers UR uses to process it.
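At these volumes, the choice of transfer method is largely a matter of arithmetic. A rough estimate of network transfer time, assuming a hypothetical 80% link utilization:

```python
def transfer_hours(terabytes: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to move a data set over a network link at a given utilization."""
    bits = terabytes * 1e12 * 8
    return bits / (link_mbps * 1e6 * efficiency) / 3600.0

# 10 TB over a 1 Gb/s link at 80% efficiency takes roughly 28 hours;
# over a 10 Gb/s link the same set moves in under 3 hours.
```

For a full 10-TB day over ordinary Internet links, physically shipping the storage is often the faster option, which is why UR typically processes onsite.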
Typically, UR is onsite with the customer when the data is downloaded, but the data can also be written to CDs, uploaded via the Internet, or sent by any other means of transporting a large data set.