The past decade has seen an explosion of observations from airborne and satellite-based multi- and hyperspectral sensors, as well as from synthetic-aperture radar and LiDAR. Distilling useful information from this wealth of raw data is the domain of geospatial analysis, the collection of analytical, statistical, and heuristic methods for extracting information from georeferenced data. This information is important in serving the needs of a diverse set of industries including environmental conservation, oil and gas exploration, defense and intelligence, agriculture, coastal monitoring, forestry, and mining.

Figure 1. Flat (left) and Gouraud (right) shading of a surface. (Image credit: Exelis VIS; created with IDL™)

3D visualization techniques play an important role in geospatial analysis. The ability to represent the 3D nature of a geospatial data product on a 2D computer screen, including the ability to manipulate the data product in a 3D coordinate system, is essential; it enhances a user's ability to explore the data, aiding discovery of, and insight into, features that may not be apparent from a 2D view.

Representing 3D in Computer Graphics

In computer graphics, a typical convention is to specify a right-handed 3D coordinate system such that when a viewer is facing the display, +x is directed to the right, +y is directed up, and +z is directed out of the display, toward the viewer. Points — and 3D objects, which are treated as groups of points — within this 3D coordinate system are represented by homogeneous coordinates, which are formed by adding a fourth coordinate to each point. Instead of being represented by a triple (x,y,z), each point is instead represented by a quadruple (x,y,z,w). Homogeneous coordinates simplify coordinate transformations (i.e., translation, rotation, and scaling) by allowing them to be treated as matrix multiplications.
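To make this concrete, here is a minimal sketch in Python with NumPy, not tied to any particular graphics library, showing how a translation and a scaling become 4x4 matrices that act on a homogeneous point and compose by matrix multiplication:

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def scaling(sx, sy, sz):
    """4x4 homogeneous scaling matrix."""
    return np.diag([sx, sy, sz, 1.0])

# A point (x, y, z) becomes the quadruple (x, y, z, 1).
p = np.array([1.0, 2.0, 3.0, 1.0])

# Transformations compose by matrix multiplication; the rightmost
# matrix is applied first (here: scale, then translate).
M = translation(5.0, 0.0, 0.0) @ scaling(2.0, 2.0, 2.0)
print(M @ p)  # -> [7. 4. 6. 1.]
```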

To view an object from a 3D coordinate system on a 2D display, a view volume, a projection plane, and a viewport are needed. The view volume is a subset of the 3D coordinate system; for simplicity it is often a unit cube centered at the origin. This is where the action takes place: Any object within the view volume is visualized; any object that falls outside the view volume is not. Objects can be scaled, rotated, and translated to fit within the view volume.
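As an illustration of that last step, here is a sketch of the normalization, assuming the scene is given as an (N, 3) array of vertex positions; the helper name is hypothetical:

```python
import numpy as np

def fit_to_view_volume(vertices):
    """Uniformly scale and translate an (N, 3) vertex array so it fits
    inside a unit cube centered at the origin (a hypothetical helper;
    real toolkits handle this bookkeeping for you)."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    center = (lo + hi) / 2.0
    extent = (hi - lo).max()             # largest side of the bounding box
    return (vertices - center) / extent  # now within [-0.5, 0.5]^3
```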

Figure 2. Land cover data texture mapped onto a digital elevation model of the Front Range of Colorado. (Image credit: Exelis VIS; created with IDL™)

Objects within the 3D view volume are mapped into a 2D projection using a planar geometric projection, usually some form of perspective or parallel projection. The projection is defined by rays that emanate from a point, the center of projection, and pass through every point of the object to intersect with the projection plane; in a parallel projection, the center of projection is at infinity, so the rays are parallel. The contents of the projection plane are then mapped onto the viewport, a 2D window defined in the device coordinates of the display.
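For example, a simple perspective projection with the center of projection at the origin and the projection plane at z = d can be written as a single homogeneous matrix followed by a divide by w, a textbook construction along the lines of Foley et al. (see references):

```python
import numpy as np

d = 1.0  # distance from the center of projection to the projection plane

# Perspective projection onto the plane z = d, with the center of
# projection at the origin.
P = np.array([[1, 0, 0,     0],
              [0, 1, 0,     0],
              [0, 0, 1,     0],
              [0, 0, 1.0/d, 0]])

p = np.array([2.0, 1.0, 4.0, 1.0])  # homogeneous point
q = P @ p                            # -> [2, 1, 4, 4]
q /= q[3]                            # perspective divide
print(q[:2])                         # projected (x', y') = (0.5, 0.25)
```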

In computer graphics, complex 3D objects are constructed from a small number of primitive graphical items: points, line segments, and convex polygons. 3D curved surfaces are approximated by large numbers of small, flat polygons, typically triangles or quadrilaterals. Increasing the density of the polygons makes a smoother-looking surface.
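As a sketch of how such a mesh is assembled, the following splits each cell of a row-major vertex grid (such as a gridded height field) into two triangles:

```python
import numpy as np

def grid_triangles(nrows, ncols):
    """Index triples splitting each cell of an nrows x ncols vertex
    grid into two triangles (vertices assumed stored row-major)."""
    tris = []
    for i in range(nrows - 1):
        for j in range(ncols - 1):
            v = i * ncols + j  # upper-left vertex of this cell
            tris.append((v, v + 1, v + ncols))              # upper triangle
            tris.append((v + 1, v + ncols + 1, v + ncols))  # lower triangle
    return np.array(tris)
```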

Surfaces can be rendered using filled polygonal primitives drawn with a single color. This is known as flat shading. Surfaces can also be rendered using smooth or Gouraud shading, where the colors of the polygonal primitives are instead interpolated between the vertices. See Figure 1 for a comparison of the two techniques.
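The idea behind Gouraud shading can be sketched in a few lines: interpolate the per-vertex colors across each triangle using barycentric weights. This is an illustrative 2D version, not how any particular renderer implements it:

```python
import numpy as np

def gouraud_color(p, tri, colors):
    """Interpolate per-vertex colors at point p inside a 2D triangle
    using barycentric coordinates -- the core of Gouraud shading."""
    a, b, c = tri
    # Solve for barycentric weights w1, w2 (with w0 = 1 - w1 - w2).
    T = np.column_stack((b - a, c - a))
    w1, w2 = np.linalg.solve(T, p - a)
    w = np.array([1.0 - w1 - w2, w1, w2])
    return w @ colors  # weighted blend of the three vertex colors

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
colors = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])  # RGB at vertices
print(gouraud_color(np.array([0.25, 0.25]), tri, colors))   # -> [0.5 0.25 0.25]
```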

Applications of 3D in Geospatial Analysis

Digital elevation models (DEMs), which give a 3D representation of the Earth's surface, are used frequently in geospatial analysis. A DEM can be visualized in 3D as a polygonal mesh or a filled surface, with shading to heighten the 3D appearance of the model, or with colors proportional to height.
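As a minimal sketch, a DEM can be rendered as a filled surface with colors proportional to height using matplotlib; the terrain here is synthetic, standing in for a real elevation raster:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a DEM tile (real data would be read from,
# e.g., a GeoTIFF).
x, y = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
z = np.sin(x) * np.cos(y)  # fake terrain

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, z, cmap="terrain")  # facet color keyed to elevation
plt.show()
```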

Figure 3. A hyperspectral image cube of AVIRIS data collected near Cuprite, Nevada. (Image credit: Exelis VIS; created with ENVI™)

The data density of the visualization can be increased by overlaying, as an image, additional georeferenced data onto the 3D DEM surface through texture mapping. The additional image data could be sourced from, for example, meteorology (surface temperatures, ozone concentration), geology (mineral types identified by multi- or hyperspectral imaging), or urban planning (zoning or land use), among many others. As an example, Figure 2 shows a visualization of GTOPO30, a U.S. Geological Survey global digital elevation model, over the Front Range of northeast Colorado. Overlaid on the terrain, through texture mapping, is the USGS National Land Cover Dataset 1992 product, a 21-class land cover classification. Colors are keyed to land cover types; urban and residential areas, for example, are red and pink. A vertical exaggeration of 0.2 is used in the visualization.
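A rough sketch of the draping idea: matplotlib's plot_surface accepts per-facet colors, so a classification image can be mapped through a color table and laid over the terrain. The DEM and "land cover" rasters here are synthetic stand-ins for coregistered georeferenced data:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

x, y = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
z = np.sin(x) * np.cos(y)  # stand-in DEM

# Stand-in classification image: integer class labels mapped through a
# qualitative colormap (a real workflow would load a classification
# raster coregistered with the DEM).
classes = np.hypot(x - 5, y - 5).astype(int) % 8
rgba = cm.tab10(classes / 9.0)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, 0.2 * z, facecolors=rgba)  # 0.2 vertical exaggeration
plt.show()
```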

Hyperspectral Imaging

Widespread use of hyperspectral imagery across industries is a relatively recent trend in geospatial analysis. Compared to multispectral sensors (e.g., Landsat, SPOT, AVHRR), which measure reflected radiation from the Earth's surface at a few widely spaced wavelength bands, hyperspectral sensors measure reflectance over a series of hundreds of narrow and contiguous bands, providing the opportunity for more detailed spectral image analysis. Hyperspectral images are often referred to as image cubes because of their large spectral dimension, in addition to their two spatial dimensions. Figure 3 shows a visualization of an AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) hyperspectral image taken near Cuprite, Nevada. The visualization is an oblique parallel projection, with the spectral dimension visualized in the xz- and yz-planes (the top and right sides of the cube, respectively). The face of the cube, in the xy-plane, is a false color composite with red, green, and blue bands chosen to emphasize peaks in the reflectance spectra of minerals found in the image, such as buddingtonite, kaolinite, and various clays.
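The false color composite itself is straightforward to sketch: pick three bands from the cube and contrast-stretch each for display. The band indices below are placeholders for illustration, not the bands used in Figure 3:

```python
import numpy as np

def false_color(cube, r_band, g_band, b_band):
    """Build a false color composite from a hyperspectral cube shaped
    (rows, cols, bands); band indices are chosen to highlight spectral
    features of interest (e.g., mineral reflectance peaks)."""
    rgb = cube[:, :, [r_band, g_band, b_band]].astype(float)
    # Linear 2% stretch per band to use the full display range.
    lo = np.percentile(rgb, 2, axis=(0, 1))
    hi = np.percentile(rgb, 98, axis=(0, 1))
    return np.clip((rgb - lo) / (hi - lo), 0.0, 1.0)

# e.g., with a 224-band AVIRIS scene loaded as `cube`:
# composite = false_color(cube, 183, 193, 207)  # hypothetical band picks
```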

LiDAR

On Tuesday, January 12, 2010, a magnitude 7.0 earthquake struck just miles from Haiti's capital city of Port-au-Prince. About 3 million people were affected by the quake. The government of Haiti estimated that 250,000 residences and 30,000 commercial buildings were severely damaged or destroyed.

LiDAR can be used to detect and measure objects like collapsed buildings and standing structures damaged by an earthquake. It can also be used to extract road networks and plan routes, information that can be critical for emergency responders trying to reach people who need help as quickly and efficiently as possible. A 3D visualization, reconstructed from a LiDAR point cloud, showed buildings and roads in Port-au-Prince that were damaged in the January 2010 earthquake.

The data used in producing this visualization were collected in a joint project funded by the World Bank, in conjunction with the Rochester Institute of Technology, the University at Buffalo, and ImageCat, Inc. A twin-engine Piper Navajo, operated by Kucera International, flew missions for seven consecutive days at 3,000 feet over Port-au-Prince and other areas badly hit by the earthquake. LiDAR data at 1- and 10-m spatial resolutions were collected to map the disaster zone to aid in crisis management and the eventual reconstruction of the city.

To produce the visualization (see title image), the E3De™ LiDAR processing application was used to extract a digital surface model (DSM), which captures surface features such as buildings, trees, and cars. Further processing of the DSM gave building footprints and roof shape polygons. Next, a DEM was computed from the DSM using a combination of proprietary crawling and sensitivity algorithms.

Subtracting the DEM from the DSM gives the vertical obstruction layer, sometimes called a normalized DSM (nDSM). With additional image analysis in ENVI™, based on published algorithms in the LiDAR community, intact roads can be separated from structures and debris.
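The subtraction step is simple to sketch with NumPy; the height cutoff below is illustrative, not a value from the project:

```python
import numpy as np

def vertical_obstructions(dsm, dem, min_height=2.0):
    """Height-above-ground layer from coregistered DSM and DEM rasters
    (both 2D arrays in meters). min_height is an illustrative cutoff
    separating structures and debris from near-ground returns."""
    ndsm = dsm - dem           # normalized DSM: height above ground
    mask = ndsm >= min_height  # True where an obstruction stands
    return ndsm, mask
```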

Leveraging 3D LiDAR data, as well as 3D visualization tools for the data, can be invaluable for disaster mitigation. The type of analysis described here can quickly help emergency responders find passable routes to people in need.

Conclusion

In geospatial analysis, 3D visualization techniques are invaluable for enhancing a user's ability to explore, interpret, and understand data. As the use of hyperspectral and LiDAR data in disaster management continues to grow, 3D visualization will become increasingly relevant. While the synthesis of hyperspectral and LiDAR data can help emergency responders inventory buildings, deploy ground teams, and find passable routes, proper 3D visualization of these data can aid all levels of disaster management, from basic building inventories to sophisticated network routing problems.

This article was written by Mark Piper, Solutions Engineer, Exelis Visual Information Solutions (Boulder, CO).

References

  1. Foley, James D., Andries van Dam, Steven K. Feiner, and John F. Hughes, 1990: Computer Graphics: Principles and Practice. Second edition. Addison-Wesley, Reading, MA.
  2. Priestnall, G., J. Jaafar, and A. Duncan, 2000: Extracting urban features from LiDAR digital surface models. Computers, Environment and Urban Systems, 24, 65-78.
  3. Shippert, P., 2004: Why use hyperspectral imagery? Photogrammetric Engineering & Remote Sensing, 70(4), 377-380.
  4. Shreiner, D., 2010: OpenGL Programming Guide: The Official Guide to Learning OpenGL, Versions 3.0 and 3.1. Addison-Wesley, Upper Saddle River, NJ.