Everywhere we look we are bombarded with 3D. It’s in movies, in-home entertainment, digital camcorders, gaming systems, laptops, and even in our labs. What is it about 3D that is so fascinating and what are some of the technology drivers that are determining its future?
The concept of 3D is not a new idea. In fact, the Greek mathematician Euclid is credited with describing the principles of binocular vision around 300 BC. Stereopsis, or "depth sense," was explained by Sir Charles Wheatstone in 1838, and the stereoscope, popularized by Oliver Wendell Holmes, remained a fixture of home entertainment from the 1860s into the 1930s. The first 3D movie appeared in 1922, and the format became more widespread in the 1950s, when theater audiences donned red and blue tinted paper glasses to view early big-screen 3D movies. Chances are you have clicked through 3D scenes on a View-Master, first introduced in 1939 at the New York World's Fair.
While all of these provided some sense of dimensionality, fun effects, and entertainment, none provided a genuinely immersive or realistic experience. Today, a convergence of technologies is making 3D more accessible and affordable, not just for the mainstream consumer market but for research, surveillance, inspection, process control, and a wide variety of medical applications.
Perception vs Reality
What we experience in the real world through our own eyes and mind is quite different from the stereoscopic images created through a camera and display. The majority of our 3D perception occurs for objects within 20 feet (6.1 meters); for objects beyond this distance, depth perception relies on cues such as relative scale, horizon lines, and other visual references. When we view an object in the real world, the axes of our eyes also rotate naturally so that both lines of sight meet at the point of interest, a behavior called convergence. The angle of convergence varies with the distance between our eyes and the object at the center of our focus: our eyes converge less when focusing on a distant object than on one that is nearer to us. When creating 3D effects, cameras and displays must artificially reproduce this convergence disparity to fool the brain into perceiving an object at an artificial distance relative to the display screen (Figure 1).
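The relationship between viewing distance and convergence angle can be sketched with simple trigonometry: the full convergence angle is roughly 2·atan((IPD/2) / distance), where IPD is the interpupillary distance. The short Python sketch below is illustrative only; the function name and the 63 mm IPD value are assumptions, not figures from the article.

```python
import math

# Hypothetical helper: full convergence angle in degrees for eyes
# separated by ipd_m, fixating a point distance_m straight ahead.
# 0.063 m (63 mm) is a commonly cited typical adult interpupillary
# distance, used here as an illustrative assumption.
def convergence_angle_deg(distance_m, ipd_m=0.063):
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

near = convergence_angle_deg(0.5)   # object 0.5 m away: ~7.2 degrees
far = convergence_angle_deg(6.1)    # object at ~20 ft: ~0.6 degrees
```

The steep drop-off between the two values illustrates why stereoscopic depth cues matter most for nearby objects, and why other visual cues dominate beyond roughly 20 feet.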
Three critical components are required to simulate 3D: content creation, processing, and display. Each plays a role in recreating what our own eyes and brain do naturally. Achieving the highest level of realism involves a combination of technologies working together, delivered at a price point viable for the intended application. It is true that 3D images can be synthesized entirely in software for research tools, computer-aided design platforms, and entertainment. This article, however, will discuss capturing images stereoscopically using video cameras to produce content.
Creating a realistic and immersive 3D experience depends on high-quality content. This is best achieved with digital high definition (HD) video cameras featuring high-resolution CCD or CMOS sensors, wide dynamic range, good sensitivity, a digital shutter, and an adjustable color matrix. HD video is a specific format, typically a 16:9 aspect ratio with a 1920 x 1080 pixel matrix. This format can be either interlaced (1080i), in which each video frame is displayed as two alternating fields of odd and even video lines, or progressive (1080p), in which the video lines are displayed sequentially from top to bottom.
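The interlaced-versus-progressive distinction can be illustrated with a minimal sketch. The function names below are hypothetical: a frame is modeled as a simple list of video lines, split into two half-height fields of alternating lines and then woven back together, as a basic deinterlacer would.

```python
# Minimal sketch of interlacing: a frame is modeled as a list of
# video lines (line indices stand in for actual pixel data).

def split_fields(frame):
    """Split a progressive frame into its two interlaced fields:
    even-numbered lines (top field) and odd-numbered lines
    (bottom field)."""
    return frame[0::2], frame[1::2]

def weave(top_field, bottom_field):
    """Reassemble a full-height frame by interleaving the two
    fields line by line, as a 'weave' deinterlacing step would."""
    frame = []
    for top_line, bottom_line in zip(top_field, bottom_field):
        frame.extend([top_line, bottom_line])
    return frame

frame_1080p = list(range(1080))          # 1080 progressive lines
top, bottom = split_fields(frame_1080p)  # two 540-line fields
```

Note that this sketch ignores field timing: in real 1080i video the two fields are captured about 1/60 of a second apart, so deinterlacers must also compensate for motion between fields.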
In the United States, the Advanced Television Systems Committee (ATSC) defines several HD and standard-definition formats for broadcasters. The most common HD format for cable and broadcast is 720p, which corresponds to 1280 x 720 progressive. When 3D content is broadcast by cable and satellite, the vertical resolution of each view is reduced by half to 1920 x 540 so that the left and right views fit together within the HD broadcast signal. Alternatively, Blu-ray Disc™ players are capable of delivering 3D content in full-resolution 1920 x 1080 format.
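The half-resolution broadcast trade-off can be sketched as a "top-and-bottom" frame-compatible packing, consistent with the 1920 x 540 figure above. The function name below is a hypothetical illustration, not a broadcast API: both eye views are carried in one ordinary 1080-line HD frame, so each eye receives only half the vertical resolution.

```python
# Hypothetical sketch of unpacking a top-and-bottom frame-compatible
# 3D broadcast frame. The frame is modeled as a list of 1080 video
# lines: the first 540 carry the left-eye view, the last 540 the
# right-eye view.

def unpack_top_bottom(packed_frame):
    half = len(packed_frame) // 2
    left_view = packed_frame[:half]    # 1920 x 540 left-eye image
    right_view = packed_frame[half:]   # 1920 x 540 right-eye image
    return left_view, right_view
```

The receiver then scales each 540-line view back up to full screen height, which is one reason frame-compatible 3D broadcasts look softer than full-resolution Blu-ray 3D.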