A CAVE is an immersive virtual reality environment that allows one or more users to immerse themselves in a virtual world. A common CAVE setup is a room-sized cube whose sides act as projection planes. All cubic CAVEs inherently face an edge-matching problem at the edges and corners of the display. Modern immersive displays minimize seams by creating very tight edges, and rely on the user to ignore any seam that remains. One significant deficiency of flat-walled CAVEs is that the sense of orientation and perspective within the scene is broken across adjacent walls. On any single wall, parallel lines converge at their vanishing point as they should, and the perspective of the scene contained on that one wall has integrity. Unfortunately, parallel lines that span adjacent walls do not necessarily remain parallel. The resulting inaccuracies in the scene can distract the viewer and detract from the immersive experience of the CAVE.
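The break in perspective at a corner can be made concrete with a little projective geometry. The sketch below is an illustration, not part of any CAVE software: it places a viewer at the origin of a cubic CAVE with the front wall at z = 1 and the right wall at x = 1, projects a single straight 3D line onto each wall, and then unfolds the two walls flat side by side. The projected image is straight on each wall, but its slope flips sign at the seam, so the straight line appears kinked.

```python
# Hedged sketch: pinhole projection of one straight 3D line onto two
# perpendicular CAVE walls, then unfolding the walls into a single plane.

def unfold(x, y, z):
    """Project (x, y, z), as seen from a viewer at the origin, onto the
    nearer wall and unfold both walls flat. u runs along the unfolded
    walls; the corner seam sits at u = 1."""
    if z >= x:                     # front wall (z = 1)
        return x / z, y / z
    return 2.0 - z / x, y / x      # right wall (x = 1), rotated flat

def slope(p, q):
    """Slope of the segment between two projected points."""
    (u1, v1), (u2, v2) = p, q
    return (v2 - v1) / (u2 - u1)

# A straight 3D line that crosses the corner: P(t) = (t, 0.5, 3 - t).
# It passes the seam (x = z) at t = 1.5.
line = lambda t: unfold(t, 0.5, 3.0 - t)

slope_front = slope(line(0.5), line(1.2))   # two samples on the front wall
slope_right = slope(line(1.8), line(2.5))   # two samples on the right wall

print(f"front-wall slope: {slope_front:+.3f}")   # +0.167
print(f"right-wall slope: {slope_right:+.3f}")   # -0.167
```

Because the slope changes sign at the corner, two 3D-parallel lines that cross the seam at different points kink at different places on the unfolded walls and so cease to look parallel, which is exactly the artifact described above.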
The cylindrical display overcomes the problem of distorted edges. Its smooth surface is equidistant from the viewer when he or she is positioned at its center. This eliminates the artifacts of a flat-walled CAVE, where the distance to the viewing surface varies no matter where the viewer stands within it. The display is a curved rear-projected screen comprising three-quarters of a 12-ft-diameter (≈3.7-m-diameter) cylinder. The projection surface is a high-contrast, unity-gain, flexible screen material. The screen is about 6.5 ft (≈2 m) tall, and the height of the actual image displayed on the screen is approximately 5 ft (≈1.5 m). A single consumer video card outputs to three short-throw projectors mounted behind the screen. Each projector illuminates 90° of the screen and overlaps slightly with its neighbor. The resolution of the entire cylindrical display is about 3,500×1,024 pixels. The projectors are edge-blended and calibrated into a seamless display using Scalable Display Technologies’ camera-based calibration.
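From these published figures one can estimate the display's pixel pitch and angular resolution for a viewer at the center. The short sketch below is back-of-the-envelope arithmetic based on the numbers above, not part of the system itself:

```python
import math

# Display geometry from the article; all values are approximate.
DIAMETER_M = 12 * 0.3048          # 12-ft cylinder diameter in metres (~3.66 m)
ARC_FRACTION = 0.75               # screen covers three-quarters of the cylinder
H_PIXELS, V_PIXELS = 3500, 1024   # total blended resolution
IMAGE_HEIGHT_M = 1.5              # displayed image height (~5 ft)

arc_length_m = math.pi * DIAMETER_M * ARC_FRACTION       # screen length along the curve
px_width_mm = 1000 * arc_length_m / H_PIXELS             # horizontal pixel pitch
px_height_mm = 1000 * IMAGE_HEIGHT_M / V_PIXELS          # vertical pixel pitch
deg_per_px = 360 * ARC_FRACTION / H_PIXELS               # angular resolution at the center

print(f"arc length: {arc_length_m:.2f} m")
print(f"pixel pitch: {px_width_mm:.2f} mm wide x {px_height_mm:.2f} mm tall")
print(f"angular resolution: {deg_per_px:.3f} deg/pixel from the center")
```

The roughly 8.6 m of curved screen works out to pixels about 2.5 mm wide, or about 0.077° per pixel as seen from the center, with each 90° projector segment carrying roughly a third of the 3,500-pixel width plus its blend overlap.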
This system, known as Stage, is designed to address two critical visualization problems. First, people viewing imagery from surface spacecraft often incorrectly estimate the size of objects in the environment because imagery on a standard computer screen does not occupy the correct portion of their visual field. Second, people viewing panoramic images frequently fail to understand the relative positions of objects in the environment because the panoramic image is rolled out flat and presented in front of them instead of wrapping around them. These fundamental errors have well-documented and dramatic consequences. Viewers frequently believe an object is beside a robot when it is actually behind it, or think that a small rock is actually a large, hazardous obstacle that must be avoided. Stage addresses both of these problems by immersing viewers in an accurate representation of the operating environment.