Traditional two-dimensional environments have a number of drawbacks. In 2D, images appear flat and offer only one perspective; in 3D, images have depth and offer multiple perspectives. A true virtual-holographic representation overcomes the limitations of even the highest-resolution 2D flat-screen display. Part of the benefit of a 3D environment is the comfort of the experience: as in the real world, there are no conflicting cues and no need to envision how a 2D projection will appear in three dimensions. With 3D, you see the object and operate on it exactly as it is.
The challenge with current 3D technologies is ensuring user comfort and natural ease of use when manipulating objects in 3D. To be realistic, 3D simulations should appear 100-percent solid in open space, with full color and high resolution, as if they were real physical objects.
Today, real 3D visualization is experienced through several components of a complex system: an interactive display, head tracking to monitor the user’s real-time movement, and a stylus to interact with 3D objects. To create the 3D effect, the display alternates left- and right-eye images filtered by polarized glasses. The glasses allow only the intended image to reach each eye and provide a reference point for head tracking, so one can move and look at an object from different angles and perspectives. The stylus beam allows the user to select and grasp the object, which appears free floating in space, in much the way that a mouse allows two-dimensional manipulation of flat drawings (see Figure 1). The object can appear in front of or behind the display. This experience lies at the heart of virtual holography.
3D experiences have the potential to move our physical world into a sensory-rich virtual one that anyone can naturally and intuitively navigate, significantly advancing the way we solve problems, learn, teach, and communicate. The user can manipulate any object with the same fluidity that he or she would in the real world. In the real world, if one needs to grasp an object behind another object, the actions seem obvious and natural; the same applies in 3D virtual holography. Consider three examples:
- Designing a vehicle, such as a car, robotic explorer, or space rover: Normally this would require many man-hours of design and mockup based on two-dimensional projections. In a 3D virtual-holographic environment, it is possible to draw, select, and assemble all of the individual parts just as if they were built by hand.
- Examining an MRI: Conventional technology requires that you scroll through slides or slices of the images, imagining the anatomy in 3D. In a virtual-holographic environment, anatomy is visible in 3D, and an organ can be lifted, brought forward, turned around, and examined in every detail.
- Designing a monitor for computer or entertainment applications, an example of industrial design: In a 3D virtual-holographic environment, the designer enters data into CAD programs, then sculpts and manipulates the monitor in all three dimensions simultaneously, as if physically handling the object.
More generally, volumetric data may be viewed and analyzed in a 3D virtual-holographic environment in a way that offers intuitive insights otherwise much more difficult to realize with traditional 2D projections of three-dimensional information.
How it Works
Much of 3D virtualization technology is still in development behind the scenes, though some concepts about its inner workings can be described. First, stereoscopic 3D imaging simulates a three-dimensional scene in two dimensions using monocular cues, such as perspective, highlights, shadows, texture, and other rendering techniques. The resulting 2D image could be referred to as a monoscopic 3D image or simply as a “3D” image.
By contrast, a 2D image that provides both monocular and binocular cues is referred to as a stereoscopic 3D image. The canvas of a monoscopic 3D image is a flat and static surface. The canvas for a stereoscopic 3D image becomes a window through which a dynamic 3D environment appears, due in part to the concept of parallax, where 3D objects can appear far away (behind the screen) or close by (in front of the screen) (see Figure 2). In order to generate a 2D image that adds binocular cues, one needs to supply the left and right eyes with separate images that closely correspond to the images the eyes would see were they looking at the real object. When the brain is presented with left and right eye images, it fuses them into a single image that has strong depth perception.
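The parallax geometry can be sketched numerically. In this minimal illustration (not the actual rendering pipeline), a viewer sits a fixed distance from the screen, and similar triangles give the horizontal on-screen disparity between the left-eye and right-eye projections of a point at a given depth. The function name and the example values (a 65 mm interpupillary distance, a 60 cm viewing distance) are assumptions for illustration.

```python
def screen_disparity(ipd, viewer_dist, depth):
    """Horizontal on-screen disparity (same units as ipd) for a point
    `depth` units behind the screen plane (negative = in front of it).

    Each eye, offset +/- ipd/2 from center, projects the point onto the
    screen plane; by similar triangles the two projections differ by
    ipd * depth / (viewer_dist + depth).
    """
    return ipd * depth / (viewer_dist + depth)

# A point on the screen plane itself has zero disparity:
print(screen_disparity(0.065, 0.60, 0.0))   # 0.0
# A point in front of the screen has negative (crossed) disparity:
print(screen_disparity(0.065, 0.60, -0.30))
```

Note that as depth grows toward infinity the disparity approaches the interpupillary distance itself, which matches the intuition that very distant objects are seen along nearly parallel lines of sight.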
A current area of focus for the industry is integrating 3D virtualization into industry-leading CAD software products such as Autodesk Maya® and Showcase® so that they operate in true 3D imaging. Companies like Infinite Z produce a Software Development Kit, or SDK, that an application can use to add three-dimensional capability. The SDK integrates stereoscopic 3D projection information into the application’s rendering system. Tracking the user’s head, and integrating that information into the stereoscopic 3D calculation, is handled by enabling the technology’s head-tracking feature. The final element of the system is access to the position and orientation of the stylus, which the application can use to build both existing and new direct-interaction workflows. With these three elements, an existing or new 3D application can be made into a fully immersive virtual-holographic application.
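The three elements described above — per-eye projection, head tracking, and stylus pose — can be sketched as a simplified frame loop. This is a hypothetical illustration, not the Infinite Z SDK API; the `Pose` type, the `eye_positions` helper, and the assumption that the eyes straddle the tracked head position along the screen’s horizontal axis (a real SDK would use the full head orientation) are all assumptions made for clarity.

```python
from dataclasses import dataclass


@dataclass
class Pose:
    """A position in screen-centered coordinates (meters)."""
    x: float
    y: float
    z: float


def eye_positions(head: Pose, ipd: float = 0.065):
    """Split a tracked head position into left/right eye positions.

    Simplification: the eyes are assumed to straddle the head position
    along the screen's horizontal axis, half an interpupillary distance
    to each side.
    """
    half = ipd / 2.0
    left = Pose(head.x - half, head.y, head.z)
    right = Pose(head.x + half, head.y, head.z)
    return left, right


# Sketch of one frame: track the head, render each eye's view, then
# read the stylus pose to drive direct interaction.
head = Pose(0.0, 0.0, 0.60)          # would come from the head tracker
left_eye, right_eye = eye_positions(head)
# render(scene, camera_at=left_eye)  # left-eye image
# render(scene, camera_at=right_eye) # right-eye image
# stylus = poll_stylus()             # position + orientation for picking
```

The render and stylus calls are left as comments because their signatures depend entirely on the host application’s engine; the point is the data flow, with the tracked head pose feeding two per-eye cameras each frame.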
The Possibilities of 3D Virtual Holography
The possible uses for virtual holography are endless. The research community could design a robotic explorer without building handcrafted models and expensive pre-prototype test constructions, which could reduce the cost of unmanned missions by a significant margin. Head tracking by the display and direct interaction through the stylus are two unique features. Using motion parallax — a depth cue that results from one’s movement relative to his or her environment — offers comfortable and natural 3D interaction with the design. Using a virtual-holographic system, a virtual camera within the software can closely examine small components, such as an engine. Alternatively, the entire rover may be lifted into the air and rotated in an unrestricted fashion, pulled towards the designer, and examined in greater detail or pushed away and seen from a wider view (see Figure on page 66).
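Motion parallax itself reduces to simple geometry: when the viewer’s head translates sideways, a nearby point shifts through a larger visual angle than a distant one. The following sketch quantifies that cue; the function name and the sample distances are assumptions for illustration only.

```python
import math


def apparent_shift(head_motion, distance):
    """Angular shift (radians) of a point at `distance` when the viewer's
    head translates laterally by `head_motion` (same units).

    Nearer points sweep through a larger angle for the same head motion,
    which is the motion-parallax depth cue exploited by head tracking.
    """
    return math.atan2(head_motion, distance)


# For a 5 cm head motion, a component 0.3 m away shifts through a much
# larger angle than one 3 m away, so the brain reads it as nearer:
near = apparent_shift(0.05, 0.3)
far = apparent_shift(0.05, 3.0)
print(near > far)  # True
```

This is why head tracking matters: updating the rendered view from the tracked head pose reproduces exactly this depth-dependent shift, and the scene reads as solid rather than painted on the screen.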
Other applications include remote medical procedures, education, architectural drafting in three dimensions, and imaging for the film and entertainment industry. By allowing the user to create three-dimensional objects in a virtual-holographic world and interact with virtual tools, tasks that once required hours of two-dimensional projections of real-world objects become greatly simplified and allow direct interaction. Recognizing where the user’s head is relative to the image, as in Infinite Z’s system, for example, allows for a natural and intuitive interface. Similarly, the use of a stylus enables ease in selecting objects, which would otherwise require a substantial amount of cognitive processing.
The key functionality that this new form of 3D visualization provides centers on the naturalness and comfort of the user experience. Imagine selecting and rotating an engine block in real space while thinking about fine-tuning the engine design itself. Less time is spent simply trying to understand what the conventional projection will look like when realized as a three-dimensional object, so users can spend more time focusing on the creative aspects of the project. While it is possible to describe a 3D virtual-holographic experience in words, one must interact with it first hand to appreciate its power and beauty.
This article was written by Dave Chavez, VP of Hardware Engineering, Infinite Z, Inc., and Andy Schaub, Senior Technical Writer, Infinite Z, Inc.