Light field technology, also called integral imaging, is an emerging imaging concept whose advantages are not widely understood, even among imaging industry professionals. Because there are several ways to capture a light field, the industry has not yet agreed on which capture technique is best, and discussion on how to best capture images is ongoing.

Intuitively, the light field can be defined as the volumetric information of a scene. Since photography was invented, image capture has meant acquiring a two-dimensional projection of a scene. A light field adds another dimension to that 2D projection: the angle of the light rays arriving at it. With information on both the light ray directions and the two-dimensional projection of the scene, it is possible, for example, to move the projection to a different focal distance, letting the user freely re-focus the image after acquisition. One can also change the point of view from which the scene was captured, among many other possibilities that will be explained in this article.

[Figure: Active method camera(s) (iPhone-like)]

Light field capture has traditionally been performed passively, using camera arrays in which each camera acquires the scene from a different angle and point of view. The challenges of a camera array include synchronizing the camera shutters and avoiding large illumination differences between the viewpoints. A camera array also means, in essence, a wide baseline and consequently a far hyperfocal distance. Finally, a large camera array makes portability difficult, and heavy computing power is required to generate the final light field image from the vast amounts of data collected by each camera in the array.
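The shift-and-add idea behind camera-array refocusing can be illustrated with a minimal sketch. All names and the synthetic data here are hypothetical, assuming integer-pixel disparities and a regularly spaced grid of views; a real pipeline would use sub-pixel interpolation.

```python
import numpy as np

def refocus_shift_and_add(views, slope):
    """Synthetic refocus from a camera-array light field.

    views[u, v] is the image from the camera at grid position (u, v).
    `slope` selects the refocus plane: each view is shifted by
    slope * (its offset from the array center) before averaging, so
    scene points at the matching depth align and stay sharp while
    points at other depths blur out.
    """
    n_u, n_v, h, w = views.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round(slope * (u - cu)))
            dx = int(round(slope * (v - cv)))
            # integer-pixel shift via np.roll (edge wrap ignored in this sketch)
            out += np.roll(np.roll(views[u, v], dy, axis=0), dx, axis=1)
    return out / (n_u * n_v)

# Tiny synthetic example: a 3x3 array of 8x8 views of one bright point
# that moves one pixel per camera step (disparity = 1).
views = np.zeros((3, 3, 8, 8))
for u in range(3):
    for v in range(3):
        views[u, v, 4 - (u - 1), 4 - (v - 1)] = 1.0

sharp = refocus_shift_and_add(views, slope=1.0)   # refocused on the point's depth
blurry = refocus_shift_and_add(views, slope=0.0)  # plain average: the point smears
print(sharp.max(), blurry.max())  # → 1.0 0.1111111111111111
```

When the slope matches the point's disparity all nine copies align and the averaged peak stays at full intensity; at the wrong slope the same energy is spread over nine positions.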

The next step in light field imaging development, which solved the portability challenge, was the invention of the plenoptic camera. Placing a microlens array between the main lens and a large digital image sensor allows a single camera to capture an array of images from different angles and viewpoints. With this optical arrangement, the sensor records two-dimensional images together with light field information. The price to pay, however, is significantly degraded spatial resolution, limited to the number of microlenses in the array.
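The resolution trade-off is easy to see in a simplified sensor model. This sketch assumes an idealized layout (not any specific camera's geometry) in which each microlens covers an N x N block of pixels and pixel (u, v) inside every block belongs to the same sub-aperture view.

```python
import numpy as np

# Simplified plenoptic sensor model (an assumption for illustration):
# each microlens covers an N x N block of sensor pixels.
N = 4            # pixels per microlens in each direction
H, W = 6, 8      # microlens grid: 6 x 8 lenslets
raw = np.arange(H * N * W * N).reshape(H * N, W * N)

def subaperture_views(raw, n):
    """Rearrange the raw plenoptic image into an n x n array of views,
    each with one pixel per microlens (hence the resolution penalty)."""
    h, w = raw.shape[0] // n, raw.shape[1] // n
    views = np.empty((n, n, h, w), dtype=raw.dtype)
    for u in range(n):
        for v in range(n):
            views[u, v] = raw[u::n, v::n]
    return views

views = subaperture_views(raw, N)
# Each of the 16 views has only 6 x 8 pixels -- the microlens count --
# even though the sensor itself has 24 x 32 pixels.
print(views.shape)  # → (4, 4, 6, 8)
```

The sensor's pixels are split between angular samples (the 4 x 4 views) and spatial samples (the 6 x 8 microlens grid), which is exactly the resolution penalty described above.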

[Figure: Array of cameras (6x6)]

This camera arrangement was the core of the Lytro camera, the first consumer-level light field camera ever released. Another plenoptic arrangement is the Shack-Hartmann optical layout, with the microlens array placed in front of the optical lens. This sensor, however, is geared more toward scientific use, such as observing distant objects in the universe with telescopes.

Whether using a camera array setup or a plenoptic camera, one can always generate a depth map from the acquired data, which allows an unlimited number of virtual viewpoints of the scene to be generated, not limited to the angles of the original capture. One can, of course, also obtain a depth map by adding a dedicated depth sensor such as structured light, ToF (Time of Flight), or LiDAR, but these bring several drawbacks, from depth-sensor-to-image-sensor alignment to high power consumption, and they offer much lower resolution than the image sensor.
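How a depth map yields new viewpoints can be sketched with simple forward warping. This is a minimal illustration, not any product's renderer; it assumes a pinhole-stereo model in which disparity is proportional to inverse depth, and it leaves disocclusion holes at zero.

```python
import numpy as np

def novel_view(image, depth, baseline):
    """Forward-warp an image to a virtual, horizontally shifted viewpoint.

    Disparity = baseline / depth (pinhole-stereo assumption), so nearer
    pixels shift more. Pixels are painted far-to-near, letting near
    pixels overwrite far ones where they collide; unfilled pixels stay 0.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    order = np.argsort(-depth, axis=None)      # far first, near last
    ys, xs = np.unravel_index(order, depth.shape)
    disparity = baseline / depth
    for y, x in zip(ys, xs):
        nx = x + int(round(disparity[y, x]))
        if 0 <= nx < w:
            out[y, nx] = image[y, x]
    return out

# Tiny example: a flat far background with one near pixel at (1, 0).
image = np.arange(16, dtype=float).reshape(4, 4)
depth = np.full((4, 4), 1000.0)
depth[1, 0] = 1.0
view = novel_view(image, depth, baseline=2.0)
# The near pixel (value 4.0) shifts two columns right to (1, 2),
# leaving a hole (0.0) at its old position.
```

The same per-pixel shift, applied with fractional baselines, is what lets a depth map stand in for viewpoints that were never physically captured.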

An important capability of the plenoptic camera setup is generating images at different focus settings. That insight is the key to the newly developed third option for capturing light field information: the focal stack approach developed by Wooptix. A main advantage of focal stack acquisition is that no microlens array is used, so the final light field image retains the full resolution of the two-dimensional image sensor. The depth map is generated from the focus information of each pixel, yielding a depth map with the same number of pixels as the image sensor.

The final quality of the depth map depends on the algorithms used to compute it from the scene. With a high-speed camera, a fast variable lens accurately synchronized to the sensor's acquisition, and enough light to raise the SNR (signal-to-noise ratio) so that each defocused image can be analyzed, the focal stack can be collected fast enough to generate true light field video at 24 fps or faster.
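The depth-from-focus step described above can be sketched as follows. This is a generic textbook approach, not Wooptix's actual algorithm: it scores each pixel in each slice with a squared-Laplacian focus measure and picks, per pixel, the slice where focus peaks, which is why the depth map matches the sensor resolution.

```python
import numpy as np

def depth_from_focus(stack):
    """Per-pixel depth index from a focal stack of shape (D, H, W).

    Focus measure: squared Laplacian response. In-focus pixels have
    strong local contrast; defocused ones are smoothed out. The depth
    map picks, for every pixel, the stack slice where the focus
    measure peaks, giving one depth value per sensor pixel.
    """
    d, h, w = stack.shape
    measure = np.zeros_like(stack)
    for i in range(d):
        img = stack[i]
        # 4-neighbor Laplacian via shifted copies (edges wrap; fine for a sketch)
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        measure[i] = lap ** 2
    return np.argmax(measure, axis=0)

# Synthetic stack: each slice is "in focus" (sharp impulse) at a different pixel.
stack = np.zeros((3, 8, 8))
for i in range(3):
    stack[i, 4, 2 * i + 1] = 1.0
depth_idx = depth_from_focus(stack)
print(depth_idx[4, 1], depth_idx[4, 3], depth_idx[4, 5])  # → 0 1 2
```

A real system would trade off the focus measure's window size, noise robustness, and the number of slices against the acquisition speed discussed above.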

[Figure: Focal-stack camera (Wooptix-like)]

To summarize, there is no perfect light field capture technique today. There are, however, several strong candidates: the camera array, the plenoptic camera, the active camera with depth sensor, and the newly developed focal-stack camera, each with its own advantages and disadvantages.

What Are the Advantages of Having Light Field Information?

Three questions are worth addressing: what can you do with light field in 2D, what can you do with it in 3D, and what can you do with the light field itself? In the end you capture only the light field, but you can then choose whether you prefer 2D, 3D, holography, or whichever format you need, always after the image or video is taken. This is perhaps the biggest advantage of capturing light field.

Acquiring light field, as explained, can colloquially be understood as “volumetric image capturing”. This volumetric capture can be used in two dimensions (2D), three dimensions (3D), and/or in volumetry or holography.

2D: The light field can be displayed on a conventional two-dimensional screen, allowing re-focusing at will, all-in-focus images, changing the point of view, or using the depth map for special effects. All of this can be done after the image or video has been acquired.

3D: If the light field is displayed on a 3D display, it is possible to enjoy glasses-free 3D (or conventional stereo) along with the same capabilities described above for the 2D screen, but in 3D, and again after the image or video has been acquired.

LF/Holo: In the end, the best experience is achieved when the light field is shown on a light field or holographic display, where all the information of the scene is properly displayed, utilizing every bit of information captured by the light field camera.

3D light field, as explained, is not the conventional 3D used in movies such as Avatar, but rather “integral 3D” imaging. With integral 3D imaging, the viewer experiences a fully natural 3D view, including multiple focus planes and multiple views, neither of which is present in current conventional 3D.

Light field capture records all the information about each light ray in the scene, allowing the viewer to focus freely on whichever plane they want to see sharp, the same way a person views natural, everyday objects. The technology offers all the rays in the scene, and the spectator freely chooses where to focus. This ability to focus freely within the image is the major difference between 3D light field imaging and conventional 3D imaging.

There has been a slew of new developments in the area of light field displays. However, these displays are still at an experimental R&D level with consumer level displays expected to arrive within the next few years. On the other hand, there are currently no good ways of capturing high-quality light field video, and we believe that active lens technology using focal stacking is a very good candidate that could reach consumer level quality within a very short timeframe.

This article was written by Javier Elizalde, CMO, Wooptix (Madrid, Spain). For more information, visit here.


Photonics & Imaging Technology Magazine

This article first appeared in the March, 2021 issue of Photonics & Imaging Technology Magazine.

