In many applications, 2D cameras are used to produce 3D images for general-purpose inspection. 3D inspection applications are numerous; they include reverse engineering, electronics inspection, food inspection, auto parts inspection, and recreational simulation.

There are several ways to use 2D cameras for 3D inspections:

• Laser triangulation using a 2D camera with a laser and dedicated hardware;

• Stereoscopy using two 2D cameras;

• Interferometry using a 2D camera and several optical devices;

• Scanners using a 2D camera;

• Dedicated 3D software using a grey-level image to obtain 3D measurements.

In this article we will concentrate on one of the most widely used techniques: laser triangulation.

Figure 1. A typical laser triangulation setup.
Laser triangulation typically uses a laser line or pattern and a 2D camera mounted at an angle to obtain vertical (height) measurements from a flat 2D image. How is this possible?

In Figure 1, the basic principle of optical triangulation is illustrated: a 2D image of a laser line is captured by a camera mounted at an angle. This technique gives a 3D representation of an object on a 2D image. Figure 2 shows how the camera visualizes the laser line from its two-dimensional perspective.
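The geometry behind this can be sketched with a simplified model. Under an idealized thin-lens assumption (purely illustrative; real systems need the full calibration discussed later, and all numbers here are hypothetical), a height change shifts the imaged laser line by an amount proportional to the sine of the camera's viewing angle:

```python
import math

def height_from_shift(pixel_shift, pixel_size_mm, magnification, angle_deg):
    """Estimate object height from the laser line's shift in the image.

    Simplified thin-lens model (an illustrative assumption): the camera
    views the laser plane at angle_deg, so a height h displaces the
    imaged line by h * magnification * sin(angle) on the sensor.
    """
    shift_mm = pixel_shift * pixel_size_mm  # shift measured on the sensor
    return shift_mm / (magnification * math.sin(math.radians(angle_deg)))

# Hypothetical example: a 10-pixel shift, 0.005 mm pixels,
# 0.1x magnification, camera at 30 degrees to the laser plane
h = height_from_shift(10, 0.005, 0.1, 30.0)   # height in mm
```

This is only the textbook relation; as the article explains below, lens distortion, rotation, and mounting geometry make a measured calibration preferable to computing heights from such a formula.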

What is 3D Calibration?

What is 3D calibration and how do we obtain real 3D measurements from a 2D image?

First, it is worth noting that a good calibration will not only produce accurate 3D measurements but also correct or compensate for many optical problems. Let’s take a quick look at some of them.

Figure 2. How the camera visualizes the laser line.
The main optical problems in laser triangulation systems are lens issues, camera rotation, and baseline correction.

Lens issues:

• Vignetting: The lens must be large enough not to vignette the image; that is, it should not block the edges of the image. Even if it is large enough, you may still see some intensity falloff (also called vignetting) at the edges of the image. The laser line intensity also decreases towards its ends, so the lens and laser combination must be tested to ensure the image intensity remains sufficient after vignetting and laser line falloff.

• Optical distortion: All lenses have some optical distortion, which can make a straight line appear curved in the image. Optical distortion generally increases as focal length decreases. For example, with an 8 mm lens, horizontal and vertical grid lines appear to bow outward near the center of the camera’s image and crowd together near its edges, looking something like the staves and bindings of an old-fashioned wooden barrel; hence the name “barrel distortion.” C-mount lenses with focal lengths under 25 mm generally have enough barrel distortion to cause depth measurement problems.

• Camera rotation: Even with good mechanical design, the camera’s sensor will usually be slightly rotated relative to the laser line. For example, a horizontal laser line will appear tilted by a fraction of a degree in the image. This rotation comes from not being able to exactly align the laser with the camera, nor the sensor within the camera body. The usual way to deal with rotation is to take an image of the “baseline” (the conveyor belt or whatever the 3D objects will rest on), compute the rotation of that line, and factor it out of the height measurements. Camera rotation correction is therefore part of most 3D calibrations.

• Baseline correction: Suppose we have a conveyor belt with a 3D object on it. Every 3D calibration must measure the baseline and subtract it (after accounting for laser-to-camera rotation) from the height measurements. Some baselines, such as conveyor belts, tend to vibrate slightly, which changes the apparent height. The cure is to measure the baseline continuously: make the laser line view wider than the 3D object, then subtract the baseline height changes to remove the vibrations.
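The rotation and baseline corrections above can be sketched in a few lines. This is a minimal illustration, assuming the profile arrives as one height value per image column (the function and data names are hypothetical): a straight line fitted to the visible baseline captures both the small rotation (its slope) and the baseline height (its offset), and subtracting it leaves the object's true heights.

```python
import numpy as np

def correct_profile(object_profile, baseline_profile):
    """Remove camera rotation and baseline height from a raw profile.

    A straight line fitted to the baseline captures the small
    laser-to-camera rotation (slope) and the baseline height (offset);
    subtracting that line yields heights relative to the baseline.
    """
    cols = np.arange(len(baseline_profile))
    slope, offset = np.polyfit(cols, baseline_profile, 1)  # fit the tilted baseline
    return object_profile - (slope * cols + offset)

# Synthetic example: a slightly tilted belt with a 5-unit-high object
cols = np.arange(640)
baseline = 100.0 + 0.02 * cols              # fraction-of-a-degree tilt
obj = baseline.copy()
obj[300:340] += 5.0                         # object sitting on the belt
heights = correct_profile(obj, baseline)    # ~5.0 on the object, ~0 elsewhere
```

Because the baseline is fitted from the same field of view, re-running this per frame also removes the slow apparent-height changes caused by belt vibration, as the article recommends.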

If we knew the exact distances and angles between the laser and the “baseline” (whatever the 3D object is resting on), and between the laser and the camera’s focal point, then calibration could be done by trigonometry. However, the camera’s focal point is difficult to measure (it can be internal to the lens or behind it), and many other factors confound the calibration. It is therefore usually easier, and better, to calibrate using calibrated standard heights rather than trying to measure the imaging geometry and compute out the confounding factors.

How Is The Actual Calibration Done?

Figure 3. Example of a very small rotation but no curving of the laser line due to optical distortion.
The camera will usually have dedicated 3D hardware to extract the laser line profile from the entire 2D image and produce what is called a 3D profile. The 3D profile is a collection of points representing vertical (height) positions.
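The extraction step can be sketched in software. One common approach (an assumption here; dedicated hardware may use other peak detectors) is a per-column centre of gravity of the laser intensity, which yields the sub-pixel line position:

```python
import numpy as np

def extract_profile(image, threshold=50):
    """Extract a sub-pixel laser-line profile from a 2D image.

    For each column, the centre of gravity of the pixel intensities
    above a threshold gives the line's vertical position. Columns with
    no laser light return NaN.
    """
    img = image.astype(float)
    img[img < threshold] = 0.0                    # suppress background
    rows = np.arange(img.shape[0])[:, None]
    weight = img.sum(axis=0)                      # total intensity per column
    with np.errstate(invalid="ignore", divide="ignore"):
        profile = (img * rows).sum(axis=0) / weight
    profile[weight == 0] = np.nan                 # no laser found in column
    return profile

# Synthetic VGA frame with a bright 3-pixel-wide line centred on row 100
img = np.zeros((480, 640))
img[99:102, :] = [[100], [200], [100]]
prof = extract_profile(img)        # ~100.0 in every column
```

The weighted average is what gives the "much higher" vertical resolution mentioned below: the line position lands between pixel rows, not on them.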

This data comes out of the camera as raw, uncalibrated height values. It must then be mathematically transformed into real measurements, such as inches or millimeters, to produce real 3D (height) values.

The most common way of transforming the raw data from a 2D camera is a calibration table or matrix. The matrix can take the form of a Look Up Table (LUT) or a mathematical formula; the choice depends on the precision required.

In order to obtain real 3D measurements, the calibration matrix must be filled with numbers that transform the raw profile points into real measurements. There are several ways to fill this matrix. A good approach is to place a calibrated object in front of the camera, take some images, and derive the factors that convert the raw data into real 3D measurements; in the simplest case this is a single mathematical multiplication.
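That simplest case can be sketched directly (all numbers here are hypothetical, not from a real system): imaging one object of certified height yields a single scale factor that converts raw units to millimeters.

```python
# Simplest calibration: one known-height target yields one scale factor.
# All values are hypothetical examples.
known_height_mm = 10.0               # certified gauge block height
raw_reading = 1365.0                 # raw camera value measured on the block
scale = known_height_mm / raw_reading  # mm per raw unit

def raw_to_mm(raw):
    """Convert a raw camera height value to millimeters."""
    return raw * scale

measured = raw_to_mm(2730.0)         # convert another raw reading
```

A single factor like this assumes the system is perfectly linear; the LUT and matrix approaches described next exist precisely because real systems are not.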

Consider, for example, a 3D camera with 12 bits of vertical resolution and 480 lines. We want to calibrate a 300 mm field of view over the 12-bit vertical span (0 to 4095).

The laser line will be cut into 480 lines (using a VGA sensor). The vertical resolution is much higher because sub-pixel interpolation can be applied. By moving a flat target to several known positions in front of the camera, we would obtain the transfer graph shown in Figure 4; the raw data is generated by discrete movements of the target at known positions. (Note: the graph shown here is only an example and does not represent all 3D systems.) The multiplying factors obtained from these curves are then applied to the raw data to output the real measurements. This is known as an LUT (Look Up Table).
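Building such an LUT from a handful of flat-target positions can be sketched as follows (the target heights and raw readings are hypothetical): linear interpolation between the measured points fills in one calibrated height for every possible raw value.

```python
import numpy as np

# Known target heights (mm) and the raw values the camera reported there.
# Hypothetical, slightly nonlinear readings over the 12-bit span.
target_mm = np.array([0.0, 75.0, 150.0, 225.0, 300.0])
raw_at_target = np.array([0.0, 900.0, 1900.0, 3000.0, 4095.0])

# Fill a 4096-entry LUT: one calibrated height per possible raw value
lut = np.interp(np.arange(4096), raw_at_target, target_mm)

height_mm = lut[900]   # convert a raw reading of 900 to millimeters
```

At run time the conversion is then a single array lookup per profile point, which is why LUTs are the common choice for high-speed systems.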

A matrix table will produce better results than a linear table because it takes into account the lateral distortions and angles (camera rotation), optical distortions (lens issues), and geometrical mounting distortions (baseline, etc.). When a matrix is used, interpolating between known values is a good way of filling it while avoiding many calibration steps. The other possibility is to use a mathematical formula to fill the matrix, but this method is more complicated and can produce bad results if the mathematical model is not good enough.
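One way to picture such a matrix (an illustrative interpretation, with all values hypothetical) is as one LUT per image column, so that lens and mounting distortions are corrected column by column; the sparse measured points are interpolated to fill each column, as the paragraph above suggests.

```python
import numpy as np

def build_calibration_matrix(raw_samples, heights_mm, n_levels=4096):
    """Fill a per-column calibration matrix by interpolation.

    raw_samples: shape (n_targets, n_cols) -- the raw value each column
    reported when a flat target sat at each known height.
    heights_mm: the known target heights, length n_targets.
    Returns an (n_cols, n_levels) matrix mapping raw value -> mm per column.
    """
    n_cols = raw_samples.shape[1]
    levels = np.arange(n_levels)
    matrix = np.empty((n_cols, n_levels))
    for c in range(n_cols):
        # Interpolate this column's few measured points over all raw levels
        matrix[c] = np.interp(levels, raw_samples[:, c], heights_mm)
    return matrix

# Hypothetical data: 3 target heights; distortion makes columns differ slightly
heights = np.array([0.0, 150.0, 300.0])
cols = np.arange(640)
raw = np.array([0.0 + 0.1 * cols, 2000.0 + 0.1 * cols, 4000.0 + 0.1 * cols])
cal = build_calibration_matrix(raw, heights)
# cal[c, raw_value] now gives the calibrated height for column c
```

Because each column gets its own transfer curve, lateral effects that a single global LUT would average away are corrected individually.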

Why Use 2D Cameras For 3D Inspection?

Figure 4. 3D calibration transfer curves on a 2D sensor.
Figure 4. 3D calibration transfer curves on a 2D sensor.
Using 2D cameras to produce a 3D vision system suits a wide range of applications because of the many options the 2D camera world has to offer. By combining dedicated 3D hardware with a 2D camera, real 3D vision systems are obtained at low cost and remain easy to use in any factory. In other words, a purpose-built 3D camera would be a custom device and might require specialized support. Using a standard 2D camera as the main component of the system gives the user more options in terms of resolution, speed, and cost.

Additionally, laser triangulation is easy to implement at a minimal cost. Other techniques are more technically complicated and simply cost more. Of course, more sophisticated techniques may yield better results for certain applications, but the difference may not be significant enough to justify the extra cost and hassle.

This article was written by Raymond Boridy, Product Manager at Teledyne DALSA (Waterloo, ON, Canada).


This article first appeared in the May 2013 issue of Photonics Tech Briefs Magazine.
