Over one billion cell phones with cameras are sold every year, and that number has grown about 15 percent annually for the past seven years. Approximately 80 percent of cell phones now have embedded cameras, and about 20 percent of new cell phones carry two: one on the back for taking photographs and one on the front for videoconferencing.

A cell phone camera lens being assembled in a Camera Module Align, Assembly & Test (CMAT) station.

Early adopters, especially in Europe, are using their cell phones as replacements for point-and-shoot cameras, but for this shift to take hold in the mainstream, both bandwidth and picture quality have to improve. While the larger bandwidth of 3G networks enables the transfer of higher-density image files, the real challenge is to manufacture cameras that can take advantage of 3G bandwidth and keep up with consumer demand for quality.

Image Quality

Figure 1. Diminishing impact of pixel density on camera phone image quality.

Until recently, the limiting factor in camera quality was the number of light-sensitive cells on an image sensor, otherwise known as the pixel count. In current-generation cell phones, pixel densities are in the megapixels; 2, 4, even 6 megapixel camera phones are not uncommon. While high megapixel counts work well in the hands of a salesman, the truth is that increases in pixel density no longer produce the significant improvements in picture quality they once did (see Figure 1). The early jumps in pixel count brought large gains, but the returns from each further increase are diminishing.
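One way to see why the gains taper off: perceived sharpness tracks linear resolution, which grows only as the square root of the total pixel count. The short sketch below is illustrative arithmetic (the megapixel values are examples, not data from the figure):

```python
import math

# Perceived detail tracks linear resolution (pixels across the frame),
# which grows only as the square root of the total pixel count.
# Illustrative arithmetic, not data taken from Figure 1.

for mp in (0.3, 1, 2, 4, 6):             # VGA (0.3 MP) up to 6 MP
    linear = math.sqrt(mp / 0.3)         # linear detail relative to VGA
    print(f"{mp:>4} MP -> {linear:4.1f}x the linear detail of VGA")

# Going from VGA to 2 MP gives ~2.6x the linear detail, but tripling
# again to 6 MP adds only ~1.7x more -- diminishing returns.
```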

Camera Module Components

The reason for this is that camera modules are more than just sensors; they are systems built from a number of components. The main components of a camera system are the CMOS sensor, the lens system, the control logic and software, and, most importantly, the alignment process used to assemble them. The picture quality of the camera system is only as good as its weakest part. That used to be the pixel count of the sensor, but with CMOS sensors now pushing 6 megapixels and the control logic and software advancing with software zoom and other onboard algorithms, that is no longer the case.

Figure 2. Critical elements for proper image focus.

As pixel density and total pixel count have increased, precisely focusing the image onto the sensor plane becomes all the more critical. This requires high-quality lenses, small lens size (to increase depth of field), and a fairly long focal length (to minimize the effect of lens or assembly variations). It also requires a high-precision assembly process (Figure 2).
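To get a feel for the tolerances involved, consider a rough depth-of-focus estimate. The sketch below uses illustrative numbers (the pixel pitch and f-number are assumptions, not figures from the article) and the standard approximation that the acceptable blur circle equals one pixel pitch:

```python
# Back-of-the-envelope depth of focus at the sensor plane.
# Assumption (not from the article): the acceptable blur circle c
# equals one pixel pitch, giving a total depth of focus of ~2*N*c
# for an f/N lens.

def depth_of_focus_um(pixel_pitch_um: float, f_number: float) -> float:
    """Total depth of focus (um) at the image plane: ~2 * N * c."""
    return 2.0 * f_number * pixel_pitch_um

# Illustrative values: a 1.75 um pixel behind an f/2.8 lens.
total = depth_of_focus_um(pixel_pitch_um=1.75, f_number=2.8)
print(f"depth of focus: ~{total:.1f} um total, i.e. +/-{total / 2:.1f} um")
# -> ~9.8 um total: the lens must sit within about +/-5 um of best
#    focus, far tighter than ordinary mechanical assembly tolerances.
```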

New Factors to Consider

Cell phone camera modules must also satisfy other important attributes that work against picture quality: low cost, low-light ability (ISO speed), low current consumption, and small size.

Low-cost lenses exhibit significant variation in their optical characteristics within a batch. These variations cannot be determined from the physical appearance of a lens; they show up only in the optical characteristics of each individual lens. The traditional assembly process of screwing the lens assembly into a housing to a set focal length does not account for this variation. The resulting misalignment did not matter much when VGA sensors were predominant, because any misalignment of the lens was lost in the low image quality caused by the low pixel density. That is no longer true with multi-megapixel sensors.
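A small simulation makes the argument concrete. In the hypothetical sketch below (the batch spread and focus-step values are invented for illustration), every lens screwed to one nominal stop keeps its full deviation as defocus error, while focusing each unit individually leaves only a sub-micron residual:

```python
import random

# Hypothetical illustration of why a fixed focus setting fails for a
# batch of low-cost lenses. The spread (sigma) and focus-step values
# are invented for the example, not measured data.

random.seed(0)
SIGMA_UM = 15.0        # lens-to-lens best-focus spread (assumed)
FOCUS_STEP_UM = 1.0    # resolution of a per-unit focus adjustment

# Deviation of each lens's best focus position from nominal.
deviations = [random.gauss(0.0, SIGMA_UM) for _ in range(10_000)]

# Fixed assembly: every lens is screwed to the same nominal stop,
# so the full deviation remains as defocus error.
fixed_err = [abs(d) for d in deviations]

# Per-lens focusing: each unit is adjusted to the nearest focus step,
# so only the rounding residual remains.
tuned_err = [abs(d - round(d / FOCUS_STEP_UM) * FOCUS_STEP_UM)
             for d in deviations]

print(f"fixed focus   : mean defocus {sum(fixed_err) / len(fixed_err):.1f} um")
print(f"per-lens focus: mean defocus {sum(tuned_err) / len(tuned_err):.2f} um")
```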

This challenge will only be exacerbated as new miniaturized cameras are developed. While the CMOS sensor has traditionally been silicon wafer-based, breakthroughs in lens manufacturing technologies have enabled a new miniaturized camera, the wafer-level camera, to be developed. Wafer-level lenses are manufactured in large numbers in a wafer format. Wafers of different lenses are then stacked together to produce a complete lens, which can be mounted directly onto the CMOS sensor. If the CMOS sensor also contains camera logic as well as the light-sensitive pixels, the result is a multi-megapixel camera-on-a-chip within a small cube of just a few millimeters.

Wafer-level cameras have significant advantages over traditional camera modules, which are built around barrel lenses. They are significantly smaller, which allows for a thinner cell phone; they can be more cost-effective, at around $1 to $3 for a VGA to 2 megapixel module; and they can be assembled as standard surface-mount components rather than manually assembled using ribbons and connectors, adding further cost savings.

Despite all these advantages, variations in the manufacturing process across the lens wafer produce differences from lens to lens, and aligning and stacking these lenses on top of each other can result in significant centration misalignment and stack-up tolerances. Because these lenses are mounted directly onto the CMOS chip, the focal length is very short and therefore extremely sensitive to minor misalignments in assembly. It often requires an alignment of the lens to the chip that is significantly tighter than the tolerance of the lens itself.
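The stack-up problem can be sized with a standard worst-case versus root-sum-square (RSS) tolerance calculation. The layer tolerances in the sketch below are hypothetical, meant only to show the order of magnitude:

```python
import math

# Illustrative stack-up estimate for a stacked wafer-level lens.
# Each lens wafer and bond line contributes its own thickness
# tolerance; the values below are hypothetical, chosen only to show
# how quickly the stack exceeds a micron-scale focus budget.

layer_tol_um = [2.0, 1.5, 2.0, 1.5, 3.0]   # wafers and bond lines (assumed)

worst_case = sum(layer_tol_um)                        # all errors add up
rss = math.sqrt(sum(t * t for t in layer_tol_um))     # statistical sum

print(f"worst-case stack-up: {worst_case:.1f} um")
print(f"RSS stack-up       : {rss:.1f} um")
# Even the optimistic RSS figure (~4.6 um) is comparable to the whole
# depth of focus of a short-focal-length wafer-level camera.
```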

These developments mean that alignment to within microns of the focal plane is critical. Yet any high-precision assembly process that relies on the physical dimensions of the sensor or lens produces poorly performing cameras, and the problem grows as the camera system shrinks. While wafer-level lenses exhibit good optical qualities, their manufacturing process produces even greater variation from lens to lens. The sensor has high pixel density, the form factor is very small, and the components are cost-effective; the problem is that the resulting misalignment produces very poor picture quality. A wider lens (for better low-light performance) only accentuates the problem.

Improvements in Camera Module Assembly

Figure 3. Impact of active versus passive alignment on picture quality.

At this point, the assembly process is the gating factor for improving the camera image. The alignment and assembly of these wafer-level cameras needs to be optimized for every single sensor-lens pair, depending on its particular optical characteristics, to within about a micron of its optimal focal length. And the sensor is not a point; it is a plane. So the assembly process must align the lens in 5 degrees of freedom (3 linear and 2 rotational axes) to the optical characteristics of the lens, not its physical dimensions. This is done either by measuring the path of a laser reflected off the sensor and through the lens, or by powering up the sensor and optimizing the image during the assembly process. The latter is called active camera module alignment. Once the optimal position is found, the lens must be affixed in that position relative to the sensor.
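The alignment loop itself can be pictured as a search that maximizes a sharpness score computed from the live sensor image. The sketch below is a generic coordinate-ascent outline with a simulated sensor; the axis names, step sizes, and Gaussian sharpness model are all assumptions for illustration, not the actual algorithm used in a CMAT station:

```python
import math
import random

# Generic sketch of active camera module alignment as a search over
# the five stage axes, maximizing a live image-sharpness score. The
# sensor and stage are simulated here for self-containment.

AXES = ["x", "y", "z", "tip", "tilt"]   # 3 linear + 2 rotational axes

random.seed(1)
_optimum = {a: random.uniform(-20.0, 20.0) for a in AXES}  # hidden best pose
_pose = {a: 0.0 for a in AXES}                             # stage at nominal

def sharpness() -> float:
    """Simulated focus metric: in a real station this would be an
    image-contrast or MTF score computed from the powered-up sensor."""
    err2 = sum((_pose[a] - _optimum[a]) ** 2 for a in AXES)
    return math.exp(-err2 / 500.0)

def active_align(step: float = 8.0, min_step: float = 0.5) -> float:
    """Coordinate ascent: nudge each axis both ways, keep moves that
    improve the score, and halve the step when a pass yields none."""
    best = sharpness()
    while step >= min_step:
        improved = False
        for axis in AXES:
            for direction in (+1.0, -1.0):
                _pose[axis] += direction * step
                score = sharpness()
                if score > best:
                    best, improved = score, True
                else:
                    _pose[axis] -= direction * step   # undo the move
        if not improved:
            step /= 2.0
    return best

print(f"sharpness before alignment: {sharpness():.3f}")
print(f"sharpness after alignment : {active_align():.3f}")
# Once the best pose is found, the lens is affixed in that position
# relative to the sensor, as the article describes.
```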

This method of alignment gives an enormous boost to the performance of the camera module (Figure 3), because the weakest link in the camera system has now been so significantly improved that the performance enhancements of greater pixel density and lens innovations can start to shine through.

This article was written by Justin Roe, Chief Operating Officer, Automation Engineering Incorporated (Wilmington, MA). For more information, visit http://info.hotims.com/28049-201.




This article first appeared in the January 2010 issue of Photonics Tech Briefs Magazine (Vol. 34 No. 1).
