Over one billion cell phones with cameras are sold every year, and this number has been increasing annually at a rate of about 15 percent for the past 7 years. Approximately 80 percent of cell phones now have embedded cameras, with about 20 percent of new cell phones having two cameras – one on the back for taking photographs and one on the front for videoconferencing.
Early adopters, especially in Europe, are using their cell phones as replacements for point-and-shoot cameras, but in order for this to really take hold in the mainstream, both bandwidth and picture quality have to improve. While the larger bandwidth of 3G networks enables the transfer of higher-resolution image files, the real challenge is to manufacture cameras that can take advantage of 3G bandwidth and keep up with consumer demand for quality.
Until recently, the limiting factor in camera quality was the number of light-sensitive cells on the image sensor — otherwise known as the pixel count. In the current generation of cell phones, pixel counts are in the megapixels — 2, 4, even 6 megapixel camera phones are not uncommon. High megapixel counts work well in the hands of a salesman, but the truth is that increases in pixel density no longer produce the significant improvements in picture quality they once did (see Figure 1): the early increases in pixel count brought substantial gains, but the benefits of pushing the count higher are diminishing.
Camera Module Components
The reason for this is that these camera modules are more than just sensors – they are systems, made up of a number of components assembled together. The main components of a camera system are the CMOS sensor, the lens system, the control logic and software, and, most importantly, the alignment process used to assemble them. The picture quality of the camera system is only as good as its weakest part. That used to be the pixel count of the sensor, but with CMOS sensors now pushing 6 megapixels, and with control logic and software advancing to include software zoom and other sophisticated onboard algorithms, that is no longer the case.
As pixel density and total pixel count increase, precisely focusing the image onto the sensor plane becomes all the more critical. This requires high-quality lenses, a small lens size (to increase depth of field), and a fairly long focal length (to minimize the effect of lens or assembly variations). It also requires a high-precision assembly process (Figure 2).
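The depth-of-field trade-off can be made concrete with the standard thin-lens approximation (hyperfocal distance H = f²/(N·c) + f, with near and far limits of acceptable focus at roughly H·s/(H + s) and H·s/(H − s) for a subject at distance s). The sketch below uses hypothetical values — a 4 mm focal length, an f/2.8 aperture, and a 0.002 mm circle of confusion — chosen only for illustration; none of these figures come from the article or from any specific camera module.

```python
def depth_of_field(f_mm, n_stop, coc_mm, subject_mm):
    """Approximate near/far limits of acceptable focus (thin-lens model).

    f_mm       -- focal length in mm
    n_stop     -- aperture f-number (larger means a smaller aperture)
    coc_mm     -- circle of confusion in mm (sharpness criterion)
    subject_mm -- focus distance in mm
    """
    # Hyperfocal distance: focusing here keeps everything from
    # roughly h/2 to infinity acceptably sharp.
    h = f_mm**2 / (n_stop * coc_mm) + f_mm
    near = h * subject_mm / (h + subject_mm)
    far = h * subject_mm / (h - subject_mm) if h > subject_mm else float("inf")
    return near, far

# Hypothetical phone-camera values: subject at 1 m.
near, far = depth_of_field(4.0, 2.8, 0.002, 1000.0)
print(f"in focus from {near:.0f} mm to {far:.0f} mm")
```

Running the same function with a doubled focal length (or a smaller f-number, i.e. a larger aperture) shrinks the in-focus range, which is why small lenses and tight assembly tolerances matter so much at high pixel densities.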