A wide range of commercial applications use cameras with a zooming mechanism. Perhaps the most ubiquitous is the camera phone. Camera phones may have an optical zoom, a digital zoom, or both. What's the difference? An optical zoom actually changes the effective focal length of the camera lens so that a magnified image is captured by the image sensor (CCD or CMOS). Because the magnified image still fills the entire sensor, all of the pixels are used. Optical zoom can therefore be considered a true zoom that improves the quality of the pictures captured.

Figure 1. Typical zooming mechanism in a camera phone using an AEDR-8400 encoder.
Digital zoom, on the other hand, is different. A software algorithm, rather than physical lens movement, magnifies the image, and the magnification involves only a portion of the captured image. This is known as interpolation: the algorithm synthesizes additional pixel data in order to enlarge the selected image portion. The image appears magnified, but only part of it comes from real sensor data; the rest comes from the interpolation output.
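The crop-and-interpolate idea behind digital zoom can be sketched in a few lines. This is an illustrative sketch only, not a phone's actual firmware algorithm; it uses nearest-neighbor interpolation for simplicity, whereas real camera software typically uses bilinear or bicubic filtering.

```python
def digital_zoom(image, zoom):
    """Crop the center 1/zoom of the image, then upscale the crop back to
    the original size using nearest-neighbor interpolation (illustrative)."""
    h, w = len(image), len(image[0])
    ch, cw = int(h / zoom), int(w / zoom)        # cropped dimensions
    top, left = (h - ch) // 2, (w - cw) // 2     # center the crop
    crop = [row[left:left + cw] for row in image[top:top + ch]]
    # Upscale: each output pixel maps back to its nearest source pixel,
    # so every output value is copied (not measured) sensor data.
    return [[crop[int(y * ch / h)][int(x * cw / w)] for x in range(w)]
            for y in range(h)]

# A 4x4 test pattern zoomed 2x: only the center 2x2 block of real
# data survives; every output pixel is a duplicate of one of those four.
img = [[10 * r + c for c in range(4)] for r in range(4)]
print(digital_zoom(img, 2))
```

Note that at 2X zoom only 1/4 of the sensor data is retained; at zoom factor z, the fraction of real data drops to 1/z².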

Figure 2. The AEDR-8400 encoder for the zooming mechanism in a camera phone.
It is worth noting that the higher the digital zoom, the smaller the portion of real image data that is used. Much of the information originally captured on the image sensor is discarded, and more interpolated data is incorporated into the resulting image.

Consequently, optical zoom determines the true zooming power of a camera phone, since it loses no image data. Accurate lens-positioning control in optical zoom is crucial to ensuring the quality of an enlarged image. Figure 1 illustrates a typical zooming mechanism in a camera module inside a camera phone. Lenses are aligned so that an image can be focused onto the image sensor (CMOS or CCD). The zooming mechanism involves synchronized movement between two or more lenses. Varying the distance between the lenses changes the effective focal length of the camera lens, so a magnified image is captured by the CCD or CMOS image sensor.
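The dependence of focal length on lens separation can be illustrated with the standard thin-lens combination formula, 1/f = 1/f₁ + 1/f₂ − d/(f₁f₂). The focal lengths below are hypothetical values chosen for illustration, not specifications of any actual camera module.

```python
def effective_focal_length(f1, f2, d):
    """Combined focal length of two thin lenses separated by distance d,
    from the standard thin-lens formula: 1/f = 1/f1 + 1/f2 - d/(f1*f2)."""
    return 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))

# Hypothetical focal lengths in mm: increasing the separation d
# lengthens the effective focal length, i.e. zooms in.
for d in (2.0, 4.0, 6.0):
    print(f"d = {d} mm -> f_eff = {effective_focal_length(5.0, 8.0, d):.2f} mm")
```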

To simplify wiring, the encoder is mounted on the camera module shell and remains in a fixed position. The moving part is the codestrip, which translates the lens' linear movement. The window-and-bar image reflected back to the encoder provides all the information needed for prompt and accurate lens positioning. In a conventional zooming mechanism, a combination of mechanical cams and gearing is the common approach to lens position control. However, such an approach suffers unavoidable wear, and the lens-positioning accuracy degrades over time, directly impacting the quality of zoomed images.

An AEDR-8400 encoder from Avago Technologies can help resolve these issues. Encoder feedback provides the information needed for real-time calibration whenever backlash occurs in the gears or mechanical cams, helping to ensure precise and accurate lens positioning. Furthermore, some customized camera module designs make it possible to remove the mechanical cam entirely (Figure 2).

Incorporating the AEDR-8400 encoder into a piezo-actuator camera module, for example, can essentially eliminate the use of mechanical cams. And, because there is no mechanical cam involvement, there is no fixed zooming position and the new camera module system can now have a continuous zooming function (Figure 3).

In terms of power consumption, piezo-actuator systems tend to consume less power compared to voice coil and servo solutions. Also, a piezo-actuator solution could help keep the noise and vibration level to a minimum, which a stepper motor or voice coil solution cannot achieve.

Encoder Operating Principle

Figure 3. Zooming mechanism with encoder feedback.
The miniature incremental encoder, the AEDR-8400, comes in a surface-mount leadless package measuring 3.00mm x 3.28mm x 1.26mm, making it the smallest optical encoder with digital outputs. It incorporates both an LED light source and a photodetector IC in a single SO-6 (Small Outline, 6-pin) package and employs reflective technology to sense rotary or linear motion. The small size and reflective technology allow the encoder to be used in a wide range of commercial applications, particularly where space and weight are primary concerns, such as the zooming mechanism in a camera phone. The encoder offers 254 lines per inch (LPI) resolution, equivalent to 10 lines per mm (LPmm), with two-channel digital outputs, and operates over a temperature range of -20°C to 85°C. A critical requirement for the camera module of a camera phone is operation at a low voltage level; with a typical operating voltage of 2.8V, the AEDR-8400 encoder comfortably suits this application.

Figure 4. Optical arrangement of a reflective encoder.
Figure 4 shows the optical arrangement of an AEDR-8400 encoder used with a reflective codestrip, where the lens focuses the light from the LED onto the window and bar of the codestrip. The reflected images of the window and bar are focused onto the photodiodes. As the codestrip moves, an alternating pattern of light and shadow, cast by the window and bar respectively, falls upon the photodiodes. The detector IC converts this pattern into digital TTL-compatible outputs representing the codestrip's linear motion and hence the lens' movement. An important parameter is resolution, defined as the density of window/bar pairs per unit distance, typically expressed in lines per inch (LPI) or lines per mm (LPmm). Higher resolution means "finer" control of the linear motion.
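The arithmetic connecting resolution, decoding factor, and travel distance is straightforward; a small sketch makes it concrete. The 254 LPI figure is the encoder's stated resolution, while the count values below are made-up examples.

```python
MM_PER_INCH = 25.4

def lpi_to_lpmm(lpi):
    """Convert a resolution in lines per inch to lines per mm."""
    return lpi / MM_PER_INCH

def distance_mm(counts, lpi, decode_factor=1):
    """Linear travel implied by an edge count, given the codestrip
    resolution in LPI and the quadrature decoding factor (1, 2, or 4)."""
    return counts / (lpi_to_lpmm(lpi) * decode_factor)

print(lpi_to_lpmm(254))          # 254 LPI is 10 lines per mm
print(distance_mm(40, 254, 4))   # 40 counts at 4X decoding = 1.0 mm of travel
```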

Figure 5. Optical alignment of emitter / detector with respect to window / bar, as viewed from top.
The AEDR-8400 encoder is designed with its LED and detector IC placed parallel to the window/bar orientation, which makes the encoder robust against radial play. This concept is illustrated in Figure 5.

Figure 6. Quadrature characteristics of channel A and B, using the encoder outputs.
The overall camera module can be made smaller than with a stepper motor or voice coil solution. The motor size is comparable to that of the piezo-actuator; however, removing the mechanical cams and gearing allows the overall camera module dimensions to be reduced further to meet existing market demands. The AEDR-8400 encoder provides precise positioning control between the two lenses, resulting in better image quality. In addition, lens movement can be synchronized quickly and accurately.

Figure 7. Phase lead and lag between channel A and B indicates direction of rotation.
The encoder outputs, namely Channel A and Channel B, are characterized by their quadrature relationship. As shown in Figure 6, there is a phase shift of 90 electrical degrees between the channels. In addition, the channels are also characterized by their four states (i.e., State 1 to State 4), each spanning a nominal 90 electrical degrees. Information about linear motion, such as movement speed and distance traveled, can be derived from the parameters of the output such as pulse period and number of pulses. Meanwhile, the direction of linear movement is determined by the phase relationship between the two outputs.

When the codestrip moves in one direction, Channel A leads Channel B by 90 electrical degrees; when it moves in the other direction, Channel B leads Channel A by the same amount. This concept is illustrated in Figure 7. Resolution higher than that of the codestrip is achievable through quadrature decoding of the encoder outputs, at several levels. Counting every rising edge of one channel (e.g., Channel A) is called 1X decoding. Counting every rising and falling edge of one channel doubles the codestrip resolution; this is called 2X decoding. When every transition of both Channel A and Channel B is counted (i.e., every logic state), 4X decoding is achieved.
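The 4X decoding scheme described above can be sketched as a small state machine. This is a software illustration of the principle; in a real camera module the decoding would typically be done by a hardware counter or microcontroller peripheral, and the sample sequence below is invented for demonstration.

```python
# Gray-code sequence of (A, B) logic states for one direction of travel;
# each adjacent pair differs in exactly one channel.
QUAD_SEQ = [(0, 0), (1, 0), (1, 1), (0, 1)]

def decode_4x(samples):
    """4X quadrature decoding: count every A/B transition, +1 when the
    state advances along QUAD_SEQ (A leads B), -1 when it steps back
    (B leads A). A step of 2 would be an invalid double transition."""
    position = 0
    prev = QUAD_SEQ.index(samples[0])
    for ab in samples[1:]:
        cur = QUAD_SEQ.index(ab)
        step = (cur - prev) % 4
        if step == 1:
            position += 1      # forward transition
        elif step == 3:
            position -= 1      # reverse transition
        prev = cur             # step == 0: no change, ignore
    return position

# One full quadrature cycle forward (4 counts), then one state in reverse.
samples = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (0, 1)]
print(decode_4x(samples))  # prints 3
```

Because all four states per line are counted, this yields four position counts per window/bar pair, the 4X resolution gain the article describes.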

The Codestrip

The codestrip surface must be reflective and specular (mirror-like) so that the image of the pattern is reflected back onto the photodiodes of the AEDR-8400 encoder. Potential materials include metal and reflective film. One way to determine whether a codestrip will work with the reflective optical encoder is to measure it with a scatterometer.

Reflective surfaces with a specular reflectance of 60 percent or higher, as measured by the scatterometer, are compatible with the reflective encoder. The non-reflective areas should have a reflectance of less than 10 percent.

When testing for specular reflectance, test the reflective and non-reflective surfaces separately. Do not test the patterned surface, since that yields only an average reflectance across the pattern.
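The acceptance criteria above reduce to a simple check. The function below is a sketch of that screening logic using the article's thresholds; the measured values in the examples are hypothetical.

```python
def codestrip_compatible(specular_reflectance, nonreflective_reflectance):
    """Apply the article's scatterometer criteria: reflective areas need
    >= 60% specular reflectance, non-reflective areas < 10%. The two
    inputs are fractions measured separately, never on the patterned area."""
    return specular_reflectance >= 0.60 and nonreflective_reflectance < 0.10

print(codestrip_compatible(0.72, 0.05))  # True: both criteria met
print(codestrip_compatible(0.55, 0.05))  # False: reflective area too dull
```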

Future Encoder Technology

New encoder technology currently being developed adds an index channel to the two existing digital output channels. The index channel will eliminate the need for photo-interrupters to indicate the limit, or end, of the lenses' travel range. In addition, the next-generation encoder will feature a built-in interpolator that allows users to set the interpolation factor to one, two, or four times the base resolution of 304 LPI.

This article was written by Foo-Hong Thong, Worldwide Marketing Manager, Motion Control Products Division, Avago Technologies (San Jose, CA). For more information, contact Mr. Thong or visit http://info.hotims.com/34450-201.



This article first appeared in the January 2011 issue of Photonics Tech Briefs Magazine.
