Time-of-flight (ToF) technology enables new applications in multiple markets, resulting in a market boom for time-of-flight CMOS sensors over the last few years. This is mainly driven by the consumer and automotive markets, but also by prosumers — amateurs who purchase equipment with quality or features suitable for professional use. Overall, the technology is being used in industries such as robotics, logistics, construction mapping, and more recently, intelligent transportation systems (ITS).
There are multiple technologies based on the time-of-flight concept. Generally, all of them are synchronized with a light source and estimate the distance by calculating the time taken for the light to travel from the camera to the object and then back again.
ToF systems can be categorized into two main groups:
- Direct ToF, where the distance is calculated directly by measuring the light travel time.
- Indirect ToF, where the distance is derived from the phase of the reflected light pulse.
This article will focus on indirect time-of-flight (iToF) since, in our opinion, it is a more flexible technology that addresses different markets and applications with working distances below 50m and challenging requirements.
Diverse Applications Drive Many Requirements and Challenges
The requirements coming from these markets are very diverse, including field of view, distance range, reflectivity range, and 3D frame rate, to name just a few. These are critical to the design of ToF systems and ToF sensors. However, thanks to its flexibility, iToF can provide accurate measurements for indoor applications, such as factory safety, and outdoor applications, such as surveillance or ITS. It works for applications with a small range of reflectivity, like pick-and-place robots, and for applications with a wide range of reflectivity, such as warehouse/logistics management. iToF is also suitable for short-range applications, like automotive in-cabin sensing, and long-range ones, such as autonomous driving. It likewise fits small field-of-view applications such as robot navigation and large field-of-view applications like construction/building mapping.
How does iToF work?
In iToF, multiple images (or phases) are required to estimate the distance of objects. As shown in Figure 1, the first two images are obtained with a 90-degree phase shift between them, so that the ratio of charge captured in each phase allows the distance to be estimated, while the third one is used to remove the background.
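The ratio-based estimate described above can be sketched numerically. The snippet below is a minimal illustration of a pulsed three-phase model: the function name, the charge values, and the 30 ns pulse width are assumptions for illustration, and the exact equations depend on the sensor's actual modulation scheme.

```python
# Illustrative 3-phase pulsed iToF distance estimate (not Hydra3D's exact math).
# q0, q1: charge captured in the two shifted windows; qbg: background-only window.
# tp: light pulse width in seconds.

C = 299_792_458.0  # speed of light, m/s

def itof_distance(q0: float, q1: float, qbg: float, tp: float) -> float:
    a = q0 - qbg  # background-corrected charge, first window
    b = q1 - qbg  # background-corrected charge, delayed window
    if a + b <= 0:
        raise ValueError("no usable signal after background subtraction")
    # The fraction of the pulse landing in the delayed window encodes the
    # round-trip delay; divide by 2 for the one-way distance.
    return (C * tp / 2) * (b / (a + b))

# Example: 30 ns pulse, equal charge in both windows -> mid-range object
print(itof_distance(600.0, 600.0, 100.0, 30e-9))  # ~2.25 m
```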
With all that in mind, what does “reliable distance measurement” mean in the context of 3D applications? It certainly means very precise and accurate measurement, but it goes beyond that. The measurements must not only be precise and accurate but also offer good angular resolution with minimal motion blur and artifacts, and both dark and bright objects must be sensed with reasonable performance at both the minimum and maximum distance. Reliable distance measurement is therefore tightly linked with high flexibility: a more flexible sensor allows more reliable measurements to be made.
Reliable Measurement Without Motion Artifact
As explained above, in iToF, several images are necessary to estimate the distance of objects. In the example described, three images are needed (two phases + the background). With a 1-tap pixel sensor, the most common type today, the multiple phases must be exposed and read out sequentially: shoot the light once to get phase 0, shoot it again to get phase 1, then run a third acquisition and readout without light to capture the background. Only then can the 3D image be computed. Therefore, if an object is moving, motion artifacts will appear, since the object is in a different position in each capture. The light also needs to be fired twice, once for each phase.
Using a multiple-tap pixel sensor, for instance, a 3-tap pixel for the example shown, all exposures and readouts are made in an interleaved way, so that all phases are acquired virtually in parallel, minimizing motion artifacts. Furthermore, since the phases can be captured with a single train of light pulses, it reduces the average light power, which is important both from an eye-safety and power consumption point of view.
Both cases are described in Figure 2.
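The timing difference between the two pixel architectures can be sketched with illustrative numbers; the exposure and readout times below are assumptions for the sake of the comparison, not Hydra3D specifications.

```python
# Rough timing comparison of 1-tap vs. 3-tap acquisition (illustrative numbers).

EXPOSURE_US = 100.0  # assumed exposure time per phase, microseconds
READOUT_US = 500.0   # assumed readout time per frame, microseconds

# 1-tap: expose+read phase 0, then phase 1, then the background, sequentially.
# The scene can move across the whole span, causing motion artifacts.
one_tap_span = 3 * (EXPOSURE_US + READOUT_US)
one_tap_light_bursts = 2  # light is fired separately for each phase

# 3-tap: one interleaved exposure fills all three taps, then one readout.
# The phases are captured virtually in parallel.
three_tap_span = EXPOSURE_US + READOUT_US
three_tap_light_bursts = 1  # a single pulse train, lowering average light power

print(one_tap_span, three_tap_span)  # 1800.0 vs 600.0 microseconds
```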
Note that the difference between a motion artifact and motion blur is similar to the difference between rolling and global shutters in 2D vision. Motion blur can slightly warp objects that are moving fast, but it does not provide false information, while motion artifacts can drastically change the appearance of objects and produce false measurements, which can have major consequences in some applications.
Reliable Measurement with High Dynamic Range
Another crucial matter in ToF is dynamic range. ToF is intrinsically a very high dynamic range application, due to the combined contribution of the reflectivity of the objects and the distance range required. To illustrate this point, assume an original application targeting the detection of objects with reflectivity between 15 percent and 85 percent at a distance from 0.5m to 6m, which needs to be improved to detect objects with reflectivity between 1.8 percent and 95 percent at up to 10m distance. In these conditions, the new case requires more than 25 times the dynamic range of the original one. This is far more than a good full-well capacity in the pixel can provide on its own.
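The 25-times figure can be checked with back-of-envelope arithmetic, assuming the returned signal scales with reflectivity over distance squared (a diffuse target under flood illumination) and that the minimum distance stays at 0.5m in both cases; both are assumptions not stated explicitly in the article.

```python
# Back-of-envelope check of the dynamic-range claim.
# Assumed model: returned signal ~ reflectivity / distance^2.

def dyn_range(r_min: float, r_max: float, d_min: float, d_max: float) -> float:
    # Brightest return: highest reflectivity at the shortest distance;
    # dimmest return: lowest reflectivity at the longest distance.
    return (r_max / r_min) * (d_max / d_min) ** 2

original = dyn_range(0.15, 0.85, 0.5, 6.0)    # ~816:1
improved = dyn_range(0.018, 0.95, 0.5, 10.0)  # ~21,100:1

print(improved / original)  # ~25.9x more dynamic range, matching the article
```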
Teledyne e2v’s Hydra3D ToF CMOS image sensor embeds specific techniques to manage this huge dynamic range, such as non-destructive readouts based on multiple captures combined with a high frame rate, making it suitable for most application cases.
Reliable Measurement in All Conditions with High Flexibility
That said, brute force is normally not a good idea in ToF. High flexibility to adapt to a very wide range of situations in terms of distance range, reflectivity, dynamics, and so on is clearly a significant asset. In Hydra3D, a single trigger initiates a sequence of acquisitions and readouts that is easy to program, resulting in a tool that can adapt to the conditions of each application. This is illustrated in Figure 5.
The first blue-outlined rectangle is the exposure and readout of the three phases, resulting in one 3D image. Multiple acquisitions (blue-outlined rectangles) can be performed in each measuring sequence (green-outlined rectangles), either to increase the dynamic range or to improve the precision. Having multiple measuring sequences also allows measurement at different distance ranges, with different levels of precision, or the insertion of 2D captures. All of this happens with a single trigger. On top of that, the sequence can be changed from frame to frame, live, without halting the sensor.
This high configurability can be used to find the best trade-off for every application between distance range, reflectivity range, precision, frame rate, light power, etc.
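As a sketch only, the acquisition/sequence hierarchy described above might be modeled on the host side as follows. All class and field names here are hypothetical, since the article does not describe the actual Hydra3D register map or SDK.

```python
# Hypothetical host-side model of a triggered measurement sequence
# (illustrative only; names and values are not from the Hydra3D datasheet).

from dataclasses import dataclass

@dataclass
class Acquisition:
    """One blue-outlined rectangle: three phases -> one 3D image."""
    integration_time_us: float
    pulse_count: int

@dataclass
class MeasuringSequence:
    """One green-outlined rectangle: one or more acquisitions."""
    acquisitions: list
    mode: str = "3D"  # could also be "2D" for interleaved 2D captures

# A single trigger runs all sequences: e.g. a short-range pass, an HDR pass
# combining two acquisitions, and a 2D capture, switchable frame to frame.
frame = [
    MeasuringSequence([Acquisition(50.0, 1000)]),                            # short range
    MeasuringSequence([Acquisition(200.0, 4000), Acquisition(50.0, 1000)]),  # HDR
    MeasuringSequence([Acquisition(10.0, 500)], mode="2D"),                  # 2D capture
]
print(len(frame), "measuring sequences fired from one trigger")
```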
Examples of Flexibility
For an application where the distance range and the reflectivity range are small, a configuration with a single 3D acquisition (only the first blue-outlined rectangle) is enough. In this case, the dynamic range won’t be the largest, but it is possible to reach a 100 fps frame rate with no motion artifacts.
For an application that requires the HDR capabilities due to a large distance range and/or reflectivity range, several acquisitions and readouts can be performed (several blue-outlined rectangles) to increase the dynamic range. In such an application, around 25 fps can be achieved with a 10m range and target reflectivities between 15 percent and 85 percent.
For an application covering a 10m distance range but using three different sub-ranges to keep high precision across the whole range, several measuring sequences (green-outlined rectangles) can be used, switching from one to another in real time depending on where the targeted object is. Each sequence delivers the precision of a small distance range, while the overall covered range remains large.
Reliable Measurement with Robustness to the Environment
The last challenge is to address multi-system interference. Since ToF requires active illumination, one system can suffer from interference caused by the light emitted by another system working simultaneously in the same area. This can lead to incorrect distance measurements. To be robust to this interference, multi-system management is embedded on-chip in the Hydra3D sensor. Because of this, systems can run without causing any mutual interference and without any connection between them.
Sensor Level Innovations
Innovations at the sensor level help to overcome the challenges that made many time-of-flight applications too difficult to implement effectively. A sensor like Teledyne e2v’s Hydra3D implements all these innovations: it has above-average spatial resolution, with 832 x 600 pixels, supporting a wide field of view with good angular resolution. Its 3-tap pixel suits the three-phase iToF technique, allowing reliable 3D detection without motion artifacts even with fast-moving objects, while also reducing power consumption and improving eye safety. In addition, it is extremely flexible in the way it manages high dynamic range (which is crucial in ToF), making it adaptable to all environments, and it can be configured for distance measurement, including interleaving 2D and 3D captures. Finally, it incorporates an on-chip function providing robustness to interference from other systems working in the same area, so camera-makers don’t need to solve this issue when designing their systems.
As industries – such as robotics, logistics, construction mapping, and ITS – demand more precise and reliable measurement, indirect time-of-flight techniques will enable that measurement with robustness, flexibility, and without artifacts.
This article was written by Yoann Lochardet, Marketing Manager 3D, Teledyne e2v, and Sergio Morillas, Business Manager for 3D products and applications, Teledyne e2v. For more information, visit here.