Duke researchers have shown that a new approach to LiDAR can be sensitive enough to capture millimeter-scale features such as those on a human face. (Image: Duke University)

One of the imaging technologies that many robotics companies are integrating into their sensor packages is Light Detection and Ranging (LiDAR). Now Duke engineers have developed a new LiDAR system that can potentially improve the vision of autonomous systems such as driverless cars and robotic manufacturing plants.

Traditional time-of-flight LiDAR, however, has drawbacks that make it difficult to use in many 3D vision applications. Because it requires detection of very weak reflected light signals, other LiDAR systems or even ambient sunlight can easily overwhelm the detector. It also has limited depth resolution and can take a dangerously long time to densely scan a large area such as a highway or factory floor. To tackle these challenges, researchers are turning to a form of LiDAR called frequency-modulated continuous wave (FMCW) LiDAR.

In a paper published in the journal Nature Communications, the Duke team demonstrated how a few tricks learned from their research on optical coherence tomography (OCT) can improve the data throughput of previous FMCW LiDAR demonstrations by a factor of 25 while still achieving submillimeter depth accuracy.

OCT is the optical analogue of ultrasound imaging, which works by sending sound waves into objects and measuring how long they take to come back. Because returning light waves arrive far too quickly to be timed directly, OCT devices instead measure how much their phase has shifted compared to identical light waves that have travelled the same distance but have not interacted with another object.
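As a rough illustration of that phase comparison (with assumed numbers, not values from the Duke paper), a full 2π phase shift corresponds to one extra wavelength of round-trip travel, so a measured phase shift maps directly to a depth offset:

```python
import numpy as np

# Illustrative sketch only: convert a measured interferometric phase shift into
# a depth offset, as in OCT-style detection. Wavelength and phase are assumed.
wavelength = 1.31e-6   # meters; a typical near-infrared wavelength (assumed)
phase_shift = 2.4      # radians, measured against the unimpeded reference beam

# A 2*pi phase shift corresponds to one extra wavelength of round-trip path,
# so the one-way depth offset is phase * wavelength / (4 * pi).
depth_offset = phase_shift * wavelength / (4 * np.pi)
print(f"Depth offset: {depth_offset * 1e9:.1f} nm")
```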

FMCW LiDAR takes a similar approach with a few tweaks. The technology sends out a laser beam that continually sweeps through different frequencies. When the detector gathers the reflected light, it can distinguish the laser's specific frequency pattern from any other light source, allowing it to work in all kinds of lighting conditions at very high speed. It then measures any phase shift against unimpeded reference beams, which is a much more accurate way to determine distance than the direct pulse timing used by current LiDAR systems.
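For readers who want to see the underlying arithmetic, the sketch below uses the generic FMCW ranging relation with made-up sweep parameters, not the Duke system's actual values: mixing the returning light with the outgoing frequency sweep produces a beat frequency proportional to the round-trip delay.

```python
# Generic FMCW ranging relation (illustrative values, not the paper's parameters):
# the laser sweeps linearly over bandwidth B in time T; mixing the echo with the
# outgoing sweep yields a beat frequency f_beat proportional to the delay.
c = 3.0e8        # speed of light, m/s
B = 10.0e9       # sweep bandwidth, Hz (assumed)
T = 100e-6       # sweep duration, s (assumed)
f_beat = 2.0e6   # measured beat frequency, Hz (assumed)

# Round-trip delay tau = f_beat * T / B, and range R = c * tau / 2.
tau = f_beat * T / B
range_m = c * tau / 2
print(f"Estimated range: {range_m:.2f} m")   # -> 3.00 m for these numbers
```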

Most previous work using LiDAR has relied on rotating mirrors to scan the laser over the landscape. While this approach works well, it is fundamentally limited by the speed of the mechanical mirror, no matter how powerful the laser it’s using.

The Duke team instead uses a diffraction grating that works like a prism, breaking the laser into a rainbow of frequencies that spread out as they travel away from the source. Because the original laser is still quickly sweeping through a range of frequencies, this translates into sweeping the LiDAR beam much faster than a mechanical mirror can rotate. This allows the system to quickly cover a wide area without losing much depth or location accuracy.
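The steering itself follows the ordinary grating equation. The short sketch below, using an assumed groove spacing and sweep band rather than the paper's actual optics, shows how a modest wavelength sweep translates into an angular sweep of the beam with no moving parts.

```python
import numpy as np

# Illustrative wavelength-to-angle beam steering with a diffraction grating at
# normal incidence: sin(theta) = m * lambda / d for diffraction order m.
# All numbers are assumed, not taken from the paper.
groove_spacing = 2.0e-6                          # grating period d, meters (assumed)
order = 1                                        # diffraction order
wavelengths = np.linspace(1.26e-6, 1.36e-6, 5)   # swept wavelengths (assumed band)

angles_deg = np.degrees(np.arcsin(order * wavelengths / groove_spacing))
for lam, ang in zip(wavelengths, angles_deg):
    print(f"{lam * 1e9:.0f} nm -> {ang:.2f} deg")
```

With these assumed numbers, a 100-nm sweep steers the first-order beam through roughly four degrees, and the sweep rate is set by the laser rather than by a rotating mirror.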

While OCT devices are used to profile microscopic structures up to several millimeters deep within an object, robotic 3D vision systems only need to locate the surfaces of human-scale objects. To accomplish this, the researchers narrowed the range of frequencies used by OCT and looked only for the peak signal generated by the surfaces of objects. This costs the system a little resolution but delivers much greater imaging range and speed than traditional LiDAR.
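Conceptually, that peak-only surface detection can be pictured as taking the spectrum of each beat signal and keeping just its strongest component. The toy example below, with a simulated signal and assumed parameters rather than the authors' actual processing chain, shows the idea.

```python
import numpy as np

# Toy illustration: simulate an FMCW beat signal from a single reflecting
# surface, take its FFT, and keep only the strongest peak so that each beam
# position reports one depth. All parameters are assumed.
fs = 10e6                        # sample rate, Hz (assumed)
n = 4096                         # samples per sweep
t = np.arange(n) / fs
f_beat_true = 1.2e6              # beat frequency produced by the surface (assumed)
signal = np.cos(2 * np.pi * f_beat_true * t) + 0.3 * np.random.randn(n)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=1 / fs)
f_peak = freqs[np.argmax(spectrum)]   # only the peak matters for a surface

# Convert the peak beat frequency to range with the same FMCW relation as above.
c, B, T = 3.0e8, 10.0e9, 100e-6       # assumed sweep parameters
print(f"Peak beat: {f_peak / 1e6:.2f} MHz -> range ~ {c * f_peak * T / B / 2:.2f} m")
```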

The result is an FMCW LiDAR system that achieves submillimeter localization accuracy with data throughput 25 times greater than previous demonstrations. The results show that the approach is fast and accurate enough to capture the details of moving human body parts in real time.

For more information, contact the Duke University Communications department at 919-684-2823.




This article first appeared in the December 2022 issue of Tech Briefs Magazine (Vol. 46 No. 12).
