High-resolution solid-state lidar built on an array of MEMS switches could reduce lidar's cost to match that of inexpensive, chip-based cameras and radar systems, removing a major barrier to adopting lidar for autonomous vehicles.
Although inexpensive, chip-based cameras and radar systems have moved into the mainstream for collision avoidance and autonomous highway driving, lidar navigation systems remain unwieldy mechanical devices that cost thousands of dollars.
That may be about to change, thanks to a new type of high-resolution lidar chip developed by Ming Wu, professor of electrical engineering and computer sciences and co-director of the Berkeley Sensor and Actuator Center at the University of California, Berkeley.
Wu’s lidar is based on a focal plane switch array (FPSA), a semiconductor-based matrix of micron-scale antennas that gathers light like the sensors found in digital cameras. Its resolution of 16,384 pixels may not sound impressive when compared to the millions of pixels found on smartphone cameras, but it dwarfs the 512 pixels or less found on FPSAs until now, Wu said.
The design is scalable to megapixel sizes using the same complementary metal-oxide-semiconductor (CMOS) technology used to produce computer processors, Wu said. This could lead to a new generation of powerful, low-cost 3D sensors not only for autonomous cars, but also for drones, robots, and smartphones.
Mechanical lidar systems use lasers to visualize objects hundreds of yards away, even in the dark. They also generate 3D maps with high enough resolution for a vehicle’s artificial intelligence to distinguish between vehicles, bicycles, pedestrians, and other obstacles.
Yet, putting these capabilities on a chip has stymied researchers for more than a decade.
“We want to illuminate a very large area,” Wu said. “But if we try to do that, the light becomes too weak to reach a sufficient distance. So, as a design trade-off to maintain light intensity, we reduce the area that we illuminate with our laser light.”
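The trade-off Wu describes can be made concrete with a back-of-the-envelope calculation. The sketch below uses purely illustrative numbers (the power and area values are assumptions, not figures from the article): for a fixed laser power, the irradiance on the scene scales inversely with the illuminated area, which is why concentrating the beam on one pixel's patch at a time preserves range.

```python
# Illustrative numbers only, not from the article: shows why scanning one
# pixel at a time keeps the light intense enough to reach a useful distance.

def irradiance(power_w: float, area_m2: float) -> float:
    """Optical power per unit area on the illuminated patch (W/m^2)."""
    return power_w / area_m2

POWER = 0.1                        # hypothetical 100 mW laser
FULL_SCENE = 1.0                   # flash-illuminate 1 m^2 at the target
ONE_PIXEL = FULL_SCENE / 16_384    # illuminate a single pixel's patch instead

flash = irradiance(POWER, FULL_SCENE)
scanned = irradiance(POWER, ONE_PIXEL)
print(f"flash: {flash:.4f} W/m^2, scanned: {scanned:.1f} W/m^2")
print(f"intensity gain from scanning: {scanned / flash:.0f}x")
```

With 16,384 pixels, the scanned beam is 16,384 times more intense on its patch than a flash that spreads the same power over the whole scene.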
Wu's lidar consists of an FPSA matrix of tiny optical transmitters paired with MEMS switches that rapidly turn on and off, physically shifting waveguides from one position to another to channel all available laser power through a single antenna at a time.
MEMS switches are an established technology for routing light in communications networks, but this is the first time they have been applied to lidar. Compared with thermo-optic switches, they are much smaller, use far less power, switch faster, and have very low light losses.
They are the reason Wu can cram 16,384 pixels on a 1-centimeter-square chip. When the switch turns a pixel on, it emits a laser beam and captures the reflected light. Each pixel is equivalent to 0.6 degrees of the array’s 70-degree field of view. By cycling rapidly through the array, Wu’s FPSA builds up a 3D picture of the world around it. Mounting several of them in a circular configuration would produce a 360-degree view around a vehicle.
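The scan geometry described above can be sketched with a few lines of arithmetic. The square 128 x 128 layout and the 10 Hz frame rate below are assumptions for illustration (the article gives only the 16,384-pixel total and the 70-degree field of view):

```python
# Hedged sketch of the scan geometry: assumed square array and frame rate.
ROWS = COLS = 128                 # assumed layout; 128 * 128 = 16,384 pixels
PIXELS = ROWS * COLS
FOV_DEG = 70.0                    # field of view from the article
FRAME_RATE_HZ = 10.0              # hypothetical frame rate, not from the article

deg_per_pixel = FOV_DEG / ROWS            # ~0.55 deg, near the quoted ~0.6
dwell_s = 1.0 / (FRAME_RATE_HZ * PIXELS)  # time each switch is on per frame

print(f"{PIXELS} pixels, {deg_per_pixel:.2f} deg per pixel")
print(f"per-pixel dwell time at {FRAME_RATE_HZ:.0f} fps: {dwell_s * 1e6:.1f} us")
```

The microsecond-scale dwell time this implies is one reason fast, low-loss MEMS switches matter: a slower switch would eat into the time available to emit and collect light at each pixel.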
Wu needs to increase FPSA resolution and range before his system is ready for commercialization. “While the optical antennas are hard to make smaller, the switches are still the largest components, and we think we can make them a lot smaller,” he said.
He also needs to increase the system's range, which is currently only 10 meters. "We are certain we can get to 100 meters and believe we could get to 300 meters with continual improvement," said Wu.
If he can, conventional CMOS production technology promises to make inexpensive chip-sized lidar part of our future.
“Just look at how we use cameras,” Wu said. “They’re embedded in vehicles, robots, vacuum cleaners, surveillance equipment, biometrics, and doors. There will be so many more potential applications once we shrink lidar to the size of a smartphone camera.”