In an interview, Anand Gopalan, CEO of Velodyne Lidar, Inc. (San Jose, CA), discussed lidar software for ADAS and autonomous vehicles. The primary goals are to increase the speed of response and to make efficient use of computational resources. Another important challenge is what he calls the tyranny of corner cases for autonomous vehicles; for example, a taxi pulling back into traffic after dropping a passenger at the curb. Then there is the problem of near-field sensing, say a bicyclist alongside a turning vehicle or a ball rolling into the street. These kinds of situations require software to provide improved classification of objects and prediction of behavior. This raises the question of where to use artificial intelligence versus deterministic algorithms.

Tech Briefs: Although our general topic is autonomous vehicles and ADAS, I thought it would be interesting to approach it from the point of view of software. I think that is an important consideration for using lidar.

Anand Gopalan: Absolutely. The autonomous vehicle project is definitely one of the most challenging problems in which computer hardware and software have to work together.

Tech Briefs: How is the software evolving — what, in your opinion, are some of the critical issues?

Gopalan: There’s a difference between level 4 and 5 autonomous vehicles and the ADAS project. On the autonomous side, there are two things that are very challenging. The first is that you are dealing with the tyranny of corner cases. There are a lot of critical corner scenarios that autonomous vehicles have to deal with, which require a lot more innovation in software, sensors, and computing hardware. For example, say you have an autonomous robo-taxi that has dropped off a passenger at the curb and now needs to pull back into the main traffic. It needs to make sure everything around the vehicle is safe: the passenger has moved away from the vehicle, there are no bicyclists zooming by, no vehicles trying to pull in, all sorts of things you might not encounter in just riding down the street. People are dealing with what I call the tyranny of corner cases by sometimes modifying software and in some cases going back to the drawing board in terms of hardware.

The second aspect is speed. Fleets of vehicles are being deployed in some very dense urban environments, driving at 30 miles per hour or so. But in order to make a viable car, you need to go at least 40 to 45 miles per hour. This introduces many new challenges in terms of perception as well as speed of reaction.

Tech Briefs: I was wondering how the software could distinguish between, say, a ball rolling into the road, a bicycle, or a stationary object, or how it could gauge the speeds of the different vehicles all around you. It seems very complicated to me.

Gopalan: Yes, it is complicated. There is a class of lidar products that we have been working on that is focused on the problem of near-field sensing. You want to be able to see all the way up to the tire of the vehicle. How do you enable that so you can, as you say, recognize a ball that’s rolling onto the street or a bicyclist who’s riding right alongside a turning vehicle, and so on? Some of this can be solved by sensor innovations, but it’s also contingent on how you deal with these things in software: improved classification of objects and prediction of behavior.

Tech Briefs: Will that have to be done with artificial intelligence (AI)?

Gopalan: I believe that you will always see a combination of traditional algorithms to solve a large class of the problems and some machine learning to solve specific types of problems. An important application for AI (or machine learning) is object classification, for example being able to distinguish a bicycle from a baby carriage.

There are players out there who are trying to solve all of the problems simply by using deep learning and machine learning. My personal opinion is that’s dangerous because determinism and predictability are really important — to be able to validate and test these systems completely. If you throw the entire problem onto a deep learning neural network, you suffer from the inability to test and verify and predict what a vehicle will do in a given situation.

I think a blended approach is the right way to go.
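To make the blended idea concrete, here is a minimal, purely illustrative Python sketch (my example, not Velodyne software; the thresholds and names are hypothetical): a deterministic rule makes the safety-critical call, while a learned classifier, stubbed out here, only labels the object type.

```python
from dataclasses import dataclass

@dataclass
class Track:
    distance_m: float          # range to the tracked object
    closing_speed_mps: float   # positive = object and vehicle are converging
    height_m: float            # coarse size estimate from the lidar cluster

def deterministic_brake_decision(track: Track) -> bool:
    """Rule-based, fully predictable check: brake if time-to-collision is short.

    Logic like this can be exhaustively tested and verified.
    """
    if track.closing_speed_mps <= 0:
        return False
    time_to_collision_s = track.distance_m / track.closing_speed_mps
    return time_to_collision_s < 2.0   # hypothetical 2-second safety threshold

def learned_classify(track: Track) -> str:
    """Stand-in for a learned classifier (e.g., bicycle vs. baby carriage).

    In a real system this would be a neural network; its label refines
    behavior prediction but is not the sole basis for the safety decision.
    """
    return "bicycle" if track.height_m > 0.9 else "small_object"

# The deterministic rule decides *whether* to act; the learned label only
# informs *how* the planner predicts the object's behavior.
track = Track(distance_m=18.0, closing_speed_mps=10.0, height_m=1.1)
print(deterministic_brake_decision(track), learned_classify(track))
```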

Tech Briefs: You said that autonomous vehicle applications are very different from ADAS. Could you elaborate on that?

Gopalan: In the ADAS world, you are still talking about enhanced level 2 systems, maybe tending toward level 3 at some point in the future. Some of the key tools in the toolbox that were developed for the autonomous vehicle project are being used quite effectively to make the average consumer-purchased vehicle safer. I don’t think these systems are on the road yet, but a lot of people are working on them, so I think they will be coming onto the road in the next two to three years. For example, you are seeing a significant uptick in the use of lidar for consumer-purchased vehicles. Much of what has been learned from the autonomous vehicle project about the usefulness of lidar has led software groups and OEMs to recognize that lidar can enhance the functionality of an ADAS vehicle. In conjunction with the localization and mapping that lidar enables, you will start to see some fairly sophisticated functionality, even where the vehicle could prevent you from going off the road or making a severe mistake. Under some limited conditions, this would tend toward a level of autopilot functionality. I would term that part of level 3 behavior. I think that can work as long as the vehicle is able to clearly define the conditions under which it can provide a feature and communicate its confidence level to the driver. That allows you to build a much safer vehicle without necessarily having to solve for all the corner conditions.

Tech Briefs: I can see ADAS and lidar working rather well when you’re out on a highway, but in the city, it’s much more complicated. How useful can ADAS be in complicated city traffic?

Gopalan: That’s a good question. It’s true that forward-looking lidar and some side cameras enable you to navigate highway traffic quite well, but maybe not city traffic. For city traffic, you need a broader sensor package, and you need a higher level of sophistication in your algorithms. I think that’s still an open question. I don’t think anyone has necessarily figured out whether an ADAS system will extend all the way into city traffic.

Tech Briefs: It would seem to me that once you’re in city traffic, the difference between ADAS and autonomous vehicles tends to disappear — the same problems would apply to both.

Figure 2. A lidar sensor can seamlessly integrate within a vehicle’s body or behind the windshield. (Image courtesy of Velodyne Lidar)

Gopalan: I think that’s true, with the significant difference being that the ADAS system has the choice of communicating to the driver that it’s not very confident about taking charge of the vehicle. So, I think that the ADAS system has more outs in terms of dealing with corner conditions because, ultimately, it is just assisting the driver. For example, in a city situation where, because of an accident or something, traffic is suddenly directed by police to go in the opposite direction on a one-way street, that’s a very complicated behavior for autonomous control to deal with. But for an ADAS system, it’s as simple as saying: okay, I recognize that this is a situation I am unable to function in, so the driver will have to take control. So, the presence of a driver and the ability to communicate confidence to the driver is a big difference.
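As a rough sketch of that "out," consider the following Python fragment (the function names and threshold are hypothetical, not an actual ADAS interface): when the system's confidence in the current scene falls below a floor, it does not attempt to handle the corner case itself; it alerts the driver and hands back control.

```python
def adas_supervisor(scene_confidence: float, feature_engaged: bool,
                    confidence_floor: float = 0.8) -> str:
    """Decide what a (hypothetical) ADAS supervisor does with the current scene."""
    if not feature_engaged:
        return "feature_off"
    if scene_confidence >= confidence_floor:
        return "continue_assisting"
    # e.g., police redirecting traffic the wrong way down a one-way street
    return "alert_driver_and_hand_back_control"

# A low-confidence urban scene: the assist feature asks the driver to take over.
print(adas_supervisor(scene_confidence=0.55, feature_engaged=True))
```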

Tech Briefs: Let’s talk about embedding more of the analytics in the sensor. Where are we with that?

Gopalan: In the context of autonomous vehicles, and to some extent ADAS, the compute cycles are very precious. We definitely see that autonomous vehicles are running out of compute cycles to make decisions, especially as the vehicles go faster and faster. So, there is a real case to be made for more computing on the edge. However, the question is, what is the right level of edge computing and how much loss of control are we willing to tolerate by allowing the sensor to make decisions? So, I don’t think there’s one set answer to that question yet. People want a more perfect sensor, but all sensors have artifacts: situations where there is lower confidence. In the past, we would use the central computer to handle those by writing an additional piece of software. The tolerance for that is completely gone now; the expectation is that the sensor should be able to deal with its own artifacts and produce as high-fidelity an image as possible, limited only by the laws of physics. So, that in itself requires you to add a lot of software at the computing edge. The other thing you are starting to see, and will see more of, is the addition of more sensing modalities within the sensor. For example, the lidar sensor may also give you a camera image, a thermal image, and so on. This fusion of sensing modalities at the sensor level also requires higher levels of processing on the edge. So, we are seeing the level of processing on the lidar itself going up with each successive generation.
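As one rough illustration of what fusing modalities at the sensor level can mean, the sketch below (my example with synthetic data, not a Velodyne interface; the pinhole intrinsics, function name, and assumption that extrinsics are already applied are all hypothetical) projects lidar points into a co-registered single-channel image so that each 3D point also carries an image or thermal sample.

```python
import numpy as np

def colorize_points(points_xyz: np.ndarray, image: np.ndarray,
                    K: np.ndarray) -> np.ndarray:
    """Attach an image sample to each lidar point.

    points_xyz : (N, 3) points already expressed in the camera frame, z forward.
    image      : (H, W) single-channel image (e.g., grayscale or thermal).
    K          : (3, 3) pinhole intrinsics; extrinsics are assumed applied.
    Returns an (N, 4) array of [x, y, z, sample]; points outside the image get NaN.
    """
    z = points_xyz[:, 2]
    uv = (K @ points_xyz.T).T                       # perspective projection
    uv = uv[:, :2] / np.maximum(z[:, None], 1e-6)   # normalize by depth
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = image.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    samples = np.full(len(points_xyz), np.nan)
    samples[valid] = image[v[valid], u[valid]]
    return np.hstack([points_xyz, samples[:, None]])

# Toy usage with synthetic data (second point falls outside the image).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = np.array([[0.5, 0.2, 10.0], [5.0, 0.0, 2.0]])
img = np.random.rand(480, 640)
print(colorize_points(pts, img, K))
```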

Tech Briefs: I would imagine that speed of processing is an important factor. As things get more complicated and speeds get faster, this must be pushing processors to make faster and faster decisions.

Gopalan: This is an area where a lot more innovation is needed. The first approach, when people started building autonomous vehicles, was to essentially just throw a big server computer in the trunk of the vehicle. They’ve since gotten a bit more sophisticated in distributing the workload among GPUs. But fundamentally, you’re using a processor and a processor architecture that was meant for a very different type of function. In the past year or so, however, we’re starting to see some processor architectures and hardware that are specific to solving the problems of these particular algorithms. One part of this is neural network accelerators; there’s some interesting work there. But even to solve the problems of the more traditional deterministic algorithms, we believe there’s a better type of approach and architecture out there than even the traditional GPU.

Figure 3. High range and resolution allow for fast object identification and long braking distance at highway speeds. (Image courtesy of Velodyne Lidar)

At Velodyne, we are huge believers in FPGAs and use them for quite a bit of the data processing. This gives us the ability to do things directly at the hardware level and get a huge speedup as well as much lower power consumption.

To give you an example, consider a fairly standard algorithm in most autonomous vehicles: simultaneous localization and mapping (SLAM). We took one implementation of a SLAM algorithm and ran it on a GPU. It ran at a certain speed and consumed a certain amount of power. We then implemented the same algorithm in higher-level logic on an FPGA and were basically able to get about five times the speed at one tenth the power. So, there’s definitely a lot of room for improvement, both in speed and power, merely by creating custom software and software architectures for this type of workload.
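A full SLAM pipeline is far too large to show here, but as a toy illustration of the kind of per-scan arithmetic such a pipeline repeats constantly, and that maps well onto parallel hardware such as GPUs or FPGAs, the Python sketch below solves a single rigid scan-alignment step, the core of ICP-style scan matching. It is my example, not the GPU or FPGA implementation described above.

```python
import numpy as np

def rigid_align_2d(src: np.ndarray, dst: np.ndarray):
    """Best-fit rotation R and translation t such that R @ src_i + t ~= dst_i.

    src, dst : (N, 2) matched 2D lidar points. Correspondences are assumed known
    here; a real ICP loop would re-estimate them on every iteration.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of the scans
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Usage: recover a known 10-degree rotation and a small translation.
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
scan = np.random.rand(100, 2) * 20
R, t = rigid_align_2d(scan, scan @ R_true.T + np.array([0.5, -0.2]))
print(np.round(R, 3), np.round(t, 3))
```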

Tech Briefs: Is there a role for cloud storage for vehicle computing?

Gopalan: I think there is, obviously not to make any immediate decisions, but to allow the vehicle to become more aware of changes in the world. If you can imagine vehicles working off some sort of map that is being dynamically updated, those updates have to come from the cloud. There would be lower speed and higher latency, but 5G, when it arrives, could definitely help.

Tech Briefs: Do you see some kind of dividing line between where artificial intelligence or machine learning would be more useful and where deterministic algorithms would be more useful?

Gopalan: It’s hard to place a dividing line because I think everyone puts that line in a slightly different place. But I believe that at least 70 to 80% of the problems for autonomous vehicles can be solved using traditional algorithms.

Tech Briefs: I think machine learning can be scary in safety-related functions.

Gopalan: Exactly. Although I think machine learning can solve some specific problems, like object detection and classification, I also think that machine learning can be very powerful in creating a feedback mechanism for these systems to slowly make them more intelligent. There are some approaches to doing that without impacting real-time operation.

Tech Briefs: So, like collecting analytics so you can make better decisions.

Gopalan: Exactly: analytics and learning from collective mistakes.

This article was written by Ed Brown, Associate Editor of Photonics & Imaging Technology.