To learn about the latest developments in sensors for automated driving systems, I interviewed Alberto Marinoni, Director of Product Marketing at TDK/InvenSense (San Jose, CA).

The commonly used term Advanced Driver Assistance Systems (ADAS) essentially refers to SAE Level 2 (L2), Partial Driving Automation. At that level, the driver must be in the car and must remain vigilant; they cannot read a book, for example, since that would require Level 3 or beyond. (The highest level, Level 5, is a fully automated vehicle that doesn’t need a human in the vehicle at all.)

In Level 2, the car can be automatically controlled both longitudinally (acceleration/deceleration) and laterally (steering), depending on the application. However, the driver must be present, keep their eyes on the road, and be vigilant in order to take control if needed. In contrast, Level 1 can provide automated braking/acceleration or lateral steering, but not both.

For L2, there are multiple sensors, including cameras, radar, and inertial measurement units (IMUs). A global navigation satellite system (GNSS), such as GPS, is also included.

For certain L2 applications, although not the majority, lidar is also available; it is mainly a Level 3 technology, however, and is not always included in L2 because of its high cost relative to the other technologies. Marinoni explained that radar is a long-range detector used for obstacle detection at a distance, alerting the car that something is present in front of it. Lidar adds to the automation mix by recognizing objects in detail nearer to the car. It can also scan the surroundings to obtain information about the immediate environment. This information can be geo-referenced with respect to the earth in order to pinpoint the absolute position of the car by using an inertial navigation system (INS) composed of an IMU and GNSS. A 3D map can be built to precisely locate objects by combining the absolute position from the INS with the image of the relative surroundings captured by the lidar.
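To make the geo-referencing step concrete, here is a minimal Python sketch (not TDK/InvenSense code) that transforms lidar points from the vehicle frame into an earth-fixed frame using the absolute position and heading supplied by an INS. The flat 2D east/north frame, the function name, and the example values are illustrative assumptions.

```python
import math

def georeference_lidar_points(points_vehicle, ins_position, ins_heading_rad):
    """Transform 2D lidar points from the vehicle frame to an earth-fixed
    east/north frame using the INS pose (position + heading).

    points_vehicle : list of (x_forward, y_left) tuples in meters
    ins_position   : (east, north) of the vehicle in meters
    ins_heading_rad: vehicle heading, radians counterclockwise from east
    """
    cos_h, sin_h = math.cos(ins_heading_rad), math.sin(ins_heading_rad)
    east0, north0 = ins_position
    points_global = []
    for x, y in points_vehicle:
        # Rotate the relative measurement by the heading, then translate
        # by the absolute INS position to get an earth-fixed coordinate.
        east = east0 + x * cos_h - y * sin_h
        north = north0 + x * sin_h + y * cos_h
        points_global.append((east, north))
    return points_global

# Example: an obstacle 20 m ahead while heading due north from (100, 50)
print(georeference_lidar_points([(20.0, 0.0)], (100.0, 50.0), math.pi / 2))
```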

Inertial Measurement Unit

Figure 2. Integrated single-chip six-axis MEMS gyroscope and accelerometer. (Image courtesy of TDK/InvenSense)

The TDK/InvenSense IMU has two MEMS components in the same housing: a 3-axis accelerometer and a 3-axis gyroscope. The accelerometer is sensitive to both static (e.g., gravity) and dynamic accelerations along all three axes and can be used to determine the tilt angle of the IMU. The gyroscope is mainly used for dynamic conditions in which there is angular velocity in addition to gravity. The outputs of these two sensors are combined mathematically to determine the orientation of the system.
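As a rough illustration of how the two outputs can be combined, the sketch below implements a basic complementary filter for a single tilt axis in Python. This is a generic textbook technique rather than the algorithm inside the TDK/InvenSense part, and the 0.98 blending weight and 100 Hz sample rate are arbitrary assumptions.

```python
import math

def complementary_filter(tilt_prev_deg, gyro_rate_dps, accel_x_g, accel_z_g,
                         dt_s, alpha=0.98):
    """Blend a gyroscope-integrated angle (good dynamically, but drifts)
    with an accelerometer tilt angle (drift-free, but noisy under motion)."""
    # Short-term estimate: integrate the angular rate over the time step.
    tilt_gyro = tilt_prev_deg + gyro_rate_dps * dt_s
    # Long-term reference: tilt from the gravity vector seen by the accelerometer.
    tilt_accel = math.degrees(math.atan2(accel_x_g, accel_z_g))
    # Weighted blend: trust the gyro short term, the accelerometer long term.
    return alpha * tilt_gyro + (1.0 - alpha) * tilt_accel

# Example: stationary IMU tilted ~5 degrees, sampled at 100 Hz
tilt = 0.0
for _ in range(200):
    tilt = complementary_filter(tilt, gyro_rate_dps=0.0,
                                accel_x_g=math.sin(math.radians(5)),
                                accel_z_g=math.cos(math.radians(5)),
                                dt_s=0.01)
print(round(tilt, 2))  # converges toward 5.0
```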

The general trend today is to place an IMU next to each sensor to increase the accuracy of detection.

Acceleration/Deceleration

According to Marinoni, the most important deceleration functions are emergency braking and collision avoidance. For these applications, sensors such as radar scan the front of the car, looking for an object or person. The scanning data is sent to a central processing unit, which can decide whether the vehicle needs to come to a stop. If so, it outputs a signal to actuators that act in the same way as a driver would, by pushing the brake pedal in order to stop the car before a crash.
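A toy version of that braking decision could look like the following Python sketch, which triggers on a simple time-to-collision threshold. The 1.5 s threshold and the interface are illustrative assumptions, not how a production ADAS controller is specified.

```python
def emergency_brake_decision(range_m, closing_speed_mps, ttc_threshold_s=1.5):
    """Decide whether to trigger automatic emergency braking based on the
    time-to-collision (TTC) with the closest object detected ahead."""
    if closing_speed_mps <= 0.0:
        return False  # object is not getting closer, no braking needed
    time_to_collision_s = range_m / closing_speed_mps
    return time_to_collision_s < ttc_threshold_s

# Example: obstacle 12 m ahead, closing at 10 m/s -> TTC = 1.2 s -> brake
print(emergency_brake_decision(range_m=12.0, closing_speed_mps=10.0))  # True
```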

The IMU plays an important role here. The radar sensor is typically mounted in the vehicle’s bumper and can work perfectly if it is parallel to the street. However, if for some reason the bumper has been deformed, the radar information will be unreliable. An IMU mounted alongside the radar sensor can dynamically monitor the tilt to provide corrective information. The same concept is applied to the camera modules.
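The idea of using the IMU tilt reading to correct the radar data can be sketched as a one-axis pitch compensation, as in the hypothetical Python example below; a real installation would involve full 3D alignment and calibration.

```python
import math

def correct_radar_for_pitch(range_m, elevation_meas_rad, mount_pitch_rad):
    """Re-project a radar detection using the sensor pitch measured by an
    IMU mounted next to the radar (e.g., after the bumper was deformed)."""
    # The true elevation of the target is the measured elevation plus the
    # sensor's own pitch offset relative to the road.
    elevation_true = elevation_meas_rad + mount_pitch_rad
    forward_m = range_m * math.cos(elevation_true)
    height_m = range_m * math.sin(elevation_true)
    return forward_m, height_m

# Example: bumper pitched 3 degrees down, target seen 50 m out at 0 degrees
print(correct_radar_for_pitch(50.0, 0.0, math.radians(-3.0)))
```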

Steering

Figure 3. Vision system image stabilized by an IMU. (Image courtesy of TDK/InvenSense)

In today’s vehicles there are multiple cameras, 10 or more, for ADAS. However, since there is a lot of vibration when you are driving, the image captured by the camera module can be blurry. If you put an IMU close to each camera, you can measure the vibration applied to the camera at the exact moment it takes a picture. With this information, you can stabilize the image and clean up the noise, producing a clear view.
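One common way a gyroscope is used for stabilization is to estimate how far the image shifted during the exposure and compensate for it. The Python sketch below shows that idea for a single axis; the focal length, exposure time, and vibration rate are made-up numbers, and real systems stabilize optically or electronically in more sophisticated ways.

```python
import math

def stabilization_shift_px(gyro_rate_dps, exposure_s, focal_length_px):
    """Estimate how many pixels the image moved during the exposure due to
    camera rotation, so the frame can be shifted back (or the blur modeled)."""
    # Angle the camera rotated through while the shutter was open.
    rotation_rad = math.radians(gyro_rate_dps * exposure_s)
    # Projection of that rotation onto the image plane.
    return focal_length_px * math.tan(rotation_rad)

# Example: 5 deg/s vibration, 10 ms exposure, 1400 px focal length
print(round(stabilization_shift_px(5.0, 0.010, 1400.0), 2))  # ~1.22 px shift
```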

A typical camera-based application is active lane-keeping assistance. For this, there is usually a camera located near the rear-view mirror that is used to detect the lane lines and to perform image processing. Image quality is important for this application because you have to recognize the line and determine whether the car is crossing it. By mounting an IMU next to the camera to stabilize it, you produce a clearer image, which reduces the computational load on the central processor. For some lane-keeping applications, the driver is alerted so they can take control of the steering in order to stay on track. In other applications, this information is used by the car to control the steering directly and automatically keep it in the lane.

Sensor Fusion

I then asked Marinoni about the role of sensor fusion in ADAS. He explained that it’s an algorithm capable of combining information coming from multiple sensors in order to provide an output that’s better than the sum of the individual sensors.

One example would be an INS, in which a GNSS receiver obtains information from satellites to determine the absolute location of the vehicle. However, there are conditions under which GNSS information is not reliable, for example, in a tunnel, in an urban canyon, or in a multilevel parking lot. You therefore need an IMU close to the GNSS in order to calculate the position of the system when the GNSS is not available. A sensor fusion algorithm running in the GNSS module combines the information from the IMU and the GNSS to generate a position that is reliable under all conditions. This optimizes the system because the IMU and GNSS complement each other’s strengths and weaknesses. The fusion algorithm keeps the information coming from the IMU when the GNSS is not reliable and uses the information from the GNSS when the car is in an open-sky condition. When there is a good GNSS signal, the fusion algorithm also uses the GNSS data to calibrate the IMU for those times when the GNSS is not available.
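A heavily simplified Python sketch of that complementary behavior is shown below: trust the GNSS fix when it is good (and use it to trim the gyroscope bias), otherwise dead-reckon on the IMU plus vehicle speed. A production fusion algorithm would typically be a Kalman filter rather than this hard switch, and the class name, the HDOP threshold, the use of wheel speed, and the bias update are all assumptions for illustration.

```python
import math

class SimpleInsFusion:
    """Toy GNSS/IMU fusion: trust GNSS in open sky, dead-reckon otherwise."""

    def __init__(self, east, north, heading_rad):
        self.east, self.north, self.heading = east, north, heading_rad
        self.gyro_bias_dps = 0.0  # estimated slowly while GNSS is good

    def update(self, dt_s, speed_mps, gyro_yaw_dps, gnss_fix=None):
        # Always propagate heading and position from the IMU + vehicle speed.
        self.heading += math.radians(gyro_yaw_dps - self.gyro_bias_dps) * dt_s
        self.east += speed_mps * dt_s * math.cos(self.heading)
        self.north += speed_mps * dt_s * math.sin(self.heading)
        if gnss_fix is not None and gnss_fix["hdop"] < 2.0:
            # Good GNSS: snap to the fix and nudge the gyro bias estimate,
            # using an externally supplied rate-error term as a placeholder
            # for a real calibration step.
            self.east, self.north = gnss_fix["east"], gnss_fix["north"]
            self.gyro_bias_dps += 0.01 * gnss_fix.get("yaw_rate_error_dps", 0.0)
        return self.east, self.north

# Example: about 1 s in a tunnel (no fix) at 20 m/s heading east
nav = SimpleInsFusion(east=0.0, north=0.0, heading_rad=0.0)
for _ in range(100):
    nav.update(dt_s=0.01, speed_mps=20.0, gyro_yaw_dps=0.0, gnss_fix=None)
print(nav.update(dt_s=0.01, speed_mps=20.0, gyro_yaw_dps=0.0))
```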

Dead Reckoning

When the GNSS signal is not available, the IMU navigates by dead reckoning, starting from the last absolute position it received. From that point it integrates the gyroscope information over time to update the position. If the gyroscope information and the timing are both good, you have good results. If, however, the gyroscope output is good but the timing is not, you have poor results. If both are poor, the results are completely unusable. Because you are integrating, the error accumulates, and after a certain amount of time the dead reckoning results might no longer be acceptable.
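The effect of a small gyroscope error on dead reckoning can be seen in a few lines of Python. The sketch below integrates heading from a gyroscope with a constant bias while the car drives straight, and reports how far the estimated position drifts from the true one; the bias, speed, and duration are purely illustrative.

```python
import math

def dead_reckon(duration_s, dt_s, speed_mps, gyro_bias_dps):
    """Integrate heading from a (biased) gyroscope while driving straight,
    and report how far the estimated position drifts from the true one."""
    heading = 0.0
    est_east = est_north = 0.0
    true_east = 0.0  # truth: the car drives straight east the whole time
    steps = int(duration_s / dt_s)
    for _ in range(steps):
        # The bias integrates into a steadily growing heading error.
        heading += math.radians(gyro_bias_dps) * dt_s
        est_east += speed_mps * dt_s * math.cos(heading)
        est_north += speed_mps * dt_s * math.sin(heading)
        true_east += speed_mps * dt_s
    return math.hypot(est_east - true_east, est_north)

# A 0.1 deg/s gyro bias after 60 s at 20 m/s is already a noticeable drift.
print(round(dead_reckon(60.0, 0.01, 20.0, 0.1), 1), "m of position error")
```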

If you are driving through a tunnel or in a city where the GNSS signal is bad for a long enough time, dead reckoning based on the IMU would not be reliable. Under those conditions, the response is up to the car manufacturer. The system could issue a driver alert; if the driver doesn’t react to it, a second alert could be generated. If that too were ignored, the ADAS could take control and decrease the speed, but not stop the car, which would be dangerous. One additional action could be to generate a call, such as to OnStar, in order to check whether the driver is safe. There are multiple ways to manage the situation.
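One hypothetical way to express such an escalation policy is a small decision function like the Python sketch below. The stages follow the scenario Marinoni described, but the timings and wording are invented, since the actual behavior is up to each car manufacturer.

```python
def escalation_action(seconds_without_driver_response):
    """Hypothetical escalation policy when dead reckoning is no longer
    reliable and the driver needs to take over (timings are made up)."""
    if seconds_without_driver_response < 5:
        return "first alert: visual/audible warning to the driver"
    if seconds_without_driver_response < 10:
        return "second alert: stronger warning (e.g., seat or wheel vibration)"
    if seconds_without_driver_response < 20:
        return "ADAS reduces speed gradually (never a full stop in traffic)"
    return "place an assistance call (e.g., OnStar) to check on the driver"

for t in (2, 7, 15, 30):
    print(t, "s ->", escalation_action(t))
```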

Reliability of the ADAS System

Reliability of the ADAS system itself is obviously critical. Data integrity must be guaranteed under all conditions. According to Marinoni, the TDK/InvenSense 6-axis IMUs for ADAS include an embedded diagnostic developed for systems meeting requirements up to Automotive Safety Integrity Level (ASIL) B. If communication with the central unit is not reliable, for example, the IMU can generate an alarm to alert the driver. The component includes a safety mechanism that continuously checks the functionality of all the blocks of the system. If it detects a malfunction in the accelerometer, gyroscope, digital logic, or communication bus, it sends an alarm to the system, telling it that something went wrong and that the information coming from the sensor is no longer reliable. Self-diagnosis is mandatory in automotive safety applications, especially if you are controlling velocity, braking, or steering. These requirements are addressed in the ASIL specification. However, even if it is not a Level 2 application, a system such as electronic stability control must also be 100% reliable.
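Conceptually, the way the system uses such a diagnostic looks like the sketch below: read a fault status alongside every sample and stop trusting the data when any block reports a problem. The flag names and data layout are invented for illustration and are not the TDK/InvenSense interface.

```python
# Hypothetical fault flags an ASIL-oriented IMU might report with each sample.
FAULT_FLAGS = ("accel_fault", "gyro_fault", "logic_fault", "bus_fault")

def imu_sample_is_trustworthy(sample):
    """Return True only if the IMU's self-diagnostics report no malfunction.

    `sample` is assumed to be a dict such as:
    {"accel": (...), "gyro": (...), "status": {"gyro_fault": False, ...}}
    Missing flags are treated as faults, which is the conservative choice.
    """
    status = sample.get("status", {})
    return not any(status.get(flag, True) for flag in FAULT_FLAGS)

# Example: a sample whose gyroscope self-test failed is rejected.
sample = {"accel": (0.0, 0.0, 1.0), "gyro": (0.0, 0.0, 0.0),
          "status": {"accel_fault": False, "gyro_fault": True,
                     "logic_fault": False, "bus_fault": False}}
print(imu_sample_is_trustworthy(sample))  # False -> raise an alarm upstream
```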

Where Are We Now and Where Are We Going?

Figure 4. Dead reckoning kicks in when entering a tunnel. (Image courtesy of TDK/InvenSense)

I asked Marinoni where he sees this technology now and what to expect in the future. “At this moment, Level 2 is a reality — it’s already on the street,” he said. “But it’s ramping right now in the sense that we expect the IMU volumes to increase from now to 2030, from less than 10 million vehicles to more than 40 million.” For the next step, Level 3, the main change will likely be the introduction of new technologies like lidar. “From the point of view of our IMU, we are already set for L3 applications, thanks to our 6-axis integration,” he said.

He went on to say that the next innovation point in this field could be the reduction of power consumption. Some ADAS applications are required to be on even when the engine is off. For this reason, the power consumption of each component in the application counts. In the past, when applications ran only if the engine was on, that wasn’t an issue. But now the manufacturers are changing their specifications to include power consumption.

And last but not least, since dead reckoning is affected by the error that accumulates over time, the other important point, especially for self-driving cars, is to further reduce the noise of the component. Improving the performance of the sensor makes longer dead reckoning integration possible while keeping the total error under control.

I next asked Marinoni when he thought Level 3 might be hitting the streets. His guess is that we won’t be seeing a lot of movement in the L3 market until 2025.

“Another important topic, although not based on theory — it’s more a rule of thumb — is that for L2 systems, two parallel technologies are enough. For L3 systems, in order to guarantee accuracy, stability, and performance, you need to combine at least three technologies, and most probably for L4, you need four. This should give you a sense of the complexity of guaranteeing performance and safety,” he said. That will place greater demands on the required algorithms and computer resources. That’s where 5G is likely to come into play, to enable moving much of the computation to the cloud. Of course, that opens the door to possible hacking.

This article was written by Ed Brown, Editor of Sensor Technology. For more information, visit here.