Two of the fastest-growing areas in automotive engineering are Advanced Driver Assistance Systems (ADAS) and autonomous vehicles. To learn about the contribution of Inertial Measurement Units (IMUs) to automobile navigation systems for these two applications, we interviewed Mike Horton, CTO of ACEINNA, Inc. (Boston, MA).

Tech Briefs: I’d like to start with an overview of what, in general, are the functions of IMUs in autonomous vehicles and ADAS.

Mike Horton: IMUs are used for a lot of different things in cars. Some are related to safety and some are related to sensor fusion. Sensor fusion is where you want to collect data from multiple different sources to help localize the vehicle or to maintain information about the state of the vehicle itself — where it is, where it’s pointed, and what error characteristics the sensors have. For example, you might be fusing data from radar, cameras, and lidar together to determine what errors the radar or the lidar are giving you at a point in time. The IMU is a key sensor for fusion because it can blend the characteristics of these sensors together with a non-environmentally dependent model of the vehicle. It directly measures accelerations and angular rates, and then, using some pretty simple math, those measurements can be integrated to calculate velocity, position, and attitude.
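To make the integration Horton describes concrete, here is a minimal planar dead-reckoning sketch (our illustration, not ACEINNA code; the 100 Hz sample rate and the flat, gravity-free geometry are simplifying assumptions):

```python
import math

def dead_reckon(samples, dt, heading=0.0, speed=0.0, x=0.0, y=0.0):
    """Integrate yaw rate to heading, forward acceleration to speed,
    and speed to position (simple Euler integration)."""
    for yaw_rate, forward_accel in samples:   # rad/s, m/s^2
        heading += yaw_rate * dt              # attitude from angular rate
        speed += forward_accel * dt           # velocity from acceleration
        x += speed * math.cos(heading) * dt   # position from velocity
        y += speed * math.sin(heading) * dt
    return heading, speed, x, y

# Accelerate straight ahead for 5 s at 1 m/s^2, sampled at 100 Hz
print(dead_reckon([(0.0, 1.0)] * 500, dt=0.01))  # speed ~5 m/s, x ~12.5 m
```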

Tech Briefs: But doesn’t it need information from the environment for the initial conditions?

Figure 2. Inertial Measurement Unit (IMU) sensors are a valuable navigation addition to many of the other sensing solutions used in autonomous vehicles. (Image courtesy of ACEINNA)

Mike Horton: It does need it for the initial conditions, but after that, it doesn’t need environmental information to update the trajectory. In a lot of ways this is the perfect sensor for dynamically fusing observations from different sensors, because it has a truly independent view of the world — it doesn’t have any environmental dependence, except of course for initial conditions. You might take the data from lidar to correct the drift in the IMU. At the same time, if you have an IMU-based trajectory that’s telling you something very different from your lidar trajectory, you can identify conditions where things have gone wrong. If you have different cameras, they’re each on a different coordinate frame than, say, the lidar is. You can use the IMU to dynamically tie those coordinate frames together. So, it has a major role in all of the sensor fusion algorithms, doing high-rate updates among the inputs from those sensors.

Many sensors, such as cameras, have a whole lot of signal processing associated with them. You don’t necessarily want to run your camera localization algorithm at a 100 Hz update rate, because it’s just too much computation. You can use the trajectory provided by the IMU to reduce that data rate. Same thing with lidar — the system takes the lidar data and convolves it with HD maps to get the position and orientation that best fit the two sets of data. That’s a pretty expensive computational operation, so once you’ve got a good fix from that, you can go for a short period of time on the IMU without rerunning all that math.
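A sketch of that rate-reduction idea, with assumed rates of 100 Hz for the IMU and 10 Hz for the lidar fix (the one-dimensional pose and the function names are ours):

```python
def propagate_with_fixes(imu_deltas, lidar_fixes, imu_hz=100, fix_hz=10):
    """Propagate pose at high rate from IMU deltas; reset drift whenever
    a slower, computationally expensive lidar/map fix arrives."""
    ratio = imu_hz // fix_hz            # IMU steps per lidar fix
    pose = lidar_fixes[0]               # initial condition from the map fix
    trajectory = []
    for i, delta in enumerate(imu_deltas):
        if i % ratio == 0 and i // ratio < len(lidar_fixes):
            pose = lidar_fixes[i // ratio]   # low-rate correction
        else:
            pose = pose + delta              # high-rate IMU prediction
        trajectory.append(pose)
    return trajectory
```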

The IMU has always played an important role in safety, and in autonomous vehicles, it’s even more critical. In most of the sensor architectures, the IMU is the sensor of last resort. When the other sensors are failing and the vehicle needs to come to a safe stop, it depends on the IMU. It’ll use the IMU for a period of anywhere from 10 to 60 seconds (depending on who you talk to) to reach a safe stop. Traditionally in safety, an IMU also provides data for vehicle stability and airbag deployment, or a subset of these. An IMU today generally means six sensors: three rotational rates and three accelerations.

Tech Briefs: For fusion, what is used as the central controller to combine the information from the various sensors and the input from the IMU?

Mike Horton: It depends on the vehicle architecture. A lot of people are using a major Electronic Control Unit (ECU), which connects via the CAN bus. It usually includes a CPU that’s used for localization and perception. It might be an Nvidia processor or, as with Tesla, their own Autopilot CPU that sits a level above where we work.
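For readers wiring this up, a hypothetical sketch of pulling IMU frames off the CAN bus with the python-can library; the arbitration ID (0x174), the byte layout, and the 0.01 deg/s-per-bit scaling are made-up placeholders, not a real DBC definition:

```python
import struct
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")
while True:
    msg = bus.recv(timeout=1.0)
    if msg is not None and msg.arbitration_id == 0x174:  # hypothetical ID
        # assume three little-endian int16 angular rates in bytes 0-5
        wx, wy, wz = struct.unpack("<hhh", msg.data[:6])
        print(wx * 0.01, wy * 0.01, wz * 0.01)  # deg/s (assumed scaling)
```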

Figure 3. The tiny MEMS-based ACEINNA OpenIMU 330 is designed for developers creating guidance and navigation systems for a wide range of autonomous vehicles such as cars, trucks, robots, and drones, as well as for industrial, construction, and agricultural machinery. (Image courtesy of ACEINNA)

Tech Briefs: All of the IMU’s measurements require a starting point — location, velocity. How does it deal with the error in the starting point? I assume that the starting information comes from a GNSS.

Mike Horton: It can come from a GNSS, and that’s something we’re actively pushing; I think it’s gaining more traction in autonomous driving. However, just two or three years ago, high-precision GNSS had an accuracy of only 2–3 meters, and even that was questionable. If there were a lot of buildings or trees around, it could be 10 meters. From an autonomous guy’s point of view, that was useless. So, they said “no thank you” to GNSS. Instead of using GNSS, systems like the Tesla Autopilot, or Waymo, use lidar and cameras to build a map. Then the target vehicle uses the lidar and the camera to collect a set of real-time data and convolves it with the map to obtain a best estimate of where it is. Basically, it takes the dataset — this point cloud — and matches it to the HD map in the cloud. From that match it knows: “Ah, I must be here, and I must be pointed in this direction.” They wouldn’t use GNSS at all because of its unreliability.
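A toy version of the point-cloud-to-map matching Horton outlines, reduced to grid correlation over a handful of candidate poses (production systems are far more sophisticated; everything here is our simplification):

```python
import math

def match_score(scan_xy, occupied_cells, pose):
    """Score one candidate pose: transform the scan into the map frame
    and count points that land on occupied HD-map grid cells."""
    x, y, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    return sum((round(x + c * px - s * py),
                round(y + s * px + c * py)) in occupied_cells
               for px, py in scan_xy)

def localize(scan_xy, occupied_cells, candidate_poses):
    """'I must be here, pointed in this direction': pick the pose
    whose transformed scan best fits the map."""
    return max(candidate_poses,
               key=lambda p: match_score(scan_xy, occupied_cells, p))
```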

That is all changing right now. People are starting to integrate GNSS more directly into their architectures, partly because there are now lower-cost and more OEM-friendly versions of high-precision GNSS and of the correction services that go with it. If you have a special kind of GNSS called dual-frequency GNSS, it can correct much better for the ionosphere. And if you couple that with correction data that comes in from the internet, or 4G, or L-band on satellite, to correct clock and orbit errors, you can get sub-30 cm, even down to single-digit-centimeter, accuracy from GNSS. This is becoming much more integral to the design of autonomous systems going forward, because GNSS actually, in the big picture, is very reliable if you know how to deal with it — if you can do the dual frequency and the correction services and deeply integrate it with an IMU. And the chipset cost is very low — it doesn’t have lenses; it doesn’t have a requirement for HD maps. Since the IMUs are also getting better, you can do tight coupling between the IMU and the GNSS to maintain better localization.

We’re also seeing a lot of support for GNSS from semiconductor companies. For example, we integrated our IMU with the ST Teseo 5 dual-frequency silicon. There’s also U-blox with the F9, and dual-frequency silicon from Qualcomm. These chipsets are coming to be used for what’s called lane-level positioning, which is starting to really take hold in autonomous car applications as a key sensor. GNSS plus IMU is a really classic integration — how to do it and do it well is very well understood — it was just missing low-cost, robust, high-precision receivers.
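One textbook piece of the dual-frequency story is the ionosphere-free combination: the first-order ionospheric delay scales as 1/f², so a weighted difference of two pseudoranges cancels it. A minimal sketch using the GPS L1/L5 frequencies (our illustration):

```python
F1, F5 = 1575.42e6, 1176.45e6  # GPS L1 and L5 carrier frequencies, Hz

def iono_free(p1, p5, f1=F1, f2=F5):
    """Ionosphere-free combination of two pseudoranges (meters).
    The weights cancel the 1/f^2 first-order ionospheric delay."""
    return (f1**2 * p1 - f2**2 * p5) / (f1**2 - f2**2)
```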

Tech Briefs: Are these being used in actual automobiles yet?

Mike Horton: Putting six lidars and five Nvidia CPUs on a car is not in the cards for a consumer car. Although GM’s Cruise does have six lidars on a car, that’s for applications like ride-share services, which view cars as infrastructure, so they’re willing to spend a little more money on them than a consumer is.

Tech Briefs: The accelerometer gives you linear acceleration and when you integrate that, what happens if the velocity isn’t changing?

Mike Horton: When you integrate it and there’s maybe a little drift but the velocity doesn’t increase, you’re at a constant velocity. You could be standing still, or you could just be going at a constant velocity. But before the vehicle reached a constant velocity, it had to accelerate from stationary; the IMU tracks and knows the vehicle velocity that way. An IMU tracks relative position change by integrating angular rate and acceleration. Absolute position must come from GNSS or from matching image or lidar scans against an HD map.
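A short numeric illustration of that point (our example): once acceleration drops to zero, the integrated velocity simply holds, and position keeps accumulating.

```python
dt, v, x = 0.01, 0.0, 0.0
for step in range(1000):               # 10 s at 100 Hz
    a = 2.0 if step < 500 else 0.0     # accelerate for 5 s, then cruise
    v += a * dt                        # zero acceleration leaves v unchanged
    x += v * dt
print(v, x)                            # v holds at 10 m/s; x keeps growing
```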

Tech Briefs: How does the ACEINNA IMU solve some of the problems?

Mike Horton: Because the IMU is in a safety-critical function, there is a strong desire for very high reliability. Standards have emerged in automotive to quantify that reliability, most notably ISO 26262. If you go through that process, you get what is called an ASIL rating, which runs from A to D, with D being the highest level. The IMU on the vehicle is typically required to be D — it’s a very high-reliability component. Especially with sensor fusion, there’s also a very high performance requirement, for being able to navigate the car safely to a stop. In a second case, if you’re going through tunnels or under highway underpasses, where the vision sensors may not have much to look at and the GNSS may be blocked, you are critically dependent on the IMU to navigate through. So, there are both high-performance and high-reliability requirements.

We introduced a product to address both of those cases at the same time — it’s called the OpenIMU 330. It’s unique in that it has a triple-redundant architecture, where three IMUs are used in coordination and each one has its own individual calibration. The data from those three is put together to give you the best estimate of the IMU output. In addition, the three data streams are compared to trigger a fallback mode, where only two IMUs are used if any one of them has an unusual output. An issue could result from temperature, shock, a sensor failure, or any number of reasons. Since fault detection can occur within 10 ms, any bad signal is so short-lived that it won’t stay in the solution for long. That matters because you have to integrate a few times — acceleration to get velocity, and velocity to get position — so a small error quickly grows into an increasingly large one. Our new OpenIMU 330 addresses that very problem.
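A minimal sketch of the voting idea (our illustration of the concept, not the OpenIMU 330 firmware): median-vote the three calibrated readings and fall back to the agreeing pair when one deviates beyond a threshold.

```python
def vote(a, b, c, threshold):
    """Blend three redundant IMU readings; flag an outlier so the
    system can fall back to the two that still agree."""
    median = sorted([a, b, c])[1]
    outliers = [r for r in (a, b, c) if abs(r - median) > threshold]
    if outliers:
        good = [r for r in (a, b, c) if abs(r - median) <= threshold]
        return sum(good) / len(good), outliers   # two-sensor fallback
    return (a + b + c) / 3.0, []                 # all healthy: average
```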

Tech Briefs: How does your IMU calculate location?

Mike Horton: It gets it from the accelerometers. You start off with an initial condition, and then the three accelerations integrate to velocity and position. Actually, it’s a little more complicated, because those three accelerations are in the local body frame and you care about motion in the earth frame. There is also gravity, which impinges on this all the time. So, the first thing you have to do is figure out your orientation; then you figure out where gravity is, remove gravity, and start the computation. It’s not super-complicated math, but there’s a bit of math involved. The part of the math process that makes the IMU useful in sensor fusion requires very good precision. The precision needed for sensor fusion and navigation applications is much closer to what’s been done for a long time in aerospace applications and is now making its way into consumer vehicles.
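A minimal sketch of that gravity step, assuming the orientation (a body-to-earth rotation matrix) is already known; an accelerometer measures specific force, so gravity must be added back after rotating into the earth frame:

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])   # gravity in the earth frame (z up)

def earth_accel(specific_force_body, R_body_to_earth):
    """Rotate the body-frame specific force into the earth frame and
    remove gravity, leaving the acceleration you actually integrate."""
    return R_body_to_earth @ specific_force_body + G

# Level and stationary: the accelerometer reads +9.81 on its z axis,
# and the net earth-frame acceleration correctly comes out ~[0, 0, 0].
print(earth_accel(np.array([0.0, 0.0, 9.81]), np.eye(3)))
```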

Tech Briefs: What is the function of your Kalman filter?

Mike Horton: The OpenIMU 330 provides the basic IMU output, and it also has a CPU in it so that customers can integrate their own code. We provide some standard sensor fusion algorithms that can run locally inside the IMU. The Kalman filter is one — it takes other sensor inputs, such as the odometer, GPS, or simply a knowledge of vehicle dynamics, and uses them to help dynamically correct the errors that occur in the sensors, such as bias or drift. Our Kalman filter is open source so that customers can directly use it and modify it. It runs on top of the IMU algorithm. Sometimes customers want to run this on their own CPU, but other times they’re looking for a system independent of the main CPU that can do a dead-reckoning function, or GNSS-denied dead reckoning: meaning no lidar, no radar, just get me from point A to point B for a short period of time.
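In that spirit, a toy one-dimensional Kalman filter (ours, not ACEINNA’s open-source filter): predict velocity by integrating acceleration, then correct it whenever a noisy odometer sample is available.

```python
def kf_velocity(accels, odo_speeds, dt=0.01, q=0.05, r=0.25):
    """Scalar Kalman filter: q is the process-noise variance added each
    prediction, r is the odometer measurement-noise variance."""
    v, P = 0.0, 1.0                    # velocity estimate and its variance
    estimates = []
    for a, z in zip(accels, odo_speeds):
        v, P = v + a * dt, P + q       # predict: integrate acceleration
        if z is not None:              # odometer sample available?
            K = P / (P + r)            # Kalman gain
            v += K * (z - v)           # correct toward the measurement
            P *= (1.0 - K)
        estimates.append(v)
    return estimates
```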

Tech Briefs: How long does it take before the error starts getting too big?

Mike Horton: Without an odometer — if you have no real speed sensor — then 10 seconds is a good benchmark: it can maintain less than 30 cm of error for 10 seconds. If you integrate with an odometer, you can get out to a couple of minutes. However, there are issues with odometers too — tire pressure, slip and slide, things like that. Commonly, in these dead-reckoning scenarios, the odometer signal is used. If you use a combination of the odometer with the IMU, you can navigate a tunnel with pretty reasonable accuracy.
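A back-of-envelope check of that 30 cm figure (our arithmetic, not a spec): an uncorrected accelerometer bias b integrates into a position error of about 0.5·b·t², so holding 0.3 m over 10 s implies a bias below roughly 0.6 mg.

```python
b = 2 * 0.30 / 10**2          # required bias bound, m/s^2
print(b, b / 9.81 * 1000)     # 0.006 m/s^2, about 0.61 mg
```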

Tech Briefs: Can you give me an idea of the OpenIMU 330’s size and ballpark price range?

Mike Horton: About 15 × 10 mm and a couple of millimeters thick. The ballpark price is under $100.

This article was written by Ed Brown, Editor of Sensor Technology.