Edge computing refers to processing done near the source of the data, like a vehicle. Cloud computing, by contrast, does its work in a data center farther up the road.

Autonomous driving capabilities require more real-time, or edge, capabilities, while less safety-critical tasks like mapping and traffic alerts may call for the cloud.

Patrick Dietrich, CTO, Connect Tech

In a live Tech Briefs presentation titled "The Path to High-Level Autonomy," a reader asked: "What are the principal criteria behind assigning something to edge or cloud computing? And what is the infrastructure needed to support that implementation?"

Read the edited response below from Patrick Dietrich, Chief Technology Officer at Connect Tech, a Canada-based hardware manufacturer.

Patrick Dietrich: We find it's always going to come down to the same four things: bandwidth, latency, privacy, and connectivity. Then, it really just depends on your application, and which one you're going to weight higher than the others.

When making that decision, you're going to want to look at the overall bandwidth the application requires, as well as the latency. If you're making decisions in real time, you can imagine [the risks of] a safety-critical system that depends on a cloud connection. Anything safety-critical needs real-time decision making that, for safety reasons, could never rely on the cloud connection.

There's also privacy. Some of this data might be confidential and required to stay on-premises or on the vehicle.

Then, the last factor is connectivity. Are we taking this vehicle into areas that have maybe sub-3G networks? All of those decisions will push you towards edge-based computing or cloud-based computing.

If you happen to be in a very connectivity-rich environment, with 4G and 5G availability, and you don't have privacy concerns, then it may make a lot of sense to let data transfer freely to the cloud. If bandwidth or latency aren't such an issue, then a lot of things can be done in the cloud.

And what we're seeing, more often than not, is that it's definitely a split. You have to look at each of those requirements. Things that need safety-critical, real-time decisions where latency must be low require an edge-compute solution. When there's a bigger data offload that doesn't have to happen within a fixed amount of time, that makes more sense to be taken to the cloud.
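The split Dietrich describes can be sketched as a simple placement heuristic. This is a hypothetical illustration only; the function name, field names, and the 50 ms latency threshold are assumptions for the sketch, not details from the interview:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    safety_critical: bool      # real-time decisions with safety impact
    max_latency_ms: float      # deadline the task must meet
    private_data: bool         # data required to stay on the vehicle
    bandwidth_mbps: float      # sustained data rate the task produces
    link_available: bool       # is a usable 4G/5G link present right now?
    link_capacity_mbps: float  # capacity of that link

def place(task: Workload) -> str:
    """Return 'edge' or 'cloud' by weighing the four criteria:
    latency, privacy, connectivity, and bandwidth."""
    # Safety-critical, tight-deadline decisions can never rely on the cloud.
    if task.safety_critical or task.max_latency_ms < 50:  # assumed budget
        return "edge"
    # Confidential data stays on the vehicle.
    if task.private_data:
        return "edge"
    # No usable link (e.g. sub-3G coverage) forces edge processing.
    if not task.link_available:
        return "edge"
    # Offload only if the link can actually carry the task's bandwidth.
    if task.bandwidth_mbps > task.link_capacity_mbps:
        return "edge"
    # Connectivity-rich, non-sensitive, deadline-tolerant work: cloud.
    return "cloud"

# A bulk map-data offload under good 5G coverage goes to the cloud;
# obstacle avoidance stays at the edge regardless of coverage.
print(place(Workload(False, 5000.0, False, 10.0, True, 100.0)))  # -> cloud
print(place(Workload(True, 10.0, False, 1.0, True, 100.0)))      # -> edge
```

In practice each factor would carry application-specific weights rather than hard cutoffs, but the ordering above mirrors the priority in the answer: safety and latency first, then privacy, then connectivity and bandwidth.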
