The faster drones fly, the more unstable they become, and at high speeds their aerodynamics can be too complicated to predict. Crashes are therefore common. Yet faster, more nimble drones could be put to use in time-critical operations; for instance, searching for survivors after a natural disaster.
Aerospace engineers devised an algorithm that helps drones find the fastest route around obstacles without crashing. The new algorithm combines simulations of a drone flying through a virtual obstacle course with data from experiments with a real drone flying through the same course in a physical space. The researchers found that a drone trained with the algorithm flew through a simple obstacle course up to 20 percent faster than a drone trained on conventional planning algorithms.
Training drones to fly around obstacles is relatively straightforward if they are meant to fly slowly. That's because aerodynamic effects such as drag don't generally come into play at low speeds, so they can be left out of any modeling of a drone's behavior. But at high speeds such effects are far more pronounced, and how the vehicle will handle becomes much harder to predict.
There could be delays in sending a signal to a motor, or a sudden voltage drop that causes other dynamics problems. Effects like these can't be captured by traditional planning approaches.
To understand how high-speed aerodynamics affect drones in flight, researchers would normally have to run many experiments in the lab, flying drones at various speeds and along various trajectories to see which fly fast without crashing — an expensive and often crash-inducing training process. Instead, the team developed a high-speed flight-planning algorithm that combines simulations and experiments in a way that minimizes the number of experiments required to identify fast and safe flight paths.
The researchers started with a physics-based flight-planning model they developed to simulate how a drone is likely to behave while flying through a virtual obstacle course. They simulated thousands of racing scenarios, each with a different flight path and speed pattern, and charted whether each scenario was feasible (safe) or infeasible (resulting in a crash). From this chart, they could quickly zero in on a handful of the most promising scenarios, or racing trajectories, to try out in the lab.
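The simulate-classify-select loop described above can be sketched in a few lines. This is a toy illustration, not the team's actual model: the feasibility rule, the scenario parameters (speed and obstacle clearance), and all function names are hypothetical stand-ins for the physics-based simulator and trajectory representation.

```python
import random

def simulate(speed, clearance):
    """Toy stand-in for the physics-based simulator: a run counts as
    feasible (no crash) when speed stays under a margin that grows with
    obstacle clearance. Hypothetical rule, for illustration only."""
    return speed < 4.0 + 2.0 * clearance

def sample_scenarios(n, seed=0):
    """Draw candidate racing scenarios as (speed in m/s, clearance in m)."""
    rng = random.Random(seed)
    return [(rng.uniform(1.0, 10.0), rng.uniform(0.2, 2.0)) for _ in range(n)]

def plan(n_sim=1000, n_lab=5):
    # 1) Simulate many scenarios and label each feasible or infeasible.
    scenarios = sample_scenarios(n_sim)
    feasible = [s for s in scenarios if simulate(*s)]
    # 2) Rank feasible scenarios by speed and keep only a handful of the
    #    most promising ones for (expensive) real-world lab experiments.
    feasible.sort(key=lambda s: s[0], reverse=True)
    return feasible[:n_lab]

candidates = plan()
```

The point of the structure is the cheap/expensive split: thousands of simulated rollouts filter the search space, so only the top few candidates ever need a physical flight test.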
To demonstrate their new approach, the researchers simulated a drone flying through a simple course with five large, square-shaped obstacles arranged in a staggered configuration. They set up this same configuration in a physical training space and programmed a drone to fly through the course at speeds and trajectories that they previously picked out from their simulations. They also ran the same course with a drone trained on a more conventional algorithm that does not incorporate experiments into its planning.
Overall, the drone trained on the new algorithm completed the course in a shorter time than the conventionally trained drone. In some scenarios, the winning drone finished the course 20 percent faster than its competitor, even though it took a trajectory with a slower start; for instance, taking a bit more time to bank around a turn. The conventionally trained drone did not make this kind of subtle adjustment, likely because its trajectories, based solely on simulations, could not fully account for aerodynamic effects that the team's experiments revealed in the real world.
The team plans to run more flight experiments, at faster speeds and through more complex environments, to further improve the algorithm. They may also incorporate flight data from human pilots who race drones remotely, whose decisions and maneuvers might help zero in on even faster yet still feasible flight plans.