In recent years, a host of Hollywood blockbusters, including “Furious 7,” “Jurassic World,” and “The Wolf of Wall Street,” have included aerial tracking shots captured by drones outfitted with cameras. Those shots required separate operators for the drones and the cameras, as well as careful planning to avoid collisions. Now a team of researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and ETH Zurich hopes to make drone cinematography more accessible, simpler, and more reliable.


The researchers have developed a system that allows a director to specify a shot's framing — which figures or faces appear where on screen, and at what distance. The system then generates control signals on the fly for a camera-equipped autonomous drone, preserving that framing as the actors move. As long as the drone's information about its environment is accurate, the system also guarantees that it won't collide with either stationary or moving obstacles.

The user can specify how much of the screen a face or figure should occupy, what part of the screen it should occupy, and what the subject's orientation toward the camera should be — straight on, profile, three-quarter view from either side, or over the shoulder. Those parameters can be set separately for any number of subjects. In tests at MIT, the researchers used compositions involving up to three subjects.
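To make those parameters concrete, a per-subject framing specification might be represented as in the following Python sketch. The class and field names here are hypothetical illustrations, not the researchers' actual interface.

```python
from dataclasses import dataclass
from enum import Enum

class Orientation(Enum):
    """Subject orientation toward the camera."""
    STRAIGHT_ON = "straight on"
    PROFILE_LEFT = "profile, left side"
    PROFILE_RIGHT = "profile, right side"
    THREE_QUARTER_LEFT = "three-quarter, left side"
    THREE_QUARTER_RIGHT = "three-quarter, right side"
    OVER_THE_SHOULDER = "over the shoulder"

@dataclass
class SubjectFraming:
    """Desired on-screen framing for one subject (hypothetical structure)."""
    screen_position: tuple[float, float]  # normalized (x, y) center of the subject on screen
    screen_fraction: float                # fraction of the screen the face/figure should occupy
    orientation: Orientation              # desired viewing angle toward the camera

# A two-subject composition; the MIT tests used up to three subjects
shot = [
    SubjectFraming((0.3, 0.5), 0.20, Orientation.THREE_QUARTER_LEFT),
    SubjectFraming((0.7, 0.5), 0.20, Orientation.STRAIGHT_ON),
]
```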

In practice, the framing can usually be maintained only approximately. Unless the actors are extremely well choreographed, the distances between them, the orientations of their bodies, and their distance from obstacles will all vary, making it impossible to meet every constraint simultaneously. The user can, however, specify how the different factors should be weighed against each other.

Preserving the actors’ relative locations onscreen, for instance, might be more important than maintaining a precise distance, or vice versa. The user can also assign a weight to minimize occlusion, ensuring that one actor doesn't end up blocking another from the camera.
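As a hedged illustration of that weighting, the competing factors could be folded into a single scalar objective; the error names and weight values below are invented for the example, not taken from the researchers' actual cost function.

```python
def framing_cost(errors: dict[str, float], weights: dict[str, float]) -> float:
    """Combine competing framing errors into one scalar cost.

    `errors` holds each factor's current deviation from its target
    (e.g., on-screen position, subject distance, occlusion); `weights`
    encodes how much the user cares about each. Both names are illustrative.
    """
    return sum(weights[k] * errors[k] ** 2 for k in errors)

# Example: relative on-screen position matters more than exact distance
cost = framing_cost(
    errors={"screen_position": 0.08, "distance": 0.5, "occlusion": 0.1},
    weights={"screen_position": 10.0, "distance": 1.0, "occlusion": 5.0},
)
```

Squaring the errors makes large deviations disproportionately costly, so an optimizer minimizing such a cost tends to spread small compromises across all the factors rather than let one drift far off target.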

The key to the system is that it continuously estimates the velocities of all the moving objects in the drone's environment and projects their locations a second or two into the future. This buys it time to compute optimal flight trajectories and ensures that it can recover smoothly if the drone needs to take evasive action to avoid a collision.
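A minimal sketch of that projection step, assuming a simple constant-velocity model (the researchers' estimator may well be more sophisticated):

```python
import numpy as np

def project_position(position: np.ndarray,
                     velocity: np.ndarray,
                     horizon: float = 1.5) -> np.ndarray:
    """Extrapolate a moving object's position `horizon` seconds ahead,
    assuming it keeps its current velocity (constant-velocity model)."""
    return position + velocity * horizon

# An actor 2 m ahead of the drone, walking toward it at 1.2 m/s
actor_pos = np.array([2.0, 0.0, 1.7])   # meters
actor_vel = np.array([-1.2, 0.0, 0.0])  # meters per second
predicted = project_position(actor_pos, actor_vel)  # ~1.5 s into the future
```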

The system updates its position projections about 50 times a second. Usually, the updates will have little effect on the drone's trajectory, but the frequent updates ensure that the system can handle sudden changes of velocity.
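Put together, the replanning cycle might be structured like the following hedged sketch, in which the estimation, projection, and planning steps are stubbed placeholders rather than the actual system's code:

```python
import time

UPDATE_RATE_HZ = 50        # the reported replanning rate
DT = 1.0 / UPDATE_RATE_HZ  # 20 ms per cycle

# Placeholder steps; a real implementation would wrap the state
# estimator, trajectory optimizer, and flight controller.
def estimate_scene():             return {}     # positions/velocities of subjects, obstacles, drone
def project(scene, horizon=1.5):  return scene  # extrapolate ~1.5 s ahead
def replan(scene, predicted):     return []     # collision-free, framing-preserving trajectory
def send_to_drone(trajectory):    pass          # hand the new trajectory to the controller

def control_loop(duration_s: float = 1.0):
    """Run the 50 Hz estimate-project-replan cycle for `duration_s` seconds."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        start = time.monotonic()
        scene = estimate_scene()
        predicted = project(scene)
        send_to_drone(replan(scene, predicted))
        # Sleep out the remainder of the 20 ms cycle
        time.sleep(max(0.0, DT - (time.monotonic() - start)))
```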

The researchers conducted their tests at CSAIL's motion-capture studio, using a quadrotor (four-propeller) drone. The motion-capture system provided highly accurate position data about the subjects, the studio walls, and the drone itself. In one set of experiments, the subjects actively tried to collide with the drone, marching briskly toward it as it attempted to keep them framed within the shot. In every case, the drone avoided the collision and immediately tried to resume the prescribed framing.

For more information, contact Larry Hardesty at 617-253-4735.