UW researchers have created a deep learning method that can animate flowing material, such as waterfalls, smoke or clouds. (Photo: Sarah McQuate/University of Washington)

Sometimes photos cannot truly capture a scene. How much more epic would that vacation photo of Niagara Falls be if the water were moving?

Researchers at the University of Washington have developed a deep learning method that can do just that. Given a single photo of a waterfall, the system creates a video showing the water cascading down. All that’s missing is the roar of the water and the feeling of the spray on your face. The team’s method can animate any flowing material, including smoke and clouds, and produces a short video that loops seamlessly, giving the impression of endless movement.

“What’s special about our method is that it doesn’t require any user input or extra information,” said Aleksander Hołyński, a doctoral student in the Paul G. Allen School of Computer Science & Engineering. “All you need is a picture. And it produces as output a high-resolution, seamlessly looping video that quite often looks like a real video.”

The team’s system consists of two parts: first it predicts how things were moving when the photo was taken, then it uses that information to create the animation. To estimate motion, the team trained a neural network on thousands of videos of waterfalls, rivers, oceans and other materials with fluid motion. The training process consisted of asking the network to guess the motion of a video when given only its first frame. After comparing its prediction with the actual video, the network learned to identify clues, such as ripples in a stream, that help it predict what happens next. The trained network then uses those clues to determine whether and how each pixel in a new photo should move.
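To make that first stage concrete, here is a minimal sketch, in PyTorch, of the kind of network involved: one that takes a single photo and outputs a two-channel motion field (a horizontal and vertical displacement for every pixel). The architecture and the names used here (MotionNet, motion_field) are illustrative assumptions, not the team’s actual model.

```python
# Illustrative sketch only -- not the authors' model. A tiny encoder-decoder
# that takes one RGB photo and predicts a 2-channel motion field (dx, dy)
# for every pixel.
import torch
import torch.nn as nn

class MotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, image):                      # image: (batch, 3, H, W)
        return self.decoder(self.encoder(image))   # motion: (batch, 2, H, W)

# During training, the predicted field would be compared against motion
# measured from real video clips (e.g. optical flow from the first frame).
model = MotionNet()
photo = torch.rand(1, 3, 256, 256)   # a single still photo
motion_field = model(photo)          # estimated per-pixel motion
```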

To animate the photo, the researchers first tried a technique called “splatting,” which moves each pixel according to its predicted motion. But this created a problem.

“Think about a flowing waterfall,” Hołyński said. “If you just move the pixels down the waterfall, after a few frames of the video, you’ll have no pixels at the top!” So the team created “symmetric splatting.” Essentially, the method predicts both the future and the past for an image and then combines them into one animation.

“Looking back at the waterfall example, if we move into the past, the pixels will move up the waterfall. So we will start to see a hole near the bottom,” Hołyński said. “We integrate information from both of these animations so there are never any glaringly large holes in our warped images.”
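Under simplifying assumptions (nearest-pixel splatting and a hard hand-off from the forward warp to the backward warp, rather than the blended combination the researchers describe), the idea might be sketched like this:

```python
# Simplified sketch of "splatting" and the symmetric variant -- an illustration,
# not the paper's implementation. Each pixel is pushed along its motion vector;
# warping both forward from the start and backward from the end lets one warp
# fill the holes the other leaves behind.
import numpy as np

def splat(image, motion, t):
    """Push every pixel of `image` (H, W, C) by t * motion (H, W, 2).
    Returns the warped image and a mask of which output pixels got a value."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    new_x = np.round(xs + t * motion[..., 0]).astype(int)
    new_y = np.round(ys + t * motion[..., 1]).astype(int)
    inside = (new_x >= 0) & (new_x < w) & (new_y >= 0) & (new_y < h)
    out[new_y[inside], new_x[inside]] = image[ys[inside], xs[inside]]
    filled[new_y[inside], new_x[inside]] = True
    return out, filled

def symmetric_splat(image, motion, t, num_frames):
    """Combine a forward warp (holes appear upstream) with a backward warp
    (holes appear downstream) so no large gaps remain."""
    fwd, fwd_filled = splat(image, motion, t)
    bwd, _ = splat(image, motion, t - num_frames)
    return np.where(fwd_filled[..., None], fwd, bwd)
```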

Finally, the researchers wanted their animation to loop seamlessly to create a look of continuous movement. The animation network uses a few tricks to keep things clean, including transitioning different parts of the frame at different times and deciding how quickly or slowly to blend each pixel depending on its surroundings.
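As a rough illustration of how such blending can close the loop, the sketch below shifts weight from a forward warp of the photo to a backward warp as the video plays, so the final frame lines up with the first. It reuses the splat() helper from the previous sketch, and the single global weight per frame is a simplification of the per-pixel blending described above.

```python
# Sketch of closing the loop: linearly shift weight from the forward warp to
# the backward warp so the final frame blends back toward the starting photo.
# Reuses splat() from the sketch above; not the team's actual blending scheme.
def looping_video(image, motion, num_frames):
    frames = []
    for t in range(num_frames):
        alpha = t / num_frames                         # 0.0 at the start, near 1.0 at the end
        fwd, _ = splat(image, motion, t)               # photo pushed forward by t steps
        bwd, _ = splat(image, motion, t - num_frames)  # photo pulled back from the loop's end
        frames.append((1 - alpha) * fwd + alpha * bwd)
    return frames  # the sequence ends where it began, so playback loops seamlessly
```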

The team’s method works best for objects with predictable fluid motion. Currently, the technology struggles to predict how reflections should move or how water distorts the appearance of objects beneath it.
