University of Washington electrical engineers have developed a way to automatically track people across moving and still cameras, using an algorithm that lets the networked cameras learn one another's differences. The cameras first identify a person in a video frame, then follow that same person across multiple camera views.
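In outline, the cross-camera matching step might look something like the sketch below. This is not the UW team's actual code, and all function and parameter names are illustrative: each camera computes an appearance feature for a detected person, a mapping learned from people seen by both cameras compensates for differences between them, and a new detection is matched to the closest person in the other camera's view.

```python
import numpy as np

def color_histogram(person_crop, bins=16):
    """Appearance feature: a per-channel color histogram of a person's
    image crop (an RGB array). Illustrative; real systems use richer features."""
    hist = [np.histogram(person_crop[..., c], bins=bins,
                         range=(0, 256), density=True)[0]
            for c in range(3)]
    return np.concatenate(hist)

def learn_camera_mapping(feats_a, feats_b):
    """Learn a linear map from camera A's feature space to camera B's,
    fit by least squares over features of people observed by both cameras.
    This is one simple way two cameras could 'learn each other's differences'
    in color response and lighting."""
    A = np.stack(feats_a)   # features from camera A, one row per person
    B = np.stack(feats_b)   # the same people's features from camera B
    mapping, *_ = np.linalg.lstsq(A, B, rcond=None)
    return mapping

def match_across_cameras(feat_a, gallery_b, mapping):
    """Project a camera-A feature into camera B's space and return the
    index of the closest person in camera B's current gallery."""
    projected = feat_a @ mapping
    dists = [np.linalg.norm(projected - g) for g in gallery_b]
    return int(np.argmin(dists))
```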

“Tracking humans automatically across cameras in a three-dimensional space is new,” said lead researcher Jenq-Neng Hwang, a UW professor of electrical engineering. “As the cameras talk to each other, we are able to describe the real world in a more dynamic sense.”

Imagine a typical GPS display that maps the streets, buildings and signs in a neighborhood as your car moves forward, then add humans to the picture. With the new technology, a car with a mounted camera could take video of the scene, then identify and track humans and overlay them onto the virtual 3-D map on your GPS screen. The UW researchers are developing the system to work in real time, which could help pick out people crossing busy intersections or track a specific person who is dodging the police.
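Placing a tracked person onto a GPS map requires converting an image detection into world coordinates. The sketch below shows one simplified way this could be done, assuming a level camera, flat ground and known camera parameters; none of this reflects the researchers' actual method, and all names are hypothetical.

```python
import numpy as np

def pixel_to_world(u, v, cam_height, focal_px, cx, cy,
                   cam_lat, cam_lon, heading_rad):
    """Project the bottom-center pixel (u, v) of a person's bounding box
    onto the ground plane, then offset the camera's GPS position by that
    distance. Assumes a level, forward-facing pinhole camera at height
    cam_height (meters) over flat ground."""
    # Intersect the ray through the pixel with the ground plane:
    depth = cam_height * focal_px / (v - cy)   # forward distance, meters
    lateral = depth * (u - cx) / focal_px      # rightward offset, meters
    # Rotate the offset by the camera heading (clockwise from north):
    north = depth * np.cos(heading_rad) - lateral * np.sin(heading_rad)
    east = depth * np.sin(heading_rad) + lateral * np.cos(heading_rad)
    # Convert the metric offset to degrees of latitude/longitude:
    dlat = north / 111_320.0
    dlon = east / (111_320.0 * np.cos(np.radians(cam_lat)))
    return cam_lat + dlat, cam_lon + dlon
```

Repeating this projection every frame, for every tracked person, is what would let detections appear as moving markers on a live map display.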

“Our idea is to enable the dynamic visualization of the realistic situation of humans walking on the road and sidewalks, so eventually people can see the animated version of the real-time dynamics of city streets on a platform like Google Earth,” Hwang said.
