The system first detects 2D keypoints on the subject’s body and then turns the poses into a series of 3D “skeletons” that are stitched together to generate a motion sculpture. (MIT CSAIL)

Traditional videos and photos used for studying motion are two-dimensional and don’t reveal the underlying 3D structure of the subject. So researchers are using an algorithm that can take 2D videos and turn them into 3D-printed “motion sculptures” showing how a human body moves through space.

The “MoSculp” system lets users navigate around the printed structures through a computer interface and view them from different angles, revealing motion-related information that is inaccessible from the original viewpoint. Given an input video, the system first detects 2D keypoints on the subject’s body, then estimates the most plausible 3D poses from those points and converts them into 3D “skeletons.”
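The per-frame pipeline described above can be sketched roughly as follows. This is a minimal illustrative mock-up, not the authors’ implementation: the function names, the toy constant-depth “lifting” step, and the input format are all assumptions made for clarity (a real system would run a learned 2D pose detector and estimate per-joint depth from pose priors).

```python
# Hypothetical sketch of a MoSculp-style pipeline (illustrative only).

def detect_2d_keypoints(frame):
    # Stand-in for a 2D pose detector: here each "frame" already
    # carries its (x, y) joint coordinates.
    return frame["joints_2d"]

def lift_to_3d(joints_2d, depth_guess=1.0):
    # Toy lifting step: assign every 2D joint a constant depth.
    # A real system infers per-joint depth from learned pose priors.
    return [(x, y, depth_guess) for (x, y) in joints_2d]

def build_motion_sculpture(video):
    # Stitch the per-frame 3D skeletons together along the time axis,
    # so each joint traces a path through space -- the "sculpture".
    sculpture = []
    for t, frame in enumerate(video):
        skeleton_3d = lift_to_3d(detect_2d_keypoints(frame))
        sculpture.append({"t": t, "skeleton": skeleton_3d})
    return sculpture

# Two toy frames with two tracked joints each.
video = [
    {"joints_2d": [(0.10, 0.50), (0.20, 0.60)]},
    {"joints_2d": [(0.15, 0.50), (0.25, 0.60)]},
]
sculpture = build_motion_sculpture(video)
```

The stitched result is just the sequence of time-stamped skeletons; a downstream step would sweep a surface through them for 3D printing.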