Imaging devices and environmental context. (a) On-glasses camera configuration using a Tobii Pro Glasses 2 eye tracker. (b) Lower limb data acquisition device with a camera and an IMU chip. (c) and (d) Example frames from the cameras for the two data acquisition configurations. (e) and (f) Example images of the data collection environment and terrains considered in the experiments. (Image: North Carolina State University)

Researchers have developed new software that can enable people using robotic prosthetics or exoskeletons to walk in a safer, more natural manner on different types of terrain. The new framework incorporates computer vision into prosthetic leg control and includes robust artificial intelligence (AI) algorithms that allow the software to better account for uncertainty.

Lower-limb robotic prosthetics need to execute different behaviors based on the terrain users are walking on. The new framework allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making.
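The pipeline described above — predict the terrain, quantify the prediction's uncertainty, and let that uncertainty gate the control decision — can be illustrated with a minimal sketch. The entropy-based uncertainty measure, the threshold value, and the "cautious-default" fallback gait are illustrative assumptions, not the researchers' actual method:

```python
import numpy as np

# The six terrain classes considered in the study.
TERRAINS = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]

def softmax(logits):
    """Convert raw classifier scores into a probability distribution."""
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def predict_with_uncertainty(logits):
    """Return the predicted terrain and a normalized entropy in [0, 1]
    (0 = fully confident, 1 = maximally uncertain)."""
    p = softmax(logits)
    entropy = -np.sum(p * np.log(p + 1e-12)) / np.log(len(p))
    return TERRAINS[int(np.argmax(p))], float(entropy)

def choose_gait(logits, threshold=0.5):
    """Uncertainty-aware decision: switch gait only when the classifier
    is confident; otherwise fall back to a conservative default.
    The threshold of 0.5 is a hypothetical choice for illustration."""
    terrain, uncertainty = predict_with_uncertainty(logits)
    return terrain if uncertainty < threshold else "cautious-default"
```

For example, a strongly peaked score vector selects a terrain-specific gait, while a flat one keeps the safe default.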

The researchers focused on distinguishing among six different terrains that require adjustments in a robotic prosthetic’s behavior: tile, brick, concrete, grass, “upstairs,” and “downstairs.” The new “environmental context” framework incorporates both hardware and software elements. The researchers designed the framework for use with any lower-limb robotic exoskeleton or robotic prosthetic device but with one additional piece of hardware: a camera. They used cameras worn on eyeglasses and cameras mounted on the lower-limb prosthesis itself and evaluated how the AI was able to make use of computer vision data from both types of camera, separately and when used together.
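One simple way to combine the two camera streams when they are used together is late fusion: each camera's classifier produces a per-terrain probability distribution, and the two are averaged. This is a generic sketch of the idea; the weighting scheme and equal-weight default are assumptions, not the framework's published fusion method:

```python
import numpy as np

def fuse_predictions(p_glasses, p_leg, w=0.5):
    """Late fusion of per-terrain probabilities from the on-glasses
    camera and the lower-limb camera. `w` is a hypothetical mixing
    weight (0.5 = trust both streams equally)."""
    p = w * np.asarray(p_glasses, dtype=float) + (1 - w) * np.asarray(p_leg, dtype=float)
    return p / p.sum()  # renormalize to a valid distribution
```

When the two views disagree, the fused distribution reflects both, which naturally raises the measured uncertainty of the final prediction.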

The framework teaches deep-learning systems to evaluate and quantify the uncertainty in their predictions, so that the system can factor that uncertainty into its decision-making. Although developed for robotic prosthetics, the approach could be applied to any type of deep-learning system.
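One common technique for extracting uncertainty estimates from a deep-learning classifier — not necessarily the one the researchers used — is Monte Carlo sampling: run a stochastic model several times on the same input and measure how much the predictions spread. The sketch below uses a toy noise-injected model in place of a real dropout-enabled network; `toy_model` and its noise level are purely illustrative:

```python
import numpy as np

def mc_predictions(model, x, n_samples=30, rng=None):
    """Run a stochastic model n_samples times and return the mean class
    probabilities and their per-class variance. High variance flags
    predictions the controller should not act on aggressively."""
    if rng is None:
        rng = np.random.default_rng(0)
    probs = np.stack([model(x, rng) for _ in range(n_samples)])
    return probs.mean(axis=0), probs.var(axis=0)

def toy_model(x, rng):
    """Stand-in for a dropout-enabled network: logits are jittered on
    every forward pass, mimicking stochastic inference."""
    logits = np.asarray(x, dtype=float) + rng.normal(0.0, 0.5, size=len(x))
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

A real deployment would replace `toy_model` with the trained terrain classifier run with dropout active at inference time.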

To train the AI system, the researchers fitted the cameras on able-bodied individuals, who then walked through a variety of indoor and outdoor environments. They then ran a proof-of-concept evaluation in which a person with lower-limb amputation wore the cameras while traversing the same environments. They found that the trained model transfers appropriately across populations: the AI performed well even though it was trained on data from one group of people and used by someone else.

For more information, contact Edgar Lobaton at North Carolina State University.