Lower-limb robotic prosthetics operate more effectively when they adjust to terrain.

If a wearer approaches stairs or a grassy hill, for example, a robotic prosthetic must provide more spring or mechanical resistance at the foot.

A new framework being developed at North Carolina State University sets the stage for A.I.-enabled robotic prostheses that predict user terrain and initiate mechanical changes accordingly.

The artificial intelligence recognizes six environmental conditions: tile, brick, concrete, grass, “upstairs,” and “downstairs.”

Using training data supplied by students and researchers, the software determines which of the six types of terrain users will be stepping on, quantifies the uncertainties associated with that prediction, and incorporates that uncertainty into decision-making.

“If the degree of uncertainty is too high, the A.I. isn’t forced to make a questionable decision – it could instead notify the user that it doesn’t have enough confidence in its prediction to act, or it could default to a ‘safe’ mode,” said Boxuan Zhong, lead researcher and a recent Ph.D. graduate from N.C. State.
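As a rough illustration of that decision rule, the sketch below gates the terrain prediction on its uncertainty before acting on it. The entropy-based uncertainty measure, the threshold value, and the “safe mode” label are assumptions made for this example, not details taken from the paper.

```python
import numpy as np

TERRAIN_CLASSES = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]

# Hypothetical threshold: above this predictive entropy, the prediction
# is treated as too uncertain to act on.
UNCERTAINTY_THRESHOLD = 0.8


def entropy(probs: np.ndarray) -> float:
    """Predictive entropy of a class-probability vector (higher = less certain)."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())


def decide(class_probs: np.ndarray) -> str:
    """Return the terrain label to act on, or fall back to a 'safe' default."""
    if entropy(class_probs) > UNCERTAINTY_THRESHOLD:
        # Too uncertain: do not force a questionable decision.
        return "safe_mode"
    return TERRAIN_CLASSES[int(np.argmax(class_probs))]


# A confident 'grass' prediction acts; a nearly uniform one falls back.
print(decide(np.array([0.02, 0.02, 0.03, 0.90, 0.02, 0.01])))  # -> grass
print(decide(np.array([0.18, 0.17, 0.16, 0.17, 0.16, 0.16])))  # -> safe_mode
```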

With cameras on smart glasses and the lower-limb prosthesis itself, the N.C. State team evaluated how the artificial-intelligence software handled the computer-vision data coming from both imaging sources.

"Using only the camera mounted on the lower limb worked pretty well – particularly for near-term predictions, such as what the terrain would be like for the next step or two," said Helen Huang, fellow researcher and co-author of the paper, “Environmental Context Prediction for Lower Limb Prostheses with Uncertainty Quantification, ” published in IEEE Transactions on Automation Science and Engineering.

To train the A.I. system, researchers fitted the cameras on able-bodied individuals, who then walked through a variety of indoor and outdoor environments. Following the first test, the researchers began a proof-of-concept evaluation by having a person with lower-limb amputation wear the cameras while traversing the same environments.

“We found that the model can be appropriately transferred so the system can operate with subjects from different populations,” said Edgar Lobaton, co-author of the paper and an associate professor of electrical and computer engineering at North Carolina State University. “That means that the A.I. worked well even though it was trained by one group of people and used by somebody different.”

In a short Q&A with Tech Briefs below, Lobaton explains where the team's research will go next.

Tech Briefs: What features are the cameras spotting that distinguish, say, “Brick” from “Grass”?

Prof. Edgar Lobaton: The cameras that we are using are visual RGB sensors, or regular video cameras. They are basically picking up on the patterns observed in the images to determine the type of terrain. We are planning to use other sensors, such as depth cameras, in the future as well.

The models themselves, however, are a little more complex. They also consider a few frames of information over time, which can pick up details related to how much motion is present, like differentiating between standing and walking.

Tech Briefs: What exactly is the model?

Prof. Edgar Lobaton: The model is the A.I. software that takes the images as input and provides as output the prediction of the terrain and a measure of confidence. The model is based on deep learning. We did test models that analyze one image at a time, but we noticed that the models that incorporated temporal information gave us the best performance.

Motion was captured in two different ways. The first was using an inertial measurement unit (an accelerometer and gyroscope) to track a person's walking gait and then determine which images in the gait gave us the most information. We used these to determine when to pick frames for prediction. Motion was also incorporated through the temporal models on the video feeds. These models look at the dynamics of the images, which can give an idea of the overall speed of a person's walking.
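The paper's exact architecture is not described in this interview, but a minimal sketch of a temporal terrain classifier along the lines Lobaton describes could look like the following, assuming a small per-frame convolutional feature extractor followed by a recurrent layer; the layer sizes, the GRU choice, and the six-class output are placeholders for this illustration.

```python
import torch
import torch.nn as nn


class TemporalTerrainClassifier(nn.Module):
    """Illustrative frame-sequence classifier: per-frame CNN features -> GRU -> softmax."""

    def __init__(self, num_classes: int = 6, feat_dim: int = 64):
        super().__init__()
        # Small per-frame feature extractor (a stand-in for the real backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        # Recurrent layer aggregates information across frames (the temporal part).
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last_hidden = self.gru(feats)
        probs = torch.softmax(self.head(last_hidden[-1]), dim=-1)
        return probs  # per-sequence class probabilities


# Example: one clip of eight 64x64 RGB frames -> probabilities over six terrains.
probs = TemporalTerrainClassifier()(torch.randn(1, 8, 3, 64, 64))
print(probs.shape)  # torch.Size([1, 6])
```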

Tech Briefs: How much effort, or literal walking, was involved in the training?

Prof. Edgar Lobaton: To get the data, we enlisted individuals wearing cameras on their legs (as shown in the above picture) and the camera on the glasses. The participants walked around different locations. We had several sessions and multiple hours of data collection. The actual training on the workstation that we used in my lab took around 4 hours. Beyond that, the effort for the models was in developing the algorithm and collecting the data.

Tech Briefs: How accurate is the software in distinguishing one surface from another? What is the most difficult aspect to detect?

Prof. Edgar Lobaton: In terms of accuracy, we did notice that we had a harder time identifying stairs — either going upstairs or downstairs. This was somewhat expected since our data had the fewest instances of this class, and there was also more variation since our participants went through different stair types.

Another challenge was the blurriness that we observed from the view on the leg. As part of a solution, we proposed a sampling strategy for the image frames based on the point in the walking gait that was identified as the most informative.
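As a concrete sketch of that gait-based sampling idea, the snippet below picks one video frame per stride at a fixed phase of the gait cycle detected from a gyroscope signal. The peak-detection stand-in for gait segmentation and the phase value are hypothetical choices made only so the example runs; they are not the method from the paper.

```python
import numpy as np
from scipy.signal import find_peaks


def select_frames(gyro_mag: np.ndarray, fps: float, phase: float = 0.4) -> list:
    """Pick one video frame per gait cycle at a fixed phase of the cycle.

    gyro_mag: gyroscope magnitude sampled at the video frame rate.
    phase:    fraction of the stride (hypothetical value) treated as the
              most informative, least blurry point to sample an image.
    """
    # Treat prominent gyro peaks as stride events (a simple stand-in for
    # whatever gait segmentation the real system uses).
    peaks, _ = find_peaks(gyro_mag, distance=int(0.5 * fps))
    frames = []
    for start, end in zip(peaks[:-1], peaks[1:]):
        frames.append(int(start + phase * (end - start)))
    return frames


# Example: a synthetic periodic gyro-magnitude trace sampled at 30 Hz.
t = np.arange(0, 10, 1 / 30)
gyro = np.abs(np.sin(2 * np.pi * t)) + 0.05 * np.random.rand(t.size)
print(select_frames(gyro, fps=30))
```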

Tech Briefs: Once the surface is detected, what then happens to the prosthetic? Do the detections then trigger different modes for the prosthetic?

Prof. Edgar Lobaton: In this paper, we focused on characterizing the accuracy of the detection of the terrain together with a quantification of the confidence of the predictions in order to guarantee safety. In the future, we plan to work on the integration with the mechanical system. The next step would be to use our predictions as a trigger to switch mechanical modes of operation in the prosthetic.
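A minimal sketch of what such a trigger could look like is below; the mode names and the terrain-to-mode mapping are hypothetical, since the mechanical integration is still future work and is not specified in the paper.

```python
# Hypothetical mapping from a confident terrain prediction to a mechanical
# mode of the prosthesis; the real modes and parameters would come from the
# prosthesis controller, which is not described in the article.
MODE_FOR_TERRAIN = {
    "tile": "level_walk",
    "brick": "level_walk",
    "concrete": "level_walk",
    "grass": "compliant",       # softer ankle behavior for uneven ground
    "upstairs": "stair_ascent",
    "downstairs": "stair_descent",
}


def trigger_mode(prediction: str, confident: bool, current_mode: str) -> str:
    """Switch modes only on confident predictions; otherwise hold the current mode."""
    if not confident:
        return current_mode
    return MODE_FOR_TERRAIN.get(prediction, current_mode)


print(trigger_mode("upstairs", confident=True, current_mode="level_walk"))  # stair_ascent
print(trigger_mode("grass", confident=False, current_mode="level_walk"))    # level_walk
```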

Tech Briefs: How did this idea come about? Has A.I. been used for this kind of application before?

Prof. Edgar Lobaton: Our lab has been interested in providing mechanisms for robust prediction for computer vision and robotic applications. This particular application came about as a collaboration with Dr. Helen Huang from NC State/UNC. There has been a lot of interest in using similar techniques for other applications, such as autonomous driving.

Tech Briefs: What is most exciting to you about this kind of work?

Prof. Edgar Lobaton: I have always had a passion for robotics, so I enjoy working on projects that go beyond software and are closely integrated with a physical system. I like to push A.I. into the physical world.