A new method that improves control of robotic hands — in particular, for amputees — combines individual finger control and automation for improved grasping and manipulation. The technology merges two concepts from two different fields. One concept, from neuroengineering, involves deciphering intended finger movement from muscular activity on the amputee’s stump for individual finger control of the prosthetic hand. The other, from robotics, allows the robotic hand to help take hold of objects and maintain contact with them for robust grasping.
When a person holds an object and it starts to slip, they have only a couple of milliseconds to react. The robotic hand can react within 400 milliseconds: equipped with pressure sensors along its fingers, it can stabilize the object before the brain even perceives that it is slipping.
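As a rough illustration of this reflex, the sketch below watches a stream of fingertip pressure readings and tightens the grip when the measured force drops sharply, which is taken as a sign of incipient slip. All names, thresholds, and the proportional-tightening rule are hypothetical assumptions, not the method described in the source.

```python
# Hypothetical slip-reflex sketch: a sudden fractional drop in fingertip
# force between samples is treated as incipient slip, and the controller
# responds by tightening the grip before the user could react.

def detect_slip(prev_force, force, drop_threshold=0.15):
    """Flag slip when force falls by more than drop_threshold (fraction)."""
    if prev_force <= 0:
        return False
    return (prev_force - force) / prev_force > drop_threshold

def stabilize(grip_force, gain=1.2, max_force=10.0):
    """Increase the commanded grip force, capped at an actuator limit."""
    return min(grip_force * gain, max_force)

# Simulated fingertip pressure readings (N): stable, then the object slips.
readings = [4.0, 4.0, 3.9, 3.0, 2.2]
grip = 4.0
slip_events = []
for prev, cur in zip(readings, readings[1:]):
    if detect_slip(prev, cur):
        grip = stabilize(grip)
        slip_events.append((cur, grip))
```

In a real prosthesis this loop would run on the embedded controller at a fixed rate; the 400-millisecond figure from the source would bound the time from the first slip sample to the corrective grip command.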
The algorithm first learns to decode user intention and translate it into finger movement of the prosthetic hand. To train the machine-learning algorithm, the amputee performs a series of hand movements while sensors placed on the stump record muscular activity; the algorithm learns which hand movements correspond to which patterns of muscular activity. Once the user's intended finger movements can be decoded, that information is used to control the individual fingers of the prosthetic hand.
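The training step described above can be sketched as a minimal classifier that maps patterns of muscular activity (feature vectors) to intended movements. The nearest-centroid classifier, the two-channel features, and the movement labels below are all illustrative assumptions; the source does not specify which machine-learning model is used.

```python
# Hypothetical intent-decoding sketch: learn one centroid of muscular-
# activity features per intended movement, then decode a new reading as
# the movement whose centroid is closest.

def train_decoder(samples):
    """samples: list of (feature_vector, movement_label) pairs.
    Returns a dict mapping each label to its mean feature vector."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def decode(centroids, features):
    """Return the movement label whose centroid is closest (squared
    Euclidean distance) to the observed feature vector."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Toy training data: two muscle-activity channels per intended movement.
training = [
    ([0.9, 0.1], "index_flex"), ([0.8, 0.2], "index_flex"),
    ([0.1, 0.9], "thumb_flex"), ([0.2, 0.8], "thumb_flex"),
]
centroids = train_decoder(training)
```

A new reading such as `decode(centroids, [0.85, 0.15])` would then be mapped to an individual finger command for the prosthetic hand.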
The algorithm was engineered so that robotic automation kicks in when the user tries to grasp an object: when an object touches the sensors on the surface of the prosthetic hand, the algorithm tells the hand to close its fingers. This automatic grasping adapts a previous study on robotic arms designed to deduce the shape of objects and grasp them from tactile information alone, without the help of visual signals.
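The handover from user control to automation can be sketched as a simple arbitration rule: decoded finger commands pass straight through until any contact sensor fires, at which point the grasp routine closes the fingers. The function name, the dictionary-based command format, and the close-everything rule are illustrative assumptions, not the source's actual control law.

```python
# Hypothetical shared-control sketch: the user's decoded finger commands
# drive the hand until a surface contact sensor fires, at which point
# the automatic grasp routine takes over and closes the fingers.

def shared_control(user_command, contact_sensors):
    """Blend decoded user intent with automatic grasping.

    user_command: dict finger -> flexion in [0, 1] decoded from EMG.
    contact_sensors: dict finger -> bool, True when touching an object.
    """
    if any(contact_sensors.values()):
        # Contact detected: automation closes every finger to secure
        # the grasp, overriding the user's partial flexion commands.
        return {finger: 1.0 for finger in user_command}
    # No contact yet: pass the user's decoded movement straight through.
    return dict(user_command)

command = {"thumb": 0.4, "index": 0.5, "middle": 0.3}
free_motion = shared_control(
    command, {"thumb": False, "index": False, "middle": False})
grasping = shared_control(
    command, {"thumb": False, "index": True, "middle": False})
```

A fuller version would combine this with the slip reflex, releasing automation again once the user signals an intent to let go.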
This shared-control approach to robotic hands could be used in several neuroprosthetic applications, such as bionic hand prostheses and brain-to-machine interfaces.