A new method that improves control of robotic hands — in particular, for amputees — combines individual finger control and automation for improved grasping and manipulation. The technology merges two concepts from two different fields. One concept, from neuroengineering, involves deciphering intended finger movement from muscular activity on the amputee’s stump for individual finger control of the prosthetic hand. The other, from robotics, allows the robotic hand to help take hold of objects and maintain contact with them for robust grasping.
When humans hold an object and it starts to slip, they have only a couple of milliseconds to react. The robotic hand can react within 400 milliseconds: equipped with pressure sensors along its fingers, it stabilizes the object before the brain can even perceive that it is slipping.
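A rough picture of that reaction loop, assuming per-finger pressure readings and a small corrective grip command (both passed in as hypothetical callables), might look like the Python sketch below; the thresholds and cycle time are illustrative, not values from the study.

```python
import time

SLIP_DROP = 0.15   # relative pressure drop treated as slip onset (illustrative value)
CYCLE_S = 0.02     # 20 ms sense-act cycle, well inside the ~400 ms reaction window

def monitor_slip(read_pressure, tighten_finger, num_fingers=5, cycles=1000):
    """Watch each finger's contact pressure and tighten the grip on a sudden drop."""
    last = [read_pressure(f) for f in range(num_fingers)]
    for _ in range(cycles):
        t0 = time.monotonic()
        for f in range(num_fingers):
            p = read_pressure(f)
            # A sudden relative drop in contact pressure is read as the object slipping.
            if last[f] > 0 and (last[f] - p) / last[f] > SLIP_DROP:
                tighten_finger(f, 0.05)   # small corrective flexion before the user notices
            last[f] = p
        time.sleep(max(0.0, CYCLE_S - (time.monotonic() - t0)))
```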
The algorithm first learns how to decode user intention and translates this into finger movement of the prosthetic hand. The amputee must perform a series of hand movements in order to train the algorithm, which uses machine learning. Sensors placed on the amputee’s stump detect muscular activity and the algorithm learns which hand movements correspond to which patterns of muscular activity. Once the user’s intended finger movements are understood, this information can be used to control individual fingers of the prosthetic hand.
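The decoding step can be pictured as a supervised-learning problem: feature vectors extracted from windows of muscle (EMG) activity are mapped to intended finger movements. The Python sketch below uses synthetic data and scikit-learn's ridge regression purely for illustration; the study's actual features, model, and labels are not specified here.

```python
import numpy as np
from sklearn.linear_model import Ridge

def emg_features(window: np.ndarray) -> np.ndarray:
    """Mean absolute value per EMG channel for one time window (channels x samples)."""
    return np.mean(np.abs(window), axis=1)

# Training: the amputee performs prompted hand movements while EMG is recorded.
# X holds one feature vector per window; y holds the intended flexion of each finger.
rng = np.random.default_rng(0)
X = np.stack([emg_features(rng.standard_normal((8, 200))) for _ in range(500)])
y = rng.uniform(0.0, 1.0, size=(500, 5))   # placeholder labels for 5 fingers

decoder = Ridge(alpha=1.0).fit(X, y)

# At run time, each new EMG window is mapped to per-finger commands.
new_window = rng.standard_normal((8, 200))
finger_commands = decoder.predict(emg_features(new_window)[None, :])[0]
print(finger_commands)
```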
The algorithm was engineered so that robotic automation kicks in when the user tries to grasp an object. The algorithm tells the prosthetic hand to close its fingers when an object is in contact with sensors on the surface of the prosthetic hand. This automatic grasping is an adaptation from a previous study for robotic arms designed to deduce the shape of objects and grasp them based on tactile information alone, without the help of visual signals.
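The hand-over between user intention and automation can be pictured as a small state machine: decoded finger commands drive the hand until contact is sensed, automation then closes the grasp, and an attempt to open the hand returns control to the user (as described in the transcript below). The function and threshold names in this Python sketch are illustrative assumptions, not the study's implementation.

```python
from dataclasses import dataclass

OPEN_INTENT = 0.2   # mean commanded flexion below this reads as "user wants to open" (assumed)

@dataclass
class HandState:
    automation_active: bool = False

def shared_control_step(state, finger_commands, contact_sensed, close_step=0.05):
    """Return per-finger flexion commands for one control cycle."""
    mean_cmd = sum(finger_commands) / len(finger_commands)
    if state.automation_active:
        if mean_cmd < OPEN_INTENT:
            # The user tries to open the hand: automation turns off and
            # control returns entirely to the decoded finger movements.
            state.automation_active = False
            return list(finger_commands)
        # Automation keeps closing the fingers to hold the object.
        return [min(1.0, c + close_step) for c in finger_commands]
    if contact_sensed:
        # An object touched the hand's surface sensors: start closing automatically.
        state.automation_active = True
        return [min(1.0, c + close_step) for c in finger_commands]
    # No contact yet: the decoded finger movements drive the hand directly.
    return list(finger_commands)
```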
This shared-control approach to robotic hands could be used in several neuroprosthetic applications, such as bionic hand prostheses and brain-to-machine interfaces.
Watch a video demo of the technology on Tech Briefs TV here. For more information, contact
Transcript
00:00:07 What we're doing is we're developing a very smart prosthetic hand that allows an amputee to control each finger individually and also benefit from the aid of robotic assistance to grasp more easily. Our research is a really good example of shared control, which is a concept that we're very excited about in the field of robotics, and that is merging user intention, that of an amputee in this case, with robotic automation. So typically when you hold an object in your hand and it starts slipping, you only have a few milliseconds to react. That's where this hand comes in: it has the ability to react in 400 milliseconds, with all these tactile sensors, which can really react, move the object, and restabilize it before the brain could
00:00:56 actually perceive that it's slipping. For an amputee, it's actually very hard to contract the muscles in many different ways to control all of the ways that our fingers move. What we do is we put these sensors on their remaining stump, record them, and try to interpret what the movement signals are. Because these signals can be a bit noisy, what we need is this machine learning algorithm that extracts meaningful activity from those muscles and interprets it as movements. And these movements are what control each finger of the robotic hand. On top of that, because these predictions of the finger movements may not be 100% accurate, we added this robotic automation which allows the hand to automatically start closing around an object once
00:01:44 contact is made. Now if the user wants to release an object, all he or she has to do is try to open up their hand, and the robotic controller turns off and all of the control goes back to the user. This is exactly an implementation of shared control, because what we have is user intention, basically the finger movements, and also the robotic automation which closes the hand around an object and keeps it there if the user wants, so that the grasp is more robust. We did not design the arm and the sensors. We designed the combination of all of those, and primarily the algorithm behind that: the algorithm to react very rapidly, to sense that things are slipping, and to decide how to place the fingers to react and restabilize the object. So we are, if
00:02:34 you want, more the intelligence behind that, but the hardware is provided by an external party.