Matei Ciocarlie, Associate Professor of Mechanical Engineering at Columbia University, has developed, with his team, a robotic finger with a sense of touch that can localize contact with high precision over a large, multi-curved surface.

Tech Briefs: How did this project evolve?

Matei Ciocarlie: I’m a roboticist, and I’ve always been interested in manipulation — in robot hands — ever since I started working on it for my PhD thesis 15 years ago. It’s a very complicated motor skill. People do it so effortlessly that it’s taken for granted, but it’s remarkably complex to manipulate objects you’re seeing for the first time: pick them up, move them around in your fingers, use them for tasks, tool use. In robotics we’re surprisingly far from being able to replicate such motor skills. There’s a big question of why. Is it the hardware that’s behind? The sensing? Is it the brain — the intelligence, the computational part? I’ve always thought it’s all of the above.

Sensing is certainly a big part of it. If you think about the sensing capabilities of the human hand, the tactile and force sensing capabilities we have are marvelous. We are so far from replicating that in robotics. If you think about human skin, we have a thousand individual touch sensors in every fingertip. Our skin is sensorized everywhere, even on the backs of our joints, where the skin stretches and bends along with the joint — there are no blind spots. All of these individual sensors are bundled up and wired to the central nervous system by individual nerve fibers, and the signals don’t corrupt each other. Somehow all that information gets to where it needs to go. It’s so far beyond what we can do in robotics. Cameras in robotics are as good as, or better than, the human eye, but with touch sensing, we’re still far behind.

So, we set off to build a robot finger with very rich tactile data and no blind spots: good coverage of curved areas, no bare corners or edges, and something that could be easily integrated into a robotic hand without needing a bundle of a thousand wires. There is a huge distance between a tactile sensor and a tactile finger. We have many tactile sensors that work by themselves on a workbench, but very few tactile fingers that could be integrated into a robot hand.

Video: https://www.youtube.com/watch?v=PVw8Qy7BHU0

From this need, we started a collaboration with John Kymissis and his electrical engineering lab at Columbia. We had the idea to use LEDs and photodiodes, with light bouncing around the inside of the finger as the signal we sense, which would allow us to detect deformation and touch. The finger we have now is the result of a good four to five years of development. It has many of the things we were looking for. It produces very rich data, telling us about what the finger is touching and about how the finger is deforming. It’s nicely packaged, with no blind spots and no hard edges or corners that are not covered. And it’s easy to integrate into a hand, with a 14-wire FFC connector to interface to the finger. So, we’re finally at the point where we’re integrating it with robot hands, and we’re starting to push a lot more on the algorithms that are going to make use of this data.

Tech Briefs: It strikes me that figuring out how to use the data will take a lot of work on the algorithms.

Professor Ciocarlie: We’ve had a mini revolution in robotics over the last couple of years. Machine learning came in and swept the field. The great thing about machine learning is that, all of a sudden, we don’t need sensors to produce data that makes sense to a person looking at it. Before, we used to calibrate our sensors and clean up the data just so it would make sense to a person — to convey information that a person could interpret. Now, as long as the information is there, our higher-level motor controllers and planners can learn how to make use of it without ever converting it to a format we could digest. With our finger, we’ve shown that we can train neural networks to extract information from the data. Sources of data that were unusable 10 years ago are usable now, because the information can be extracted by machine learning algorithms and used for motor control and motor planning.

I think one parallel is with oil. We used to have oil reserves that nobody would tap because extraction was prohibitively expensive: getting the oil out of the reserve cost more than the oil was worth. As technology gets better and it becomes cheaper to extract the oil, all of a sudden those reserves are very, very valuable. It’s the same with sensors. We’ve had sensing technologies that we didn’t use in the past because we couldn’t extract the information from the data. Now, with machine learning, those reserves are extremely valuable, because it is possible to extract the information. It has unlocked new ways to do sensing for us. This finger is an example of that.
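To make that concrete, here is a minimal sketch of the kind of learned mapping he describes: a small neural network trained to turn raw, uncalibrated photodiode readings into a contact location. This is not the lab’s actual pipeline; the sensor count, network size, and synthetic training data are all assumptions for illustration.

```python
# Hypothetical sketch: learn contact location from raw light readings.
# All sizes and data below are illustrative assumptions, not the
# published system.
import torch
import torch.nn as nn

N_PHOTODIODES = 32   # assumed number of photodiode channels
N_OUT = 3            # e.g., an (x, y, z) contact point on the finger surface

model = nn.Sequential(
    nn.Linear(N_PHOTODIODES, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_OUT),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# In practice the training pairs would come from pressing the finger at
# known locations; here they are random placeholders.
readings = torch.randn(1024, N_PHOTODIODES)
locations = torch.randn(1024, N_OUT)

for epoch in range(100):
    pred = model(readings)            # raw signal in, contact estimate out
    loss = loss_fn(pred, locations)   # supervised regression loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is that nothing in it requires the raw readings to be human-interpretable; the network learns the mapping directly from examples.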

Tech Briefs: Can you describe some of the applications you see for this?

Professor Ciocarlie: The goal for us, specifically for this work, is dexterous manipulation: a robot able to do assembly and disassembly, and pick and place in very cluttered environments — extracting objects from clutter and ordering them in the desired way. There are many dexterous manipulation tasks in robotics that we’re not able to do right now, and I think tactile sensing is one of the key enablers. There are many manufacturing applications where we’d like robots to be more dexterous. Right now, they’re not — they’re using very simple grippers instead of dexterous hands, because we don’t really know how to achieve dexterity. Equally, in logistics and e-commerce, there are applications for robots doing sorting, picking, and packing. These are application domains where we could use dexterous robots right now. Taking it a step further, we’d like to get to a point where robots are getting better in, say, health care. We have so much manual labor in health care right now that’s done by nurses and other clinical staff even though it is way, way below their skill level. They’re doing it because there’s nobody else around to do it. Every time a nurse has to act as an overqualified gofer, that’s a huge loss for everybody involved.


Finally, these days we’re seeing how much value there is in a person being able to do remote work — to assist other people without putting themselves in danger. That’s another direction where we’d like to see robotic manipulation going.

We have applications now, and if the technology keeps improving, in five years we’re not going to be short of places to put dexterous robot hands.

Tech Briefs: How does the feedback from the finger make a robotic hand dexterous?

Professor Ciocarlie: It’s one of the components that makes a robotic hand dexterous. Imagine reaching into your pocket — maybe you have your cell phone, wallet, and keys in there, and you want to take one of those out without everything else falling. The way you do it is, you very carefully control your fingers, regulating their movement and the forces they apply in response to what you’re touching. From the sense of touch, you’re simultaneously inferring where the object you’re looking for is and how to apply forces to it so that you hold it stably in your hand, so you get just that object and nothing else. It is a feedback loop based on touch sensing, and we’d like to be able to close that loop.
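In code, closing that loop is just sense, infer, act, repeated at a high rate. The sketch below is a hypothetical illustration of one cycle; the force estimator, gain, and setpoint are invented for the example and are not from the interview.

```python
# Hypothetical tactile feedback loop: sense -> infer -> act.
# The linear force estimator, gain, and setpoint are placeholders.
import numpy as np

TARGET_FORCE = 1.0   # N, assumed grip-force setpoint
KP = 0.5             # illustrative proportional gain

def estimate_force(raw_readings):
    # Stand-in for a learned model: a fixed linear map from the raw
    # photodiode vector to a single estimated contact force.
    weights = np.full(raw_readings.shape, 1.0 / raw_readings.size)
    return float(weights @ raw_readings)

def control_step(raw_readings):
    force = estimate_force(raw_readings)   # infer from touch
    return KP * (TARGET_FORCE - force)     # torque adjustment toward target

# One cycle with fake sensor data; a real controller would run this
# hundreds of times per second against the hand's actuators.
torque_delta = control_step(np.random.rand(32))
```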

Tech Briefs: So, there’d be motors in the robotic hand?

Professor Ciocarlie: Absolutely. The sensing is just one piece of the puzzle. The actuation, the motors, is another piece, and the actual kinematics are an important piece — how many fingers, how many joints, what’s the configuration of the joints? One might say it should be like the human hand, right? No, not necessarily, because the human hand is so difficult to copy that maybe you’re better off not even trying. The exact shape and configuration of the hand is still a big part of it. So, there’s the sensing, the actuation, the kinematics, and then the controller — all of these have to work in concert. If any of these is deficient, it will be hard to get dexterity.

Tech Briefs: I see you use flexible circuit boards. Isn’t that a new technology?

Professor Ciocarlie: Yes, it’s only in the last couple of years that it’s been a commercially available technology. That’s a massive advantage for us compared to where we were five years ago.

Tech Briefs: How did you arrive at your particular pattern of two LEDs side by side and a photodiode in between each pair?

Professor Ciocarlie: We tried to get nice, even coverage of the entire finger, every part of the finger we care about, with LEDs and photodiodes. So, we packed in as many LEDs and photodiodes as we could. It’s possible we could make do with fewer; that’s part of what we’re discovering now by working with this finger — we couldn’t have known that in advance. We packed in as many as we could, knowing that it’s okay if we get a lot of data, because now we have the learning methods that can digest that much data.

So, we always have two LEDs side by side. For us, that’s equivalent to a single LED with a larger surface area. Imagine a single LED, which is very small: it’s either occluded or not occluded, so with a tiny LED you’d get just a binary signal. With a larger surface area, you can get something in between, partially occluded, which makes the signal richer and more continuous. That’s why we use LED pairs, to give us the equivalent of a larger surface area. It’s also why our photodiodes are relatively large. We wanted big ones so we could get varying levels of occlusion, which gives us a nicely modulated signal.
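A toy model makes the geometric intuition easy to see. In the sketch below, assume the LED pair is a one-dimensional emitting segment and a press is an occluding interval; the photodiode then receives the unoccluded fraction of the light. None of this is the actual optics of the finger, just an illustration of why emitter area gives a graded signal.

```python
# Toy 1-D occlusion model: a wide emitter yields a graded signal,
# a point-like emitter yields a nearly binary one. Purely illustrative.
import numpy as np

def received_signal(led_span, occluder_span, base_intensity=1.0):
    (l0, l1), (o0, o1) = led_span, occluder_span
    overlap = max(0.0, min(l1, o1) - max(l0, o0))   # occluded length
    visible_fraction = 1.0 - overlap / (l1 - l0)    # unoccluded share of emitter
    return base_intensity * visible_fraction

# Slide a 0.3-wide occluder across a wide emitter (the LED pair) and a
# tiny one: the wide emitter's signal falls off smoothly, while the tiny
# emitter's jumps between ~1 and 0.
for x in np.linspace(-0.5, 1.2, 8):
    wide = received_signal((0.0, 1.0), (x, x + 0.3))
    tiny = received_signal((0.45, 0.55), (x, x + 0.3))
    print(f"occluder at {x:+.2f}: wide={wide:.2f}, tiny={tiny:.2f}")
```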

The way the finger works is that whenever you touch anything, the geometry of the finger changes, so the light traveling between all the LEDs and all the photodiodes gets distorted. The photodiodes pick up on that change and relay that signal to us.

Tech Briefs: Does the transparent silicone layer act like a light conductor?

Professor Ciocarlie: Yes — a light conductor, or a waveguide, which is another term that’s commonly used.

Tech Briefs: What’s the outer layer?

Professor Ciocarlie: It’s the same silicone but infused with silver particles, to keep outside light out so it doesn’t affect our signal. We’d also like the light inside our finger, coming from our own LEDs, to stay trapped in the waveguide. The outer layer is thin: about 1 mm.

Tech Briefs: Are you working on the algorithms for using this data?

Professor Ciocarlie: Absolutely — every day — that is one of our main areas of research right now. We have a couple of hands in our lab that are equipped with these fingertips and we are very actively working on motor control algorithms using this tactile data.

Tech Briefs: How many fingers on the hands?

Professor Ciocarlie: One has three and the other, four.

Tech Briefs: Do you think this could ever be used as a prosthesis?

Professor Ciocarlie: That’s an excellent question. There is a possibility. Prosthetics is always a very interesting problem: if you make the prosthetic more complex, which often means more fragile and more difficult to use, it has to provide enough additional functionality compared to, say, a split hook, that the added complexity is worth it to the user. So, in that sense, the bar is quite high. We’d love to be able to one day show a prosthesis with a sense of touch, both for the hand itself to react autonomously to what it’s touching and maybe even to convey the sense of touch back to the user. There are groups out there working on stimulating the peripheral nervous system to try to render the sensation of touch back to the user. We’d love to get there one day, but pound for pound, it’s very hard to provide more usable functionality than a simpler prosthesis. That’s why it’s a harder problem than it seems, and it’s not imminent that we’ll have widespread tactile prostheses.

Tech Briefs: But theoretically, someday down the road, maybe?

Professor Ciocarlie: Absolutely. We have another project in the lab where we’re trying to build, not a prosthesis, but an orthosis: a robotic hand orthosis for stroke patients. This is a collaboration with Dr. Joel Stein of the rehabilitation medicine department at the Columbia Medical Center, and working with him and his group, we’ve learned about stroke and its aftereffects. A very common aftereffect of stroke is excessive muscle tone, where a person’s hand is permanently clenched in a fist: the flexors are overpowering the extensors and keeping the fist clenched. So, we’re building a robotic hand orthosis that someone wears on the forearm and hand, with a motor and a network of tendons, that helps open the fingers so the person can grasp or release an object. This has been a very active project in our lab for the last couple of years.

Tech Briefs: It sounds like a number of your projected uses are for industrial applications. Are you working with anyone in those areas?

Professor Ciocarlie: In terms of autonomous robotic hands, we’re always talking to e-commerce companies or manufacturers about their use cases and trying to learn from them. In the past we’ve had relationships where we were able to get example parts people would like to manipulate on the assembly line. We’re always looking to work with companies because we want to know exactly what people’s use cases are, rather than us trying to imagine them. We have collaborations, both in manufacturing and in e-commerce.

An edited version of this interview appeared in the June 2020 issue of Tech Briefs.