Lost something on the beach? A "Digger Finger" from MIT digs through sand and gravel to detect a buried object.

Equipped with tactile sensing, the slender, digit-like device could someday be mounted on a robotic arm and used to spot underground cables or even explosives.

The MIT team will present the research at the next International Symposium on Experimental Robotics.

To detect various 3D-printed objects buried in sand and coarse-grained rice, the team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) slimmed down their existing tactile sensor, called GelSight, built in 2017.

The original (and bulkier) GelSight features a clear gel covered with a reflective membrane that deforms when objects press against it. Three colors of LED lights and a camera sit behind the sensor.

The lights shine through the gel and onto the membrane, while the camera collects the membrane’s pattern of reflection. Computer vision algorithms then extract the 3D shape of the contact area where the soft finger touches the object.
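This color-to-shape recovery resembles classic photometric stereo: each light illuminates the membrane from a different direction, so the per-pixel color encodes the surface normal, and the normals can be integrated into a height map. The sketch below illustrates that idea under stated assumptions — the light directions, the Lambertian reflectance model, and the crude cumulative-sum integration are placeholders, not GelSight's actual calibration or algorithm:

```python
import numpy as np

# Illustrative unit light directions for the three LEDs; a real GelSight
# sensor is calibrated, so these exact values are assumptions.
L = np.array([[0.50,  0.000, 0.866],
              [-0.25,  0.433, 0.866],
              [-0.25, -0.433, 0.866]])

def normals_from_channels(I):
    """Recover per-pixel surface normals from an (H, W, 3) image with one
    channel per light, assuming a Lambertian membrane: I = L @ n."""
    H, W, _ = I.shape
    G = np.linalg.solve(L, I.reshape(-1, 3).T).T        # unnormalized normals
    n = G / (np.linalg.norm(G, axis=1, keepdims=True) + 1e-8)
    return n.reshape(H, W, 3)

def height_from_normals(n):
    """Integrate normals into a rough height map by cumulatively summing
    the surface gradients (a crude stand-in for Poisson integration)."""
    p = -n[..., 0] / (n[..., 2] + 1e-8)                 # dz/dx
    q = -n[..., 1] / (n[..., 2] + 1e-8)                 # dz/dy
    return np.cumsum(q, axis=0) + np.cumsum(p, axis=1)
```

Production systems typically replace the Lambertian inversion with a calibrated color-to-normal lookup table and use a proper Poisson solver for the integration step.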

To reduce the size of the GelSight sensor to suit the robotic Digger Finger, the MIT team began with a new design: the researchers made the structure more cylindrical and gave it a beveled tip.

Then, the engineers replaced two-thirds of the LED lights with a combination of blue LEDs and colored fluorescent paint. The end result: a fingertip-sized device with a tactile sensing membrane of about 2 square centimeters.

The team used mechanical vibrations to help "fluidize" the rice and sand so that the Digger Finger could dig without any of the grainy material clogging the machinery.

“We wanted to see how mechanical vibrations aid in digging deeper and getting through jams,” said Radhen Patel, a postdoc at MIT CSAIL. “We ran the vibrating motor at different operating voltages, which changes the amplitude and frequency of the vibrations.”

Trapped sand was more difficult to clear than rice, according to the inventors, though the grains’ small size meant the Digger Finger could still sense the general contours of the target object.

Patel says that operators will have to adjust the Digger Finger’s motion pattern for different settings “depending on the type of media and on the size and shape of the grains.”

In a short Q&A below, Patel tells Tech Briefs how his team plans to optimize the Digger Finger’s ability to navigate various media.

Tech Briefs: Can a robotic finger have a human-like sense of touch?

Radhen Patel: The human sense of touch is highly dimensional. It consists of several types of sensing modalities, operating at different speeds and resolutions. Amazingly, only a subset of them is actually utilized for a given set of tasks; we don't use all of them all the time. So there is a lot of distracting information at our disposal that has to be processed for a particular task.

Robot fingers, too, constantly sense extraneous information via their sense of touch — for instance when reaching for or placing objects in cluttered spaces. In regard to finding buried objects, the "distracting information" is naturally the feeling of the granular media particles on the fingertips that the objects are buried in.

Tech Briefs: From a technology perspective, how does the Digger Finger “know" a target object from, say, a rock, or a clump of rice?

Radhen Patel: We trained a deep learning model (a convolutional neural network) on the image data [an RGB image from a camera inside the Digger Finger] to identify, or classify, the target objects among clumps of rice and sand.
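Patel's answer can be sketched with a toy model. The architecture, 64x64 input size, and three-way label set below (e.g. rice clump, sand, target object) are illustrative assumptions, not the network described in the paper:

```python
import torch
import torch.nn as nn

class TactileClassifier(nn.Module):
    """Minimal CNN sketch for classifying tactile RGB crops from the
    sensor's internal camera. Hypothetical architecture and classes."""

    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two 2x poolings shrink a 64x64 input to 16x16 feature maps.
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))   # class logits

model = TactileClassifier()
logits = model(torch.randn(4, 3, 64, 64))   # a batch of 4 tactile images
```

Such a model would be trained with an ordinary cross-entropy loss on labeled tactile images; the interesting engineering is in collecting labeled contact data while digging, not in the network itself.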

Tech Briefs: How is this design especially different from previous search-and-rescue robotic alternatives?

Radhen Patel: The essential components of the Digger Finger include a camera, an illumination system (LEDs and fluorescent paint), a mirror, a gel membrane that is transparent on one side and has a reflective paint on the other, and a transparent cylindrical wedge core made of acrylic that holds all the above components.

We see our current design of the Digger Finger as an add-on to the existing search-and-rescue robots that will enable their appendages to enter tight spaces and perceive contact at a fine resolution.

Tech Briefs: What’s next? What will you be working on with this Digger Finger?

Radhen Patel: There is a lot of work that still needs to be done. On the design side, we want to make the Digger Finger more robust against the abrasions resulting from the process of digging, especially on the gel membrane. We'd also like to design a gripper with multiple Digger Fingers as the tactile appendages and explore various tactile exploration strategies to better identify and manipulate buried objects.

Tech Briefs: What application is most exciting to you, when you think of how the Digger Finger could be used?

Radhen Patel: We find identifying and discerning contacts in cluttered spaces to be the most exciting application. This includes disarming explosives buried underground, or picking and placing objects in cluttered spaces like grocery bags.

Additional researchers in the study included CSAIL PhD student Branden Romero, Harvard University PhD student Nancy Ouyang, and Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in CSAIL and the Department of Brain and Cognitive Sciences.

What do you think? Share your questions and comments below.