The robotic sensor used by the researchers has a camera in its ‘fingertip’ and reads by combining the camera imagery with tactile information. (Image: University of Cambridge)

A University of Cambridge team used machine learning algorithms to teach a robotic sensor to quickly slide over lines of braille text. The robot was able to read the braille at 315 words per minute at close to 90 percent accuracy.

Although the robot braille reader was not developed as an assistive technology, the researchers say the high sensitivity required to read braille makes it an ideal test in the development of robot hands or prosthetics with comparable sensitivity to human fingertips. The results are reported in the journal IEEE Robotics and Automation Letters.

Reproducing the sensitivity of the human fingertip in a robotic hand, in an energy-efficient way, is a big engineering challenge.

“The softness of human fingertips is one of the reasons we’re able to grip things with the right amount of pressure,” said first author Parth Potdar. “For robotics, softness is a useful characteristic, but you also need lots of sensor information, and it’s tricky to have both at once, especially when dealing with flexible or deformable surfaces.”

Braille is an ideal test for a robot ‘fingertip’ because reading it requires high sensitivity: the dots in each letter pattern are so close together. The researchers used an off-the-shelf sensor to develop a robotic braille reader that more accurately replicates human reading behavior.

“There are existing robotic braille readers, but they only read one letter at a time, which is not how humans read,” said co-author David Hardman. “Existing robotic braille readers work in a static way: they touch one letter pattern, read it, pull up from the surface, move over, lower onto the next letter pattern, and so on. We want something that’s more realistic and far more efficient.”

The sensor reads with a camera embedded in its ‘fingertip’, combining the camera imagery with tactile information. Sliding that camera quickly over the page, however, smears the images. “This is a hard problem for roboticists as there’s a lot of image processing that needs to be done to remove motion blur, which is time and energy-consuming,” said Potdar.

The team developed machine learning algorithms so the robotic reader would be able to ‘deblur’ the images before the sensor attempted to recognize the letters. They trained the algorithm on a set of sharp images of braille with fake blur applied. After the algorithm had learned to deblur the letters, they used a computer vision model to detect and classify each character.
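The core of that training step is generating blurred/sharp image pairs by applying fake motion blur to sharp braille images. The sketch below is illustrative only: the function names, kernel length, and the simple horizontal-smear model are assumptions, not the team's actual pipeline (which used a learned deblurring model followed by a character classifier).

```python
import numpy as np

def motion_blur_kernel(length):
    """1-D horizontal motion-blur kernel: a uniform average over
    `length` pixels, modelling the smear from a lateral slide."""
    return np.full(length, 1.0 / length, dtype=np.float32)

def apply_motion_blur(image, length=3):
    """Convolve each row of a grayscale image with the horizontal
    motion-blur kernel, producing a synthetic 'fast slide' image.
    Edge padding keeps the output the same size as the input."""
    kernel = motion_blur_kernel(length)
    pad = length // 2
    padded = np.pad(image, ((0, 0), (pad, pad)), mode="edge")
    blurred = np.empty_like(image, dtype=np.float32)
    for i in range(image.shape[0]):
        blurred[i] = np.convolve(padded[i], kernel, mode="valid")
    return blurred

# Toy 'sharp' image: one bright column standing in for a braille dot.
sharp = np.zeros((5, 9), dtype=np.float32)
sharp[:, 4] = 1.0

# The (blurred, sharp) pair is what a deblurring model trains on.
blurred = apply_motion_blur(sharp, length=3)
```

A deblurring network would then be trained to map `blurred` back to `sharp`, so that at test time the sensor's smeared frames can be cleaned up before character classification.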

Once the algorithms were incorporated, the researchers tested their reader by sliding it quickly along rows of braille characters. The robotic braille reader could read at 315 words per minute at 87 percent accuracy, which is twice as fast and about as accurate as a human braille reader.

“Considering that we used fake blur to train the algorithm, it was surprising how accurate it was at reading braille,” said Hardman. “We found a nice trade-off between speed and accuracy, which is also the case with human readers.”

“Braille reading speed is a great way to measure the dynamic performance of tactile sensing systems, so our findings could be applicable beyond braille, for applications like detecting surface textures or slippage in robotic manipulation,” said Potdar.

In future, the researchers are hoping to scale the technology to the size of a humanoid hand or skin.

For more information, contact Sarah Collins.