The sight-to-touch translator (ST3) is a conceptual electronic apparatus that would generate a tactile representation of visible objects in its vicinity. The ST3 would be worn by a blind person; the tactile display would help the wearer perceive nearby obstacles and act to avoid them. Thus, the ST3 might serve as an electronic alternative to a guide dog under some circumstances.
The ST3 concept has become feasible in recent years through advances in miniaturized image-detecting electronic devices, processing of image data, and microelectronics in general. The ST3 (see figure) would include a state-of-the-art active-pixel sensor (APS) as its image detector. The output of the APS would be digitized and fed to a minicomputer, wherein the image data would be processed through edge-enhancement and gray-scale-based clutter-detection algorithms. The computer output would comprise data on the outlines of obstacles and other objects in front of the wearer.
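The edge-enhancement step described above can be sketched in code. The following is a minimal illustration only, not the ST3's actual algorithm: a simple gradient-magnitude filter over a gray-scale image, thresholded to yield a binary outline map. The function name and threshold value are assumptions for illustration.

```python
def edge_map(image, threshold=50):
    """Return a binary outline map from a 2-D gray-scale image (list of rows)."""
    rows, cols = len(image), len(image[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = image[r][c + 1] - image[r][c - 1]   # horizontal gradient
            gy = image[r + 1][c] - image[r - 1][c]   # vertical gradient
            if abs(gx) + abs(gy) > threshold:        # cheap gradient magnitude
                edges[r][c] = 1
    return edges

# A bright square on a dark background yields raised dots along its border
# and none in its uniform interior.
frame = [[0] * 8 for _ in range(8)]
for r in range(2, 6):
    for c in range(2, 6):
        frame[r][c] = 200
outline = edge_map(frame)
```

A production system would likely use a more robust edge detector and the gray-scale clutter rejection the article mentions, but the principle — turning brightness gradients into outline data — is the same.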
The outline data would drive a rectangular electromechanical tactile-display device, resembling a giant dot-matrix printing mechanism, that would render the outline of each object as a pattern of raised dots. The tactile-display device would have dimensions of about 6 by 10 cm and would be mounted on the wearer's chest, forehead, or another convenient sensitive skin area that would provide a direction reference.
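Driving the dot-matrix display amounts to reducing a high-resolution binary outline map onto a coarse pin grid. A hedged sketch follows: any pin whose corresponding image block contains an edge pixel is raised. The 30 × 50 pin count (roughly 2 mm pitch on a 6 × 10 cm face) is an assumption for illustration, not a stated ST3 specification.

```python
def to_pin_grid(edges, pin_rows=30, pin_cols=50):
    """Map a binary edge map onto a pin_rows x pin_cols tactile pin array."""
    rows, cols = len(edges), len(edges[0])
    grid = [[0] * pin_cols for _ in range(pin_rows)]
    for r in range(rows):
        for c in range(cols):
            if edges[r][c]:
                # Raise the pin covering this image block.
                grid[r * pin_rows // rows][c * pin_cols // cols] = 1
    return grid

# Example: two edge pixels at opposite corners of a 60 x 100 edge map land
# on the corresponding corner pins of the display.
edges = [[0] * 100 for _ in range(60)]
edges[0][0] = 1
edges[59][99] = 1
grid = to_pin_grid(edges)
```

This block-OR reduction errs toward raising pins, which is the safe choice for obstacle display: a thin edge is never lost to downsampling.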
The ST3 would include a range finder similar to the range finders on autofocus cameras. Inasmuch as only nearby objects would be of interest for avoiding obstacles, the output of the range finder would be used to limit the processing of image data to those parts of the scene that lie within a distance of about 5 m.
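The range-gating idea can be sketched as a masking step before edge processing: image pixels whose measured distance exceeds the roughly 5 m cutoff are zeroed out, so only nearby obstacles reach the tactile display. The per-pixel depth map below is an illustrative assumption; an autofocus-style range finder might report far coarser range data over a few zones.

```python
MAX_RANGE_M = 5.0  # the article's ~5 m obstacle-avoidance cutoff

def gate_by_range(image, depth):
    """Zero out image pixels whose depth reading lies beyond MAX_RANGE_M."""
    return [
        [pix if d <= MAX_RANGE_M else 0 for pix, d in zip(img_row, d_row)]
        for img_row, d_row in zip(image, depth)
    ]

# A wall at 2 m is kept; background at 8 m is suppressed.
gated = gate_by_range([[100, 100]], [[2.0, 8.0]])
```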
As an alternative or a supplement to tactile dot representations of the outlines of obstacles, the apparatus could generate standard tactile dot patterns, analogous to Braille characters, to represent stairs, curbs, doorways, vertical obstacles, and other common objects that the image-processing software would recognize by correlation with previously acquired image data stored in the computer memory. The apparatus could also be made to generate audible signals based on the detection of specified colors in specified configurations (e.g., from traffic lights and illuminated exit signs).
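The correlation-based recognition suggested above can be illustrated with a toy template matcher: each observed outline is scored against stored reference patterns, and the best match selects a standard tactile code. The tiny 3 × 3 templates and the scoring function here are placeholder assumptions; real stored image data and correlation measures would be far richer.

```python
TEMPLATES = {
    # Crude 3 x 3 binary reference outlines (illustrative only).
    "stairs":  [[1, 0, 0], [1, 1, 0], [1, 1, 1]],
    "doorway": [[1, 1, 1], [1, 0, 1], [1, 0, 1]],
}

def match_score(a, b):
    """Count of agreeing cells between two equal-size binary patterns."""
    return sum(x == y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def recognize(outline):
    """Return the name of the stored template best correlated with the outline."""
    return max(TEMPLATES, key=lambda name: match_score(outline, TEMPLATES[name]))
```

Once a common object is recognized this way, the system would send the corresponding standard dot pattern to the display instead of a raw outline.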
This work was done by Philip I. Moynihan and Maurice L. Langevin of Caltech for NASA's Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.techbriefs.com under the Electronic Systems category, or circle no. 111 on the TSP Order Card in this issue to receive a copy by mail ($5 charge).