Smartphone and tablet computer owners have become adept at using finger taps and drags to control their touchscreens. Carnegie Mellon University researchers have found that this interaction can be enhanced by taking greater advantage of the finger's anatomy and dexterity. By attaching a microphone to a touchscreen, the scientists showed they can tell the difference between taps made with the tip of a finger, the pad of the finger, a fingernail, and a knuckle.

This technology, called TapSense, enables richer touchscreen interactions. While typing on a virtual keyboard, for instance, users might capitalize letters simply by tapping with a fingernail instead of a fingertip, or might switch to numerals by using the pad of a finger, rather than toggling to a different set of keys.
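In an application, a classified tap type could drive a simple dispatch from input type to action. The sketch below is purely illustrative: it assumes a hypothetical classifier that labels each tap, and the label names and handler are assumptions, not part of any published TapSense API.

    # Hypothetical mapping from TapSense-style tap types to keyboard behavior.
    # The tap_type labels ("tip", "pad", "nail") are assumed, not an official API.
    def handle_key_tap(key: str, tap_type: str) -> str:
        """Return the character to insert for a tap on a letter key."""
        if tap_type == "nail":                # fingernail -> capitalized letter
            return key.upper()
        if tap_type == "pad":                 # finger pad -> numeral layer
            numerals = {"q": "1", "w": "2", "e": "3"}  # illustrative subset
            return numerals.get(key, key)
        return key                            # fingertip -> plain letter

    print(handle_key_tap("q", "nail"))  # -> "Q"
    print(handle_key_tap("q", "pad"))   # -> "1"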

"TapSense basically doubles the input bandwidth for a touchscreen," said Chris Harrison, a Ph.D. student in Carnegie Mellon's Human-Computer Interaction Institute (HCII). "This is particularly important for smaller touchscreens, where screen real estate is limited. If we can remove mode buttons from the screen, we can make room for more content or can make the remaining buttons larger."

The technology can also use sound to discriminate between passive tools (i.e., no batteries) made from such materials as wood, acrylic, and polystyrene foam. This would enable people using styluses made from different materials to collaboratively sketch or take notes on the same surface, with each person's contributions appearing in a different color or otherwise distinguished. The researchers found that their proof-of-concept system was able to distinguish between the four types of finger inputs with 95 percent accuracy, and could distinguish between a pen and a finger with 99 percent accuracy.
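Although the article does not detail the signal pipeline, acoustic classification of this kind is commonly built by extracting spectral features from a short audio window around each impact and training an off-the-shelf classifier. The sketch below assumes that generic approach, with synthetic placeholder data standing in for recorded calibration taps; the feature set and classifier choice are assumptions, not the researchers' actual method.

    # Generic acoustic tap classification sketch (assumed pipeline; random
    # noise stands in for recorded calibration taps so the example runs).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def tap_features(window: np.ndarray) -> np.ndarray:
        """Log-magnitude spectrum of a short (~10 ms at 44.1 kHz) tap window."""
        spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
        return np.log1p(spectrum)

    # X: one feature row per recorded tap; y: labels such as "tip", "pad",
    # "nail", "knuckle" gathered during a calibration pass.
    rng = np.random.default_rng(0)
    X = np.array([tap_features(rng.normal(size=441)) for _ in range(200)])
    y = rng.choice(["tip", "pad", "nail", "knuckle"], size=200)

    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    print(clf.predict(X[:1]))  # predicted label for a newly observed tap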

(Carnegie Mellon University)