A new gesture-recognition technology from Lancaster University can make a remote control out of your coffee mug — or most everyday objects, for that matter.
Imagine rolling a toy car to adjust the volume of your television, or waving a kitchen spatula to pause a video.
The novel technique, developed by a team led by PhD student Christopher Clarke, allows human motion, or the movement of objects, to initiate interaction with screens.
With just a webcam, the "MatchPoint" technology displays targets that orbit a small circular widget in the corner of the screen. The system then watches for movement that matches a target's rotation.
Each target on the screen corresponds to a specific function, such as adjusting the volume or opening a menu. Using a hand, head, or object as input, the user synchronizes their movement with a target's orbit to achieve what researchers call "spontaneous spatial coupling," which activates the corresponding function.
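The matching step can be sketched in a few lines. The following is a minimal illustration, not the researchers' implementation: it models each on-screen target as a point orbiting at a known phase, then couples the user's tracked motion to whichever target's orbit it correlates with most strongly. All names, the orbit model, and the 0.9 threshold are assumptions for the sake of the example.

```python
import math

def orbit_position(t, phase, period=2.0, radius=1.0):
    """Position at time t of a target orbiting the widget (assumed model)."""
    angle = 2 * math.pi * t / period + phase
    return (radius * math.cos(angle), radius * math.sin(angle))

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy) if vx and vy else 0.0

def match_target(user_track, targets, times, threshold=0.9):
    """Couple the user's motion to the target whose orbit it best mimics."""
    best, best_score = None, threshold
    for name, phase in targets.items():
        orbit = [orbit_position(t, phase) for t in times]
        # Correlate the x and y components separately, then average.
        cx = correlation([p[0] for p in user_track], [p[0] for p in orbit])
        cy = correlation([p[1] for p in user_track], [p[1] for p in orbit])
        score = (cx + cy) / 2
        if score > best_score:
            best, best_score = name, score
    return best
```

Because correlation is scale-invariant, a small hand circle and a large spatula sweep can both couple to the same target, which is one reason this style of matching works with arbitrary objects.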
Tech Briefs spoke with inventor Christopher Clarke about how MatchPoint can be used in other industries — and for more than just changing the channel.
Tech Briefs: What is "spontaneous spatial coupling"?
Christopher Clarke: Moving targets are displayed to the user; for example, one target selects the volume, while another selects a channel. The user then "activates" a control by mimicking the motion of the target, with any part of their body or while holding an object.
Once the control is activated, the system "tracks" the item — the body part or the object — that triggered the control, which now acts as a pointing device. When a user has finished the interaction (i.e., changed the volume or channel), the person can exit the interaction, and the body part or object is no longer tracked by the system. This way, the user can “pick up” a pointing device (figuratively) as needed, but can also assign controls on a semi-permanent basis by coupling a control with an object.
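The activate/track/release lifecycle Clarke describes amounts to a small state machine. Here is a minimal sketch under assumed names (none of these classes come from the MatchPoint system itself): whatever item triggered a control becomes the pointer until the user exits the interaction.

```python
from enum import Enum, auto

class CouplingState(Enum):
    IDLE = auto()      # all visible motion is watched for a match
    COUPLED = auto()   # one body part or object is acting as the pointer

class SpatialCoupling:
    """Hypothetical sketch of the couple/track/release lifecycle."""

    def __init__(self):
        self.state = CouplingState.IDLE
        self.control = None

    def couple(self, control):
        """A tracked item mimicked a target: it becomes the pointing device."""
        self.state = CouplingState.COUPLED
        self.control = control

    def release(self):
        """The user finished the interaction: the item is no longer tracked."""
        self.state = CouplingState.IDLE
        self.control = None
```

A semi-permanent remote, as described above, would simply be a coupling that is never released.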
MatchPoint is a specific implementation of Spontaneous Spatial Coupling in which we use a webcam to detect motion, but we could equally use other devices, such as an accelerometer in a phone, or a depth sensor.
Tech Briefs: How do you ensure that the natural movement of an action doesn't conflict with the motion that initiates a selection or function like changing a channel?
Clarke: We use a circular motion, which we would not expect to occur often in the background, and the user has to complete at least half a circle to initiate the interaction. We configured the system's parameters based on a range of user movements, to avoid false activations from background or unintentional movement.
Tech Briefs: Can you take us through a scenario of how this gesture-technology is used?
Clarke: I think the video does this much better than I can describe it.
Tech Briefs: How is this system different from existing gesture control technology?
Clarke: Existing gesture techniques focus on a specific body part, like the hands or the head, and often struggle when the user is holding an object, or when the user is not in an "ideal" position where the device can detect them. MatchPoint removes the need for object identification, and the technique allows users to "pick up" a pointer as they need it.
Tech Briefs: What are the necessary technology components?
Clarke: For the specific implementation of MatchPoint, the user would require a webcam and a processing unit to perform the matching. The on-screen controls could be generated by the same device that does the matching, or could be generated from a different device.
Objects need to be visible to the camera. If a user is holding the object, then the object itself doesn't matter, because the system detects the hand movement. If the object is used as a more permanent remote, then there are some size limitations to ensure that the system can accurately track it.
Tech Briefs: What applications can you envision for this gesture-recognition system?
Clarke: The applications we find most interesting are "sterile" applications, such as surgery or working in the kitchen, where it is desirable to use any type of object [for gesture control] and to avoid touching things (and cross-contaminating them). Also, with MatchPoint everyone has a remote control, and support for multiple pointers at the same time opens up some interesting multi-user possibilities.
Tech Briefs: What is most exciting to you about this technology?
Clarke: I’m really excited about a gesture technique that removes most of the constraints placed on users, such as what posture they must adopt and how they can use the system. The ability to use any object or body part, while being in any position, provides much greater flexibility when interacting.
What do you think? Would you use this kind of gesture-recognition? Share your comments below.