Commands for a robot correlating human gestures.

Humans are adept at using audio and visual cues to communicate while carrying out collaborative tasks. In noisy environments, they can fall back on non-verbal communication, such as visual gestures, to coordinate. Now researchers at the Indian Institute of Science are aiming to implement gestural interaction in a networked system of robots.

In the research, two robots demonstrate a vision-based gestural interaction framework for carrying out package-handling tasks in cooperation with humans. The approach aims to make each robot independent of a centralized controller or server, and the use of two different robot types shows that the framework can accommodate heterogeneous robots in the same system.

Schematic for a collaborative vision-based gestural interaction framework. The shaded region represents the contribution of this paper.

The system consists of two robot types: package-handling robots, which carry packages and detect both human and robot gestures, and messenger robots, which convey information to the package-handling robots, supervise the task, and detect only human gestures. This bio-inspired mode of passive action recognition is modeled on how bees communicate with other bees in their hive.

An analogy: how bees communicate the distance and direction of a food source. In the framework used in this paper, a robot moves in a triangular pattern to convey the direction and distance of the target destination to the other robot, in cooperation with humans.
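To make the bee-waggle analogy concrete, here is a minimal sketch of how a triangular trajectory might encode a bearing and a distance: the triangle's apex points toward the target and its size grows with range. The function name, the waypoint layout, and the scaling constants are all illustrative assumptions, not the paper's actual encoding.

```python
import math

def triangle_waypoints(bearing_deg, distance_m, scale=0.1, base=0.5):
    """Return three (x, y) waypoints of a triangular motion, relative to
    the signaling robot. The apex points along the target bearing; the
    triangle's size encodes distance. Purely illustrative encoding."""
    side = base + scale * distance_m              # size grows with distance
    theta = math.radians(bearing_deg)             # apex along target bearing
    apex = (side * math.cos(theta), side * math.sin(theta))
    # Base corners sit symmetrically behind the apex direction.
    spread = 2.5  # radians offset from the apex direction (arbitrary choice)
    left = (0.3 * side * math.cos(theta + spread),
            0.3 * side * math.sin(theta + spread))
    right = (0.3 * side * math.cos(theta - spread),
             0.3 * side * math.sin(theta - spread))
    return [apex, left, right]
```

An observing robot would do the inverse: track the signaling robot's trajectory with its camera, fit a triangle to it, and recover the bearing from the apex direction and the distance from the triangle's size.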

Past work in vision-based multi-robot interaction has generally focused on limited explicit communication, typically marker-based or color-based interaction. This research adds a gestural interaction framework to the existing literature.

Here is an interview with Professor Abhra Roy Chowdhury, Center for Product Design and Manufacturing, Division of Mechanical Engineering, Indian Institute of Science (IISc), Bangalore, India.

Tech Briefs: What's the next step in your research?

Chowdhury: To develop a robot-robot collaboration strategy for carrying out tasks in scenarios with limited networked communication and cluttered environments.

Tech Briefs: When will this technology be readily available?

Chowdhury: We are working on the scope and timeframe.

Tech Briefs: Will it catch on? What are the pros? Cons?

Chowdhury: The advantage of this method is that it relies on a single sensing modality, vision, so it is compatible with robots of various sizes and configurations and is therefore scalable. We are working toward making this technology more accurate and robust so that it can handle more complex messages and tasks before commercialization can happen.

Tech Briefs: How will this change the robot-communication game?

Chowdhury: The major takeaway from this work is its application to gesture-based robot and human cooperation for robust task execution, specifically in places where communication network coverage is insufficient or intermittent (e.g., industry, military, space).

Tech Briefs: Are you working on other such advances? Projects?

Chowdhury: There are some projects in the pipeline that address pressing societal needs for human welfare.