| Robotics, Automation & Control

How the Dragonfly’s Brain Offers Insights for Robotic Vision

By carefully studying the neurons of the dragonfly, University of Adelaide PhD student Joseph Fabian discovered the predator’s keen way of catching its prey. Fabian and his fellow researchers hope to translate the insect’s complex neural processes into advances that support new applications in robotic vision and autonomous systems.

Tech Briefs: How does a dragonfly’s brain anticipate movement?

Joseph Fabian: Psychological studies of humans show that we use the recent movements of an object to predict its future location. Objects do not move randomly; instead they often drift along long, continuous, predictable paths (for example, the ball in a game of tennis). Like us, the dragonfly takes advantage of this regularity, using predictions of an object's future movement to speed up decision-making. Until now, however, we did not understand how brains actually perform this prediction, especially not at the scale of individual, identifiable neurons.

Tech Briefs: What did you discover? What is special about the dragonfly’s neurons?

Joseph Fabian: Individual neurons in visual areas of the dragonfly brain (small target motion detectors, or STMDs) selectively respond to small moving objects. We recorded the electrical activity of these neurons in live dragonflies, one neuron at a time, with an electrode 5,000 times thinner than a human hair. While recording, we presented small black moving dots on a computer monitor. We showed that as these neurons track an object, they “remember” its past location and direction of movement. This information drives fast, dynamic changes in the neurons’ physiological properties, maximizing their ability to detect the object at its predicted upcoming position.
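The facilitation Fabian describes can be sketched in a few lines of code. The following is an illustrative toy model, not the researchers' published one: a 1-D detector whose local gain is boosted around the target's predicted next position (last position plus velocity) while older facilitation decays. All names and parameters (grid size, `SIGMA`, `DECAY`) are hypothetical.

```python
import numpy as np

GRID = 64     # 1-D strip of visual "pixels"
SIGMA = 3.0   # spatial spread of the facilitation field
DECAY = 0.8   # how quickly old facilitation fades each frame

def update_gain(gain, position, velocity):
    """Decay the previous gain map, then boost sensitivity around the
    target's predicted next position (current position + velocity)."""
    predicted = position + velocity
    x = np.arange(GRID)
    boost = np.exp(-0.5 * ((x - predicted) / SIGMA) ** 2)
    return DECAY * gain + boost

# Track a dot drifting rightward at 2 pixels per frame.
gain = np.zeros(GRID)
pos, vel = 10, 2
for _ in range(5):
    gain = update_gain(gain, pos, vel)
    pos += vel

# The gain map now peaks ahead of the target's starting position,
# along its direction of travel.
print(int(np.argmax(gain)))
```

Note the efficiency: each frame requires only a decay and a local boost, far cheaper than re-scanning the whole visual field, which echoes the interview's point about simple, low-cost solutions.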

Tech Briefs: How can technology reflect this kind of dragonfly ability?

Joseph Fabian: The processing in dragonfly brains is significantly less computationally complex than the existing models used in computer vision, yet provides equal, if not better, performance. Models that incorporate algorithms inspired by these results can be readily implemented in hardware. This could apply to bionic vision or autonomous robotics systems.

Tech Briefs: How can the findings support new applications in vision systems or self-driving cars?

Joseph Fabian: Current computer-vision solutions for detecting moving targets are very computationally expensive. The dragonfly brain has a volume of just 1 mm³ and contains 30,000 times fewer neurons than the brain of a human. Because of these limited processing resources, the dragonfly brain has evolved extremely efficient and simple solutions to complex problems. These solutions can be (and currently are being) accurately modeled in software and implemented on autonomous robotic platforms.

Tech Briefs: What will be studied next?

Joseph Fabian: Next, we must learn exactly how complex these predictions are. For example, we know the neurons can predict an object’s location and direction of movement, but we do not know whether its speed or size affects the prediction, or how the accuracy of predictions changes under different conditions. In parallel, the engineers and robotics experts in our lab are building computational models that simulate the performance of dragonfly STMD neurons. These models will be tested in a simulated environment before being implemented on our autonomous robotics platform for real-world evaluation.

The project is an international collaboration funded by the Swedish Research Council, the Australian Research Council (ARC) and STINT, the Swedish Foundation for International Cooperation in Research and Higher Education.

Other researchers included Steven D. Wiederman, Adelaide Medical School, The University of Adelaide, Adelaide, Australia; David C. O’Carroll, Department of Biology, Lund University, Lund, Sweden; and James Dunbier, Adelaide Medical School.
