In the early days of quarantine, Notre Dame professor and robotics engineer Yasemin Ozkan-Aydin used the time at home to put together robots.

Ozkan-Aydin developed collaborative legged systems that maneuver over complex terrain — as a team.

To help robots move around on rough terrain and in tight spaces, Ozkan-Aydin proposed that a physical connection between robots could enhance mobility. If an individual robot, for example, could not move an object on its own, why not have the robots form a larger, multi-legged system to complete the task?

That's what ants do, after all.

“When ants collect or transport objects, if one comes upon an obstacle, the group works collectively to overcome that obstacle. If there’s a gap in the path, for example, they will form a bridge so the other ants can travel across — and that is the inspiration for this study,” said Ozkan-Aydin in a recent news release. “Through robotics we’re able to gain a better understanding of the dynamics and collective behaviors of these biological systems and explore how we might be able to use this kind of technology in the future.”

With a 3D printer, Ozkan-Aydin built four-legged robots measuring 15 to 20 centimeters in length.

Each robot included a lithium polymer battery, a microcontroller, and three sensors: a light sensor, plus two magnetic touch sensors at the front and back that allow the robots to connect to one another.

Four flexible legs reduced the need for additional sensors and parts and gave the robots a level of mechanical intelligence, which helped when interacting with rough or uneven terrain.

“You don’t need additional sensors to detect obstacles because the flexibility in the legs helps the robot to move right past them,” said Ozkan-Aydin. “They can test for gaps in a path, building a bridge with their bodies; move objects individually; or connect to move objects collectively in different types of environments, not dissimilar to ants.”

Ozkan-Aydin began her research for the study in early 2020, when much of the country was shut down due to the COVID-19 pandemic. After printing each robot, Ozkan-Aydin tested the insect-inspired systems out in her yard or at the playground with her son.

The Notre Dame professor conducted experiments on a variety of terrains, both natural and manufactured. The robots maneuvered through and around grass, mulch, leaves, and acorns, as well as foam stairs, shag carpeting, and the rough terrain of rectangular wooden blocks glued to particle board.

When an individual unit becomes stuck, it sends a light signal to the other robots to request assistance. Upon sensing the light, the helper robots connect and provide support — a push as they walk together — to traverse obstacles collectively.

"After the helper robot finds the searcher robot by following the light gradient, it is attached to it from the back and the touch sensors on both robots, and informs the robot about the connection state," Prof. Ozkan-Aydin told Tech Briefs.

The research team recently published their results in Science Robotics.

Upcoming research will focus on improving the control, sensing and power capabilities of the system.

Yasemin Ozkan-Aydin

In a short Q&A with Tech Briefs below, Ozkan-Aydin explains what swarms can do once those features are advanced.

Tech Briefs: How do the magnetic touch sensors work — what do they do, how are they controlled?

Yasemin Ozkan-Aydin: Each robot has two magnetic connectors, which include two neodymium rare-earth magnets with N-S polarity, at the front and back of the robot. The magnetic connector at the back is attached to the tail, and its polarity can be reversed (S-N) by moving the tail up. So when the tail is up, two robots can connect to each other, and when the tail is down, they can detach.
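To make the mechanism concrete, here is a minimal sketch (in Python) of how such a reversible tail connector might be driven. The servo angles and helper names (set_tail_angle, allow_connection) are illustrative assumptions, not details from the published design or firmware.

```python
# Minimal sketch (not the authors' firmware) of a tail-mounted magnetic
# connector whose polarity is flipped by raising or lowering the tail.
# Angles and helper names are illustrative assumptions.

TAIL_UP_DEG = 60     # tail raised: rear magnet presents N-S, so a follower can latch on
TAIL_DOWN_DEG = 0    # tail lowered: polarity reversed (S-N), so the robots detach

def set_tail_angle(angle_deg: float) -> None:
    """Placeholder for a servo command on the robot's microcontroller."""
    print(f"tail servo -> {angle_deg} deg")

def allow_connection() -> None:
    # Raise the tail so a trailing robot's front connector can attach.
    set_tail_angle(TAIL_UP_DEG)

def release_connection() -> None:
    # Lower the tail to reverse the magnet pair and break the link.
    set_tail_angle(TAIL_DOWN_DEG)
```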

Tech Briefs: When a “help” signal is sent to a robot, how does the help robot know what to do and what actions to take?

Yasemin Ozkan-Aydin: The help signal — turning on the bright LED light on the back of the robot — is sent to the helper robots when the searcher robot gets stuck on stairs or rough terrain. The stuck [status] is detected by the light intensity measured by the searcher robot. When the robot gets stuck, it cannot move towards the target (light source), and the light intensity doesn't change. The helper robots always wait for the signal from the searcher robot.

Of course, there are limitations in our system. For example, if a helper robot falls outside the beam of a searcher robot, the helper robot cannot find it. In the future design, the communication between robots should be enhanced using other types of sensors such as GPS. However, as the complexity of the system increases, the robots become more difficult to control. Here, mechanical intelligence plays an important role.
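The stuck-detection idea described above can be summarized in a short sketch: the searcher steers toward a light source, and if the measured intensity stops changing for a while, it assumes it is stuck and turns on its rear LED as a help signal. The thresholds, timing, and helper functions below are assumptions for illustration, not the published control code.

```python
# Illustrative sketch of stuck detection via stagnant light intensity.
# Thresholds, timings, and hardware helpers are assumed, not from the paper.

import time

STALL_WINDOW_S = 5.0      # how long intensity may stay flat before declaring "stuck"
INTENSITY_EPSILON = 0.02  # minimum change treated as real progress toward the target

def read_light_intensity() -> float:
    """Placeholder for the front phototransistor reading (0.0 - 1.0)."""
    return 0.5  # replace with an ADC read on real hardware

def set_help_led(on: bool) -> None:
    """Placeholder for switching the bright rear LED used as the help signal."""
    print("help LED", "on" if on else "off")

def monitor_progress() -> None:
    last_intensity = read_light_intensity()
    last_progress_time = time.monotonic()
    while True:
        intensity = read_light_intensity()
        if abs(intensity - last_intensity) > INTENSITY_EPSILON:
            # Intensity is still changing: the robot is making progress.
            last_intensity = intensity
            last_progress_time = time.monotonic()
            set_help_led(False)
        elif time.monotonic() - last_progress_time > STALL_WINDOW_S:
            # No change in light intensity for a while: likely stuck,
            # so signal the waiting helper robots.
            set_help_led(True)
        time.sleep(0.1)
```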

Tech Briefs: What does “mechanical intelligence” mean here? How exactly do they interact? What happens when one robot gets hung up on an obstacle? How is a signal sent to additional robots?

Yasemin Ozkan-Aydin: Mechanical intelligence means that a mechanism responds to the environment, adapts to new external situations, or automatically performs some functions without any sensory feedback or guidance from a controller. Each robot has four directionally flexible legs and a tail. When the leg, or tail, hits an obstacle, it bends rearward and crosses the obstacle. After it passes the obstacle, a return spring pulls the leg back to its original position. This passive bending also increases the area of contact, which allows an individual leg or the tail to deal with a change in terrain roughness, losing ground contact during the stance phase, or stepping on or hitting an obstacle during the air phase.

All robots have two switch-like touch sensors to detect the connection state: one at the front and one at the back of the robot. When two robots are connected, the dome-shaped pusher attached to the tail touches both the sensor at the tail of the front robot and the sensor at the head of the back robot. Although there is no high-level communication (for example, sending GPS coordinates wirelessly) between robots, the touch sensors allow each robot to know whether it is connected to the other robots. Besides the touch sensors, there is a light sensor, or phototransistor, at the front bottom of each robot. This sensor is used to measure the light intensity of the environment and to provide local communication between robots.
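As a rough illustration of how those two switches could be read to infer the connection state, here is a small Python sketch. The pin numbers and the read_switch helper are hypothetical, standing in for whatever GPIO reads the robots' microcontrollers actually use.

```python
# Minimal sketch of inferring connection state from front/back touch switches.
# Pin numbers and read_switch are illustrative assumptions, not the real firmware.

from dataclasses import dataclass

FRONT_SWITCH_PIN = 2  # pressed when a leading robot's tail pusher touches our head
REAR_SWITCH_PIN = 3   # pressed when a following robot attaches at our tail

@dataclass
class ConnectionState:
    connected_in_front: bool
    connected_behind: bool

def read_switch(pin: int) -> bool:
    """Placeholder for a digital read of a touch switch."""
    return False  # replace with a GPIO read on real hardware

def get_connection_state() -> ConnectionState:
    # Each robot only needs these two local readings to know whether it is
    # part of a chain, without any wireless communication.
    return ConnectionState(
        connected_in_front=read_switch(FRONT_SWITCH_PIN),
        connected_behind=read_switch(REAR_SWITCH_PIN),
    )
```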

Tech Briefs: Could you amplify one or two of the real-world practical applications — what could the swarm accomplish, and how?

Yasemin Ozkan-Aydin: Swarms of legged robots can perform real-world cooperative tasks such as search and rescue operations, agricultural applications (planting and harvesting, environmental monitoring, crop inspection, etc.), collective object transport, and space exploration.

In a very early experiment, three robots collectively tried to attach to and carry a metal chocolate tin across a carpet. (Image Credit: Ozkan-Aydin)

Tech Briefs: Regarding powering them, could you envision some sort of energy harvesting, say, based on the motion?

Yasemin Ozkan-Aydin: This is a very important point that needs to be improved in future designs. Perhaps an energy-harvesting mechanism (such as piezoelectric materials) could be attached to the legs of the robots so they harvest energy while walking, or each robot could have a solar panel to charge its battery. Another option is to equip only one of the robots with an energy-harvesting mechanism, to reduce the total cost, and have it transmit power to the other robots.

Tech Briefs: What inspired this effort, especially natural models?

Yasemin Ozkan-Aydin: This study is inspired by multi-legged animals, such as centipedes or millipedes, that can move effectively in diverse terrains with flexible bodies and limbs and ant collectives that can self-organize and create structures, such as bridges, to solve problems.

Tech Briefs: How are you looking to improve the robots?

Yasemin Ozkan-Aydin: Currently, the robots are constrained by limited communication range. With improved communication between individuals, we expect that the units (quadrupeds) in the swarm could coordinate properly and change their gaits according to environmental conditions or tasks they perform. In addition, the dimensions of the robots can be scaled according to the tasks to be performed.
