Robots That Think With Their Bodies

Researchers are pushing the boundaries of morphological computation, turning the humble popper toy into a powerful model for autonomous behavior. By harnessing bistability and metastability, they encode logic directly into compliant structures, letting geometry—not electronics—govern control. The result: grippers that perform on-the-fly classification by cycling through preset apertures until they detect contact, and walkers whose gait patterns—turning, reversing, striding—are fully embedded in their leg mechanics.
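The gripper's classification cycle described above can be sketched as a simple loop. This is a hypothetical illustration, not the researchers' code; the aperture levels and the contact test are made-up stand-ins for the gripper's preset openings and its physical contact detection.

```python
# Illustrative sketch of contact-based classification: the gripper steps
# through progressively smaller preset apertures until closing to that
# aperture produces contact with the object. The aperture at which contact
# first occurs classifies the object's size. Values are hypothetical.

APERTURES = [50, 40, 30, 20, 10]  # preset gripper openings in mm, widest first

def classify(object_width, apertures=APERTURES):
    """Return the first aperture at which the gripper contacts the object,
    or None if the object slips through even the smallest aperture."""
    for aperture in apertures:
        if object_width >= aperture:  # closing to this level touches the object
            return aperture
    return None
```

For example, a 35 mm object passes the 50 mm and 40 mm apertures without contact and is caught at 30 mm, so `classify(35)` returns `30`. The control logic here lives in software only for illustration; in the physical system the same sequencing is carried out by the bistable structure itself.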

This physical programming opens the door to robots that remain operational where conventional electronics falter: deep-sea environments, nuclear facilities, even space. For roboticists seeking resilient autonomy without computational overhead, these bistable and metastable architectures hint at a future where the robot’s body is not just the actuator, but the controller itself.



Transcript

00:00:00 Bistability is the ability of a system to exhibit two stable states. For example, this popper toy is able to be in this particular state, but then if I press, I'm able to get it into a second stable configuration. That means it has two stable configurations; that's why it's called bistability. Depending on how many domes I invert, it will reach a specific position or a specific curvature. And this allows us to put it in different systems such as grippers or walkers or other types of robots. The metastable one is quite interesting because, for example, this dome is metastable. So when I press it and hold it for a couple of seconds, it will stay there for a little bit and then go back after a certain amount of time. So what this allows us to do is program a time-dependent response in our systems, informed just by the geometry of the units themselves. So one of the main

00:00:58 results we actually show is that we can perform classification tasks. Basically, the robot goes and tries to grab an object. If it's not able to grab it, then it will go to the next level of aperture and try again, and so on until it detects a small contact. And the second system we have is this walker. We have one direction that moves it to the right, and then another direction that allows us to move the legs like this, right? That creates a walking motion. All of that is programmed in the robot's legs themselves, not in an external system: the ability to turn and to go forward and backwards is already in the legs, and of course with enhanced performance due to that metastability. The main goal of achieving this autonomy in robots by physical means is to use these systems in more difficult or rough environments. Let's say electronics are

00:01:57 extremely good in some aspects, but they don't shine in extreme environments, for example in space or nuclear reactors. Exploration in oceans and marine life is quite difficult. When you rely a little more on physical systems, then those systems will still work in these harsh environments. And I dream of seeing my research actually achieve this.
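The bistable and metastable behaviors described in the transcript can be modeled as tiny state machines. This is a minimal conceptual sketch under my own assumptions (discrete time steps, a made-up `dwell` parameter standing in for the geometry-determined relaxation delay), not the researchers' model.

```python
# Conceptual sketch: a bistable dome stays in whichever of its two stable
# states it was last pushed into, while a metastable dome holds an inverted
# state only for a dwell time set by its geometry, then snaps back.

class BistableDome:
    """Two stable states; the dome remains wherever it was last pressed."""
    def __init__(self):
        self.inverted = False

    def press(self):
        # Pressing toggles between the two stable configurations.
        self.inverted = not self.inverted


class MetastableDome:
    """Holds the inverted state for `dwell` time steps, then relaxes."""
    def __init__(self, dwell):
        self.dwell = dwell      # hypothetical delay encoded by dome geometry
        self.timer = 0
        self.inverted = False

    def press(self):
        self.inverted = True
        self.timer = self.dwell

    def step(self):
        # Advance one time step; once the timer runs out, the dome
        # returns to its rest state on its own.
        if self.inverted:
            self.timer -= 1
            if self.timer <= 0:
                self.inverted = False
```

A `BistableDome` stays inverted indefinitely after a press, while a `MetastableDome` with `dwell=3` returns to rest after three steps with no external controller involved. This timed, geometry-encoded relaxation is what lets the physical system sequence its own behavior.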