Motion in Space: Astrobee Breakthrough
NASA’s Astrobee free-flying robots are taking motion control to the next frontier. Using reinforcement learning, a Naval Research Lab team trained Astrobee to perform autonomous docking in microgravity—without human intervention. This first-of-its-kind demonstration proves that complex control algorithms can guide robots in space, paving the way for autonomous robotic teammates in orbit, underwater, and on land.
Transcript
00:00:00 In space, there's no margin for error. Every movement, every decision must be perfect. But what if the next generation of robots could learn and decide for themselves?
>> So, my dream is to make giant beautiful structures in space. I love robots and I love space, and I'm trying to figure out how you can combine those to be able to make robots that can do autonomous tasks
00:00:29 and be able to make bigger and better structures in space.
>> Dr. Chapen is part of a small research team at the Naval Research Laboratory that has a passion for autonomous robotics.
>> That passion launched a breakthrough collaboration between NRL and NASA's Astrobee program. So, the Astrobees are these
00:00:54 little cube robots that were designed and developed by NASA as a way for students and guest scientists to do robotics experiments on the ISS.
>> Today, most space robots are controlled remotely by humans. Right now, robotics is actually kind of in the dinosaur ages in terms of how complex the autonomy is in space, because space is a really risk-averse environment because
00:01:19 it's very expensive to get things into space. And so traditionally, like on the International Space Station right now, they actually use teleoperation. So they'll have a human either on the ground or an astronaut in space commanding it with almost like a game controller. But if we want to make really big things in space and really push the final
00:01:36 frontier, we need robotics to have a higher level of autonomy and have robots be able to do those things by themselves.
>> That's why Astrobee was such a good fit.
>> It was really great because the platform already existed in space, and NASA had the entire infrastructure and team to be able to run testing on the ground and simulation testing before we actually
00:01:55 tested it in space. The NRL team trained a robot using reinforcement learning, a method in which a robot learns through trial and error, guided by rewards.
>> Reinforcement learning is just being able to have a robot learn through essentially trial and error how to better do a task. So, I might not exactly tell a robot, okay, move exactly here, and then grab this thing and move it over to my next place, but instead, I
00:02:15 can give it an overall larger goal, and then it's trying to accomplish that goal, and I give a reward if it's completing that task or subtasks, and then it kind of learns the best way to do it.
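To make the trial-and-error idea concrete, below is a minimal, hypothetical sketch of tabular Q-learning on a toy one-dimensional "docking" task. It is not the NRL/NASA Astrobee controller, whose details are not described here; it only illustrates the reward loop described above: the agent is given a goal, earns a reward for reaching it, and learns the best moves by repeated attempts.

```python
# Toy illustration of reinforcement learning via trial and error.
# NOT the Astrobee controller: a hypothetical 1-D "docking" corridor where the
# agent is only rewarded for reaching the dock and learns which moves work.
import random

N_POSITIONS = 10          # discretized 1-D corridor; the dock is at position 0
ACTIONS = (-1, +1)        # move one step toward or away from the dock
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q[state][action_index] -> estimated long-term reward of taking that action there
Q = [[0.0, 0.0] for _ in range(N_POSITIONS)]

def step(pos, action):
    """Apply one move and return (new_position, reward, done)."""
    new_pos = min(max(pos + action, 0), N_POSITIONS - 1)
    if new_pos == 0:                  # reached the dock: success reward
        return new_pos, 1.0, True
    return new_pos, -0.01, False      # small per-step cost encourages efficiency

for episode in range(500):            # each episode is one attempted "docking" run
    pos = N_POSITIONS - 1              # start at the far end of the corridor
    done = False
    while not done:
        # Explore occasionally; otherwise take the best-known action
        if random.random() < EPSILON:
            a_idx = random.randrange(len(ACTIONS))
        else:
            a_idx = max(range(len(ACTIONS)), key=lambda i: Q[pos][i])
        new_pos, reward, done = step(pos, ACTIONS[a_idx])
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[new_pos])
        Q[pos][a_idx] += ALPHA * (reward + GAMMA * best_next - Q[pos][a_idx])
        pos = new_pos

print("Learned action at each position (-1 = toward dock):",
      [ACTIONS[max(range(2), key=lambda i: Q[p][i])] for p in range(N_POSITIONS)])
```

Over many attempted runs the learned values converge so the agent consistently moves toward the dock, which is the trial-and-error learning the researchers describe, scaled down to a toy problem.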
>> After just three months of training a simulated Astrobee, they were ready to do something bold: actually testing it in space.
00:02:36
>> We were actually given 20 minutes, but because of other problems, we ended up testing for only 5 minutes. We were planning to do at least three maneuvers, and we ended up being able to run just one in the last 5 minutes. It was autonomous. We lost the vision for part of it, so we were not able to see the robot docking back successfully. But when the video stream came back, it
00:03:04 was there and it had docked successfully and autonomously, without humans looking at it or any other human input, which was a big success for us.
>> This wasn't just a test. It was the first known instance of using reinforcement learning to control a free-flying robot in space. So this test is such a big deal because it really allows us to show that we can use
00:03:28 these complex algorithms and that they work in space, and it's only the beginning.
>> So reinforcement learning is a very flexible technique for controlling many different kinds of robots in many different kinds of domains. It works in space, but it also works on the ground; it can work on ships, it can work underwater, it can work with UAVs.
00:03:49 We demonstrated that we can use machine learning to solve, you know, a fairly simple spacecraft problem, but we did it very, very quickly. And if we can expand that, if we can scale it well enough, we can apply it to more complicated robots in more complicated areas: on land, on board ship, undersea. In theory, machine learning will let a robot become smart enough so that it can go out and
00:04:14 be a teammate instead of a tool for a warfighter.

