For a sophisticated autonomous robot to find its way through unknown buildings and uneven terrain, it needs to know where to place its hands and feet to maintain stability and balance.
A new algorithm from the University of Michigan speeds up path planning and limb placement for these kinds of robots with arm-like appendages. In experiments, the path-planning algorithm found successful routes three times as often as standard algorithms, while needing much less processing time.
A robot won’t always be able to balance itself and move forward with just its feet, says Dmitry Berenson, associate professor of electrical and computer engineering and core faculty at the Robotics Institute.
"When the robot needs to traverse difficult environments, it will need to use its hands to help it balance," Prof. Berenson told Tech Briefs in a Q&A below.
Palm contacts, however, add to the decision-making process and lead to impractical planning times for large environments.
To speed things up, the University of Michigan-developed system invokes its complex, learning-based method only when necessary.
Berenson and his team's research enables robots to determine terrain difficulty before calculating a successful path forward. The more time-consuming, library-centric approach — one that searches through a history of trained examples — is only initiated when traversal becomes difficult.
Terrain difficulty is determined based on the number of contact possibilities for feet and hands. A location with many contact points has "high traversability."
If a path is challenging, with fewer points to grab on to, the robot may, for example, pull from the library and brace against a wall with one or two hands while taking the next step forward.
Simpler paths are handled by a discrete-search planner, which plans footstep sequences and the joint movements that achieve them.
"Easily-traversable segments are assigned a discrete-search planner, while other segments are assigned a library-based method that fits existing motion plans to the environment near the given segment," says the opening abstract of their recent report.
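The segment-assignment idea from the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function names, the threshold, and the stand-in traversability score (based on counting candidate contacts) are all assumptions for the sake of the example.

```python
def estimate_traversability(segment):
    """Stand-in for the learned estimator: more candidate foot/hand
    contacts in a segment -> higher traversability score (0 to 1)."""
    contacts = segment["foot_contacts"] + segment["hand_contacts"]
    return min(contacts / 20.0, 1.0)


def assign_planners(segments, threshold=0.5):
    """Assign each path segment a planning strategy based on traversability."""
    plan = []
    for seg in segments:
        score = estimate_traversability(seg)
        if score >= threshold:
            # Contact-rich segment: the fast discrete-search footstep planner suffices.
            plan.append((seg, "discrete_search"))
        else:
            # Difficult segment: fall back to fitting trained motion plans
            # from the library, which can also use hand contacts.
            plan.append((seg, "library_lookup"))
    return plan
```

The key design point is that the expensive library-based method is never run on easy segments; the cheap traversability check acts as a gatekeeper.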
Testing the Plan
The team turned to machine learning to train the robot on the different ways that it can place its hands and feet to maintain balance and move forward.
When placed in a new, complex environment, the robot uses the learned options to find its way.
Berenson and Yu-Chi Lin, a recent robotics Ph.D. graduate and software engineer at Nuro Inc., employed both virtual and real-world ways to test their system. First, the two made a geometric model of a humanoid robot in a corridor of rubble.
In 50 trials, the team's method reached the goal 84% of the time, compared to 26% for the basic path planner. The system took just over two minutes to create a plan, compared to over three minutes for the basic path planner.
The researchers also performed a physical test, using a wheeled robot with a torso and two arms.
With the base of the robot placed on a steep ramp, the system had to use its “hands” to brace itself on an uneven surface as it moved. The robot planned a path in just over a tenth of a second, compared to over 3.5 seconds with the basic path planner.
In future work, the team hopes to incorporate dynamically stable motion, similar to the natural movement of humans and animals. Such movement would free the robot from having to be constantly in balance.
In a short Q&A with Tech Briefs below, Prof. Berenson explains more about the path-planning algorithm, and the most challenging aspects of terrain for an autonomous robot.
Tech Briefs: Why does path planning traditionally take such a long time?
Prof. Dmitry Berenson: Path planning is usually quite fast (often less than a second) when the number of degrees-of-freedom of the robot is low. For example, if you have a point in 2D, you have two degrees of freedom: x and y.
Typically, a humanoid robot has 28 joints and thus 28 degrees-of-freedom. Planning for all of these together is very slow. Previous work related to ours does footstep planning, which means planning a sequence of foot placements and then figuring out how the robot can move its joints to achieve those foot placements later. However, when the robot needs to traverse difficult environments, it will need to use its hands to help it balance.
The problem of planning for both the feet and the hands is too high-dimensional; previous planning algorithms will be very slow in handling the many degrees-of-freedom.
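A quick back-of-the-envelope calculation shows why the dimensionality Prof. Berenson describes is so punishing. This toy example (not from the paper) discretizes each joint into just three positions and counts the resulting configurations:

```python
def configurations(dof, values_per_joint=3):
    """Number of discrete joint configurations for a robot with `dof`
    degrees of freedom, each coarsely discretized into a few values."""
    return values_per_joint ** dof

print(configurations(2))   # a 2-DOF point: 9 configurations
print(configurations(28))  # a 28-joint humanoid: ~2.3e13 configurations
```

Even this absurdly coarse discretization yields trillions of configurations for a 28-joint humanoid, which is why planning for all joints at once is impractical.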
Tech Briefs: What is the major aspect of your technology that speeds up the process?
Prof. Dmitry Berenson: What we created to overcome this problem was 1) a method that figures out where foot/hand contact-rich parts of the environment are (high traversability areas) so that we can bias the search toward those, and 2) a method that divides the path to the goal based on traversability, to make sure the appropriate planning method is applied to each part.
This second part effectively splits the big difficult planning problem into several smaller, easier planning problems.
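The divide step Prof. Berenson describes can be illustrated with a small sketch. Assuming (hypothetically) that the path has already been scored cell by cell for traversability, consecutive cells of similar difficulty are grouped into segments, each of which becomes its own smaller planning problem:

```python
def split_by_traversability(scores, threshold=0.5):
    """Group consecutive path cells into segments of uniform difficulty.

    `scores` is a list of per-cell traversability estimates (0 to 1);
    cells at or above `threshold` count as easy. Both names and the
    threshold are illustrative assumptions, not from the paper.
    """
    segments = []
    for i, score in enumerate(scores):
        easy = score >= threshold
        if segments and segments[-1]["easy"] == easy:
            # Same difficulty as the previous cell: extend the current segment.
            segments[-1]["indices"].append(i)
        else:
            # Difficulty changed: start a new segment.
            segments.append({"easy": easy, "indices": [i]})
    return segments
```

For example, scores of `[0.9, 0.8, 0.2, 0.1, 0.7]` split into three segments: an easy stretch, a difficult stretch, and a final easy stretch, each handed to the appropriate planner.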
Tech Briefs: What criteria is the robot using to determine terrain difficulty?
Prof. Dmitry Berenson: To determine how traversable the terrain is, the robot learns a function that looks at a given region of the environment and estimates how many feasible hand/foot placements there are. This "traversability estimator" is trained on thousands of foot and hand placements in randomly generated environments in simulation. When the robot encounters a new environment, it can use this traversability estimator to quickly determine which areas of the environment are easier for it to traverse.
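The real traversability estimator is a neural network trained on thousands of simulated placements; as a purely illustrative stand-in, the toy function below counts cells of a height-map patch that are flat enough to place a foot or hand. The flatness check and the `max_slope` parameter are assumptions for this sketch, not part of the authors' method.

```python
def feasible_placements(patch, max_slope=0.05):
    """Count candidate contact cells in a small height-map patch.

    `patch` is a 2D list of terrain heights; a cell counts as a
    candidate contact if its right and lower neighbors are nearly level
    with it (a crude hand-coded proxy for the learned estimator).
    """
    count = 0
    for row in range(len(patch) - 1):
        for col in range(len(patch[0]) - 1):
            cell = patch[row][col]
            if (abs(patch[row + 1][col] - cell) <= max_slope
                    and abs(patch[row][col + 1] - cell) <= max_slope):
                count += 1
    return count
```

A flat patch yields many candidate placements (high traversability), while a jagged one yields few or none, so the robot would route the corresponding segment to the library-based planner.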
Tech Briefs: When the robot decides to switch to the simpler mode of path planning, is the robot less capable? How does the robot operate in “simpler mode?”
Prof. Dmitry Berenson: Yes, in the simpler mode of planning, which we call "planning from scratch," the robot uses a typical footstep planning method. This kind of method is too slow to handle both foot and hand contacts, so we can only use foot contacts when planning from scratch.
Tech Briefs: If you put your robot in a new environment, what is the most challenging aspect of the terrain for the robot to understand and factor into the traversability decision?
Prof. Dmitry Berenson: The most challenging part of a new terrain would be a piece of the terrain that doesn't look anything like what we saw in our training data — for example, if there was a tree in the middle of a hallway, which looks nothing like what we saw in our randomly generated simulation environments.
Neural networks, like the kind we used for the traversability estimator, do not perform well when you give them inputs that are very different from what they were trained on, so the traversability estimate they output would not be reliable. There are ways to estimate the uncertainty of a neural network's output to help mitigate this problem, but we have not applied those to this work yet.
Tech Briefs: What are you working on next?
Prof. Dmitry Berenson: Our lab is working on methods that reason about what robots can and can't do with the knowledge they have. We are currently building on this work.