LINC Helps Vehicles Adapt to Unknowns
The Learning Introspective Control (LINC) program aims to develop machine learning-based introspection technologies that enable physical systems (with specific interest in ground vehicles, ships, drone swarms, and robotic systems) to respond to events not predicted at design time.
Transcript
00:00:08 LINC stands for Learning Introspective Control. It's basically the idea that you have a system, a robot, a vehicle, an aircraft, and it has a bunch of sensors, and as it's operating things can change around it, it can get broken, it can be damaged. How does it figure out how to keep on going? Human beings are very, very good at this. Human beings, we can pick up
00:00:34 loads that we're not used to, we can stumble and we can recover. Mechanical systems are not real good at this, so the idea behind LINC was to be able to provide this type of capacity to systems so they could keep working if they're damaged, if they're hurt, or they encounter something they've never seen before. So why do this? There are a couple of reasons. One is, a lot of people are
00:00:59 killed in vehicles every year. A lot of these are due to loss of control. Up-armored Humvees are very difficult to control because the weight has changed from where the designers originally intended, so if we could make these safer we'd save a lot of people's lives. One of the examples of where this could really, really be useful was the Ever Given. The Ever Given was a ship that blocked the
00:01:25 Suez Canal and shut down global supply chains. Part of what happened was unexpected winds came across the ship in the canal. The ship was unable to continue straight down the canal, but the acceleration caused it to be sucked into the wall. If there had been a system, an artificial intelligence system, that could provide suggestions to the operator as to how to recover, the
00:01:51 accident may have been avoided. LINC is not about relinquishing control to AI; it's using AI as an advisor, as an aid, as a safety mechanism for operating large, complex, and difficult systems. It also means that when someone is just learning, they may not have all of the skill, but the system can act as a buffer to prevent them from being hurt. We tend to
00:02:22 think of AI as being these massive systems that take hundreds of megawatt-hours to train and to learn. We're looking at essentially the same compute power and the same battery that you find in a cell phone, so it's almost no power at all. That means that it can be used, it can be put in many applications, and it's not generating a big carbon footprint. Currently we're stationed here
00:02:51 at the RVR, which is the robotic vehicle range. We're able to measure success by basically inducing a failure in the robots' control system that causes them to go unstable, in such a manner that they then have to come up with a new control scheme and adapt on the fly to regain control. From then on, we measure how long they can maintain the safety of this new safe control scheme.
00:03:14 These robots are fielded robots from the Army, but we're also using them as surrogates and a measure for larger vehicles. If it can stand still in a high wind, if it can stand still on a steep slope, and we can be able to tell it "sit, stay," that's a very, very good outcome. So some of the outcomes from this are not as exciting, if you're not looking at the math, as they might otherwise be. It still
00:03:40 is really neat. We had a couple of really pleasant surprises throughout this activity. One of them was we saw emergent behavior that showed the system was not only adapting to damage that had been done to it, but it was using its environment to its advantage. An example of that was we had damaged treads, so a robot couldn't turn in a given direction. Because there were high winds in the
00:04:06 test site, the robot suddenly started to tack. It used the wind to help it turn, much the way a sailing boat would, but it learned how to do this and overcame its damage using what was available in the environment. So the LINC program is asking whether we can build a safety vest, where somehow AI could assure the operator of the vehicle that it'll be safe. Our approach is to learn as
00:04:39 much as we can to build models of these vehicles, using either simulators (physics simulators) or physics equations, the kinematic models as they are called, but then also to calibrate them with some experiments in the field. The advantage of this approach is that the number of experiments that you need to do is drastically lower than if you didn't have this model.
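To make the calibration idea concrete: below is a minimal sketch assuming a crude slip-gain model and a handful of logged command/measurement pairs. The data, model form, and parameter names are illustrative assumptions, not the program's actual models.

```python
import numpy as np

# Hypothetical field log: commanded body velocities vs. what the robot
# actually did, as measured by external tracking. Units: m/s and rad/s.
cmd = np.array([[0.5, 0.0], [1.0, 0.0], [0.5, 0.3], [0.5, -0.3], [0.8, 0.5]])
meas = np.array([[0.42, 0.0], [0.83, 0.01], [0.41, 0.22], [0.43, -0.24], [0.66, 0.37]])

# Assumed kinematic model: each measured channel is a scaled version of the
# command (a crude way to capture track slip / drivetrain losses):
#   v_meas = k_v * v_cmd
#   w_meas = k_w * w_cmd
# Least-squares fit of the two gains from the field data.
k_v = np.sum(meas[:, 0] * cmd[:, 0]) / np.sum(cmd[:, 0] ** 2)
k_w = np.sum(meas[:, 1] * cmd[:, 1]) / np.sum(cmd[:, 1] ** 2)
print(f"calibrated gains: k_v={k_v:.2f}, k_w={k_w:.2f}")

def predict(v_cmd, w_cmd):
    """Calibrated forward model: what motion to expect for a given command."""
    return k_v * v_cmd, k_w * w_cmd
```

Because the simulator or kinematic equations already supply the structure of the model, only a few parameters like these have to be identified from field runs, which is why relatively few experiments are needed.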
00:05:01 We are working on the functionality controllers, making sure that whatever the user wants to execute, we will be able to execute it regardless of the type of damage or situation that the robot is encountering. We read what the user is putting as input into the joystick commands. The goal of this entire system is to make sure that this intent from the user is properly
00:05:21 executed. So what we do at the beginning is that we just forward the information through to the robot, the robot executes it, and then we use our sensors to try to see how well it was executed. It's constantly learning, but it's also learning very quickly. Right now, in this example, we adapted in 20 seconds. This is the kind of thing where we can even go faster, once we manage to
00:05:40 process the sensory data more accurately and at a higher frequency, so we are quite optimistic that we can bring this kind of adaptation probably below a dozen seconds in the next couple of iterations.
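A minimal sketch of that forward-then-measure-then-adjust loop, assuming a simple per-channel correction gain; the class, adaptation rule, and numbers are illustrative assumptions, not the actual LINC controller.

```python
import numpy as np

class AdaptiveForwarder:
    """Forward joystick commands, compare commanded vs. measured motion,
    and learn a per-channel correction gain online (illustrative only)."""

    def __init__(self, n_channels=2, lr=0.2):
        self.gain = np.ones(n_channels)  # start by passing commands straight through
        self.lr = lr                     # how aggressively to adapt

    def command(self, user_cmd):
        """Turn the user's intent into the command actually sent to the robot."""
        return self.gain * np.asarray(user_cmd)

    def update(self, user_cmd, measured):
        """After sensing what the robot actually did, nudge the gains so the
        executed motion moves toward the user's intent."""
        user_cmd = np.asarray(user_cmd, dtype=float)
        measured = np.asarray(measured, dtype=float)
        error = user_cmd - measured
        # Only adapt channels the user is actually exciting (avoid divide-by-zero).
        active = np.abs(user_cmd) > 1e-3
        self.gain[active] += self.lr * error[active] / user_cmd[active]

# Toy usage: a damaged track that only delivers 60% of the commanded turn rate.
ctrl = AdaptiveForwarder()
for _ in range(30):                         # a few seconds of control ticks
    sent = ctrl.command([0.5, 0.4])         # intent: forward 0.5 m/s, turn 0.4 rad/s
    executed = np.array([1.0, 0.6]) * sent  # what the degraded robot actually does
    ctrl.update([0.5, 0.4], executed)
print(ctrl.gain)                            # turn gain rises toward ~1/0.6
```

Faster, higher-quality processing of the sensor data shortens each tick of a loop like this, which is what would pull the adaptation time down.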
00:06:03 So we take sort of two inputs: we have the input of the driver, where they want to go, and then we have the input of the sensory system, which is helping the driver to manage what's around the vehicle, because if they're in a situation where they're being shot at, or really overstressed, they can't pay attention to everything. So we actually use methods from the finance industry, methods that assess risks in portfolios, to manage and calculate these risks. Then at the next level we say, okay, given the driver's intent and the risks, we have something
00:06:23 which we call MCTS, which is Monte Carlo tree search, which searches over hundreds of thousands of possible actions every second to figure out which actions satisfy the user's goal but also minimize the risks to the vehicle and the driver as well.
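Here is a hedged, toy-scale sketch of that idea: Monte Carlo tree search over a tiny one-dimensional world, where the reward combines reaching the user's goal with a penalty for assessed risk. The world, reward shaping, and constants are invented for illustration; the real planner is far richer.

```python
import math, random

ACTIONS = [-1, 0, 1]     # toy action set: move left, hold, move right
GOAL, HORIZON = 4, 8     # user's intent: reach position 4 within 8 steps
RISKY = {2}              # illustrative "risk map": cells assessed as dangerous

def step(pos, a):
    return pos + a

def reward(pos):
    # Reward satisfying the goal, penalize exposure to assessed risk.
    return (1.0 if pos == GOAL else 0.0) - (0.5 if pos in RISKY else 0.0)

class Node:
    def __init__(self, pos, depth, parent=None):
        self.pos, self.depth, self.parent = pos, depth, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def select(node):
    # UCB1 selection: balance promising actions against unexplored ones.
    def ucb(child):
        return child.value / child.visits + 1.4 * math.sqrt(math.log(node.visits) / child.visits)
    return max(node.children.values(), key=ucb)

def rollout(pos, depth):
    # Random playout to the horizon, accumulating goal/risk reward.
    total = 0.0
    while depth < HORIZON:
        pos = step(pos, random.choice(ACTIONS))
        total += reward(pos)
        depth += 1
    return total

def mcts(root_pos, iters=2000):
    root = Node(root_pos, 0)
    for _ in range(iters):
        node, path_reward = root, 0.0
        # 1) Selection: walk down fully expanded nodes.
        while len(node.children) == len(ACTIONS) and node.depth < HORIZON:
            node = select(node)
            path_reward += reward(node.pos)
        # 2) Expansion: add one untried action.
        if node.depth < HORIZON:
            a = random.choice([a for a in ACTIONS if a not in node.children])
            node.children[a] = Node(step(node.pos, a), node.depth + 1, node)
            node = node.children[a]
            path_reward += reward(node.pos)
        # 3) Simulation: random playout from the new state to the horizon.
        value = path_reward + rollout(node.pos, node.depth)
        # 4) Backpropagation: push the result back up the tree.
        while node is not None:
            node.visits += 1
            node.value += value
            node = node.parent
    # Recommend the most-visited first action.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(mcts(0))   # expected to head toward the goal while weighing the risky cell
```

The same select / expand / roll out / backpropagate cycle, repeated with cheap transition and reward models, is what makes it feasible to evaluate very large numbers of candidate actions each second.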
00:06:47 Traction control is really important, and in this program we want to assume that the terrain is incredibly varying and changing. So once we take the goal, the next step is for the low-level control system, which uses the navigation and adjusts the tracks to drive the vehicle, and then actually adapts as necessary to the terrain to carry out the goals from those higher-level systems. So that's the basics, and then the whole time we're monitoring all the sensors to look for unexpected kinds of events, so that we know when we need to jump into action and change our plans.
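One common way to watch for "unexpected kinds of events" is to compare what the current model predicts against what the sensors report and alarm on a sustained mismatch. The sketch below assumes a single scalar signal and made-up thresholds; it is not necessarily how LINC does it.

```python
from collections import deque

class ResidualMonitor:
    """Flag an 'unexpected event' when predicted and measured motion keep
    disagreeing; window length and threshold are illustrative guesses."""

    def __init__(self, window=10, threshold=0.3):
        self.residuals = deque(maxlen=window)
        self.threshold = threshold

    def check(self, predicted, measured):
        self.residuals.append(abs(predicted - measured))
        window_full = len(self.residuals) == self.residuals.maxlen
        mean_residual = sum(self.residuals) / len(self.residuals)
        # Only alarm on a sustained mismatch, not a single noisy sample.
        return window_full and mean_residual > self.threshold

# Toy usage: the robot tracks its command for a while, then something breaks.
monitor = ResidualMonitor()
for predicted, measured in [(0.5, 0.48)] * 5 + [(0.5, 0.05)] * 12:
    if monitor.check(predicted, measured):
        print("unexpected event: jump into action and change the plan")
        break
```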
00:07:07 Earlier today, there was a checkerboard that was in front of the robot that we had, and because we are using VIO, you're essentially using landmarks that we are seeing in the environment to recognize where the robot is in the environment, and as the checkerboard was moving in the
00:07:30 environment, the robot was basically compensating, thinking "oh, I am moving in the environment." So it's a pretty exciting challenge. We're learning how to build these kinds of systems, because no one's ever put all those combinations of things together before.
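A tiny illustration of why the moving checkerboard fooled the robot (all numbers invented): visual-inertial odometry infers ego-motion from how supposedly static landmarks shift between frames, so a landmark that moves on its own reads as motion of the robot.

```python
import numpy as np

# Apparent shift of tracked landmarks between two frames (1-D, meters; invented numbers).
# A stationary robot should see roughly zero shift for static landmarks.
static_landmarks = np.array([0.01, -0.02, 0.0, 0.01])
moving_checkerboard = np.array([0.8])  # the board itself moved

shifts = np.concatenate([static_landmarks, moving_checkerboard])
# Naive ego-motion estimate: assume every landmark is static, so the robot
# must have moved by the negative mean apparent shift.
ego_motion = -np.mean(shifts)
print(f"estimated robot motion: {ego_motion:+.2f} m (true motion: 0.00 m)")
# The moving board drags the estimate away from zero, which is why the robot
# started compensating for motion it never made.
```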
00:07:53 In the next phase of LINC, we're moving off of the demonstration platforms, which actually are fielded systems, and we'll be going to much more complex, coupled systems that represent operating challenges to the DoD today, and we're hoping to eliminate those challenges.