Improving Control of Unmanned Ground Vehicles with Augmented Reality

Traditionally, unmanned ground vehicles (UGVs) are tele-operated with a joystick and real-time video feedback from a camera mounted on the UGV. When there isn't a direct line of sight from the operator to the UGV, the operator can become disoriented, making tele-operation challenging. However, new research from the University of Michigan shows that supplementing video feedback with augmented reality (a live view of the real-world environment overlaid with computer-generated graphics) can improve UGV tele-operation.

Transcript

00:00:03 You can see that as I move this master arm, the slave arm just follows and copies exactly what I'm doing. So instead of having to sit here and think about, oh, where do I want to move this joint, this joint, this joint, I can just do it with my hands and pick and choose where I want to go pretty easily and pretty intuitively.
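To make the one-to-one control idea concrete, here is a minimal sketch of a master-slave mirroring loop: each slave joint is simply commanded to track the corresponding master joint. The `Arm` class and its methods are hypothetical placeholders for illustration, not the actual interface used in this research.

```python
import time

class Arm:
    """Hypothetical stand-in for an arm with readable and commandable joints."""
    def __init__(self, n_joints=6):
        self.joints = [0.0] * n_joints      # joint angles in radians

    def read_joint_angles(self):
        return list(self.joints)

    def set_joint_targets(self, targets):
        # A real slave controller would servo each joint toward these targets.
        self.joints = list(targets)

def mirror(master, slave, rate_hz=100.0, steps=10):
    """Copy the master's joint angles to the slave as targets, at a fixed rate."""
    for _ in range(steps):
        slave.set_joint_targets(master.read_joint_angles())
        time.sleep(1.0 / rate_hz)

mirror(Arm(), Arm())    # in practice this loop runs for the whole session
```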

00:00:25 One of the most famous examples of this type of work was the Fukushima nuclear disaster site. Operators couldn't actually go to the site itself, where the reactors were, but they needed to turn the reactors off, so they had to use teleoperated robots to go in and assess the situation and, in some cases, actually manually turn valves and things like that.

00:00:46 This was particularly interesting because even though the operators weren't in the site itself, they were close enough that the radiation was hitting them, so they wanted to get the work done as quickly as possible, because every second spent on the task was time spent being bombarded by radiation. In situations like that, it's critical that teleoperated tasks be done quickly and efficiently.

00:01:05 This is the traditional way of doing it: left and right on the gamepad stick, and up and down on the gamepad stick, each represent some axis of movement of the robot arm. But it really is mentally taxing to do that, so we want to make it easier by leveraging knowledge that we have about the robot arm and using this one-to-one controller that we have.
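For contrast, here is a rough sketch of the gamepad-style mapping the speaker describes, where stick deflections drive individual arm axes at scaled velocities. The axis assignments, deadzone, and speed limit are illustrative assumptions, not the study's actual parameters.

```python
def stick_to_velocity(stick_x, stick_y, max_speed=0.05, deadzone=0.1):
    """Map stick deflections in [-1, 1] to per-axis end-effector velocity (m/s)."""
    def shape(v):
        return 0.0 if abs(v) < deadzone else v * max_speed
    # e.g. left/right drives the arm's x axis, up/down drives its z axis
    return shape(stick_x), shape(stick_y)

print(stick_to_velocity(0.8, -0.3))   # -> (0.04, -0.015)
```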

00:01:26 We've done research that shows that augmented reality doesn't necessarily help for simple tasks, or tasks that are pretty straightforward. We also found that people who were very good at these sorts of spatial tasks benefited less from the master-slave type interface than those who struggled with those kinds of tasks.

00:01:48 This is the virtual scene that you would see. You can move this view around however you want to get a better view of what's happening in the robot scene, so if I wanted to look at it from a side view I could do that, or I could go to a top view and zoom all the way out to get a better idea.
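The movable virtual view can be thought of as an orbit camera circling the robot scene. Below is a generic sketch of computing such a camera position for side, top, and zoomed-out views; this is a standard orbit-camera calculation, not the project's actual renderer.

```python
import math

def orbit_camera(yaw_deg, pitch_deg, radius, target=(0.0, 0.0, 0.0)):
    """Camera position on a sphere of `radius` around `target`, looking inward."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = target[0] + radius * math.cos(pitch) * math.cos(yaw)
    y = target[1] + radius * math.cos(pitch) * math.sin(yaw)
    z = target[2] + radius * math.sin(pitch)
    return (x, y, z)

side_view = orbit_camera(90, 0, 2.0)    # view the scene from the side
top_view  = orbit_camera(0, 89, 6.0)    # look almost straight down, zoomed out
```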

00:02:08 We want to be able to leverage the things that robots are good at together with human critical-thinking skills and judgment. The way we generally do this is by using teleoperation, which is remote control of the robot without line of sight. When you do that, a lot of problems arise: you have to get all of your information essentially from the sensors on the robot, and you lose a lot of fidelity in that information.

00:02:32 What my research aims to do is get some of that fidelity of information back, or at least improve the operator's understanding of what's happening at the remote site, by giving them more information than just what a camera video feed would give them.
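As one illustration of adding cues beyond the raw feed, the sketch below projects a known 3D point of interest into the camera image and marks it. It uses OpenCV with made-up camera intrinsics and an invented `overlay_cue` helper; the actual system's overlays are more sophisticated than this.

```python
import cv2
import numpy as np

# Assumed pinhole camera intrinsics (focal lengths and principal point).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def overlay_cue(frame, point_cam, label="valve"):
    """Draw a marker and label where a 3D point (camera frame, meters) projects."""
    uvw = K @ point_cam
    u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
    cv2.drawMarker(frame, (u, v), (0, 255, 0), cv2.MARKER_CROSS, 20, 2)
    cv2.putText(frame, label, (u + 10, v - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a video frame
frame = overlay_cue(frame, np.array([0.1, 0.0, 1.5]))
```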

00:02:51 Long-term goals for the project include adding more autonomy. You can get better fidelity with the augmented reality by giving more cues to the operator, but not so many cues that it's bothersome or that it hurts their chances of succeeding at a task. So finding that balance between giving information and information overload is crucial. What we're trying to do is develop robotic systems that are able to autonomously map ship hulls while they're in port, for the specific problem of Li…