Simplifying Remote Control of Robotic Devices

By integrating video technology and familiar control devices, a research team from Georgia Tech and the Georgia Tech Research Institute is developing a technique to simplify remote control of robotic devices. The researchers' aim is to enhance a human operator's ability to perform precise tasks using a multi-jointed robotic device such as an articulated mechanical arm. Known as Uncalibrated Visual Servoing for Intuitive Human Guidance of Robots, the new approach uses a special implementation of an existing vision-guided control technique called visual servoing (VS). By applying VS technology in innovative ways, the researchers have constructed a robotic system that responds to human commands more directly and intuitively than older techniques.
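
To make the idea concrete, the following is a minimal Python sketch of uncalibrated visual servoing in one common formulation: the image Jacobian relating joint motion to feature motion on the screen is estimated online (here with a Broyden-style update) rather than derived from a kinematic or camera model. The function names, update rule, and gains are illustrative assumptions, not the team's implementation.

    import numpy as np

    # Illustrative assumption: a Broyden-style online estimate of the image
    # Jacobian J, which relates joint velocities to feature motion on the
    # screen. No model of the arm's geometry and no camera calibration is used.

    def broyden_update(J, dq, ds, alpha=0.1):
        """Refine the estimated image Jacobian J after joint step dq was
        observed to move the tracked image feature by ds on the screen."""
        residual = ds - J @ dq
        return J + alpha * np.outer(residual, dq) / (dq @ dq)

    def servo_step(J, s, s_goal, gain=0.5):
        """Compute a joint-velocity command that drives the tracked feature s
        toward s_goal, with the error expressed purely in screen coordinates."""
        error = s_goal - s
        return gain * np.linalg.pinv(J) @ error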



Transcript

00:00:07 So the challenge today in controlling robots remotely is that oftentimes the robot is out of your field of view. You rely on the video camera that's on the robot to guide your actions; it's sort of like looking through a straw and trying to figure out what's going on around you. In our case we're using a 3D camera, called a 3D rangefinder camera, so in addition to giving you the

00:00:26 information, it also gives you the depth of the image. What our technology allows us to do is to control it based off of the image that you see on the computer screen. Typically you would control a robot using joint control, so if you had a robot that had six joints you would control them each individually, or you would control it in world coordinates. We're actually implementing

00:00:47 uncalibrated visual servoing, which does not require any knowledge of the geometry of the robotic arm. The analogy we like to use is that of threading a needle: if you were to ask a person to thread a needle, they don't actually need to know exactly where the needle is and exactly where the thread is as far as XYZ coordinates. That's exactly what visual servoing is: it gives a

00:01:07 robot a pair of eyes. The user can simply look at the computer screen and then control the robot based off of the coordinate system that is attached to the computer screen. The interface that we've experimented with is a gamepad controller; we've mapped the joysticks and the buttons to directions that you would see on the computer screen. If I wanted to move to the right with respect

00:01:26 to the image, I would simply point the joystick of the gamepad controller to the right, and the robot would figure out how to move its individual parts to execute moving to the right as commanded by the user. Using our method there's about a four-fold increase in speed: you can do the task four times faster than with traditional control methods. The applications we see for this research

00:01:46 are areas where intuitive control of a robotic arm would be important, so for example applications would include bomb disposal or surgical applications where you would use a robot to perform surgery. We think this research is important because it allows more intuitive control of robots, which means that you can operate more efficiently and also more safely.
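
The gamepad mapping described above can be sketched in the same spirit: the joystick deflection is read as a desired velocity in the coordinate frame of the screen image, and the pseudoinverse of the online-estimated image Jacobian converts it into joint velocities, so the operator never reasons about individual joints. The interface below (function name, trigger-based depth axis, gain) is a hypothetical illustration, not the researchers' API.

    import numpy as np

    def gamepad_to_joint_velocities(stick_x, stick_y, trigger_z, J_est, gain=0.2):
        """Map a gamepad command to joint velocities via the estimated image
        Jacobian. stick_x and stick_y are joystick deflections along the
        screen's right/up directions; trigger_z is an assumed depth command
        using the 3D rangefinder axis. All names here are placeholders."""
        # Desired velocity of the robot's tool, expressed in the coordinate
        # frame attached to the computer screen.
        v_screen = gain * np.array([stick_x, stick_y, trigger_z])
        # The robot "figures out how to move its individual parts": a
        # least-squares solve through the online-estimated image Jacobian.
        return np.linalg.pinv(J_est) @ v_screen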