To get a mobile robot ready to operate on the factory floor, you normally need an experienced software programmer.

Or, if you’re a Purdue researcher, maybe just your cell phone.

A prototype smartphone app from Purdue University allows a user to quickly program a robot to perform simple tasks, such as picking up parts from one area and delivering them to another.

The team hopes that the app’s ease of use will bring mobile robots to organizations that have until now lacked the necessary resources and know-how.

“Smaller companies can’t afford software programmers or expensive mobile robots,” said Karthik Ramani, Purdue’s Donald W. Feddersen Professor of Mechanical Engineering. “We’ve made it to where they can do the programming themselves, dramatically bringing down the costs of building and programming mobile robots,” he said.

To “do the programming themselves,” users walk the route of a robot’s tasks in real space, scanning QR codes at the starting point and the destination – say, a machine or a storage container.

The user then places the phone into a dock attached to the robot and wirelessly connected to the robot’s basic controls and motor. When docked, the phone acts as the robot’s eyes and brain, using information from the QR codes to follow the specified route and work with the objects.
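Purdue has not published the app’s internals, but the recording step can be pictured roughly as the sketch below, where Waypoint, RecordedRoute, RouteRecorder, and log_pose are hypothetical names standing in for however VRa actually stores the path walked between the two QR scans.

```python
# Hypothetical sketch of capturing a walked route on the phone.
# None of these names come from VRa; they only illustrate the idea of
# logging tracked poses between the start and destination QR scans.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Waypoint:
    x: float        # meters, in the phone's tracking frame
    y: float
    heading: float  # radians

@dataclass
class RecordedRoute:
    start_qr: str                     # QR code scanned at the pick-up point
    end_qr: Optional[str] = None      # QR code scanned at the destination
    waypoints: List[Waypoint] = field(default_factory=list)

class RouteRecorder:
    """Logs the phone's tracked pose between the two QR scans."""

    def __init__(self, start_qr: str):
        self.route = RecordedRoute(start_qr=start_qr)

    def log_pose(self, x: float, y: float, heading: float) -> None:
        # Called periodically while the user walks the intended path.
        self.route.waypoints.append(Waypoint(x, y, heading))

    def finish(self, end_qr: str) -> RecordedRoute:
        # Called when the user scans the destination's QR code.
        self.route.end_qr = end_qr
        return self.route
```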

Once the phone is in proximity to an Internet-connected machine, the augmented-reality app displays a set of pre-defined tasks to initiate, like having the robot pick up a 3D-printed part or navigate to the charging station.
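Conceptually, that step is a lookup from a scanned QR code to the tasks the app can offer for that machine. The snippet below is purely illustrative; the machine IDs and task names are invented, not taken from VRa.

```python
# Illustrative mapping from scanned QR codes to the pre-defined tasks the
# AR overlay could offer; the IDs and task names are made up for the example.
from typing import Dict, List

PREDEFINED_TASKS: Dict[str, List[str]] = {
    "printer-01":  ["pick up 3D-printed part", "wait for print to finish"],
    "storage-bin": ["drop off part"],
    "charger-01":  ["navigate to charging station", "dock and charge"],
}

def tasks_for(qr_id: str) -> List[str]:
    """Return the tasks to display when this machine's QR code is scanned."""
    return PREDEFINED_TASKS.get(qr_id, [])
```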

“All you must do is act out the process with the help of your smartphone, verify the preview, and place the phone on your robot,” Prof. Ramani told Tech Briefs.

The mobile device mediates the interactions between the user, the robot, and the Internet of Things (IoT)-oriented tasks, and guides the path-planning execution with its embedded capability for simultaneous localization and mapping (SLAM) – the same algorithms currently used in self-driving cars and drones.
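As a rough illustration of how the docked phone could turn a recorded route into motion, the sketch below steers toward each logged waypoint using the phone’s pose estimate. The get_slam_pose and base.drive interfaces, like the route structure from the earlier sketch, are stand-ins rather than VRa’s actual API, and the controller is deliberately simple.

```python
import math

# Illustrative waypoint follower: the docked phone supplies a pose estimate
# (e.g., from its SLAM/visual tracking) and the robot base accepts simple
# velocity commands. get_slam_pose() and base.drive() are hypothetical hooks.
def follow_route(route, base, get_slam_pose,
                 reach_tol=0.15, lin_speed=0.3, ang_gain=1.5):
    for wp in route.waypoints:
        while True:
            x, y, heading = get_slam_pose()        # phone's current estimate
            dx, dy = wp.x - x, wp.y - y
            if math.hypot(dx, dy) < reach_tol:     # close enough; next waypoint
                break
            bearing = math.atan2(dy, dx)
            err = math.atan2(math.sin(bearing - heading),
                             math.cos(bearing - heading))
            # Turn toward the waypoint; move forward as the heading error shrinks.
            base.drive(linear=lin_speed * max(0.0, math.cos(err)),
                       angular=ang_gain * err)
    base.drive(linear=0.0, angular=0.0)            # stop at the destination
```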

The app, known as VRa, also provides an option to automatically record video when the phone is docked, so that the user can play it back and evaluate a workflow.

The team used their technology to command robots to water a plant, vacuum a designated area, and transport objects.

Purdue researchers presented their research on the embedded app on June 23 at DIS 2019 in San Diego. The platform is patented through the Purdue Research Foundation Office of Technology Commercialization, with plans to make it available for commercial use.

Prof. Ramani told Tech Briefs what's possible when you make robot programming easy.

Tech Briefs: “Now your phone can become a robot that does the boring work,” says the headline of the press release. What kinds of boring work are we talking about? What kinds of tasks is your invention ideal for?

Prof. Karthik Ramani: There are many things that we as humans do that are not enjoyable or worthwhile for humans to do. These can be simply carrying objects, delivering things, waiting for something to be done and then doing something else, and countless other tasks that we could trade for better, more creative things to do.

Executing many of these tasks can take a long time, be tedious, and sometimes even be harmful to the human body. But creating programs for them used to be difficult. We have simplified the programming for the mobile motions and the particular tasks the robot performs while interacting with the machines.

Tech Briefs: Can you take us through an example of how it works, from the perspective of, say, a manufacturing worker? How do you get a robot to, say, pick and place something?

Prof. Ramani: You just take the phone to where you want to pick something up. Then you tap the smart machine’s QR code in the camera view, walk along the path you want the robot to take until you reach the destination, and tap the destination machine’s QR code.

Now the phone is ready to be put into the docking station of the robot. You put it on the robot base and execute the program by tapping on “Execute.” The robot will do the first task at the starting point (say, pick up an object) and then follow the same path you walked with the phone to deliver it at the destination station.

That opens up robotics to a much larger user base. The cost of programming comes down because anyone can do it, and that broader accessibility coupled with lower cost changes things.
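Pulling that walkthrough together with the hypothetical helpers sketched earlier in the article, the act-out-then-execute flow might read roughly like this. perform_task, robot_base, and phone_slam_pose are additional stand-ins, not part of VRa.

```python
# Sketch of the end-to-end flow, reusing the hypothetical RouteRecorder,
# tasks_for, and follow_route from the earlier snippets. perform_task,
# robot_base, and phone_slam_pose are stand-ins for the real hardware hooks.

# 1. Act it out: tap the start QR, walk the path, tap the destination QR.
recorder = RouteRecorder(start_qr="printer-01")
recorder.log_pose(0.0, 0.0, 0.0)         # the app would log poses continuously
recorder.log_pose(2.0, 0.5, 0.1)
route = recorder.finish(end_qr="storage-bin")

# 2. Dock the phone on the robot and tap "Execute".
perform_task(tasks_for(route.start_qr))  # e.g., pick up the printed part
follow_route(route, base=robot_base, get_slam_pose=phone_slam_pose)
perform_task(tasks_for(route.end_qr))    # e.g., drop the part off
```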

Tech Briefs: Why is this kind of capability — the ability to program a robot this way — so important, do you think?

Prof. Ramani: In general, programming robots becomes the barrier to using them. If we make things easy to access and use, and intuitive for a novice user, then anyone can program robots.

Tech Briefs: How does the phone “know” the kind of robot that it’s becoming?

Prof. Ramani: For this, we have an initialization process in which the user tells the phone what type of robot it is docked on (or being programmed for). Then the corresponding interface is made ready. The programming for a vacuuming robot or a pick-and-deliver robot will be different, and the phone understands this.
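One way to picture that initialization is a small registry of robot profiles keyed by type, from which the app builds the matching interface. The profile names and capabilities below are illustrative only, not VRa’s actual categories.

```python
# Illustrative registry of robot types; the app would load the interface
# matching whichever type the user selects at initialization.
ROBOT_PROFILES = {
    "pick-and-deliver": {"capabilities": ["navigate", "pick", "place"],
                         "interface": "arm and gripper controls"},
    "vacuum":           {"capabilities": ["navigate", "vacuum"],
                         "interface": "coverage-area controls"},
    "watering":         {"capabilities": ["navigate", "pump"],
                         "interface": "watering controls"},
}

def initialize(robot_type: str) -> dict:
    """Return the profile used to build the programming interface."""
    if robot_type not in ROBOT_PROFILES:
        raise ValueError(f"Unknown robot type: {robot_type}")
    return ROBOT_PROFILES[robot_type]
```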

Tech Briefs: How does the robot understand its environment?

Prof. Ramani: We build upon and adapt a well-developed computer vision algorithm for spatial navigation and awareness called Simultaneous Localization and Mapping (SLAM).

Tech Briefs: What’s next? Where will you be testing this?

Prof. Ramani: We are working with manufacturing companies to develop use cases, and our own lab is also testing it and using it for future projects.
