Engineers at the University of Waterloo (Ontario, Canada) have discovered a new way to program robots to help people with dementia locate medicine, glasses, phones, and other objects they need but have lost.
And while the initial focus is on assisting a specific group of people, the technology could someday be used by anyone who has searched high and low for something they’ve misplaced.
“The long-term impact of this is really exciting,” said Dr. Ali Ayub, a post-doctoral fellow in electrical and computer engineering. “A user can be involved not just with a companion robot but a personalized companion robot that can give them more independence.”
Ayub and three colleagues were struck by the rapidly rising number of people coping with dementia, a condition that impairs brain function, causing confusion, memory loss, and disability. Many of these individuals repeatedly forget the location of everyday objects, which diminishes their quality of life and places additional burdens on caregivers.
The research team began with a Fetch mobile manipulator robot, which has a camera for perceiving the world around it. Using an object-detection algorithm, they programmed the robot to detect, track, and keep a memory log of specific objects in its camera view through stored video. Because the robot can distinguish one object from another, it can record the time and date each object enters or leaves its view.
Tests have shown the system is highly accurate. And while some individuals with dementia might find the technology daunting, Ayub said caregivers could readily use it. Moving forward, researchers will conduct user studies with people without disabilities, then people with dementia.
Here is a Tech Briefs interview — edited for length and clarity — with Ayub.
Tech Briefs: I’m sure there are too many to count, but what was the biggest technical challenge you faced throughout your work?
Ayub: The biggest technical challenge for us was tracking objects when the robot was moving a lot or there were many objects in the environment. In a very cluttered environment, it's difficult to track them with high accuracy.
Tech Briefs: Can you explain in simple terms how you used AI to create a new kind of artificial memory and how the technology works?
Ayub: The technology is based on AI — a specific form of AI, which is machine learning, or deep learning in general. We use deep-learning techniques to first process the images taken from the robot's camera, or the video, and then find the actual objects within the particular image or video the robot is seeing through its camera. So, for example, if the robot is looking at a particular table in your office, it could see a laptop, or it could see a phone and a mug, something like that. That's done through a form of deep learning called a convolutional neural network, a machine-learning method to detect or classify objects within a particular image.
Once we find those objects within a particular image, our next goal is to start tracking them. Tracking means making sure that if these objects are moving around in the environment, we are able to follow them. Essentially, if an object is moving around in the video, the robot should be able to track it while it is in that area, so it can know about the object, or store a video of it, even after it leaves the view.
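The team hasn't published its tracking code here, but the association step Ayub describes — following detected objects from frame to frame — is commonly built on intersection-over-union (IoU) matching. The sketch below is a hypothetical illustration of that building block, not the team's actual implementation; all names are invented.

```python
# Hypothetical sketch: greedy IoU-based association of existing tracks
# with new detections, a common building block of object trackers.
# Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def associate(tracks, detections, threshold=0.3):
    """Match tracked objects to this frame's detections by best IoU.

    Returns (matches, unmatched): matches is a list of
    (track_id, detection_index) pairs; unmatched detection indices
    are candidates for new tracks, i.e. new objects entering view.
    """
    matches, used = [], set()
    for track_id, box in tracks.items():
        best, best_j = threshold, None
        for j, det in enumerate(detections):
            score = iou(box, det)
            if j not in used and score >= best:
                best, best_j = score, j
        if best_j is not None:
            matches.append((track_id, best_j))
            used.add(best_j)
    unmatched = [j for j in range(len(detections)) if j not in used]
    return matches, unmatched
```

A track with no match for several frames would be the cue that its object has left the camera's view, which is exactly the event the robot logs.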
Once the tracking happens, if a new object enters the tracking area, the robot can see that there is a new object in the environment and store an image or a video of it in its memory. The same thing happens when an object leaves the area. After that, you can simply access that memory using a graphical user interface or an app on your phone.
One final thing: we have made it a personalized system. Users can define the salient objects the robot should track, and they can also define the total number of days they want the robot to keep the memory.
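The personalized memory Ayub describes — user-chosen salient objects, a user-chosen retention window, and enter/leave events queryable from a GUI or phone app — can be sketched as a small data structure. This is an illustrative assumption of how such a log could work, not the team's code; all class and method names are hypothetical.

```python
# Hypothetical sketch of a personalized object-memory log: the user
# names the salient objects to track and how many days of memory to
# keep; the robot records when those objects enter or leave its view.
from datetime import datetime, timedelta

class ObjectMemory:
    def __init__(self, salient_objects, retention_days=7):
        self.salient = set(salient_objects)      # user-chosen objects
        self.retention = timedelta(days=retention_days)
        self.events = []                         # (timestamp, object, "enter"/"leave")

    def record(self, obj, event, when=None):
        """Log an enter/leave event for a salient object; ignore others."""
        if obj not in self.salient:
            return
        self.events.append((when or datetime.now(), obj, event))
        # Prune events older than the user-chosen retention window.
        cutoff = datetime.now() - self.retention
        self.events = [e for e in self.events if e[0] >= cutoff]

    def last_seen(self, obj):
        """Most recent event for an object -- what a GUI or app would query."""
        for when, name, event in reversed(self.events):
            if name == obj:
                return when, event
        return None
```

A caregiver's app asking "where are the glasses?" would reduce to a `last_seen("glasses")` lookup, returning the timestamp of the most recent sighting.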
Tech Briefs: Research shows that the system is highly accurate. Exactly how accurate is it?
Ayub: Currently, the accuracy is around 90 percent for correctly tracking and storing missing objects in the robot's memory. The other 10 percent of the time, it might fail to detect the object, fail to track or store it, or store the wrong object.
Tech Briefs: Moving forward, the team plans to conduct user studies with people without disabilities, then people with dementia. How is that coming along? What are your next steps? Have they begun yet?
Ayub: We are currently working on that. I am working with two other firms now to improve the robustness of our system so that it can track better, or at least track well in a more cluttered environment.
And the second is to improve the GUI design to make it easier for people to access the robot’s memory. My supervisor did some studies a long time ago to better understand GUI design, and we’re using that to improve the GUI before we really test it with users. Once that’s ready, we hope to start the studies either this fall or in the winter of 2024. Then I think we’ll move on to studies with people with dementia by mid- or late 2024.
Tech Briefs: You’ve said that while the initial focus is on assisting a specific group of people, the technology could someday be used by anyone who has searched high and low for something that was misplaced. My question is how soon do you think we could see the robots out there helping people?
Ayub: That we could see quite soon; it mainly depends on what kind of robot people want and the cost. Right now, the robot we are using is focused more on people with dementia, people who might have disabilities, or older adults, whereas the robot could actually provide assistance beyond memory visualization as well. So our focus is also, for example, on people forgetting to take their medicine: the robot would not only remind them but also go and fetch it for them. It would be almost like a butler or maid robot, something like that.
For general purpose use? I guess it could be deployed. It just would have to be with, perhaps, a different robot.
Tech Briefs: Do you have any advice for engineers aiming to bring their ideas to fruition?
Ayub: Perseverance. Anytime you’re trying to develop your idea, especially using new technology, there are always going to be issues — technical difficulties. You just have to work through them before you get to the final stage. And sometimes it doesn’t even look like you’re making a huge impact initially, because things start very slowly and the system isn’t working perfectly. You just have to follow through and, if you really believe in your system or your idea, continue to work on it and see where it goes.