Subject tests a “smart” walking stick for people who are blind or visually impaired in a mock grocery store. (Image: Nico Goda)

Following a pair of tests, a new take on an old design that uses artificial intelligence (AI), developed by a group of engineers at CU Boulder, has the potential to make life considerably easier for people who are blind or visually impaired.

The group developed a smart walking stick that they hope will one day help blind and visually impaired people with tasks such as shopping for cereal at the grocery store and choosing a place to sit in a crowded dining area.

“I really enjoy grocery shopping and spend a significant amount of time in the store,” said doctoral student Shivendra Agrawal. “A lot of people can’t do that, however, and it can be really restrictive. We think this is a solvable problem.”

The smart walking stick resembles the white canes you can find at retail stores, but it also carries a camera and computer vision software to map and catalog its surroundings. It then guides the user with vibrations in the handle and spoken directions.

“AI and computer vision are improving, and people are using them to build self-driving cars and similar inventions,” Agrawal said. “But these technologies also have the potential to improve quality of life for many people.”

To see if the stick could help users choose a seat in a crowded location, the researchers set up a mock café in the lab, complete with several chairs, patrons, and a few obstacles.

A computer vision algorithm scores boxes of cereal to identify a target product — in this case, a box of Kashi GO Coconut Almond Crunch. (Image: Collaborative Artificial Intelligence and Robotics Lab)

Subjects wore a backpack containing a laptop and picked up the smart stick, swiveling it to survey the room with a camera attached near the cane’s handle. Algorithms running on the laptop identified the room’s features and calculated the best route to a particular seat.
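For readers curious about what “calculating the best route” can involve, here is a minimal, illustrative Python sketch of one standard approach: A* search over a 2D occupancy grid built from detected obstacles. This is not the team’s actual code; the grid, start, and goal below are hypothetical.

```python
# Minimal illustrative sketch: route planning over a 2D occupancy grid.
# Not the CU Boulder team's code; it only shows the general idea of
# computing a walkable path from the user's position to a chosen seat.
from heapq import heappush, heappop

def plan_route(grid, start, goal):
    """A* search on a grid where 0 = free floor and 1 = obstacle."""
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, current = heappop(frontier)
        if current == goal:
            break
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                new_cost = cost[current] + 1
                if (nr, nc) not in cost or new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    came_from[(nr, nc)] = current
                    heappush(frontier, (new_cost + h((nr, nc)), (nr, nc)))

    if goal not in came_from:
        return None  # no walkable route found
    path, node = [], goal
    while node is not None:  # walk back from the goal to recover the path
        path.append(node)
        node = came_from[node]
    return path[::-1]

# Hypothetical 5x5 room: 1s are chairs/obstacles found by the vision system.
room = [[0, 0, 0, 1, 0],
        [0, 1, 0, 1, 0],
        [0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 0, 0]]
print(plan_route(room, start=(0, 0), goal=(4, 4)))
```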

The study showed that subjects found a desirable chair in 10 of 12 trials of varying difficulty and located at least one open chair every time. The subjects were all sighted people wearing blindfolds; the team plans to conduct further testing with people who are blind or visually impaired once the technology is more dependable.

“Shivendra’s work is the perfect combination of technical innovation and impactful application, going beyond navigation to bring advancements in underexplored areas, such as assisting people with visual impairment with social convention adherence or finding and grasping objects,” said assistant professor Bradley Hayes.

In yet-to-be-published research, the group adapted the smart stick to find and grasp products in grocery store aisles lined with dozens of similar-looking and -feeling options.

For this test, the team again set up a mock environment in the lab: a grocery shelf stocked with several different cereal boxes. The researchers created a database of product photos in their software, and the subjects used the smart stick to scan the shelf for a certain product.

“It assigns a score to the objects present, selecting what is the most likely product,” said Agrawal. “Then the system issues commands like ‘move a little bit to your left.’”
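As an illustration of the scoring idea Agrawal describes, here is a minimal Python sketch that compares detected objects against a database of product feature vectors and ranks them by similarity. The feature extractor, database entries, and vectors are hypothetical stand-ins, not the team’s actual model.

```python
# Minimal sketch of the product-scoring idea: each detected box is compared
# against a database of known product feature vectors, and the best-scoring
# match is chosen. All names and numbers below are hypothetical.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical database: product name -> feature vector from its catalog photo.
product_db = {
    "Kashi GO Coconut Almond Crunch": [0.9, 0.1, 0.4],
    "Generic Corn Flakes": [0.2, 0.8, 0.3],
}

def score_detections(detections, target):
    """Score each detected object against the target product's features."""
    target_vec = product_db[target]
    scored = [(cosine_similarity(vec, target_vec), box) for box, vec in detections]
    scored.sort(reverse=True)
    return scored  # highest score first = most likely match

# Hypothetical detections: (bounding box on the shelf, extracted feature vector).
detections = [((10, 40, 60, 120), [0.85, 0.15, 0.42]),
              ((70, 40, 120, 120), [0.25, 0.75, 0.33])]
best_score, best_box = score_detections(detections, "Kashi GO Coconut Almond Crunch")[0]
print(best_score, best_box)
```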

It’ll be a while before the device hits the market, as the researchers want to make the system more compact so it can run off a standard smartphone attached to the cane.

“Our aim is to make this technology mature but also attract other researchers into this field of assistive robotics,” said Agrawal. “We think assistive robotics has the potential to change the world.”

Here is a Tech Briefs interview with Hayes and Agrawal, edited for length and clarity.

Tech Briefs: What inspired your research?

Hayes and Agrawal: Our work in the Collaborative Artificial Intelligence and Robotics (CAIRO) Laboratory focuses on building technology enabling autonomous systems, like robots, to operate safely and effectively with and around humans, pushing the boundaries of what is possible through human-machine teaming. While the lab primarily performs research that explores techniques for making these systems more capable, trustworthy, and reliable, we realized that many of the technologies we develop for robots could be directly applied within the space of assistive technology. We see a huge, transformative potential for machine learning, AI, and robotics research to enhance and create new assistive technologies that will positively impact the lives of millions.

Tech Briefs: What were the biggest technical challenges you faced?

Hayes and Agrawal: Our algorithms need advanced sensing capabilities to “see” the highly unstructured, non-uniform, and cluttered world around us and to parse that information into a usable form. Real-world scenes present many challenges due to lighting variations, dynamic elements, and occlusions. Our assistive technology must operate even in poor visual conditions; otherwise it won’t be perceived as trustworthy or dependable.

Another challenging aspect was determining how the system should interact with its users. Our system not only has to parse and make sense of the world but must also convey this information to users without cognitively burdening them. Striking a balance to create a conveyance system that helps users achieve their goals while providing only the information they need remains an open challenge.

Finally, reducing the compute requirements and power consumption of this technology to something that can be run with a modern cell phone is an open technical challenge that we are addressing.

Tech Briefs: What’s the inevitable next step? Any plans for future research?

Hayes and Agrawal: Our initial motivation was to harness the power of AI/ML to create algorithms that have the potential to help people with visual impairment (PVI) with day-to-day activities. As we continue to develop new capabilities, we will be targeting use cases identified by PVI as those that are simultaneously most positively impactful for them and least covered by existing assistive devices/aids. Continuing to build the foundational techniques enabling the device to perceive and provide guidance while considering social norms that may be difficult to observe without sight is a priority, as these are not well addressed by existing solutions. Our next steps will invariably involve working closely with the target audience for this technology, the PVI community, to get their feedback to improve existing functionality and to prioritize next steps.

Tech Briefs: The seat experiment was performed with blindfolded sighted people. How different will the results be with actual sight-impaired subjects? What variables do you anticipate?

Hayes and Agrawal: To our knowledge, no existing technology can look at seats and prioritize them based on social preference, privacy, and intimacy, so there may not be an objective comparison baseline for the technology itself. However, we anticipate that PVI will be much more familiar with using the device, as its form factor is still very similar to a standard white cane. We hypothesize that we would find differences in subjective user experience more so than in any objective system success metrics.

Tech Briefs: Can you explain in simple terms how the technology works?

Hayes and Agrawal: Currently, our system can help with two tasks in a lab setting.

1) Finding socially preferred seats: Our system creates a map of the space in real time using its camera and locates relevant objects, such as chairs, benches, people, and backpacks. It assigns social scores to seats to optimize for a user’s privacy and intimacy with strangers based on the surrounding objects and nearby people. It then selects the ‘best’ seat and provides haptic navigational guidance through two vibrational coin motors placed on the cane’s handle that sit just under the user’s thumb on the left and right sides. (A minimal sketch of this seat-scoring idea appears after this list.)

2) Grasping desired grocery products: The system first uses its camera to locate the desired product on the shelf. Once it spots the product, it issues verbal commands to guide the user’s hand to it, such as “Move a little bit to the left,” “Move 6 inches up,” or “Move a foot forward.” Once the user is nearly touching the target object, the system tells them to grasp it. (A second sketch below illustrates this guidance step.)
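As a rough illustration of the seat-scoring idea in item 1, the following Python sketch assigns each open seat a score that rewards distance from strangers and picks the best one. The positions, weights, and thresholds are hypothetical assumptions, not the team’s published method.

```python
# Minimal sketch of seat scoring: each candidate seat gets a "social score"
# that rewards distance from strangers; the best-scoring seat becomes the
# navigation goal. Positions and weights below are hypothetical.
import math

def social_score(seat, people, min_comfort=1.5):
    """Higher score = more privacy; seats too close to a stranger score zero."""
    if not people:
        return 1.0
    nearest = min(math.dist(seat, p) for p in people)
    if nearest < min_comfort:            # violates personal space
        return 0.0
    return 1.0 - 1.0 / (1.0 + nearest)   # grows with distance, capped below 1

# Hypothetical (x, y) positions in meters taken from the real-time map.
open_seats = [(1.0, 2.0), (4.0, 5.0), (2.5, 0.5)]
people = [(1.2, 2.5), (4.5, 0.5)]

best_seat = max(open_seats, key=lambda s: social_score(s, people))
print("Guide user to seat at", best_seat)
```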
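And here is a comparable sketch of the verbal-guidance step in item 2: turning the measured offset between the user’s hand and the target product into simple spoken commands like those quoted above. The axis conventions, thresholds, and phrasing are illustrative assumptions.

```python
# Minimal sketch of verbal guidance: given the offset between the user's hand
# and the target product, emit a simple spoken command until the hand is close
# enough to grasp. Thresholds and phrasing are hypothetical.

def guidance_command(dx, dy, dz, grasp_range=0.05):
    """Offsets in meters: dx = left/right, dy = up/down, dz = forward/back."""
    if max(abs(dx), abs(dy), abs(dz)) <= grasp_range:
        return "Grasp the product."
    # Correct the largest remaining offset first, one axis at a time.
    axis, value = max((("left/right", dx), ("up/down", dy), ("forward/back", dz)),
                      key=lambda t: abs(t[1]))
    inches = abs(value) / 0.0254
    if axis == "left/right":
        return f"Move {inches:.0f} inches to the {'right' if value > 0 else 'left'}."
    if axis == "up/down":
        return f"Move {inches:.0f} inches {'up' if value > 0 else 'down'}."
    return f"Move {inches:.0f} inches {'forward' if value > 0 else 'back'}."

# Hypothetical offset: product is 15 cm to the left and 10 cm above the hand.
print(guidance_command(dx=-0.15, dy=0.10, dz=0.0))  # -> "Move 6 inches to the left."
```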

Tech Briefs: Do you have any advice for engineers aiming to bring their ideas to fruition/market?

Hayes and Agrawal: I think need-finding, the process of identifying what needs to be solved and how much automation is desired, is one of the most important aspects of creating real-world assistive technology. Y Combinator’s motto of “Make something people want” is simple and remarkably poignant. Researchers and engineers can increase their chances of creating something truly impactful by involving members of the target population early in the process. There is a fine line, though, as approaching people with half-baked ideas and the promise of a technology that cannot be realized any time soon can be detrimental to the effort and poison the well for others.