Researchers at the University of Georgia have designed an AI-powered, voice-activated backpack that can help the visually impaired navigate and perceive the world around them.
Potentially eliminating the need for guide dogs and canes, the backpack helps the wearer detect common hazards such as traffic signs, overhanging obstacles, crosswalks, moving objects and changes in elevation, all while running on a low-power, interactive device.
Currently, visual assistance systems for navigation are limited and range from Global Positioning System-based, voice-assisted smartphone apps to camera-enabled smart walking stick solutions. But these systems lack the depth perception necessary to facilitate independent navigation.
The new system is housed in a small backpack containing a host computing unit, such as a laptop. A vest jacket conceals the camera, and a fanny pack holds a pocket-size battery capable of powering the system for approximately eight hours. A Luxonis OAK-D spatial AI camera can be affixed to either the vest or the fanny pack and connected to the computing unit in the backpack; in the vest configuration, the OAK-D sits on the inside of the vest and sees out through three tiny holes.
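The backpack's own code has not been published, but Luxonis' public `depthai` Python library is the standard way to drive an OAK-D from a host computer. A minimal sketch of how the host unit might configure the color camera and stereo depth streams (node choices and parameters here are illustrative assumptions, not details from the article):

```python
import depthai as dai

# Build a DepthAI pipeline mirroring the hardware described above:
# one color camera plus a stereo mono pair for depth.
pipeline = dai.Pipeline()

cam_rgb = pipeline.create(dai.node.ColorCamera)
cam_rgb.setPreviewSize(300, 300)   # small preview, sized for on-chip inference
cam_rgb.setInterleaved(False)

mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Stream color frames and the depth map back to the host in the backpack.
xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam_rgb.preview.link(xout_rgb.input)

xout_depth = pipeline.create(dai.node.XLinkOut)
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

with dai.Device(pipeline) as device:
    q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    q_depth = device.getOutputQueue("depth", maxSize=4, blocking=False)
    while True:
        frame = q_rgb.get().getCvFrame()   # BGR numpy array
        depth = q_depth.get().getFrame()   # uint16 depth map, in millimeters
        # ...hand frames off to the detection and speech layers here...
```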
A Bluetooth-enabled earphone lets the user interact with the system via voice queries and commands, and the system responds with verbal information. As the user moves through their environment, the system audibly conveys information about common obstacles including signs, tree branches and pedestrians. It also warns of upcoming crosswalks, curbs, staircases and entryways.
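The article does not name the speech components, so purely as an illustration, here is how such announcements could be voiced with the offline `pyttsx3` text-to-speech library; the `announce` helper and its (label, distance) input format are hypothetical stand-ins for the real system's detector output:

```python
import pyttsx3

# Offline text-to-speech; audio goes to the active output device,
# e.g. a paired Bluetooth earphone.
engine = pyttsx3.init()
engine.setProperty("rate", 175)  # speaking rate in words per minute

def announce(detections):
    """Speak detected obstacles, nearest first.

    `detections` is a hypothetical list of (label, distance_m) tuples
    from the vision pipeline, e.g. [("pedestrian", 2.0), ("curb", 4.5)].
    """
    for label, distance_m in sorted(detections, key=lambda d: d[1]):
        engine.say(f"{label}, {distance_m:.0f} meters ahead")
    engine.runAndWait()

announce([("crosswalk", 4.0), ("pedestrian", 2.0)])
```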
The OAK-D unit is a versatile and powerful AI device that runs on an Intel Movidius Myriad X VPU and the Intel® Distribution of OpenVINO™ toolkit for on-chip edge AI inferencing. It can run advanced neural networks while providing accelerated computer vision functions and a real-time depth map from its stereo pair, along with color information from a single 4K camera.
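To make "on-chip edge AI inferencing" concrete: DepthAI exposes spatial detection nodes that run an OpenVINO-compiled model on the Myriad X and fuse each detection with the stereo depth map, returning 3D coordinates per object. A hedged, self-contained sketch, in which the blob path, model choice, and confidence threshold are placeholders rather than the authors' configuration:

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)   # MobileNet-SSD expects 300x300 input
cam.setInterleaved(False)

mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Spatial detection network: the OpenVINO .blob runs entirely on the
# VPU, and detections are fused with depth into X/Y/Z in millimeters.
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")  # placeholder OpenVINO-compiled model
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

xout_nn = pipeline.create(dai.node.XLinkOut)
xout_nn.setStreamName("detections")
nn.out.link(xout_nn.input)

with dai.Device(pipeline) as device:
    q_nn = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in q_nn.get().detections:
            meters = det.spatialCoordinates.z / 1000.0
            print(f"label {det.label}: {meters:.1f} m ahead")
```

Because inference and depth fusion happen on the camera itself, the host laptop only has to consume lightweight detection messages, which is consistent with the system's low-power design.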
Jagadish K. Mahendran, a developer at the Institute for Artificial Intelligence, University of Georgia, said: “Last year when I met up with a visually impaired friend, I was struck by the irony that while I have been teaching robots to see, there are many people who cannot see and need help. This motivated me to build the visual assistance system with OpenCV’s Artificial Intelligence Kit with Depth (OAK-D), powered by Intel.”
Mahendran plans to open-source the technology and publish a research paper about the work.