Living with a physical disability presents many challenges, not the least of which is navigating one’s environment. Getting around can be particularly difficult for blind people, since layouts and obstacles often change suddenly, sometimes without their knowledge. With this in mind, IBM and Carnegie Mellon University (CMU) have developed an assistive technology application for navigation.
The app, called NavCog, draws on the skills blind people already use to navigate. It relies on sensors in the environment to generate cues a blind person can follow through his or her smartphone. Some of these sensors are built into the smartphone itself; others are Bluetooth beacons placed along walkways and in other high-traffic areas. Signals from the beacons are converted into vibrations on the phone, or into audio cues “whispered” into the user’s ear through earbuds, warning him or her of obstacles.
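As a rough illustration of how a beacon signal might become a haptic cue, the sketch below estimates distance from a beacon’s received signal strength (RSSI) using the standard log-distance path-loss model, then maps that distance to a cue level. The function names, thresholds, and calibration constants here are illustrative assumptions, not NavCog’s actual implementation:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Estimate the distance (in meters) to a Bluetooth beacon from its
    received signal strength, using the log-distance path-loss model.
    tx_power_dbm is the beacon's calibrated RSSI at 1 meter (assumed here)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def vibration_cue(rssi_dbm, warn_distance_m=3.0):
    """Map an estimated beacon distance to a simple haptic cue level.
    The three-level scheme and thresholds are hypothetical."""
    distance = rssi_to_distance(rssi_dbm)
    if distance <= warn_distance_m / 3:
        return "strong"   # very close: warn firmly
    if distance <= warn_distance_m:
        return "gentle"   # approaching: light pulse
    return "none"         # far away: stay quiet
```

In practice, raw RSSI is noisy, so a real app would smooth readings (for example, with a moving average) before converting them to cues.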
NavCog is available through Bluemix, IBM’s cloud platform. In addition to its navigation cues, NavCog includes a map-editing tool and localization algorithms that help blind people determine, in real time, where they are and which direction they are facing. This increases the person’s independence and reduces the likelihood of getting lost or injured, especially in large areas such as college campuses.
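Localization from several nearby beacons can be sketched, very loosely, as a weighted-centroid estimate: the phone’s position is approximated as an average of the surveyed beacon locations, weighted toward the closest beacons. This is a simplified stand-in for NavCog’s actual localization algorithms, which are not described in the article:

```python
def weighted_centroid(beacons):
    """Estimate a 2-D position from nearby beacons.
    `beacons` is a list of ((x, y), estimated_distance_m) pairs, where
    (x, y) is the beacon's known location on the building map.
    Closer beacons (smaller distances) receive proportionally more weight."""
    weights = [1.0 / max(dist, 0.1) for (_, dist) in beacons]
    total = sum(weights)
    x = sum(w * pos[0] for ((pos, _), w) in zip(beacons, weights)) / total
    y = sum(w * pos[1] for ((pos, _), w) in zip(beacons, weights)) / total
    return (x, y)
```

A production system would also fuse this position estimate with the phone’s built-in sensors, such as the compass and accelerometer, to determine which way the user is facing.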
One does not need to be legally blind to benefit from NavCog. A person with low vision or perceptual issues could use NavCog’s 3-D modeling tool, which uses the person’s smartphone to convert the surrounding environment into 3-D images. These images let the person see everything around him or her at once so the safest routes through indoor or outdoor areas can be planned.
Combining multiple assistive technologies into one app like NavCog is known as “cognitive assistance,” which augments missing or weakened abilities. Currently, cognitive assistance focuses on helping the blind but could expand to include other disabilities.
Science and technology experts are working on additions to cognitive assistance, such as facial recognition and ultrasound technology that would pinpoint locations more accurately. The goal is to “open the new real-world accessibility era for the blind in the near future,” said Martial Hebert, director of Carnegie Mellon’s Robotics Institute.