Scientists at Caltech are using AR to give objects in a room a voice, allowing the visually impaired to hear their surroundings.
According to the World Health Organization, approximately 1.3 billion people worldwide live with some form of vision impairment. Of those, 217 million have moderate to severe vision impairment, while 36 million are classified as blind.
A recently published report entitled ‘Augmented Reality Powers a Cognitive Assistant for the Blind’ details the work of three Caltech scientists exploring how AR could help the blind navigate new and unfamiliar locations: computer vision algorithms allow objects in the real world to “announce themselves” to the user as they enter a room.
The team of Markus Meister, Yang Liu, and USC postdoctoral scholar Noelle Stiles (PhD ’16) used the Microsoft HoloLens’s ability to create a digital mesh over a real-world “scene.” Using custom software called the Cognitive Augmented Reality Assistant (CARA), they converted that information into audio messages, giving each object a “voice” that you hear while wearing the headset.
Through CARA, the HoloLens detects objects in your surroundings and uses spatialized sound to tell you what each one is by having it call out to you. If an object is to your left, its voice comes from the left side of the AR headset, while an object on the right speaks to you from the right side. The pitch of the voice changes depending on how far you are from the object: the closer the object, the higher the pitch.
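The mapping described above — direction to stereo balance, distance to pitch — can be sketched in a few lines of Python. This is an illustrative sketch only; the function name, the sine-based panning, and the specific pitch curve are assumptions, not CARA’s actual implementation:

```python
import math

def spatialize(azimuth_deg, distance_m, max_distance_m=10.0):
    """Map an object's position to a stereo pan and a voice pitch.

    azimuth_deg: angle from straight ahead (negative = left, positive = right).
    distance_m:  distance to the object in meters.

    Hypothetical mapping: pan follows the sine of the azimuth, and
    pitch rises as the object gets closer, as the article describes.
    """
    # Pan in [-1, 1]: -1 is fully left, +1 is fully right.
    pan = max(-1.0, min(1.0, math.sin(math.radians(azimuth_deg))))

    # Pitch multiplier: 1.0 at maximum range, rising toward 2.0 up close.
    proximity = max(0.0, 1.0 - distance_m / max_distance_m)
    pitch = 1.0 + proximity

    return pan, pitch

# An object directly to the left and fairly close:
pan, pitch = spatialize(azimuth_deg=-90, distance_m=2.0)
# pan is -1.0 (voice from the left ear), pitch is 1.8 (raised, since it's near)
```

A real headset would feed these values into a spatial audio engine rather than compute them by hand, but the principle is the same: position becomes panning, proximity becomes pitch.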
To avoid bombarding the user with too much information, the Caltech team devised several modes to simplify the experience. In ‘Spotlight Mode,’ the user points their face directly at an object, and only that object announces what it is. In ‘Scan Mode,’ CARA scans the objects in the room and each item announces itself in turn, from left to right.
The third mode, ‘Target Mode,’ lets the user employ any object in the room as a guide to help navigate their surroundings. This approach is very similar to what you would experience if you were using an audio guide at a museum, but instead of information on art, you’re receiving information on your surroundings. Think of it as having audio radar.
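The first two modes amount to different filters over the same set of detected objects. A minimal sketch, assuming made-up object records with a name and an azimuth angle (this is not the actual CARA code):

```python
def scan_mode(objects):
    """Scan Mode: announce every detected object, sweeping left to right."""
    return [o["name"] for o in sorted(objects, key=lambda o: o["azimuth"])]

def spotlight_mode(objects, gaze_azimuth_deg, fov_deg=15):
    """Spotlight Mode: announce only the object the user is facing.

    fov_deg is a hypothetical tolerance for how far off-center an
    object can be and still count as 'faced'.
    """
    facing = [o for o in objects
              if abs(o["azimuth"] - gaze_azimuth_deg) <= fov_deg]
    if not facing:
        return None
    # Pick the object closest to the center of gaze.
    return min(facing, key=lambda o: abs(o["azimuth"] - gaze_azimuth_deg))["name"]

objects = [
    {"name": "chair", "azimuth": 40},   # to the right
    {"name": "door", "azimuth": -30},   # to the left
    {"name": "table", "azimuth": 5},    # nearly straight ahead
]

print(scan_mode(objects))              # ['door', 'table', 'chair']
print(spotlight_mode(objects, 0))      # 'table'
```

Target Mode would build on the same data, repeatedly announcing a single chosen object so its voice acts as an audio beacon to walk toward.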
In a video provided by Caltech, Markus Meister, Professor of Biological Sciences/Executive Officer for Neurobiology at Caltech, stated, “All the technology of VR and AR is about acquiring the information from the scene and then converting it into other uses.”
During Caltech’s trials, seven blind subjects used the HoloLens with the goal of navigating to Liu’s second-floor office. Upon putting on the AR headset, test subjects were greeted with a message stating that navigation had started, followed by the instruction ‘Follow me.’
With minimal training, each subject navigated through a lobby, two sets of stairs, and several corners, all leading up to Liu’s office, without any issues on their first attempt. And because the virtual guide is designed to stay a few steps ahead, test subjects were able to “see” what was ahead of them.
Tommy Marcellus, an individual who isn’t legally blind but who without glasses wouldn’t be able to drive, walk around a park, or even work, tells VRScout, “This type of technology would absolutely open up possibilities for people less fortunate than me by giving them some additional independence and safety,” adding, “something as simple as stepping up on a curb and entering doorways is something that we all take for granted. I couldn’t imagine what that’s like if you’re legally blind.”
Marcellus does bring up the fact that this technology wouldn’t allow him to drive without glasses, watch a movie, or recognize a friend across the street, but he strongly believes that AR could replace the white cane, which many blind individuals still use to scan their surroundings for obstacles and orientation marks. “We could see people wearing AR headsets instead of using their white canes to get around, and they’d be able to do it with more accuracy,” said Marcellus.
So where does the team go from here? According to Meister, “The sky’s the limit for what kind of functionalities you want to build into a device like that, because it’s essentially a software problem.”
Early testing of Caltech’s work shows a lot of promise, but there is still plenty of work ahead. The next phase of testing should explore how well the HoloLens and CARA perform in larger public spaces with crowds of people constantly moving and shifting, such as shopping malls, stores, and amusement parks.
The study was funded by the Shurl and Kay Curci Foundation.