
High-Tech Eyes

New Independence for People with Visual Impairments.


John Kosowatz is senior editor at ASME.org.

Mechanical Engineering 139(03), 36-41 (Mar 01, 2017) (6 pages) Paper No: ME-17-MAR2; doi: 10.1115/1.2017-Mar-2

This article provides an overview of high-tech sensors, visual detection software, and mobile computing applications being developed to help visually impaired people navigate. By adapting technology developed for robots, automobiles, and other products, researchers and developers are creating wearable devices that can aid the visually impaired as they navigate through their daily routines—even identifying people and places. The Eyeronman system, developed by NYU's Visuomotor Integration Laboratory and Tactile Navigation Tools, combines a sensor-laden outer garment or belt with a vest studded with vibrating actuators. The sensors detect objects in the immediate environment and relay their locations via buzzes on the wearer's torso. At OrCam, a computer vision company in Jerusalem, a team of programmers, computer engineers, and hardware designers has developed the MyEye device, which attaches to the temple of a pair of eyeglasses. The device instructs the user on how to store items in memory, including things such as credit cards and the faces of friends and family.

J.R. RIZZO HAS KNOWN since he was a teenager that his retinas were deteriorating and he would someday be blind. So it was understandable that he became fascinated by animals with naturally poor eyesight—bats come immediately to mind, but any number of fish and mammals get around just fine in environments that seem murky at best.

As Rizzo read up on those animals, he noticed that many had something in common. “Multisensory integration,” Rizzo said, explaining that these animals relied on other senses in addition to eyesight to understand their position in the world.

“It was amazing to me, from a species standpoint, how something with very poor vision used different sensory input,” Rizzo said. “Dolphins use echolocation. What could that mean for humans?”

Rizzo, who is now legally blind, is an assistant professor of rehabilitative medicine at New York University and one of a number of technologists working to answer that question. They are using high-tech sensors, visual detection software, and mobile computing power to develop new means to enable visually impaired people to navigate a world designed for—and by—sighted people.

It's a big advance. Until just recently, navigational aids for the blind and visually impaired had changed little from a white cane or a guide dog.

While those aids can provide a welcome degree of mobility, they are unable to pick up critical clues from the surroundings, such as reading a sign or detecting an onrushing object, that sighted people take for granted.

The Eyeronman system, developed by NYU's Visuomotor Integration Laboratory and Tactile Navigation Tools, combines a sensor-laden outer garment or belt with a vest studded with vibrating actuators (left). The sensors detect objects in the immediate environment (above) and relay their locations via buzzes on the wearer's torso.


Already a number of smartphone apps help blind people perform basic tasks: identifying denominations of money or reading labels on a product package. By adapting technology developed for robots, automobiles, and other products, researchers and developers are creating wearable devices that can aid the visually impaired as they navigate through their daily routines—even identifying people and places.

“We want to give people information they can no longer see,” said Yonatan Wexler, executive vice president of research and development at OrCam, a computer vision company in Jerusalem.

OrCam spun off from Mobileye, an Israeli firm that pioneered advanced collision-avoidance systems for vehicles. Its cofounders, industrial engineer Ziv Aviram and computer scientist Amnon Shashua, had hoped to apply the advanced computer-vision algorithms they had developed for cars to a more personal level.

“There's a huge number of people whose sight is a problem,” over 14 million in the U.S. alone, said Wexler. “What they miss is information—a lot of information that comes through the eyes.”

Before Wexler and his team could apply the Mobileye technology to the challenge of vision impairment, they needed to build a platform almost from scratch. In 2010, when the company was founded, “We thought the technology was about to mature,” he said. But to achieve their goal, “We had to develop the technology.”

Wexler pointed out that research into teaching computers to see goes back to the 1970s, but visual perception was “still an open problem. We see innately, but you really don’t realize how complex the act of seeing is.”

At the time, one of the devices on the market most similar to what they were trying to produce was the Intel Reader, Intel's handheld device with a camera and processor that “reads” printed material, converting it into digital text and then reading it aloud to the user. But Wexler said it was bulkier than what they were seeking and took too long to read back the text.

“When you look at something, the brain starts reading,” he said. “We wanted to read any text on any surface. There was no technology that could do that, so we started to develop our own reading capability.”

It took OrCam's team of programmers, computer engineers, and hardware designers five years to develop the firm's MyEye device, which attaches to the temple of a pair of eyeglasses. The hardware features a front-facing camera and a bone-conduction speaker in an arm extending toward the wearer's ear. A cable connects the device to a pocket-size computer that uses its own algorithms for computer vision and an i.MX 6Quad processor to interpret visuals and process them in real time.

The user activates the device by pointing a finger or pushing a button. The vision system scans the field of view, and if it recognizes an object or a location, the computer announces the name through the speaker.
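As a rough illustration of that point-and-announce flow (and not OrCam's actual software), the logic might look something like the Python sketch below, where the recognizer is a simple lookup table and the speaker is a print statement standing in for the bone-conduction earpiece:

```python
# Illustrative sketch of the point-and-announce flow described above.
# The recognizer and speaker are stand-ins, not OrCam's actual software.
KNOWN_ITEMS = {
    "curved-green-paper": "twenty-dollar bill",
    "red-rectangle-card": "credit card",
    "familiar-face-42": "a familiar face",
}

def recognize(visual_cue):
    """Stand-in for the computer-vision step: map what the camera sees to a name."""
    return KNOWN_ITEMS.get(visual_cue)

def announce(text):
    """Stand-in for the bone-conduction speaker."""
    print(f"[speaker] {text}")

def on_activation(visual_cue):
    """Called when the user points a finger or presses the button."""
    name = recognize(visual_cue)
    announce(name if name else "Nothing recognized yet.")

on_activation("red-rectangle-card")   # -> [speaker] credit card
on_activation("unfamiliar-object")    # -> [speaker] Nothing recognized yet.
```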

Much like the human eye, the MyEye works best in lighted environments, but the firm claims a flashlight is adequate in darkened areas. The system comes preloaded with a set of objects it can recognize, but the user can easily add to the library by shaking the device to add an item or waving a hand to add a face or a place. The device instructs the user on how to store items in memory, including things such as credit cards and faces of friends and family.

Wexler said the team worried about privacy, so OrCam is designed specifically not to be a recording device. It does not store images, only signatures.

“It reads and tells the user what it has read, and then forgets about it,” he said. “So if it is hacked, [the hacker] will not find anything to harm the customer.”
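The privacy design Wexler describes can be pictured with a short, hypothetical sketch: the device keeps only a compact fingerprint of what it has been taught, never the picture itself. In a real system the signature would be a feature embedding robust to lighting and viewpoint; the hash below is used purely to show that nothing stored can be turned back into an image.

```python
# Sketch of the "signatures, not images" idea. The hashing scheme is purely
# illustrative; OrCam has not published its signature format.
import hashlib

library = {}   # signature -> spoken name

def signature(image_bytes):
    """Reduce a captured frame to a short, non-reversible fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()[:16]

def remember(name, image_bytes):
    """Teach the device a new item or face; the raw image is discarded."""
    library[signature(image_bytes)] = name

def identify(image_bytes):
    """Return the stored name for this frame, or None if it is unknown."""
    return library.get(signature(image_bytes))

remember("your credit card", b"\x11\x22\x33\x44")   # frame captured during training
print(identify(b"\x11\x22\x33\x44"))                # -> your credit card
print(identify(b"\x99\x88\x77\x66"))                # -> None; no images to leak
```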

Kris Kitani, assistant research professor in the Robotics Institute at Carnegie Mellon University, has been watching OrCam's progress. “They are headed in the right direction,” Kitani said. “You can get a lot of information from a camera.”

Kitani is part of a team at Carnegie Mellon that last year released an open-source platform to develop NavCog, a smartphone app that taps into sensors and Bluetooth beacons to enable visually impaired users to move about without traditional assistance.

For now, the app only works on the Carnegie Mellon campus, where beacons are installed throughout halls and pathways. The app analyzes data pulled from the beacon and signals the user through smartphone vibrations or voice through earbuds, but developers want to push it further.

The system, which Kitani and his team developed with help from researchers at IBM, works a bit like GPS for vehicles. But GPS has a positional accuracy of about 10 feet, which is much too coarse for pedestrians to use.
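This is not NavCog's actual algorithm, but a generic beacon-based position estimate can be sketched as follows: each beacon's signal strength is converted to a rough distance with a log-distance path-loss model, and the user's position is taken as a distance-weighted average of the known beacon locations. The constants are illustrative.

```python
# Generic sketch of indoor positioning from Bluetooth beacons at known
# locations; not NavCog's actual algorithm. Constants are illustrative.

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Rough distance in meters from received signal strength (log-distance model)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def estimate_position(beacons):
    """beacons: list of (x, y, rssi_dbm) for beacons at known hallway positions."""
    weighted_x = weighted_y = total_weight = 0.0
    for x, y, rssi in beacons:
        weight = 1.0 / max(rssi_to_distance(rssi), 0.1)  # nearer beacons count more
        weighted_x += x * weight
        weighted_y += y * weight
        total_weight += weight
    return weighted_x / total_weight, weighted_y / total_weight

# Three hallway beacons heard at different strengths (coordinates in meters):
print(estimate_position([(0, 0, -62), (5, 0, -70), (0, 8, -75)]))
```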

“We want to develop accurate localization, as opposed to the resolution from an automobile-based GPS system,” Kitani said. “That's fine for cars. But with a blind person, you have issues like, ‘What part of the sidewalk are you walking on.’ ”

“We see innately, but you really don’t realize how complex the act of seeing is.”

— Yonatan Wexler, OrCam

Developers can access cognitive assistance tools through the IBM Bluemix cloud computing service. The toolkit includes an app for navigation, a map editing tool, and localization algorithms to help blind people identify in real time where they are, as well as what direction they are facing and local environmental information. A computer-vision navigation tool can turn smartphone images of the localized environment into a 3-D space model to improve localization and navigation.
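To make the idea concrete, the hypothetical sketch below turns a localization fix (position plus heading) into a short spoken cue toward the next waypoint. It is a generic illustration of what such a toolkit does, not code from the IBM or Carnegie Mellon tools.

```python
# Hypothetical sketch: turn a localization fix (position + heading) into a
# spoken navigation cue. Not IBM's or Carnegie Mellon's actual code.
import math

def bearing_to(user_xy, target_xy):
    """Compass-style bearing in degrees from the user to the next waypoint."""
    dx, dy = target_xy[0] - user_xy[0], target_xy[1] - user_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def spoken_cue(user_xy, user_heading_deg, target_xy):
    """Convert the angle between heading and waypoint into a short instruction."""
    turn = (bearing_to(user_xy, target_xy) - user_heading_deg + 540) % 360 - 180
    if abs(turn) < 20:
        return "continue straight"
    return f"turn {'right' if turn > 0 else 'left'} about {abs(round(turn))} degrees"

# User at (0, 0) facing north (0 degrees); the next waypoint is to the east:
print(spoken_cue((0, 0), 0, (5, 0)))   # -> turn right about 90 degrees
```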

Team leader Chieko Asakawa, a visiting faculty member at Carnegie Mellon and an IBM fellow, said, “To gain further independence and help improve the quality of life, ubiquitous connectivity across indoor and outdoor environments is necessary.” Asakawa is herself visually impaired.

The team is working to expand the app, aware that relying on Bluetooth beacons can be limiting. Kitani believes one key to developing the network beyond the Carnegie Mellon campus is the eventual availability of low-cost beacons.

Looking ahead, he said the team wants to add a smartphone-based navigation system working with a camera.

“Ultrasound operates at the speed of sound, but you still need to wait for the chirp off of the echo.”

— J.R. Rizzo, New York University

“One of the things about using computer vision in the wild is that the same object can look radically different at different times of the day,” Kitani said. “And any kind of technology using computer vision needs a level of robustness.”

Back in New York, Rizzo is developing a system that could be independent of beacons and other markers. At NYU's Visuomotor Integration Laboratory, where Rizzo is the director, and at his startup, Tactile Navigation Tools, Rizzo and his team are fitting outerwear with transmitters and sensors that detect oncoming objects along with their shape, location, and speed.

Rizzo first worked on the idea in medical school, as choroideremia began taking more of his eyesight.

He teamed up with a partner and technical advisor, who is a neuroscientist. Together, the pair worked within NYU's School of Business to develop a business plan and meet business advisors and potential investors. They recruited a range of engineering, computer, and technical experts to help in the system's development, offering equity stakes in the company to keep payroll low and direct seed money into research and development.

“The design has evolved substantially,” Rizzo said. “But it is an intuitive-based system. We’re creating a stable foundation for sensory schemes, and a foundation that can be modified for each individual user.”

In visually impaired people, studies have shown the portion of the brain normally used to process visual information instead processes auditory information. That plasticity allows visually impaired people to train themselves to recognize objects based on sound cues. Sound waves bouncing off a wall, for instance, are perceived as distinct from those reflecting off a car, and those differences become part of an auditory library.

Rizzo's system, which he calls Eyeronman, turns that passive listening into active scanning. At present, the system consists of a vest and belt studded with ultrasound, infrared, and laser-ranging (lidar) sensors.

“It is sensor fusion,” he said. “Using sonar, buttressed by lidar, and integrated into something meaningful.”

When someone wearing the vest walks down a sidewalk, the sensors detect objects in a wide cone up to 18 feet away, and the system converts the data into a series of vibrations via actuators in the vest. If a dog is running toward the wearer from the left, the lower left panel of the vest will start to buzz. When the dog stops or sits, the buzzing will slow to a gentler vibration.
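That mapping from a detected object to a buzz can be sketched in a few lines; the panel layout and intensity rule below are illustrative assumptions, not Eyeronman's actual design.

```python
# Illustrative mapping from one detected object to a vest vibration, in the
# spirit of the behavior described above; not the actual Eyeronman design.
def vest_feedback(bearing_deg, distance_ft, speed_ft_s, max_range_ft=18.0):
    """Pick a vest panel and buzz intensity for one detected object.

    bearing_deg: angle to the object, 0 = straight ahead, negative = left.
    """
    if distance_ft > max_range_ft:
        return None                      # outside the ~18-foot sensing cone
    side = "left" if bearing_deg < 0 else "right"
    panel = f"lower-{side}" if abs(bearing_deg) > 30 else "center"
    # Closer and faster-approaching objects buzz harder.
    intensity = (1.0 - distance_ft / max_range_ft) * (1.0 + min(speed_ft_s, 10) / 10)
    return panel, round(min(intensity, 1.0), 2)

# A dog running toward the wearer from the left, 12 feet away:
print(vest_feedback(bearing_deg=-50, distance_ft=12, speed_ft_s=8))   # strong buzz, lower-left
# The dog stops: the buzzing eases to a gentler vibration.
print(vest_feedback(bearing_deg=-50, distance_ft=12, speed_ft_s=0))
```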

Right now, Eyeronman works well only if the wearer is moving slowly enough for the data to be read accurately. Making the system capable of processing data at a normal walking speed will require overcoming some technical challenges.

“Ultrasound operates at the speed of sound, but you still need to wait for the chirp off of the echo,” Rizzo said. He added that his team has had to work to minimize crosstalk and outside noise from sensors.
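The timing constraint is simple physics: an ultrasonic sensor must wait for its chirp to travel out and back before it can report a distance, which caps how often each sensor can ping. A back-of-the-envelope calculation:

```python
# Time-of-flight ranging: distance comes from the round trip of an acoustic
# chirp, so the sensor must listen for the echo before it can report.
SPEED_OF_SOUND_M_S = 343.0   # in air at about 20 C

def distance_from_echo(round_trip_s):
    """Distance to the object in meters; the chirp travels out and back."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2

def max_update_rate_hz(max_range_m):
    """Upper bound on how often one sensor can ping if it must wait for the
    echo from the farthest object in its range."""
    return 1.0 / (2 * max_range_m / SPEED_OF_SOUND_M_S)

print(distance_from_echo(0.032))          # ~5.5 m away
print(round(max_update_rate_hz(5.5)))     # ~31 pings per second, at best
```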

Still, “When people put this vest on, nothing is average,” he said. “You turn the system on and have people walk an obstacle course, and people understand. They pick this up almost instantly. You can now walk in one direction and put your torso in another direction” and recognize objects.

The MyEye system developed at OrCam identifies objects or text for the user. The camera attached to the temple of the glasses frame (top) scans the field of view. When the wearer points at an object or block of text, the image is sent via a connecting cable to a pocket-size computer, which processes the image and relays its contents by speech through a speaker at the wearer's ear.


Rizzo calls the system a “game changer” and hopes to start commercial production sometime in 2018.

The system also has applications beyond the visually impaired.

“If we create an omnidimensional spatial perception, it has application to a number of vertical markets,” Rizzo said. The system could be mounted on fire-retardant clothing for use by fire departments. Police or military units could use a version incorporated into bulletproof vests.

For now, the system needs to be on the outermost article of clothing, but Rizzo said eventually it could be worn as a discreet harness. “An external shell just makes sense right now,” he said.

Rizzo is also thinking of more data streams that could be delivered to users via his vest. Cloud-based software, for instance, could deliver voice messages through a multi-modal 4G/Wi-Fi system.

That would be a development to bring technology for the visually impaired squarely into the 21st century.

Copyright © 2017 by ASME