fertieg18
Life of the party
Joined: 23 Sep 2010
Posts: 401
Read: 0 topics
Warnings: 0/5  From: England
Posted: Thu 4:04, 07 Oct 2010  Post subject: white canes
By Ben Coxworth
A conventional white cane (Photo: Ryxhd123)
For the past several years, various research institutions and organizations have been experimenting with electronic “white canes” for the blind. One of these was the ultrasound-enabled UltraCane, which we profiled five years ago. Now, however, an associate professor of applied science at the University of Arkansas is working on something more advanced – a white cane that uses laser technology to give users the lay of the land.
The University of Arkansas’ Dr. Cang Ye and his colleagues plan to use a Flash LADAR (laser detection and ranging) three-dimensional imaging sensor to create a detailed model of the user’s environment. Unlike other laser ranging systems, which require the laser to mechanically scan back and forth across the environment, Flash LADAR takes everything in at once, in sequential floodlit exposures that each typically last less than a nanosecond – making it particularly well-suited to people on the move.
The Flash system obtains two images per exposure: one that measures the physical range (or distance away) of each pixel, and one that measures its intensity. Ye’s team has created an algorithm called VR-Odometry (VRO) that uses this data to calculate the user’s position within their environment. VRO compares the same feature in each pair of adjacent intensity images, and observes the differences between the two to determine how the user and that feature are moving relative to one another. By combining this information with the range information for that same feature, the system lets users know where they are in their environment, and where they’re going.
By processing the VRO output through 3D data segmentation software, the system should reportedly be able to identify things such as staircases, doorways, drop-offs in the floor, or overhead bulkheads. Once these obstacles and/or hazards are identified, the system would relay that information to the user through auditory cues.
“This is crucial navigational information that is difficult to obtain by using a conventional white cane,” Ye said. “The project’s hypothesis is that a single Flash LADAR sensor can solve blind navigation problems — avoiding obstacles and way-finding. Thus it is possible to build [a] portable navigational device.”
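The article doesn't publish the VRO algorithm itself, but the core idea it describes – recovering the user's relative motion from features matched across adjacent frames, using the range image to place each matched feature in 3D – is a classic rigid-alignment problem. A minimal sketch of that step, using the standard Kabsch/Procrustes method (all names here are illustrative, not from Ye's implementation):

```python
import numpy as np

def estimate_motion(points_prev, points_curr):
    """Estimate the rigid rotation R and translation t that best map
    matched 3D feature points from the previous frame onto the current
    frame (Kabsch alignment via SVD). Each input is an (N, 3) array of
    features matched in the intensity images and back-projected using
    the per-pixel range values."""
    # Centre both point sets on their centroids.
    c_prev = points_prev.mean(axis=0)
    c_curr = points_curr.mean(axis=0)
    P = points_prev - c_prev
    Q = points_curr - c_curr
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_curr - R @ c_prev
    return R, t
```

Accumulating the (R, t) pair from each consecutive frame pair gives a running pose estimate, which is the "odometry" part of VR-Odometry.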
20:24 October 4, 2010
The project was made possible by a US$320,389 research grant from the National Science Foundation’s Robust Intelligence Program to develop navigational devices for the blind.
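The article likewise gives no detail on the 3D segmentation software, but the hazards it names – drop-offs, steps, overhead bulkheads – suggest simple geometric rules over the sensed terrain. A toy illustration of that kind of rule, applied to a forward-looking elevation profile (the function, thresholds, and inputs are all invented for illustration):

```python
def classify_hazards(floor_heights, head_clearances,
                     drop_thresh=0.15, step_thresh=0.12, clearance_min=2.0):
    """Classify hazards along the walking direction.
    floor_heights:   floor height (m) at increasing distances ahead.
    head_clearances: free vertical space (m) at the same distances.
    Returns (label, index) pairs suitable for mapping to auditory cues."""
    cues = []
    # A sharp change in floor height between neighbouring samples
    # indicates a drop-off (down) or a step/staircase (up).
    for i in range(1, len(floor_heights)):
        delta = floor_heights[i] - floor_heights[i - 1]
        if delta <= -drop_thresh:
            cues.append(("drop-off", i))
        elif delta >= step_thresh:
            cues.append(("step-up", i))
    # Insufficient vertical clearance indicates an overhead obstacle.
    for i, clearance in enumerate(head_clearances):
        if clearance < clearance_min:
            cues.append(("overhead-obstacle", i))
    return cues
```

Each returned label would then be rendered as a distinct auditory cue, as the article describes.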