
A System for Assisting the Visually Impaired in Localization and Grasp of Desired Objects

  • Kaveri Thakoor
  • Nii Mante
  • Carey Zhang
  • Christian Siagian
  • James Weiland
  • Laurent Itti
  • Gérard Medioni
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8927)

Abstract

A prototype wearable visual aid for helping visually impaired people find desired objects in their environment is described. The system comprises a head-worn camera to capture the scene, an Android phone interface for specifying a desired object, and an attention-biasing-enhanced object recognition algorithm that identifies the three most likely candidate regions, selects the best-matching one, and passes its location to an object tracking algorithm. The object is tracked as the user's head moves, and auditory feedback helps the user keep the object in the field of view, enabling easy reach and grasp. The implementation and integration of the system are described, culminating in testing of the working prototype with visually impaired subjects at the Braille Institute in Los Angeles (demonstrated in the accompanying video). Results indicate that the system has clear potential to help visually impaired users achieve near-real-time object localization and grasp.
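The pipeline outlined above (capture the scene, specify a target, recognize the best candidate region, track it, and guide the user with auditory cues) can be summarized as a simple control loop. The Python sketch below illustrates that flow only; every name in it (Region, recognize_candidates, track, auditory_cue, the 640×480 resolution, the centering threshold) is a hypothetical stand-in for illustration, not the authors' implementation, which runs on a head-worn camera with an Android phone interface.

```python
"""Minimal sketch of the localize-and-grasp loop described in the
abstract. All components here are hypothetical stubs; the paper's
actual recognizer, tracker, and feedback module are not reproduced."""

from dataclasses import dataclass
import random

WIDTH, HEIGHT = 640, 480  # assumed camera resolution


@dataclass
class Region:
    x: float       # candidate region center, image coordinates
    y: float
    score: float   # recognition confidence for the desired object


def recognize_candidates(frame, target):
    """Stand-in for attention-biased recognition: return the three
    most likely candidate regions for the desired object."""
    regions = [Region(random.uniform(0, WIDTH),
                      random.uniform(0, HEIGHT),
                      random.random()) for _ in range(3)]
    return sorted(regions, key=lambda r: r.score, reverse=True)


def track(frame, region):
    """Stand-in tracker: follow the region as the user's head moves."""
    return Region(region.x + random.uniform(-20, 20),
                  region.y + random.uniform(-20, 20),
                  region.score)


def auditory_cue(region):
    """Map the object's offset from image center to a direction cue,
    so the user can keep the object in the field of view."""
    dx, dy = region.x - WIDTH / 2, region.y - HEIGHT / 2
    if abs(dx) < 50 and abs(dy) < 50:
        return "centered"  # object in view: reach and grasp
    if abs(dx) >= abs(dy):
        return "left" if dx < 0 else "right"
    return "up" if dy < 0 else "down"


def localize(frames, target):
    """Recognize once to initialize, then track frame to frame,
    yielding an auditory cue per frame."""
    region = None
    for frame in frames:
        if region is None:
            region = recognize_candidates(frame, target)[0]  # best match
        else:
            region = track(frame, region)
        yield auditory_cue(region)


if __name__ == "__main__":
    for cue in localize(range(5), "mug"):  # dummy frame stream
        print(cue)
```

The recognize-once, then-track structure follows the abstract's description: recognition selects the best of the three candidate regions and hands its location to the tracker, which maintains it as the user's head moves while auditory feedback guides reach and grasp.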

Keywords

Object recognition · Attention · Tracking · Localization · Grasp · Auditory feedback · Visually impaired


Supplementary material

Supplementary material (MP4 1,294 KB)


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Kaveri Thakoor
  • Nii Mante
  • Carey Zhang
  • Christian Siagian
  • James Weiland
  • Laurent Itti
  • Gérard Medioni

  1. University of Southern California, Los Angeles, USA
