
Personal Shopping Assistance and Navigator System for Visually Impaired People

  • Paul Chippendale
  • Valeria Tomaselli (corresponding author)
  • Viviana D’Alto
  • Giulio Urlini
  • Carla Maria Modena
  • Stefano Messelodi
  • Sebastiano Mauro Strano
  • Günter Alce
  • Klas Hermodsson
  • Mathieu Razafimahazo
  • Thibaud Michel
  • Giovanni Maria Farinella
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8927)

Abstract

This paper describes a personal assistant and navigator system for visually impaired people. The presented showcase demonstrates how partially sighted people could be aided by technology in performing an ordinary activity, such as going to a mall and moving through it to find a specific product. We propose an Android application that integrates Pedestrian Dead Reckoning and Computer Vision algorithms, running on an off-the-shelf smartphone connected to a smartwatch. The detection, recognition and pose estimation of specific objects or features in the scene, combined with a hardware-sensor pedometer, yield an estimate of the user's location with sub-meter accuracy. The prototype interfaces with the user by means of Augmented Reality, exploiting sensory modalities beyond visual overlay, namely audio and haptics, to create a seamless, immersive user experience. The interface and interaction of the preliminary platform have been studied through dedicated evaluation methods, and the feedback gathered will be used to further improve the proposed system.
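
The localisation scheme outlined above, pedometer-driven dead reckoning periodically corrected by absolute fixes from visual landmark recognition, can be illustrated with a minimal sketch in Kotlin for the Android setting. All names below (Position, VisionFix, PdrVisionFuser, onStep, onVisionFix) are hypothetical illustrations of the general idea under assumed parameters, not the authors' actual implementation or any real API.

```kotlin
// Minimal sketch of the PDR + vision fusion idea described in the abstract.
// All classes and methods here are hypothetical, not the paper's implementation.

import kotlin.math.cos
import kotlin.math.sin

/** 2D position in a local metric frame (metres). */
data class Position(val x: Double, val y: Double)

/** Absolute pose reported by a vision module after recognising a known landmark. */
data class VisionFix(val position: Position, val confidence: Double)

class PdrVisionFuser(
    private var estimate: Position,
    private val stepLength: Double = 0.7  // assumed average stride, in metres
) {
    /** Dead-reckoning update: advance the estimate by one detected step. */
    fun onStep(headingRad: Double) {
        estimate = Position(
            estimate.x + stepLength * cos(headingRad),
            estimate.y + stepLength * sin(headingRad)
        )
    }

    /**
     * Vision update: blend the drifting PDR estimate towards the absolute
     * fix, weighted by the recogniser's confidence (a crude complementary
     * filter standing in for whatever correction scheme the system uses).
     */
    fun onVisionFix(fix: VisionFix) {
        val w = fix.confidence.coerceIn(0.0, 1.0)
        estimate = Position(
            (1 - w) * estimate.x + w * fix.position.x,
            (1 - w) * estimate.y + w * fix.position.y
        )
    }

    fun currentEstimate(): Position = estimate
}

fun main() {
    val fuser = PdrVisionFuser(Position(0.0, 0.0))
    repeat(10) { fuser.onStep(headingRad = 0.0) }          // walk roughly 7 m east
    fuser.onVisionFix(VisionFix(Position(6.5, 0.3), 0.8))  // landmark-based correction
    println(fuser.currentEstimate())
}
```

In this toy scheme the pedometer keeps the estimate moving between fixes, while each sufficiently confident vision fix pulls the drifting estimate back towards an absolute position; in the described system this corrective role is played by the detection, recognition and pose estimation of known objects in the scene.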

Keywords

Assistive technology · Indoor navigation · Visually impaired · Augmented reality · Mobile devices · Wearable cameras · Quality of experience

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Paul Chippendale (1)
  • Valeria Tomaselli (2), corresponding author
  • Viviana D’Alto (3)
  • Giulio Urlini (3)
  • Carla Maria Modena (1)
  • Stefano Messelodi (1)
  • Sebastiano Mauro Strano (2)
  • Günter Alce (4)
  • Klas Hermodsson (4)
  • Mathieu Razafimahazo (5)
  • Thibaud Michel (5)
  • Giovanni Maria Farinella (6)

  1. Fondazione Bruno Kessler, Trento, Italy
  2. STMicroelectronics, Catania, Italy
  3. STMicroelectronics, Milano, Italy
  4. Sony Mobile Communications, Lund, Sweden
  5. Inria Grenoble - Rhône-Alpes/LIG, Montbonnot-Saint-Martin, France
  6. Image Processing Laboratory (IPLAB), University of Catania, Catania, Italy
