Mobile Visual Assistive Apps: Benchmarks of Vision Algorithm Performance

  • Jose Rivera-Rubio
  • Saad Idrees
  • Ioannis Alexiou
  • Lucas Hadjilucas
  • Anil A. Bharath
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8158)


Although the use of computer vision to analyse images from smartphones is in its infancy, the opportunity to exploit these devices for various assistive applications is beginning to emerge. In this paper, we consider two potential applications of computer vision in the assistive context for blind and partially sighted users. These two applications are intended to help answer the questions of "Where am I?" and "What am I holding?".

First, we describe how estimates of a user's indoor location can be obtained by submitting queries from a smartphone camera against a database of visual paths – descriptions of the visual appearance of common journeys that might be taken. Our proposal is that such journeys could be harvested from, for example, sighted volunteers. Initial tests using bootstrap statistics suggest that such visual path data carries sufficient information to indicate: a) along which of several routes a user might be navigating; and b) where along a particular path they might be.
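The matching step described above can be sketched as a nearest-neighbour search of a query frame's descriptor against the frames stored along each visual path. This is a minimal illustration only, assuming simple Euclidean matching of fixed-length frame descriptors; the route names and the `locate` function are hypothetical and not taken from the paper:

```python
import numpy as np

def locate(query_desc, paths):
    """Hypothetical sketch: match one query-frame descriptor against
    stored visual paths. `paths` maps a route name to an array of
    per-frame descriptors (n_frames x d)."""
    best = None
    for route, frames in paths.items():
        # Euclidean distance from the query to every frame on this route
        d = np.linalg.norm(frames - query_desc, axis=1)
        idx = int(np.argmin(d))
        if best is None or d[idx] < best[2]:
            # fraction along the path acts as a coarse position estimate
            best = (route, idx / max(len(frames) - 1, 1), float(d[idx]))
    return best  # (route, relative position in [0, 1], match distance)
```

In this toy form the closest stored frame jointly answers both questions: its route answers "which journey", and its index along that route answers "where along the path". A real system would use robust descriptors (e.g. SIFT-like features) and the bootstrap tests mentioned above to quantify confidence.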

We also describe a pilot benchmarking database and test set for answering the second question, "What am I holding?". We evaluated the role of video sequences, rather than individual images, in such a query context, and suggest how the extra information provided by temporal structure could significantly improve the reliability of search results – an important consideration for assistive applications.
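One simple way the temporal structure of a video query can improve reliability is by letting each frame's single-image search result vote for an object label, then reporting the majority over the sequence. This is a hedged sketch of that idea, not the paper's method; the per-frame matching step is assumed to have already produced a label per frame:

```python
from collections import Counter

def video_query(frame_labels):
    """Illustrative sketch: aggregate per-frame recognition results
    over a video query. `frame_labels` holds the label returned by a
    (hypothetical) single-image search for each frame; the majority
    label is reported, smoothing out individual frame errors."""
    votes = Counter(frame_labels)
    label, count = votes.most_common(1)[0]
    # fraction of agreeing frames serves as a crude confidence score
    return label, count / len(frame_labels)
```

Even this naive voting shows why sequences help: a single blurred or occluded frame can return the wrong label, but it is unlikely that most frames of a short clip fail in the same way.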


Keywords: Image-based localisation · path-planning · mobile assistive devices · object categorisation · mobile computer vision



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

BICV Group, Department of Bioengineering, Imperial College London, U.K.
