FingerReader: A Finger-Worn Assistive Augmentation

  • Roy Shilkrot
  • Jochen Huber
  • Roger Boldu
  • Pattie Maes
  • Suranga Nanayakkara
Chapter
Part of the Cognitive Science and Technology book series (CSAT)

Abstract

The FingerReader is a finger-augmenting device equipped with a camera that assists in pointing and touching tasks. While the FingerReader was initially designed as an assistive device for sightless reading of printed text, it has since expanded to support other activities, such as reading music. User studies and quantitative assessments show the FingerReader to be an intuitive interface for accessing pointable visual material. This article discusses the origins, design rationale, iterations, applications, and evaluations of the FingerReader, offering an encompassing overview of the past four years of work on the device.

Keywords

Human-computer interaction · Wearable interfaces · Assistive technology

Acknowledgements

We would like to acknowledge the people who were directly involved in the ideation, creation, and evaluation of the FingerReader: Connie Liu, Sophia Wu, Marcelo Polanco, Michael Chang, Sabrine Iqbal, Amit Zoran, and K.P. Yao. We also thank the VIBUG group at MIT for their help in testing and improving the FingerReader.

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Roy Shilkrot (1)
  • Jochen Huber (2)
  • Roger Boldu (3)
  • Pattie Maes (4)
  • Suranga Nanayakkara (3)

  1. Computer Science Department, Stony Brook University, Stony Brook, USA
  2. Synaptics, Zug, Switzerland
  3. Singapore University of Technology and Design, Singapore
  4. Media Lab, Massachusetts Institute of Technology, Cambridge, USA
