Understanding the Authoring and Playthrough of Nonvisual Smartphone Tutorials

  • André Rodrigues
  • André Santos
  • Kyle Montague
  • Hugo Nicolau
  • Tiago Guerreiro
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11746)

Abstract

Mobile device users must constantly learn to use new apps and features, and adapt to updates. For blind people, adapting to a new interface requires additional time and effort; in the extreme, and not uncommonly, devices and applications become unusable without support from someone else. Tutorials are a common approach to fostering independent learning of new concepts and workflows. However, most tutorials available online are limited in scope or detail, or quickly become outdated; they also presume a degree of tech savviness beyond that of the average mobile device user. Our research explores the democratization of assistance by enabling non-technical people to create tutorials on their mobile phones for others. We report on the interaction and information needs of blind people when following ‘amateur’ tutorials, providing insights into how to widen and improve the authoring and playthrough of these learning artifacts. We conducted a study in which 12 blind users followed tutorials previously created by blind or sighted people. Our findings suggest that instructions authored by sighted and blind people are limited in different respects, and that those limitations prevent effective learning of the task at hand. We identified the types of content produced by authors and the information required by followers during playthrough, which often do not align. We provide insights on how to support both the authoring and playthrough of nonvisual smartphone tutorials. There is an opportunity to design solutions that mediate authoring, combine contributions, adapt to user profiles, react to context, and are living artifacts capable of perpetual improvement.

Keywords

Tutorials · Blind · Smartphones · Accessibility · Assistance

Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  • André Rodrigues (1)
  • André Santos (1)
  • Kyle Montague (2)
  • Hugo Nicolau (3)
  • Tiago Guerreiro (1)
  1. LASIGE, Faculdade de Ciências, Universidade de Lisboa, Lisbon, Portugal
  2. Open Lab, Newcastle University, Newcastle upon Tyne, UK
  3. INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal