Intelligent Interaction in Accessible Applications

Part of the book series: Human–Computer Interaction Series (HCIS)

Abstract

Advances in artificial intelligence over the past decade, combined with increasingly affordable computing power, have made new approaches to accessibility possible. In this chapter we describe three ongoing projects in the Department of Computer Science at North Carolina State University. CAVIAR, a Computer-vision Assisted Vibrotactile Interface for Accessible Reaching, is a wearable system that aids people with vision impairment (PWVI) in locating, identifying, and acquiring objects within reach; a mobile phone worn on the chest processes video input and guides the user’s hand to objects via a wristband with vibrating actuators. TIKISI (Touch It, Key It, Speak It), running on a tablet, gives PWVI the ability to explore maps and other forms of graphical information. AccessGrade combines crowd-sourcing with machine learning techniques to predict the accessibility of Web pages.
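
The abstract's description of CAVIAR implies a simple sensing-to-actuation loop: the chest-mounted phone tracks the hand and the target object in its camera view, then pulses the wristband motor that points the hand toward the target. Below is a minimal sketch of such a loop, assuming a four-actuator wristband and image-plane coordinates; the layout, names, and thresholds are our illustration, not the authors' implementation.

```python
import math

# Hypothetical actuator layout: the bearing (degrees, image plane) that
# each wristband motor signals. CAVIAR's actual motor count and mapping
# may differ; this is an assumption for illustration.
ACTUATORS = {"right": 0.0, "up": 90.0, "left": 180.0, "down": 270.0}

def pick_actuator(hand_xy, target_xy, arrival_radius=20.0):
    """Choose which motor to pulse, or None once the hand is on target.

    Coordinates are pixel positions reported by the phone's vision
    pipeline (hand and object detection are outside this sketch).
    """
    dx = target_xy[0] - hand_xy[0]
    dy = hand_xy[1] - target_xy[1]  # flip y: image rows grow downward
    if math.hypot(dx, dy) <= arrival_radius:
        return None  # close enough; stop all vibration
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0

    def angular_gap(motor):
        diff = abs(ACTUATORS[motor] - bearing) % 360.0
        return min(diff, 360.0 - diff)

    # Pulse the motor whose direction is closest to the needed heading.
    return min(ACTUATORS, key=angular_gap)

# Hand at (120, 300), object at (320, 180): guide the hand rightward.
print(pick_actuator((120, 300), (320, 180)))  # -> "right"
```

In a real system a function like this would run on every processed video frame, with the chosen motor driven over a wireless link to the wristband until the hand reaches the object.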

Notes

  1. www.freedomscientific.com/jaws-hq.asp

  2. www.nvda-project.org

  3. www.section508.gov

  4. www.access8878.co.uk

  5. www.w3.org/WAI/guid-tech.html

  6. http://lab.arc90.com/2009/03/02/readability

  7. The ellipses in the overview Lenora hears indicate prosodic cues. Prosody affords speech-interface users a familiar way of signaling importance within a long stream of speech. The ellipses, which represent pauses, are also exactly the locations where a small audio tone or click can be inserted for further reinforcement [23]. The silencing command, similar to the one in Gravvitas [17], pauses the focus at the given node so that exploration can begin from that point.
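
As a concrete illustration of this note, standard SSML provides <break> elements for pauses and <audio> elements for embedded tones, so an ellipsis-marked overview could be rendered roughly as follows. This is our sketch, not the chapter's code; the helper name and earcon file are hypothetical.

```python
def overview_to_ssml(text, pause_ms=300, earcon_src=None):
    """Replace '…' pause markers with SSML breaks plus an optional earcon.

    <break> and <audio> are standard SSML elements; the earcon file
    (e.g., a short click) is a hypothetical asset.
    """
    cue = f'<break time="{pause_ms}ms"/>'
    if earcon_src:
        cue += f'<audio src="{earcon_src}"/>'
    return "<speak>" + text.replace("…", cue) + "</speak>"

# An overview with two prosodic pauses, each reinforced by a click.
print(overview_to_ssml("Overview … three regions … five points of interest",
                       earcon_src="click.wav"))
```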

References

  1. Abe, K., Azumatani, Y., Kukouda, M., & Suzuki, S. (1986). Discrimination of symbols, lines, and characters in flow chart recognition. In Proceedings of 8th ICPR, Paris, France (pp. 1071–1074).

  2. Arató, A., Juhasz, Z., Blenkhorn, P., Evans, D. G., & Evreinov, G. E. (2004). Java-powered braille slate talker. In J. Klaus, K. Miesenberger, W. L. Zagler, & D. Burger (Eds.), ICCHP, Lecture notes in computer science (Vol. 3118, pp. 506–513). Linz: Springer. URL: http://dblp.uni-trier.de/db/conf/icchp/icchp2004.html#AratoJBEE04

  3. Bahram, S., Chakraborty, A., & St. Amant, R. (2012). CAVIAR: A vibrotactile device for accessible reaching. In Proceedings of the international conference on Intelligent User Interfaces (IUI), Lisbon, Portugal (pp. 245–248). New York: ACM.

  4. Bahram, S., Sen, D., & St. Amant, R. (2011). Prediction of web page accessibility based on structural and textual features. In Proceedings of the international cross-disciplinary conference on web accessibility, W4A’11 (pp. 31:1–31:4). New York: ACM. doi:10.1145/1969289.1969329. URL: http://doi.acm.org/10.1145/1969289.1969329

  5. Bigham, J., Kaminsky, R., Ladner, R., Danielsson, O., & Hempton, G. (2006). WebInSight: Making web images accessible. In Proceedings of the 8th international ACM SIGACCESS conference on computers and accessibility, Portland, USA (pp. 181–188). New York: ACM.

  6. Bigham, J., Lau, T., & Nichols, J. (2009). Trailblazer: Enabling blind users to blaze trails through the web. In Proceedings of the 13th international conference on intelligent user interfaces (pp. 177–186). New York: ACM.

  7. Bolt, R. A. (1980). “Put-that-there”: Voice and gesture at the graphics interface. In Proceedings of the 7th annual conference on computer graphics and interactive techniques, SIGGRAPH’80 (pp. 262–270). New York: ACM. URL: http://doi.acm.org/10.1145/800250.807503

  8. Bosman, S., Groenendaal, B., Findlater, J., Visser, T., Graaf, M., & Markopoulos, P. (2003). Gentleguide: An exploration of haptic output for indoors pedestrian guidance. In Human-computer interaction with mobile devices and services (pp. 358–362). Berlin/Heidelberg: Springer.

  9. Brajnik, G., Yesilada, Y., & Harper, S. (2010). Testability and validity of WCAG 2.0: The expertise effect. In Proceedings of the 12th international ACM SIGACCESS conference on computers and accessibility, Assets’10 (pp. 43–50). New York: ACM. URL: http://doi.acm.org/10.1145/1878803.1878813

  10. Brock, A., Truillet, P., Oriola, B., & Jouffrais, C. (2010). Usage of multimodal maps for blind people: Why and how. In ACM international conference on interactive tabletops and surfaces, Saarbrücken, Germany (pp. 247–248). New York: ACM.

  11. Brudvik, J., Bigham, J., Cavender, A., & Ladner, R. (2008). Hunting for headings: Sighted labeling vs. automatic classification of headings. In Proceedings of the 10th international ACM conference on computers and accessibility (pp. 201–208). New York: ACM.

  12. Bühler, C., Heck, H., Perlick, O., Nietzio, A., & Ulltveit-Moe, N. (2006). Interpreting results from large scale automatic evaluation of web accessibility. In ICCHP’06 (pp. 184–191). Berlin/Heidelberg: Springer-Verlag.

  13. Davulcu, H., Vadrevu, S., Nagarajan, S., & Ramakrishnan, I. (2003). OntoMiner: Bootstrapping and populating ontologies from domain-specific web sites. IEEE Intelligent Systems, 18(5), 24–33.

  14. Ferres, L., Lindgaard, G., & Sumegi, L. (2010). Evaluating a tool for improving accessibility to charts and graphs. In Proceedings of the 12th international ACM SIGACCESS conference on computers and accessibility (pp. 83–90). New York: ACM.

  15. Freire, A. P., Fortes, R. P. M., Turine, M. A. S., & Paiva, D. M. B. (2008). An evaluation of web accessibility metrics based on their attributes. In Proceedings of the 26th annual ACM international conference on design of communication, SIGDOC’08 (pp. 73–80). New York: ACM. URL: http://doi.acm.org/10.1145/1456536.1456551

  16. Gardner, J., & Bulatov, V. (2006). Scientific diagrams made easy with IVEO. In Computers helping people with special needs (pp. 1243–1250). URL: http://dx.doi.org/10.1007/11788713_179

  17. Goncu, C., & Marriott, K. (2011). Gravvitas: Generic multi-touch presentation of accessible graphics. In Human-computer interaction–INTERACT 2011 (pp. 30–48). Berlin/Heidelberg: Springer.

  18. Hill, D. R., & Grieb, C. (1988). Substitution for a restricted visual channel in multimodal computer-human dialogue. IEEE Transactions on Systems, Man, and Cybernetics, 18(3), 285–304.

  19. Google, Inc. (2013). TalkBack – Android application. https://play.google.com/store/apps/details?id=com.google.android.marvin.talkback. Accessed 02 Apr 2013.

  20. Jacobson, R. (1998). Navigating maps with little or no sight: An audio-tactile approach. In Proceedings of the workshop on Content Visualization and Intermedia Representations (CVIR). New Brunswick: Association for Computational Linguistics.

  21. Kane, S., Morris, M., Perkins, A., Wigdor, D., Ladner, R., & Wobbrock, J. (2011). Access overlays: Improving non-visual access to large touch screens for blind users. In Proceedings of the 24th annual ACM symposium on user interface software and technology (pp. 273–282). New York: ACM.

  22. Kane, S. K., Bigham, J. P., & Wobbrock, J. O. (2008). Slide rule: Making mobile touch screens accessible to blind people using multi-touch interaction techniques. In Proceedings of the 10th international ACM SIGACCESS conference on computers and accessibility, Assets’08 (pp. 73–80). New York: ACM. URL: http://doi.acm.org/10.1145/1414471.1414487

  23. Kane, S. K., Bigham, J. P., & Wobbrock, J. O. (2008). Slide rule: Making mobile touch screens accessible to blind people using multi-touch interaction techniques. In Proceedings of the 10th international ACM SIGACCESS conference on computers and accessibility, Assets’08 (pp. 73–80). New York: ACM. URL: http://doi.acm.org/10.1145/1414471.1414487

  24. King, A. R. (2006). Re-presenting visual content for blind people. Ph.D. thesis, University of Manchester, Manchester.

  25. Kottapally, K., Ngo, C., Reddy, R., Pontelli, E., Son, T., & Gillan, D. (2003). Towards the creation of accessibility agents for non-visual navigation of the web. In Proceedings of the 2003 conference on universal usability (pp. 134–141). New York: ACM.

  26. Landau, S., & Wells, L. (2003). Merging tactile sensory input and audio data by means of the talking tactile tablet. In EuroHaptics’03 (pp. 414–418). Dublin, Ireland.

  27. Leshed, G., Haber, E., Matthews, T., & Lau, T. (2008). CoScripter: Automating & sharing how-to knowledge in the enterprise. In Proceedings of the twenty-sixth annual ACM conference on human factors in computing systems (pp. 1719–1728). New York: ACM.

  28. Lieberman, J., & Breazeal, C. (2007). TIKL: Development of a wearable vibrotactile feedback suit for improved human motor learning. IEEE Transactions on Robotics, 23(5), 919–926.

  29. Mahmud, J., Borodin, Y., Das, D., & Ramakrishnan, I. (2007). Combating information overload in non-visual web access using context. In Proceedings of the 12th international conference on intelligent user interfaces (pp. 341–344). New York: ACM.

  30. Mitchell, T. M. (1997). Machine learning. New York: McGraw-Hill.

  31. Oviatt, S. (1996). Multimodal interfaces for dynamic interactive maps. In Proceedings of the SIGCHI conference on human factors in computing systems: Common ground, CHI’96 (pp. 95–102). New York: ACM. URL: http://doi.acm.org/10.1145/238386.238438

  32. Perkins, C., & Gardiner, A. (2003). Real world map reading strategies. The Cartographic Journal, 40(3), 265–268.

  33. Petrie, H., Schlieder, C., Blenkhorn, P., Evans, G., King, A., O’Neill, A., Ioannidis, G., Gallagher, B., Crombie, D., & Mager, R., et al. (2002). TeDUB: A system for presenting and exploring technical drawings for blind people. In Computers helping people with special needs (pp. 47–67).

  34. Ramakrishnan, I., Mahmud, J., Borodin, Y., Islam, M., & Ahmed, F. (2009). Bridging the web accessibility divide. Electronic Notes in Theoretical Computer Science, 235, 107–124.

  35. Siekierska, E., Labelle, R., Brunet, L., Mccurdy, B., Pulsifer, P., Rieger, M., et al. (2003). Enhancing spatial learning and mobility training of visually impaired people: A technical paper on the Internet-based tactile and audio-tactile mapping. The Canadian Geographer/Le Géographe canadien, 47(4), 480–493.

  36. Spelmezan, D., Jacobs, M., Hilgers, A., & Borchers, J. (2009). Tactile motion instructions for physical activities. In Proceedings of the 27th international conference on human factors in computing systems, CHI’09 (pp. 2243–2252). New York: ACM. URL: http://doi.acm.org/10.1145/1518701.1519044

  37. Takagi, H., Asakawa, C., Fukuda, K., & Maeda, J. (2004). Accessibility designer: Visualizing usability for the blind. In Proceedings of the 6th international ACM SIGACCESS conference on computers and accessibility (pp. 177–184). New York: ACM.

  38. Van Der Linden, J., Schoonderwaldt, E., & Bird, J. (2009). Good vibrations: Guiding body movements with vibrotactile feedback. In Proceedings of 3rd international workshop physicality (pp. 13–18). UK: Cambridge.

  39. Vanderheiden, G. C. (1996). Use of audio-haptic interface techniques to allow nonvisual access to touchscreen appliances. In Proceedings of the human factors and ergonomics society annual meeting (pp. 1266–1266). Thousand Oaks: SAGE Publications.

  40. Yi, L., Liu, B., & Li, X. (2003). Eliminating noisy information in web pages for data mining. In Proceedings of the ninth ACM SIGKDD international conference on knowledge discovery and data mining (pp. 296–305). New York: ACM.

  41. Yoshioka, M. (2008). IR interface for contrasting multiple news sites. In Information retrieval technology (pp. 508–513). Berlin/Heidelberg: Springer.

  42. Yu, Y., Samal, A., & Seth, S. (1994). Isolating symbols from connection lines in a class of engineering drawings. Pattern Recognition, 27(3), 391–404. doi:10.1016/0031-3203(94)90116-3. URL: http://www.sciencedirect.com/science/article/pii/0031320394901163

  43. Yu, Y., Samal, A., & Seth, S. (1997). A system for recognizing a large class of engineering drawings. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(8), 868–890.

  44. Zapirain, B., Zorrilla, A., Oleagordia, I., & Muro, A. (2010). Accessible schematics content descriptors using image processing techniques for blind students learning. In 5th international symposium on I/V Communications and Mobile Network (ISVC) (pp. 1–4). New York: IEEE. doi:10.1109/ISVC.2010.5656270

Author information

Correspondence to Sina Bahram.

Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Bahram, S., Chakraborty, A., Ravindran, S., St. Amant, R. (2013). Intelligent Interaction in Accessible Applications. In: Biswas, P., Duarte, C., Langdon, P., Almeida, L., Jung, C. (eds) A Multimodal End-2-End Approach to Accessible Computing. Human–Computer Interaction Series. Springer, London. https://doi.org/10.1007/978-1-4471-5082-4_5

  • DOI: https://doi.org/10.1007/978-1-4471-5082-4_5

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-5081-7

  • Online ISBN: 978-1-4471-5082-4
