Intelligent Interaction in Accessible Applications

  • Sina Bahram
  • Arpan Chakraborty
  • Srinath Ravindran
  • Robert St. Amant
Part of the Human–Computer Interaction Series book series (HCIS)


Advances in artificial intelligence over the past decade, combined with increasingly affordable computing power, have made new approaches to accessibility possible. In this chapter we describe three ongoing projects in the Department of Computer Science at North Carolina State University. CAVIAR, a Computer-vision Assisted Vibrotactile Interface for Accessible Reaching, is a wearable system that aids people with vision impairment (PWVI) in locating, identifying, and acquiring objects within reach; a mobile phone worn on the chest processes video input and guides the user’s hand to objects via a wristband with vibrating actuators. TIKISI (Touch It, Key It, Speak It), running on a tablet, gives PWVI the ability to explore maps and other forms of graphical information. AccessGrade combines crowd-sourcing with machine learning techniques to predict the accessibility of Web pages.
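The guidance loop described for CAVIAR can be sketched as a simple mapping: from the camera-frame positions of the tracked hand and the target object, pick which wristband actuator to pulse and how strongly. The function name, the four-actuator layout, and the thresholds below are illustrative assumptions, not the authors' implementation.

```python
import math

def guidance_cue(hand, target, stop_radius=0.05):
    """Map the vector from the tracked hand to the target object
    (camera-frame coordinates, normalized to [0, 1]) onto one of four
    hypothetical wristband actuators: 'up', 'down', 'left', 'right',
    or 'stop' once the hand is over the object."""
    dx = target[0] - hand[0]
    dy = target[1] - hand[1]
    dist = math.hypot(dx, dy)
    if dist < stop_radius:
        # Hand is close enough to the object: signal acquisition.
        return ("stop", 0.0)
    # Pulse the actuator on the dominant axis; intensity grows as the
    # hand approaches the target (floored so cues stay perceptible).
    intensity = max(0.2, 1.0 - dist)
    if abs(dx) >= abs(dy):
        return ("right" if dx > 0 else "left", intensity)
    return ("up" if dy < 0 else "down", intensity)  # image y grows downward
```

In a real wearable system this mapping would run once per video frame, with the vision component supplying `hand` and `target` and the cue driving the vibration motors.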


Keywords: Haptic Feedback · Graphical Information · Multimodal Interface · Peripersonal Space · Universal Design



Copyright information

© Springer-Verlag London 2015

Authors and Affiliations

  • Sina Bahram (1)
  • Arpan Chakraborty (2)
  • Srinath Ravindran (3)
  • Robert St. Amant (1)

  1. Department of Computer Science, North Carolina State University, Raleigh, USA
  2. Udacity, Raleigh, USA
  3. Yahoo, Raleigh, USA
