How Blind and Sighted Individuals Perceive the Typographic Text-Signals of a Document

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9737)

Abstract

Typographic, layout, and logical elements constitute the visual text-signals of a document, carrying semantic information over and above its content. Although they are important to the reader, most current Text-to-Speech (TtS) systems do not support them. Given the lack of studies on how blind readers perceive these signals, and aiming to incorporate them efficiently into advanced TtS systems, we systematically investigate the perception of the main typographic text-signals by 73 blind and sighted students. The results show that both groups of participants perceive font-styles as being used largely to locate, recognize, or distinguish topics or specific information in a document. Almost half of the sighted participants state that font-styles aid comprehension of the content, but only 4% of the blind students perceive the same. Most sighted participants (68%) consider that bold indicates an important word or phrase that needs more attention from the reader, but only 23% of them perceive the same for italics. 27% of the blind participants and 23% of the sighted perceive that the role of font-size is to provide emphasis. Moreover, only 9% of the sighted students grasp that bold is used for emphasis, and 13% that italics is used for light emphasis. Half of the blind participants consider that font-size plays an important role in separating the basic elements of a text (e.g., titles, footnotes), but only 13% of the sighted believe the same. Finally, the sighted and blind students recognize the titles of a text using mainly non-identical criteria.

Keywords

Document accessibility · Text-signals · Typography · Font-size · Font-type


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens, Greece
  2. Graduate Program in Basic and Applied Cognitive Science, National and Kapodistrian University of Athens, Athens, Greece