How Blind and Sighted Individuals Perceive the Typographic Text-Signals of a Document
Typographic, layout, and logical elements constitute the visual text-signals of a document, carrying semantic information over and above its content. Although these signals are important to the reader, most current Text-to-Speech (TtS) systems do not support them. Since studies on how blind readers perceive them are lacking, and aiming to incorporate them efficiently into advanced TtS systems, we systematically investigate the perception of the main typographic text-signals by 73 blind and sighted students. The results show that both groups perceive font-styles as used largely to locate, recognize, or distinguish topics or specific information in a document. Almost half of the sighted students argue that font-styles aid comprehension of the content, but only 4 % of the blind students share this view. Most sighted participants (68 %) consider that bold indicates an important word or phrase requiring extra attention from the reader, but only 23 % of them perceive the same for italics. 27 % of the blind participants and 23 % of the sighted perceive the role of font-size as providing emphasis. Moreover, only 9 % of the sighted students grasp that bold is used for emphasis, and 13 % that italics is used for light emphasis. Half of the blind participants consider that font-size plays an important role in separating the basic elements of a text (e.g., titles, footnotes), but only 13 % of the sighted believe the same. Finally, sighted and blind students recognize the titles of a text mainly using non-identical criteria.
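To illustrate how typographic text-signals could be carried into a TtS pipeline rather than discarded, the sketch below maps bold and italics spans in an HTML fragment to SSML `<emphasis>` elements. This is a minimal illustration, not the system evaluated in this study: the tag-to-emphasis mapping (bold to strong emphasis, italics to moderate emphasis) is an assumption loosely motivated by the perceptions reported above.

```python
from html.parser import HTMLParser

# Assumed mapping from typographic text-signals to SSML emphasis levels:
# bold -> strong emphasis, italics -> moderate ("light") emphasis.
# This mapping is illustrative, not taken from the study itself.
SIGNAL_TO_EMPHASIS = {
    "b": "strong", "strong": "strong",
    "i": "moderate", "em": "moderate",
}

class SignalToSSML(HTMLParser):
    """Rewrites bold/italics spans of an HTML fragment as SSML <emphasis>."""

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        level = SIGNAL_TO_EMPHASIS.get(tag)
        if level:
            self.out.append(f'<emphasis level="{level}">')

    def handle_endtag(self, tag):
        if tag in SIGNAL_TO_EMPHASIS:
            self.out.append("</emphasis>")

    def handle_data(self, data):
        # Plain text passes through unchanged.
        self.out.append(data)

    def convert(self, fragment):
        self.out = []
        self.feed(fragment)
        return "<speak>" + "".join(self.out) + "</speak>"

print(SignalToSSML().convert("See the <b>deadline</b> in the <i>notes</i>."))
# → <speak>See the <emphasis level="strong">deadline</emphasis> in the <emphasis level="moderate">notes</emphasis>.</speak>
```

A real system would also need to handle font-size (e.g., via SSML `<prosody>`), since half of the blind participants associate it with separating structural elements such as titles and footnotes.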
Keywords: Document accessibility · Text-signals · Typography · Font-size · Font-type
This research was partially funded by the National and Kapodistrian University of Athens, Special Account for Research Grants.