Image selection and annotation for an environmental knowledge base
Images play an important role in the representation and acquisition of specialized knowledge. Not surprisingly, terminological knowledge bases (TKBs) often include images as a way to enhance the information in concept entries. However, the selection of these images should not be random, but rather based on specific guidelines that take into account the type and nature of the concept being described. This paper presents a proposal for combining the features of images with the conceptual propositions in EcoLexicon, a multilingual TKB on the environment. This proposal is based on the following: (1) the combinatory possibilities of concept types; (2) image types, such as photographs, drawings, and flow charts; (3) morphological features or visual knowledge patterns (VKPs), such as labels, colours, and arrows, and their effect on the functional nature of each image type. Currently, images are stored in association with concept entries according to the semantic content of their definitions, but they are not described or annotated according to the parameters that guided their selection, which would undoubtedly contribute to the systematization and automation of the process. First, the images included in EcoLexicon were analyzed in terms of their adequacy, the semantic relations expressed, the concept types, and their VKPs. Then, with these data, guidelines for image selection and annotation were created. The final aim is twofold: (1) to systematize the selection of images and (2) to start annotating old and new images so that the system can automatically allocate them to different concept entries based on shared conceptual propositions.
Keywords: Knowledge representation · Image selection · Image annotation · EcoLexicon
This research was carried out within the framework of the project RECORD [Knowledge Representation in Dynamic Networks, FFI2011-22397] and the project CONTENT [Cognitive and Neurological Bases for Terminology-enhanced Translation], both funded by the Spanish Ministry of Economy and Competitiveness.
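The allocation mechanism described in the abstract (matching annotated images to concept entries through shared conceptual propositions) can be sketched in a few lines. This is a minimal illustration only: the class and field names below (`Proposition`, `ImageAnnotation`, `allocate`) are hypothetical and do not reflect EcoLexicon's actual data model.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Proposition:
    """A conceptual proposition: source concept, semantic relation, target concept."""
    source: str
    relation: str   # e.g. "causes", "part_of", "affects"
    target: str

@dataclass
class ImageAnnotation:
    """Hypothetical annotation record combining the selection parameters
    named in the paper: image type, VKPs, and conceptual propositions."""
    image_id: str
    image_type: str                     # e.g. "photograph", "drawing", "flow chart"
    vkps: frozenset                     # visual knowledge patterns: "labels", "arrows", "colours"
    propositions: set = field(default_factory=set)

def allocate(images, entry_propositions):
    """Return the images that share at least one conceptual
    proposition with a concept entry."""
    return [img for img in images if img.propositions & entry_propositions]

# Illustrative usage with invented environmental concepts.
erosion_entry = {
    Proposition("WAVE", "causes", "EROSION"),
    Proposition("EROSION", "affects", "COASTLINE"),
}
photo = ImageAnnotation(
    "img-001", "photograph", frozenset({"labels"}),
    {Proposition("WAVE", "causes", "EROSION")},
)
matches = allocate([photo], erosion_entry)   # photo shares one proposition
```

Under this sketch, annotating each image once with its propositions would let the system reuse the same image across every concept entry whose propositions overlap, which is the systematization the paper proposes.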