Automatic Annotation of Geographic Maps

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4061)


In this paper, we describe an approach to generate semantic descriptions of entities in city maps so that they can be presented through accessible interfaces. The solution we present processes bitmap images containing city map excerpts. Regions of interest in these images are extracted automatically based on colour information, and their geometric properties are then determined. The result of this process is a structured description of these regions based on the Geography Markup Language (GML), an XML-based format for the description of GIS data. This description can later serve as input to innovative presentations of spatial structures using haptic and auditory interfaces.
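The pipeline sketched in the abstract (colour-based region extraction, followed by geometric description in GML) can be illustrated with a minimal example. The following sketch is an assumption-laden toy, not the paper's implementation: the bitmap is a small grid of colour labels, regions are found by connected-component search, and each region's bounding box is written out using GML element names (`gml:Polygon`, `gml:outerBoundaryIs`, `gml:LinearRing`, `gml:coordinates`); the surrounding `Feature` element and its attributes are purely illustrative.

```python
from collections import deque
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"


def extract_regions(bitmap, colour):
    """Return connected regions (sets of (row, col) pixels) of one colour."""
    rows, cols = len(bitmap), len(bitmap[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if bitmap[r][c] != colour or (r, c) in seen:
                continue
            # Breadth-first flood fill over 4-connected same-colour pixels.
            queue, region = deque([(r, c)]), set()
            seen.add((r, c))
            while queue:
                y, x = queue.popleft()
                region.add((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and bitmap[ny][nx] == colour and (ny, nx) not in seen):
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            regions.append(region)
    return regions


def region_to_gml(region, feature_type):
    """Describe a region's bounding box as a gml:Polygon feature (illustrative)."""
    ys = [p[0] for p in region]
    xs = [p[1] for p in region]
    y0, y1, x0, x1 = min(ys), max(ys) + 1, min(xs), max(xs) + 1
    feature = ET.Element("Feature", {"type": feature_type})
    polygon = ET.SubElement(feature, f"{{{GML}}}Polygon")
    boundary = ET.SubElement(polygon, f"{{{GML}}}outerBoundaryIs")
    ring = ET.SubElement(boundary, f"{{{GML}}}LinearRing")
    coords = ET.SubElement(ring, f"{{{GML}}}coordinates")
    # Closed ring around the bounding box, listed counter-clockwise.
    coords.text = f"{x0},{y0} {x1},{y0} {x1},{y1} {x0},{y1} {x0},{y0}"
    return feature


# Toy map excerpt: 'P' marks a park, 'R' a road, '.' the background.
bitmap = ["..PP.",
          "..PP.",
          "RRRRR",
          "....."]
parks = extract_regions(bitmap, "P")
gml = ET.tostring(region_to_gml(parks[0], "park"), encoding="unicode")
```

A real system would of course trace actual region outlines rather than bounding boxes, but the shape of the output (one GML geometry per extracted colour region) is the same.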


Keywords: Geographic Information System, Semantic Information, Automatic Annotation, Semantic Annotation, Blind People





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  1. Center for Computing Technologies (TZI), Universität Bremen, Bremen, Germany
  2. OFFIS, Oldenburg, Germany
  3. Department of Computing Science, University of Oldenburg, Oldenburg, Germany
