Abstract

Detailed, consistent semantic annotation of large collections of multimedia data is difficult and time-consuming. In domains such as eScience, digital curation and industrial monitoring, fine-grained, high-quality labeling of regions enables advanced semantic querying, analysis and aggregation, and supports collaborative research. Manual annotation is inefficient and too subjective to be a viable solution. Automatic solutions are often highly domain- or application-specific, require large annotated training corpora and, when they rely on a ‘black box’ approach, add little to the overall scientific knowledge. This article evaluates the use of simple artificial neural networks to semantically annotate micrographs and discusses the generic process chain necessary for semi-automatic semantic annotation of images.
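The process chain referred to here typically comprises segmenting the image into regions, extracting low-level features (e.g. intensity, texture, shape) per region, and training a classifier that maps feature vectors to semantic labels. The sketch below illustrates that final classification step only; the feature set, the label vocabulary and the use of scikit-learn are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch: map per-region low-level feature vectors to semantic labels
# with a small feed-forward neural network. Features and labels below are
# placeholders, not data or categories from the original study.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical training data: one row per segmented region, columns are
# low-level descriptors (e.g. mean intensity, texture energy, region area).
X = np.random.rand(200, 8)
y = np.random.choice(["mitochondrion", "golgi", "background"], size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A deliberately simple network: one small hidden layer, in the spirit of the
# "simple artificial neural networks" evaluated in the article.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

With real data, the placeholder feature matrix would be replaced by descriptors computed per segmented region, and the predicted labels would then be attached to those regions as semantic annotations for querying and aggregation.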

Keywords

multimedia semantic annotation · semantic gap · artificial neural networks



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Suzanne Little (1)
  • Ovidio Salvetti (2)
  • Petra Perner (1)

  1. Institute of Computer Vision and Applied Computer Sciences, Germany
  2. ISTI-CNR, Pisa, Italy
