Modelling Ontology Evaluation and Validation

  • Aldo Gangemi
  • Carola Catenacci
  • Massimiliano Ciaramita
  • Jos Lehmann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4011)


We present a comprehensive approach to ontology evaluation and validation, which has become a crucial problem for the development of semantic technologies. Existing evaluation methods are integrated into a single framework by means of a formal model. This model consists, firstly, of a meta-ontology called O2, which characterises ontologies as semiotic objects. Based on O2 and an analysis of existing methodologies, we identify three main types of measures for evaluation: structural measures, which are typical of ontologies represented as graphs; functional measures, which relate to the intended use of an ontology and of its components; and usability-profiling measures, which depend on the level of annotation of the ontology under consideration. The meta-ontology is then complemented with an ontology of ontology validation called oQual, which provides the means to devise the best set of criteria for choosing an ontology over others in the context of a given project. Finally, we provide a small example of how to apply oQual-derived criteria to a validation case.
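To illustrate the first category above, a minimal sketch of two structural measures on an ontology viewed as a graph: taxonomy depth (longest is-a chain) and breadth per level. The toy taxonomy and function names are hypothetical illustrations, not the measures formally defined by the O2/oQual framework.

```python
from collections import deque

# Toy is-a taxonomy as an adjacency list (hypothetical example).
taxonomy = {
    "Entity": ["Object", "Event"],
    "Object": ["Agent", "Artifact"],
    "Event": [],
    "Agent": [],
    "Artifact": [],
}

def depth(graph, root):
    """Length (in edges) of the longest root-to-leaf is-a chain."""
    children = graph.get(root, [])
    if not children:
        return 0
    return 1 + max(depth(graph, c) for c in children)

def breadth_per_level(graph, root):
    """Number of nodes at each depth level, via breadth-first traversal."""
    levels, frontier = [], deque([root])
    while frontier:
        levels.append(len(frontier))
        frontier = deque(c for n in frontier for c in graph.get(n, []))
    return levels

print(depth(taxonomy, "Entity"))
print(breadth_per_level(taxonomy, "Entity"))
```

Measures of this kind can be computed without any knowledge of the ontology's intended use, which is precisely what distinguishes them from the functional measures discussed next.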


Keywords: Description Logic, Semantic Technology, Intended Conceptualization, Ontology Description, Competency Question



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Aldo Gangemi (1)
  • Carola Catenacci (1)
  • Massimiliano Ciaramita (1)
  • Jos Lehmann (1)

  1. Laboratory for Applied Ontology, ISTC-CNR, Roma, Italy
