Volume 94, Issue 3, pp 1239–1251

Iberian universities: a characterisation from ESI rankings

  • Tânia F. G. G. Cova
  • Alberto A. C. C. Pais
  • Sebastião J. Formosinho


Access to bibliographic and citation databases makes it possible to evaluate scientific performance and provides useful means of general characterisation. In this paper we investigate the clustering of Iberian universities that results from similarity in the number and specific nature of the scientific disciplines in which they appear in the Essential Science Indicators (ESI) database. Further refinement of the analysis by principal component analysis (PCA) clearly reveals the relationship between the universities and the scientific disciplines within the main groups. Similarity between universities is dictated not only by the number of areas in the ranking, but also by the nature of the ranked scientific areas and their specific combination in each university.
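The approach described above can be sketched in a minimal form: represent each university by the set of ESI disciplines in which it is ranked, cluster the resulting indicator matrix hierarchically, and inspect the grouping with PCA. The matrix below is hypothetical illustrative data, not values from the paper, and the choice of Ward linkage is an assumption.

```python
# Sketch of the two-step analysis: hierarchical clustering of universities
# by their ESI-ranked disciplines, followed by PCA on the same matrix.
# All data here are hypothetical placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical binary matrix: rows = universities, columns = ESI disciplines
# (1 = the university appears in the ESI ranking for that discipline).
universities = ["U_A", "U_B", "U_C", "U_D"]
disciplines = ["Chemistry", "Physics", "Clinical Medicine", "Engineering"]
X = np.array([
    [1, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 0],
], dtype=float)

# Hierarchical clustering on Euclidean distances (Ward's method, assumed here).
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(universities, labels)))

# PCA via SVD on the mean-centred matrix: the scores show how universities
# separate, and the loadings show which disciplines drive that separation.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s          # university coordinates in principal-component space
loadings = Vt           # discipline contributions to each component
print(np.round(scores[:, :2], 2))
```

In this toy example the first two universities share a science/engineering profile and the last two a medical one, so the cluster labels and the first principal component both recover the same split, illustrating how cluster membership and PCA scores can be read together.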


Keywords: Iberian universities · Ranking areas · Essential science indicators · Principal component analysis



Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2012

Authors and Affiliations

  • Tânia F. G. G. Cova (1)
  • Alberto A. C. C. Pais (1)
  • Sebastião J. Formosinho (1)

  1. Department of Chemistry, University of Coimbra, Coimbra, Portugal
