A new approach to measuring scientific production in JCR journals and its application to Spanish public universities


Scientific production has been evaluated from very different perspectives, the best known of which are essentially based on the impact factors of the journals included in the Journal Citation Reports (JCR). This has been no impediment to the simultaneous issuing of warnings about the dangers of their indiscriminate use when making comparisons, because the biases incorporated in the elaboration of these impact factors produce significant distortions that may invalidate the results obtained. Notable among such biases are those generated by differences in the propensity to cite across areas, journals and/or authors, by variations in the period over which impact materialises, and by the varying presence of knowledge areas in the sample of journals contained in the JCR. While the traditional evaluation method consists of standardisation by subject categories, recent studies have criticised this approach and offered new possibilities for making inter-area comparisons. In view of such developments, the present study proposes a novel approach to the measurement of scientific activity that attempts to lessen the aforementioned biases. It combines a new impact factor, calculated for each journal, with the grouping of the institutions under evaluation into homogeneous groups. An empirical application is undertaken to evaluate the scientific production of Spanish public universities in the year 2000. This application considers both the articles published in the multidisciplinary databases of the Web of Science (WoS) and the data on the journals contained in the Sciences and Social Sciences Editions of the Journal Citation Reports (JCR). All this information is provided by the Institute for Scientific Information (ISI) via its Web of Knowledge (WoK).


Fig. 1


Notes

  1. In theory, the normalisation required to homogenise the representation of the different fields in the sample should be performed by weighting the millions of citations handled every year by the following factor: 1/(total citations of articles published in JCR journals).
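The weighting described in this note can be illustrated with a toy sketch (invented field names and citation counts, not real JCR data): multiplying each field's citations by 1/(total citations) turns raw counts into shares of all JCR citations, which sum to one.

```python
# Toy sketch of the normalisation in footnote 1 (invented figures, not real JCR data).
# Every citation is weighted by 1 / (total citations of articles in JCR journals),
# so a field's weighted count becomes its share of all citations.
citations = {"Chemistry": 600_000, "Economics": 90_000, "History": 10_000}

total = sum(citations.values())        # total citations of JCR articles
weight = 1 / total                     # the factor quoted in the footnote
shares = {field: c * weight for field, c in citations.items()}

print(round(shares["Chemistry"], 3))   # ≈ 0.857: Chemistry's share of all citations
print(round(sum(shares.values()), 3))  # 1.0: field-size effects are normalised away
```

Under this weighting, a citation from a small field counts exactly as much as one from a large field, which is the sense in which field representation is homogenised.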

  2. The Carnegie Foundation’s classification distinguishes between universities which offer the complete range of higher education, including doctoral degrees (Doctorate-granting Institutions or Doctoral/Research Universities), those which teach up to the Master’s level (Master’s Colleges and Universities) and those which focus principally on Bachelor’s degrees (Baccalaureate Colleges). Maclean’s University Rankings distinguishes between Primarily Undergraduate Universities (those largely dedicated to undergraduates), Comprehensive Universities (which offer an extensive range of qualifications for undergraduates and graduates and receive significant revenue from their research activity) and, lastly, Medical-Doctoral Universities (which offer a wide range of doctoral and research programmes and include medical faculties).

  3. Scientific journals are the chosen medium for 85% of all works published in the Scientific-Technical areas. In the Humanities and Social Sciences they represent only 40% (books are the chosen medium for 48% of publications in the Humanities). Moreover, national journals are the natural vehicle for contributions to the Social Sciences in Spain (Gobierno de Aragón 2004).

  4. These areas are: Mathematics and Physics; Chemistry; Cellular and Molecular Biology; Biomedical Sciences; Natural Sciences; Engineering and Architecture; Social, Political and Behavioural Sciences; Economic and Business Sciences; Law, History and Art; Philosophy, Philology and Linguistics.

  5. The intention here is to relativise universities’ production with regard to their size. Although the disadvantage of this relativisation may reside in how widely teaching/research loads vary among universities, we believe that in the case of the SPUs it is plausible to assume that they all have a similar distribution of teaching and research, since this is established by the relevant legislation.

  6. For the elaboration of these two blocks it was necessary to establish a criterion to determine which area an article belonged to. It was decided to follow the classification of the JCR rather than that of the Citation Indexes (SCI and SSCI). Since some journals are included in both subsets, their impacts were divided proportionally between the two macro-areas.
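The proportional division described in this note can be sketched as follows. The footnote does not specify the proportion used, so this example assumes, as a hypothetical choice, that a journal listed in both editions has its impacts split according to the number of JCR subject categories it holds in each edition; the function name and figures are invented.

```python
# Hypothetical sketch of footnote 6: a journal indexed in both the Sciences and
# Social Sciences editions of the JCR has its impacts divided between the two
# macro-area blocks. Assumption (not stated in the text): the split is
# proportional to the number of subject categories held in each edition.
def split_impacts(impacts: float, sci_categories: int, soc_categories: int):
    total = sci_categories + soc_categories
    return (impacts * sci_categories / total,  # share assigned to the Sciences block
            impacts * soc_categories / total)  # share assigned to the Social Sciences block

# A journal with 2 Sciences categories and 1 Social Sciences category:
sci, soc = split_impacts(90.0, 2, 1)
print(sci, soc)  # 60.0 30.0
```

Whatever proportion is chosen, the key property is that the two shares always sum to the journal's total impacts, so nothing is double-counted across the macro-areas.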


References

  1. Aksnes, D. W. (2006). Citation rates and perceptions of scientific contribution. Journal of the American Society for Information Science and Technology, 57(2), 169–185.

  2. Archambault, E., & Larivière, V. (2009). History of the journal impact factor: Contingencies and consequences. Scientometrics, 79(3), 635–649.


  3. Braun, T. (1999). Bibliometric indicators for the evaluation of universities—intelligence from the quantitation of the scientific literature. Scientometrics, 45(3), 425–432.


  4. Buchanan, R. A. (2006). Accuracy of cited references—the role of citation databases. College and Research Libraries, 67(4), 292–303.


  5. Garfield, E. (1996). How can impact factors be improved?. British Medical Journal, 313(7054), 411–413.


  6. Garfield, E. (1998). Long-term vs. short-term journal impact: part ii cumulative impact factors. The Scientist, 12(14), 12–13.


  7. Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193.


  8. Gobierno de Aragón, D. d. C. y. T. (2004). II Plan Autonómico de Investigación, Desarrollo y Transferencia de Conocimientos de Aragón: [II PAID 2005–2008]. Zaragoza: Gobierno de Aragón, Departamento de Ciencia y Tecnología.

  9. Gómez-Sancho, J. M., & Mancebón-Torrubia, M. J. (2009). The evaluation of scientific production: Towards a neutral impact factor. Scientometrics, 81(2), 435–458.


  10. Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). New York: Prentice-Hall.


  11. Hernández Armenteros, J. (2002). La universidad española en cifras (2002) Información académica, productiva y financiera de las universidades públicas españolas Indicadores universitarios. Madrid: Conferencia de Rectores de las Universidades Españolas (CRUE).


  12. Jacsó, P. (2006). Deflated, inflated and phantom citation counts. Online Information Review, 30(3), 297–309.


  13. Kostoff, R. N. (2002). Citation analysis of research performer quality. Scientometrics, 53(1), 49–71.


  14. Leydesdorff, L. (2008). Caveats for the use of citation indicators in research and journal evaluations. Journal of the American Society for Information Science and Technology, 59(2), 278–287.


  15. Moed, H. F. (2006). Bibliometric rankings of world universities. CWTS Report 2006-01. The Netherlands: Centre for Science and Technology Studies (CWTS), Leiden University.

  16. Moed, H. F., & Van Leeuwen, T. N. (1995). Improving the accuracy of institute for scientific information’s journal impact factors. Journal of the American Society for Information Science and Technology, 46(6), 461–467.


  17. Moed, H. F., & Van Leeuwen, T. N. (1996). Impact factors can mislead. Nature, 381, 186.


  18. Moed, H. F., & Vriens, M. (1989). Possible inaccuracies occurring in citation analysis. Journal of Information Science, 15, 95–107.


  19. Moed, H. F., De Bruin, R. E., & Van Leeuwen, T. N. (1995). New bibliometric tools for the assessment of national research performance—database description, overview of indicators and first applications. Scientometrics, 33(3), 381–422.


  20. Moed, H. F., Van Leeuwen, T. N., & Reedijk, J. (1999). Towards appropriate indicators of journal impact. Scientometrics, 46(3), 575–589.


  21. Moya Anegon, F., Chinchilla Rodríguez, Z., Corera Alvarez, E., Gómez Crisóstomo, M. R., González Molina, A., Muñoz Fernández, F. J., et al. (2007). Indicadores bibliométricos de la actividad científica española (1990–2004). Madrid: FECYT.


  22. Mueller, P. S., Murali, N. S., Cha, S. S., Erwin, P. J. Y., & Ghosh, A. K. (2006). The association between impact factors and language of general internal medicine journals. Swiss Medical Weekly, 136(27/28), 441–443.


  23. Opthof, T., & Leydesdorff, L. (2010). Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance. Journal of Informetrics. doi:10.1016/j.joi.2010.02.003

  24. Rinia, E. J., van Leeuwen, T. N., Bruins, E. E. W., Van Vuren, H. G., & Van Raan, A. F. J. (2001). Citation delay in interdisciplinary knowledge exchange. Scientometrics, 51(1), 293–309.


  25. Rousseau, R. (2005). Median and percentile impact factors: A set of new indicators. Scientometrics, 63(3), 431–441.


  26. Schubert, A., & Braun, T. (1996). Cross-field normalization of scientometric indicators. Scientometrics, 36(1), 311–324.


  27. Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314(7079), 498–502.


  28. Sombatsompop, N., Markpin, T., & Premkamolnetr, N. (2004). A modified method for calculating the impact factors of journals in ISI journal citation reports—polymer science category in 1997–2001. Scientometrics, 60(2), 235–271.


  29. Wallin, J. A. (2005). Bibliometric methods: Pitfalls and possibilities. Basic and Clinical Pharmacology and Toxicology, 97(5), 261–275.


  30. Zitt, M., & Small, H. (2008). Modifying the journal impact factor by fractional citation weighting: The audience factor. Journal of the American Society for Information Science and Technology, 59(11), 1856–1860.


  31. Zitt, M., Ramana-Rahary, S., & Bassecoulard, E. (2005). Relativity of citation performance and excellence measures: From cross-field to cross-scale effects of field-normalisation. Scientometrics, 63(2), 373–401.




Acknowledgements

The authors would like to thank the two anonymous reviewers for their useful and constructive comments. Any errors in the article are the responsibility of the authors.

Author information



Corresponding author

Correspondence to José María Gómez-Sancho.



Cite this article

Gómez-Sancho, J.M., Mancebón-Torrubia, M.J. A new approach to measuring scientific production in JCR journals and its application to Spanish public universities. Scientometrics 85, 271–293 (2010). https://doi.org/10.1007/s11192-010-0217-5



Keywords

  • Research evaluation
  • Universities
  • Journal impact factor