Scientometrics, Volume 99, Issue 3, pp 615–630

Best-in-class and strategic benchmarking of scientific subject categories of Web of Science in 2010

  • J. A. García
  • Rosa Rodriguez-Sánchez
  • J. Fdez-Valdivia
  • Nicolas Robinson-García
  • Daniel Torres-Salinas

Abstract

We present a novel technique for comparing subject categories in which the prestige of the academic journals in each category is represented statistically by an impact-factor histogram. For each subject category we compute the probability that a scholarly journal's impact factor falls into each of a set of intervals, where impact is measured by the Thomson Reuters Impact Factor, the Eigenfactor Score, and the Immediacy Index. Given the probability distributions associated with a pair of subject categories, our objective is to measure the degree of dissimilarity between them. To do so, we use an axiomatic characterization for predicting the dissimilarity between subject categories. The scientific subject categories of Web of Science in 2010 were used to test the proposed approach in two case studies: benchmarking Cell Biology, and then Computer Science Information Systems, against the remaining categories. The former is best-in-class benchmarking, which involves studying the leading competitor category; the latter is strategic benchmarking, which involves observing how other scientific subject categories compete.
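
To make the histogram comparison concrete, the Python sketch below builds impact-factor histograms for two categories and scores their dissimilarity. The bin edges and journal impact factors are hypothetical, and the Kullback-Leibler divergence is used only as an illustrative stand-in for the paper's axiomatically characterized dissimilarity measure, which is not reproduced here.

```python
import numpy as np

def impact_factor_histogram(impact_factors, bin_edges):
    """Probability of occurrence of journals whose impact factor
    falls into each interval defined by bin_edges."""
    counts, _ = np.histogram(impact_factors, bins=bin_edges)
    return counts / counts.sum()

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two histograms.
    Illustrative stand-in only; the paper derives its own dissimilarity
    measure from an axiomatic characterization."""
    p = np.clip(p, eps, None)  # avoid log(0) in empty bins
    q = np.clip(q, eps, None)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical impact-factor intervals and journal data for two categories.
bin_edges = [0, 1, 2, 4, 8, 16, 64]
cell_biology = [0.8, 2.3, 3.1, 5.7, 9.4, 14.2, 31.5]
cs_information_systems = [0.4, 0.9, 1.2, 1.8, 2.6, 3.9, 5.1]

p = impact_factor_histogram(cell_biology, bin_edges)
q = impact_factor_histogram(cs_information_systems, bin_edges)
print(f"Dissimilarity D(p || q): {kl_divergence(p, q):.3f}")
```

Note that D(p || q) is asymmetric, so the score depends on which category is taken as the reference distribution.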

Keywords

Scientific subject categories · Web of Science · Impact-factor histogram · Cell biology · Computer science information systems · Benchmarking

Acknowledgments

This research was sponsored by the Spanish Board for Science and Technology (MICINN) under grant TIN2010-15157, co-financed with European FEDER funds. Nicolás Robinson-García is currently supported by an FPU grant from the Ministerio de Educación y Ciencia of the Spanish Government. Thanks are due to the reviewers for their constructive suggestions.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2013

Authors and Affiliations

  • J. A. García (1)
  • Rosa Rodriguez-Sánchez (1)
  • J. Fdez-Valdivia (1)
  • Nicolas Robinson-García (2)
  • Daniel Torres-Salinas (3)

  1. Departamento de Ciencias de la Computación e I.A., CITIC-UGR, Universidad de Granada, Granada, Spain
  2. EC3: Evaluación de la Ciencia y la Comunicación Científica, Universidad de Granada, Granada, Spain
  3. EC3: Evaluación de la Ciencia y la Comunicación Científica, Centro de Investigación Médica Aplicada, Universidad de Navarra, Pamplona, Navarra, Spain
