Scientometrics, Volume 84, Issue 2, pp 317–320

A cautionary bibliometric tale of two cities

  • G. E. Derrick
  • H. Sturk
  • A. S. Haynes
  • S. Chapman
  • W. D. Hall

Abstract

Reliability of citation searches is a cornerstone of bibliometric research. The authors compare simultaneous search returns at two sites to demonstrate the discrepancies that can occur as a result of differences in institutional subscriptions to the Web of Science and Web of Knowledge. Such discrepancies have significant implications not only for the reliability of bibliometric research in general, but also for the calculation of the individual and group indices used in promotion and funding decisions. The authors urge care when describing the methods used in bibliometric analyses and when evaluating researchers from different institutions. In both situations, a description of the specific databases used would enable greater reliability.
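
Among the indices at stake is Hirsch's h-index, which is computed directly from the citation counts a search returns, so any subscription-driven difference in those counts can shift the value itself. The sketch below is not from the paper; all citation counts are invented for illustration. It shows, in Python, how the same publication list can yield different h values at two sites whose subscriptions return different counts.

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has
    h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Invented citation counts for the same publication list, as a
# fuller and a narrower institutional subscription might return them.
site_a = [25, 18, 12, 9, 7, 5, 3, 1]  # fuller coverage
site_b = [21, 15, 9, 6, 4, 2, 1]      # narrower coverage

print(h_index(site_a))  # 5
print(h_index(site_b))  # 4
```

Even a one-citation difference near the threshold is enough to move the index, which is why recording the exact databases searched matters for comparability.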

Keywords

Web of Science · Web of Knowledge · Institutional subscriptions · Evaluative bibliometrics

Supplementary material

11192_2009_118_MOESM1_ESM.doc (325 kb)
Supplementary Table (DOC 325 kb)

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2010

Authors and Affiliations

  • G. E. Derrick (1)
  • H. Sturk (2)
  • A. S. Haynes (1)
  • S. Chapman (1)
  • W. D. Hall (2)

  1. Sydney School of Public Health, Edward Ford Building, The University of Sydney, Sydney, Australia
  2. School of Population Health, University of Queensland, Brisbane, Australia
