Reliability of citation searches is a cornerstone of bibliometric research. The authors compare simultaneous search returns at two sites to demonstrate the discrepancies that can arise from differences in institutional subscriptions to the Web of Science and Web of Knowledge. Such discrepancies have significant implications not only for the reliability of bibliometric research in general, but also for the calculation of individual and group indices used in promotion and funding decisions. The authors urge care when describing the methods used in bibliometric analyses and when evaluating researchers from different institutions; in both situations, a description of the specific databases used would enable greater reliability.
Keywords: Web of Science · Web of Knowledge · Institutional subscriptions · Evaluative bibliometrics
- Garfield, E. (1990). Journal editors awaken to the impact of citation errors. How we control them at ISI. Current Contents, 41, 5–13.
- Jacso, P. (2005). As we may search—comparison of major features of the Web of Science, Scopus and Google Scholar citation-based and citation-enhanced databases. Current Science, 89(9), 1537–1547.
- Osca-Lluch, J., Molla, C. C., & Ortega, M. P. (2009). Consequences of the error in bibliographical references. Psicothema, 21(2), 300–303.
- Sorensen, A. A. (2009). Alzheimer’s disease research: Scientific productivity and impact of the top 100 investigators in the field. Journal of Alzheimers Disease, 16(3), 451–465.
- Thompson, D. F., Callen, E. C., & Nahata, M. C. (2009). Publication metrics and record of pharmacy practice chairs. Annals of Pharmacotherapy, 43(2), 268–275.