Scientometrics, Volume 41, Issue 3, pp 325–333

Citation ranking versus expert judgment in evaluating communication scholars: Effects of research specialty size and individual prominence

  • C. Y. K. So

Abstract

Numerous attempts have been made to validate the use of citations as an evaluation method by comparing citation counts with peer review. Unlike past studies that used journals, research articles, or universities as the unit of analysis, the present study extends the comparison to the ranking of individual scholars. Results show that citation ranking and expert judgment of communication scholars are highly correlated. Both methods are found to work better in smaller research specialties and to yield more valid evaluations for more prominent scholars.
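As a rough illustration of the comparison the abstract describes, the sketch below correlates a citation-based ranking with expert prominence ratings using Spearman's rank correlation. The scholar labels, citation counts, ratings, and the use of SciPy are illustrative assumptions, not data or procedures taken from the study.

```python
# Minimal sketch (hypothetical data): comparing a citation ranking with
# expert judgments via Spearman's rank correlation, the kind of
# association statistic such validation studies typically report.
from scipy.stats import spearmanr

# Hypothetical citation counts and mean expert prominence ratings
citation_counts = {"A": 412, "B": 388, "C": 205, "D": 160, "E": 95}
expert_ratings = {"A": 4.8, "B": 4.2, "C": 4.4, "D": 3.1, "E": 2.7}

scholars = sorted(citation_counts)
rho, p_value = spearmanr(
    [citation_counts[s] for s in scholars],
    [expert_ratings[s] for s in scholars],
)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # high rho -> rankings agree
```

A high rho would indicate that the two evaluation methods order the scholars similarly, which is the sense in which the abstract reports the rankings to be highly correlated.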


Notes and references

  1. R. C. Anderson, F. Narin, P. McAllister, Publication ratings versus peer ratings of universities, Journal of the American Society for Information Science, 29 (1978) 91–103.
  2. M. E. D. Koenig, Determinants of expert judgement of research performance, Scientometrics, 4 (1982) 361–378.
  3. M. E. D. Koenig, Bibliometric indicators versus expert opinion in assessing research performance, Journal of the American Society for Information Science, 34 (1983) 136–145.
  4. S. M. Lawani, A. E. Bayer, Validity of citation criteria for assessing the influence of scientific publications: New evidence with peer assessment, Journal of the American Society for Information Science, 34 (1983) 59–66.
  5. A. L. Porter, D. E. Chubin, X. Jin, Citations and scientific progress: Comparing bibliometric measures with scientific judgments, Scientometrics, 13 (1988) 103–124.
  6. P. R. McAllister, R. C. Anderson, F. Narin, Comparison of peer and citation assessment of the influence of scientific journals, Journal of the American Society for Information Science, 31 (1980) 147–152.
  7. M. D. Gordon, Citation ranking versus subjective evaluation in the determination of journal hierarchies in the social sciences, Journal of the American Society for Information Science, 33 (1982) 55–57.
  8. A. Singleton, Journal ranking and selection: A review in physics, Journal of Documentation, 32 (1976) 258–289.
  9. H. H. Garrison, S. S. Herman, J. A. Lipton, Measuring characteristics of scientific research: A comparison of bibliographic and survey data, Scientometrics, 24 (1992) 359–370.
  10. The citation data collected in 1985–87 and the questionnaire data collected in 1991 were both gathered for the author's dissertation research. Collecting both types of data simultaneously would have been ideal, but the time lapse between the two data sets should not pose a major threat to validity, because overall citation patterns and peer judgments usually remain quite stable over a short period. For further details on the data collection methods, see C. Y. K. So, Mapping the Intellectual Landscape of Communication Studies: An Evaluation of Its Disciplinary Status, Unpublished Ph.D. dissertation, Annenberg School for Communication, University of Pennsylvania, 1995 (UMI Number 9615129).
  11. D. Crane, Social structure in a group of scientists: A test of the “Invisible College” hypothesis, American Sociological Review, 34 (1969) 335–352.
  12. E. Garfield, Citation Indexing: Its Theory and Application in Science, Technology, and Humanities, ISI Press, Philadelphia, 1979.

Copyright information

© Akadémiai Kiadó 1998

Authors and Affiliations

  • C. Y. K. So
  1. Department of Journalism & Communication, Chinese University of Hong Kong, Shatin, N.T., Hong Kong
