Scientometrics, Volume 85, Issue 2, pp 471–487

Citation analysis and peer ranking of Australian social science journals

  • Gaby Haddow
  • Paul Genoni

Abstract

Citation analyses were performed for Australian social science journals to determine the differences between data drawn from Web of Science and Scopus. These data were compared with the tier rankings assigned to the journals by disciplinary groups for the purposes of a new research assessment model, Excellence in Research for Australia (ERA), due to be implemented in 2010. In addition, citation-based indicators, including an extended journal impact factor, the h-index, and a modified journal diffusion factor, were calculated to assess whether these indicators alter the ranking of journals. The findings suggest that the Scopus database provides a higher number of citations for more of the journals. However, there appears to be very little association between the assigned tier ranking of journals and their rank derived from citation data. The implications for Australian social science researchers are discussed in relation to the use of citation analysis in the ERA.
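
All three indicators can be computed from simple per-article citation records. The sketch below shows plausible forms of each calculation under stated assumptions; it is not the authors' implementation. The citation window for the extended impact factor is illustrative, and the conventional journal diffusion factor (distinct citing journals per 100 citations) stands in for the modified variant the paper describes. All data values are hypothetical.

```python
# Illustrative sketches of the three citation-based indicators named in
# the abstract, computed from per-article citation data for one journal.
# Assumptions: window sizes and the form of the "modified" diffusion
# factor are not specified in the abstract, so standard forms are used.

def h_index(citations_per_article):
    """Largest h such that h articles have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations_per_article, reverse=True), 1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_received, items_in_window):
    """Citations received in the census year to items published in the
    preceding window, divided by the number of items in that window.
    Lengthening the window (e.g. from two to five years) gives an
    'extended' impact factor."""
    return citations_received / items_in_window if items_in_window else 0.0

def diffusion_factor(citing_journal_per_citation):
    """Distinct citing journals per 100 citations (the conventional
    journal diffusion factor); the paper's modified variant may differ."""
    citations = citing_journal_per_citation
    if not citations:
        return 0.0
    return 100 * len(set(citations)) / len(citations)

# Hypothetical data for one journal.
cites = [12, 9, 7, 5, 5, 3, 1, 0, 0]            # citations per article
sources = ["J. Sociol.", "Aust. J. Educ.", "J. Sociol.",
           "Scientometrics", "J. Doc."]         # journal of each citation

print(h_index(cites))                           # 5
print(impact_factor(sum(cites), len(cites)))    # ~4.67 with this toy window
print(diffusion_factor(sources))                # 80.0
```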

Keywords

Citation analysis · Social science journals · Research assessment · Citation sources · Australia · Journal ranking

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2010

Authors and Affiliations

Department of Information Studies, Curtin University of Technology, Perth, Australia
