Scientometrics, Volume 101, Issue 3, pp 1715–1729

Rankings, research styles, and publication cultures: a study of American sociology departments

Abstract

Rankings have become a major form of quality assessment in higher education over the past few decades. Most rankings rely, to varying extents, on bibliometric indicators intended to capture the quantity and quality of the scientific output of institutions. The growing popularity of this practice has raised a number of concerns, one of the most important being whether evaluations of this sort treat different work styles and publication habits in an unbiased manner and, consequently, whether the resulting rankings properly respect the particular modes of research characteristic of various disciplines and subdisciplines. The research reported in this paper examined this issue using data on more than one hundred US sociology departments. Our results showed that institutions that are more quantitative in character are more likely to favor journals over books as the dominant form of scientific communication and, in general, fare considerably better on the National Research Council’s assessment than their qualitative counterparts. After controlling for differences in publication practices, the impact of research style declined but remained statistically significant. It thus seems that the greater preference of qualitative departments for books over articles as publication outlets puts them at a disadvantage in quality assessments, although this factor alone cannot fully explain their lag behind their quantitative counterparts.
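The before-and-after comparison described in the abstract corresponds to a pair of nested regression models. A minimal sketch only, with variable names chosen by us for illustration rather than taken from the paper:

$$\text{NRC}_i = \beta_0 + \beta_1\,\text{QuantStyle}_i + \varepsilon_i$$

$$\text{NRC}_i = \beta_0' + \beta_1'\,\text{QuantStyle}_i + \beta_2\,\text{JournalShare}_i + \varepsilon_i'$$

Here $\text{NRC}_i$ stands for department $i$'s assessment score, $\text{QuantStyle}_i$ for its degree of quantitative orientation, and $\text{JournalShare}_i$ for the share of journal articles in its publication output (all hypothetical labels). A drop from $\beta_1$ to a smaller but still statistically significant $\beta_1'$ once the publication-practice control is added is the pattern the abstract reports.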

Keywords

Academic publishing · Research methods · Scientific paradigm · Sociology of science · University rankings

Mathematics Subject Classification

62-07 62J05 

JEL Classification

B500 Y800 Z00 

Acknowledgments

The research reported in this paper was supported by the Hungarian Scientific Research Fund, Grant no. T049106. Katalin Bander, Hanna Kónya, Árpád Rab, Boglárka Simon, and Ágnes Sántha provided invaluable help in the data collection, which is gratefully acknowledged. We also thank Akos Rona-Tas, University of California, San Diego, for useful comments and suggestions.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2013

Authors and Affiliations

  • Ferenc Moksony (1)
  • Rita Hegedűs (1)
  • Melinda Császár (1)

  1. Institute of Sociology and Social Policy, Corvinus University of Budapest, Budapest, Hungary