Rankings, research styles, and publication cultures: a study of American sociology departments

Abstract

Rankings have become a major form of quality assessment in higher education over the past few decades. Most rankings rely, to varying extents, on bibliometric indicators intended to capture the quantity and quality of the scientific output of institutions. The growing popularity of this practice has raised a number of concerns, one of the most important being whether evaluations of this sort treat different work styles and publication habits in an unbiased manner and, consequently, whether the resulting rankings properly respect the modes of research characteristic of various disciplines and subdisciplines. The research reported in this paper examined this issue, using data on more than one hundred US sociology departments. Our results showed that institutions that are more quantitative in character are more likely to favor journals over books as the dominant form of scientific communication and fare, in general, considerably better on the National Research Council’s assessment than their qualitative counterparts. After controlling for differences in publication practices, the impact of research style declined but remained statistically significant. It thus seems that the greater preference of qualitative departments for books over articles as publication outlets puts them at a disadvantage in quality assessments, although their lagging behind their quantitative counterparts cannot be fully explained by this factor alone.

Notes

  1. SCI has included a select group of books (published in 2005 or later) only since the second half of 2011. Google Scholar, which is sometimes recommended as an alternative to SCI, has more extensive coverage of books and other similar items, but at the price, it seems, of sacrificing quality for quantity. It fairly often produces nonsensical results, as when people merely mentioned in the main text are “promoted” to co-authors (for examples, see Jacsó 2008; see also his Google Scholar’s Hall of Shame at http://www.jacso.info/gs-hos/). It should be noted, though, that Google’s service is continuously improving, and with the launch of Google Scholar Citations in 2011, researchers are now able to check and correct their publication records before requesting citation statistics.

  2. “General Search” does not take references to books into account at all, whereas “Cited Reference Search” only captures references coming from journals covered by SCI. References coming from other books are completely ignored, which is no small issue given that, as the study by Cronin et al. (1997) has shown, the list of sociologists most frequently cited in books only partially overlaps the list of those most frequently cited in journal articles.

  3. Obviously, the presence or absence of a dominant paradigm is just one among the many factors that shape publication habits, albeit clearly a crucial one. Differences in the subject matter and substantive content of individual disciplines probably also play an important role, although their effects can be difficult to distinguish from those of paradigm development. The widespread use of statistical procedures, for example, has been shown to correlate negatively with the “hardness” or “scientificity” of research areas (Smith et al. 2002). Whether this finding reflects cross-field variation in scientific “maturity” is hard to tell. It is also possible that the more frequent use of statistical methods in “softer” disciplines reflects the fact that the phenomena investigated in these branches of science are more stochastic or multi-causal in nature and that, in the absence of controlled experiments, multivariate analytic techniques are needed to separate competing causal mechanisms.

  4. While not directly pertaining to publication practices, the studies by Schachter et al. (1991, 1994) are also worth mentioning here, as they illustrate the relationship between paradigm development and the compactness of knowledge expression. Analyzing tape-recorded lectures from various disciplines, the authors found that teachers in the natural sciences used fewer different words on average than their colleagues in the social sciences and humanities, and that the mean number of filled pauses during lectures was also much smaller.

  5. This is true even though authors’ affiliations are used to make some institutional comparisons, mainly between private and public universities. These comparisons, however, do not involve research styles as a separate variable.

  6. The list of these institutions was taken from Julian Dierkes’ web site at http://www.sociolog.com/links/index.html.

  7. In this part of the study, 54 of the 113 departments were rated independently by three persons, based on variables such as the topics of Ph.D. theses and the characteristics of special programs offered by departments. The three ratings were then combined into a single overall assessment, distinguishing quantitative, qualitative, and mixed institutions (a sketch of one possible combination rule follows these notes).

  8. Often labeled as the “big 3”, these journals are commonly regarded as the most important and most prestigious outlets available for sociologists (see e.g., Hargens and Bott 1991; Allen 2003; Oromaner 2008).

  9. Ranges were produced using repeated resampling. From the entire pool of faculty members asked about the importance of various program features, a large number of random samples were drawn one after the other, each providing one possible ranking of institutions. This process gave rise, for each department, to a whole range of rankings. Of these rankings, the best and worst 5 % were then cut off, leaving the lower and upper end points that were eventually published (a minimal sketch of this procedure follows these notes).

  10. Examination of scatter plots and analysis of residuals identified four outlying observations. To see how much these deviant data points might have affected our findings, we reran the regression with the outliers deleted, but both the coefficient for the independent variable and its significance level remained practically unchanged, so we decided to report results for the full data set (a sketch of this robustness check follows these notes).

  11. Adjusted mean ranks were obtained from analysis of covariance, with research style used as a fixed factor and the principal component score capturing publication habits entered as a covariate (a sketch of this adjustment follows these notes).

  12. To save space, these results are not shown in the paper but are available from the authors on request.
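
The following is a hypothetical sketch, in Python, of how the three independent ratings of note 7 might be merged into one classification. The paper does not specify the actual decision rule, so majority vote with “mixed” as the fallback for three-way disagreement is purely an assumption, and the function name is illustrative.

```python
# A hypothetical sketch for note 7: merging three independent ratings
# into one overall classification. The actual rule used by the authors
# is not described in the paper; majority vote with "mixed" as the
# fallback is an assumption made here for illustration.
from collections import Counter

def combine_ratings(r1: str, r2: str, r3: str) -> str:
    counts = Counter([r1, r2, r3])
    label, n = counts.most_common(1)[0]
    # If at least two raters agree, adopt the majority label;
    # otherwise classify the department as "mixed".
    return label if n >= 2 else "mixed"

print(combine_ratings("quantitative", "quantitative", "mixed"))  # quantitative
print(combine_ratings("quantitative", "qualitative", "mixed"))   # mixed
```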
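Below is a minimal sketch of the resampling procedure described in note 9, using simulated rating data. The numbers of raters, departments, and resamples, and the rating scale itself, are all invented for illustration; only the cut of the best and worst 5 % comes from the note.

```python
# A minimal sketch of the resampling behind the published rank ranges
# (note 9). All sizes and the rating scale are invented.
import numpy as np

rng = np.random.default_rng(0)

n_raters, n_departments = 500, 10
ratings = rng.normal(loc=3.0, scale=1.0, size=(n_raters, n_departments))

n_resamples = 5000
ranks = np.empty((n_resamples, n_departments), dtype=int)

for i in range(n_resamples):
    # Draw a random sample of raters, with replacement.
    sample = ratings[rng.integers(0, n_raters, size=n_raters)]
    # Higher mean score -> better (i.e., numerically lower) rank.
    ranks[i] = (-sample.mean(axis=0)).argsort().argsort() + 1

# Cutting the best and worst 5 % of each department's simulated ranks
# leaves the lower and upper end points that would be published.
lower = np.percentile(ranks, 5, axis=0)
upper = np.percentile(ranks, 95, axis=0)
for d in range(n_departments):
    print(f"Department {d}: rank range {lower[d]:.0f}-{upper[d]:.0f}")
```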
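The robustness check in note 10 can be sketched as follows with statsmodels. The simulated data, the variable names, and the outlier criterion (|studentized residual| > 2) are assumptions; the paper reports only that scatter plots and residual analysis flagged four observations.

```python
# A sketch of the outlier robustness check in note 10, on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
style_score = rng.normal(size=113)  # hypothetical research-style measure
rank = 50 + 10 * style_score + rng.normal(scale=5, size=113)

X = sm.add_constant(style_score)
full_fit = sm.OLS(rank, X).fit()

# Flag observations with large studentized residuals as outliers.
resid = full_fit.get_influence().resid_studentized_internal
outliers = np.abs(resid) > 2
trimmed_fit = sm.OLS(rank[~outliers], X[~outliers]).fit()

# If the slope and its p value barely move, the outliers are harmless
# and results for the full data set can be reported.
print(f"full:    b = {full_fit.params[1]:.3f}, p = {full_fit.pvalues[1]:.4f}")
print(f"trimmed: b = {trimmed_fit.params[1]:.3f}, p = {trimmed_fit.pvalues[1]:.4f}")
```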
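Finally, a minimal sketch of the covariance adjustment in note 11: research style enters as a fixed factor, the publication-habits principal component score as a covariate, and adjusted mean ranks are computed as model predictions at the mean of the covariate. All data and effect sizes below are simulated; "pub_pc" and "nrc_rank" are illustrative names.

```python
# A minimal ANCOVA sketch for note 11; "pub_pc" stands in for the
# publication-habit principal component score. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 113
df = pd.DataFrame({
    "style": rng.choice(["quantitative", "mixed", "qualitative"], size=n),
    "pub_pc": rng.normal(size=n),
})
# Simulate NRC-style ranks (lower is better) with a style effect and
# a covariate effect built in, purely for illustration.
style_effect = df["style"].map(
    {"quantitative": -10.0, "mixed": 0.0, "qualitative": 10.0})
df["nrc_rank"] = (50 + style_effect + 5 * df["pub_pc"]
                  + rng.normal(scale=8, size=n))

# Fixed factor (style) plus covariate (pub_pc), as in the note.
model = smf.ols("nrc_rank ~ C(style) + pub_pc", data=df).fit()

# Adjusted mean ranks: predicted ranks for each style at the mean
# covariate value, i.e. holding publication habits constant.
grid = pd.DataFrame({
    "style": ["quantitative", "mixed", "qualitative"],
    "pub_pc": df["pub_pc"].mean(),
})
print(model.predict(grid))
```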

References

  • Aaltojärvi, I., Arminen, I., Auranen, O., & Pasanen, H.-M. (2008). Scientific productivity, web visibility and citation patterns in sixteen Nordic sociology departments. Acta Sociologica, 51(1), 5–22.

  • Achen, C. (1991). What does “explained variance” explain? Political Analysis, 2, 173–184.

  • Allen, M. P. (2003). The ‘core influence’ of journals in sociology revisited. ASA Footnotes, 31. Retrieved September 14, 2013 from the World Wide Web: http://www2.asanet.org/footnotes/dec03/fn11.html.

  • American Sociological Association. (2007). Guide to graduate departments of sociology.

  • American Sociological Association. (2011). Report to the American Sociological Association Council Regarding the 2010 National Research Council Assessment of Doctorate Programs.

  • Bar-Ilan, J. (2008). Which h-index?—A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74, 257–271.

  • Bastedo, M. N., & Bowman, N. A. (2010). US News and World Report college rankings: Modeling institutional effects on organizational reputation. American Journal of Education, 116, 163–183.

  • Berk, R. A. (2005). Survey of 12 strategies to measure teaching effectiveness. International Journal of Teaching and Learning in Higher Education, 17(1), 48–62.

  • Biglan, A. (1973a). The characteristics of subject matter in different academic areas. Journal of Applied Psychology, 57, 195–203.

  • Biglan, A. (1973b). Relationships between subject matter characteristics and the structure and output of university departments. Journal of Applied Psychology, 57, 204–213.

  • Blalock, H. M. (1979). Social statistics (2nd ed.). New York: McGraw-Hill.

  • Blalock, H. M. (1989). The real and unrealized contributions of quantitative sociology. American Sociological Review, 54, 447–460.

  • Bourke, P., & Butler, L. (1996). Publication types, citation rates and evaluation. Scientometrics, 37, 473–494.

  • Bowman, N. A., & Bastedo, M. N. (2009). Getting on the front page: Organizational reputation, status signals, and the impact of “US News and World Report” on student decisions. Research in Higher Education, 50(5), 415–436.

  • Clarke, M. (2007). The impact of higher education rankings on student access, choice, and opportunity. In Institute for Higher Education Policy (Ed.), College and university ranking systems: Global perspectives and American challenges (pp. 35–48). Washington: Institute for Higher Education Policy.

  • Clemens, E. S., Powell, W. W., McIlwaine, K., & Okamoto, D. (1995). Careers in print: books, journals, and scholarly reputations. American Journal of Sociology, 101, 433–494.

  • Cohen, J. (1994). The earth is round (p < 0.05). American Psychologist, 49, 997–1003.

  • Cole, S. (1983). The hierarchy of the sciences? American Journal of Sociology, 89, 111–139.

  • Corsi, M., D’Ippoliti, C., & Lucidi, F. (2010). Pluralism at risk? Heterodox economic approaches and the evaluation of economic research in Italy. American Journal of Economics and Sociology, 69, 1495–1529.

  • Cronin, B., Snyder, H., & Atkins, H. (1997). Comparative citation rankings of authors in monographic and journal literature: A study of sociology. Journal of Documentation, 53, 263–273.

  • Dill, D., & Soo, M. (2005). Academic quality, league tables, and public policy: A cross-national analysis of university ranking systems. Higher Education, 49, 495–533.

  • El-Khawas, E., DePietro-Jurand, R., & Holm-Nielsen, L. (1998). Quality assurance in higher education: Recent progress; challenges ahead. Working Paper, No. 21199. World Bank.

  • Espeland, W., & Sauder, M. (2007). Rankings and reactivity: How public measures recreate social worlds. American Journal of Sociology, 113, 1–40.

  • Franceschet, M. (2010). A comparison of bibliometric indicators for computer science scholars and journals on Web of Science and Google Scholar. Scientometrics, 83, 243–258.

  • Freeman, L. C. (2006). Editing a normal science journal in social science. Bulletin of Sociological Methodology, 91(July), 8–19.

  • Gillies, D. (2012). Economics and research assessment systems. Economic Thought, 1, 23–47. Retrieved September 4, 2013 from the World Wide Web: http://etdiscussion.worldeconomicsassociation.org/wp-content/uploads/Gillies_ET.pdf.

  • Granovetter, M. (1973). The strength of weak ties. American Journal of Sociology, 78, 1360–1380.

  • Griffith, A., & Rask, K. (2005). The influence of the US News and World Report collegiate rankings on the matriculation decision of high-ability students: 1995-2004 (CHERI Working Paper #76). Retrieved August 29, 2011 from the World Wide Web: http://www.ilr.cornell.edu/cheri/workingPapers/upload/cheri_wp76.pdf.

  • Halász, G. (2004). A teljesítményértékelés hazai és európai uniós gyakorlata. [The practice of performance evaluation in Hungary and the European Union]. In M. Kónyáné Tóth & C. S. Molnár (Eds.), Közoktatásunk az Európai Unióban. VI. Országos Közoktatási Szakértői Konferencia (pp. 76–86). Debrecen: Suliszerviz Oktatási és Szakértői Iroda.

  • Hargens, L. L., & Bott, D. M. (1991). Are sociologists’ publications uncited? Citation rates of journal articles, chapters, and books. American Sociologist, 22, 147–158.

  • Hargens, L. L., & Kelly-Wilson, L. (1994). Determinants of disciplinary discontent. Social Forces, 72, 1177–1195.

  • Harley, S., & Lee, F. (1997). Research selectivity, managerialism, and the academic labor process—the future of nonmainstream economics. Human Relations, 50, 1427–1460.

  • Hicks, D. (1999). The difficulty of achieving full coverage of international social science literature and the bibliometric consequences. Scientometrics, 44, 193–215.

  • Jacsó, P. (2008). The pros and cons of computing the h-index using Google Scholar. Online Information Review, 32(3), 437–452.

  • Kapeller, J. (2010). Citation metrics: Serious drawbacks, perverse incentives, and strategic options for heterodox economics. American Journal of Economics and Sociology, 69, 1376–1408.

  • Kish, L. (1959). Some statistical problems in research design. American Sociological Review, 24, 328–338.

  • Kuhn, T. (1970). The structure of scientific revolutions (2nd ed.). Chicago: University of Chicago Press.

  • Kyvik, S. (2003). Changing trends in publishing behaviour among university faculty, 1980–2000. Scientometrics, 58, 35–48.

  • Larivière, V., Gingras, Y., & Archambault, É. (2006). Canadian collaboration networks—a comparative analysis of the natural sciences, social sciences and the humanities. Scientometrics, 68, 519–533.

  • Lee, F. S. (2006). The ranking game, class, and scholarship in American mainstream economics. Australasian Journal of Economics Education, 3(1–2), 1–41.

  • Lee, F. S. (2007). The research assessment exercise, the state and the dominance of mainstream economics in British universities. Cambridge Journal of Economics, 31, 309–325.

  • Lee, F. S., Cronin, B. C., McConnell, S., & Dean, E. (2010a). Research quality rankings of heterodox economic journals in a contested discipline. American Journal of Economics and Sociology, 69, 1409–1452.

  • Lee, F. S., Grijalva, T. C., & Nowell, C. (2010b). Ranking economics departments in a contested discipline: A bibliometric approach to quality equality between theoretically distinct subdisciplines. American Journal of Economics and Sociology, 69, 1345–1375.

  • Lee, F. S., Pham, X., Gu, G. (2012). The UK research assessment exercise and the narrowing of UK economics. MPRA Paper No. 41842. Retrieved September 4, 2013 from the World Wide Web: http://mpra.ub.uni-muenchen.de/41842/.

  • Lodahl, J. B., & Gordon, G. (1972). The structure of scientific fields and the functioning of university graduate departments. American Sociological Review, 37, 57–72.

  • Meho, L. I., & Yang, K. (2007). Impact of data sources on citation counts and rankings of LIS faculty: Web of Science vs. Scopus and Google Scholar. Journal of the American Society for Information Science and Technology, 58, 2105–2125.

  • Mishra, S. (2006). Quality assurance in higher education—An introduction. Bangalore: National Assessment and Accreditation Council.

  • Mochnacki, A., Segaert, A., & McLaughlin, N. (2009). Public sociology in print: A comparative analysis of book publishing in three social science disciplines. Canadian Journal of Sociology, 34, 729–764.

  • Moed, H. F. (2005). Hirsch index is a creative and appealing construct but be cautious when using it to evaluate individual scholars. Retrieved April 17, 2009 from the World Wide Web: http://www.cwts.nl/hm/Comments_on_Hirsch_Index_2005_12_16.pdf.

  • Moksony, F. (1999). Small is beautiful: The use and interpretation of R² in social research. Review of Sociology, Special issue, 130–138.

  • Monks, J., & Ehrenberg, R. G. (1999). The impact of U.S. News and World Report college rankings on admissions outcomes and pricing policies at selective private institutions (CHERI Working Paper #1). Retrieved August 10, 2011 from the World Wide Web: http://digitalcommons.ilr.cornell.edu/cheri/1.

  • Nederhof, A. J., Zwaan, R. A., De Bruin, R. E., Dekker, P. J., et al. (1989). Assessing the usefulness of bibliometric indicators for the humanities and the social and behavioural sciences—a comparative study. Scientometrics, 15, 423–435.

  • Neumann, Y. (1977). Standards of research publication: Differences between the physical sciences and the social sciences. Research in Higher Education, 7, 355–367.

  • Oromaner, M. (2008). Intellectual integration and articles in core sociology journals, 1960–2000. The American Sociologist, 39, 279–289.

  • Ostriker, J. P., Kuh, C. V., & Voytuk, J. A. (2011). A data-based assessment of research-doctorate programs in the United States. Washington: National Academies Press.

  • Papp, Z. (2004). A tudományos teljesítmény mérésének problémáiról. [On the problems of measuring scholarly production]. Magyar Tudomány, 49, 232–240.

  • Pedhazur, E. J. (1982). Multiple regression in behavioral research: Explanation and prediction (2nd ed.). New York: Holt, Rinehart & Winston.

  • Schachter, S., Christenfeld, N., Ravina, B., & Bilous, F. (1991). Speech disfluency and the structure of knowledge. Journal of Personality and Social Psychology, 60(3), 362–367.

  • Schachter, S., Rauscher, F., Christenfeld, N., & Tyson Crone, K. (1994). The vocabularies of academia. Psychological Science, 5(1), 37–41.

  • Smith, L. D., Best, L. A., Stubbs, D. A., Archibald, A. B., & Roberson-Nay, R. (2002). Constructing knowledge: The role of graphs and tables in hard and soft psychology. American Psychologist, 57, 749–761.

  • Stella, A., & Woodhouse, D. (2006). Ranking of higher education institutions. Retrieved August 10, 2011 from the World Wide Web: http://www.auqa.edu.au/files/publications/ranking_of_higher_education_institutions_final.pdf.

  • Stinchcombe, A. L. (1968). Constructing social theories. New York: Harcourt, Brace & World.

  • Ter Bogt, H. J., & Scapens, R. W. (2012). Performance management in universities: Effects of the transition to more quantitative measurement systems. European Accounting Review, iFirst Article, 1–47.

  • Uçak, N. O., & Al, U. (2009). The differences among disciplines in scholarly communication: A bibliometric analysis of theses. Libri, 59(3), 17–166.

  • van Raan, A. F. J. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62, 133–143.

  • van Raan, A. F. J., van Leeuwen, T. N., & Visser, M.S. (2010). Germany and France are wronged in citation-based rankings. Retrieved August 10, 2011 from the World Wide Web: http://www.cwts.nl/pdf/LanguageRanking22122010.pdf.

  • Well, A. D., & Myers, J. L. (2003). Research design and statistical analysis (2nd ed.). Mahwah: Lawrence Erlbaum Associates.

  • Winch, R. F., & Campbell, D. T. (1969). Proof? No. Evidence? Yes. The significance of tests of significance. The American Sociologist, 4, 140–143.

  • Wolfe, A. (1990). Books vs. articles: two ways of publishing sociology. Sociological Forum, 5, 477–489.

Acknowledgments

The research reported in this paper was supported by the Hungarian Scientific Research Fund, Grant no. T049106. Katalin Bander, Hanna Kónya, Árpád Rab, Boglárka Simon, and Ágnes Sántha provided invaluable help in the data collection, which is gratefully acknowledged. We also thank Akos Rona-Tas, University of California, San Diego, for useful comments and suggestions.

Author information

Correspondence to Ferenc Moksony.

Cite this article

Moksony, F., Hegedűs, R. & Császár, M. Rankings, research styles, and publication cultures: a study of American sociology departments. Scientometrics 101, 1715–1729 (2014). https://doi.org/10.1007/s11192-013-1218-y
