In 2009, leading English research universities are facing cuts in their public research funding that are forcing them to reconsider their plans for future investment, and quite a few of them are taking cost-cutting measures. University leaders are quoted with statements such as “potentially the biggest shift in research funding policy for 20 years” and “it looks like the end of the road for research concentration” (Times Higher Education, no. 1,877, p. 4). What had happened? In December 2008, the results of the most recent Research Assessment Exercise (RAE) had been published, and this RAE provided, for the first time, a ‘research profile’ for each department rather than a single summative score. The RAE thus highlighted not only the ‘critical mass’ of excellent research in leading universities but also small excellent groups or individuals in departments that, overall, were not rated as excellent. This change in the rules of the game was well known in advance. What was not well known was the related redistribution of some research funding towards well-evaluated groups in universities that do not figure highly in the well-established and well-defended prestige hierarchy of English universities. The RAE once shocked academe with its declared function of concentrating public research funding in leading universities; this time it was the leading universities that suffered from a change in funding allocations.

Obviously, research evaluations can make a difference, for better or worse, and they are on the rise as a prominent instrument in the changing governance of the sciences and their organizational hosts, the universities. The 26th Yearbook of the Sociology of the Sciences, edited by Richard Whitley and Jochen Gläser, analyzes “The Advent of Research Evaluation Systems”. The volume highlights their evolution and instrumentation in various national settings; the responses of academics and universities to this new form of institutionalized, systematic and public retrospective evaluation of research; and its potential effects on the organization and performance of scientific knowledge production. Further contributions discuss the rise and problematic use of some of the most debated global phenomena related to research evaluations, namely university rankings and bibliometric evaluations. These articles are framed by two contributions from Whitley (Introduction) and Gläser (Conclusion) that not only provide a summary of the book but also a systematic account of the study of new governance regimes for the sciences and universities, of what is known and, equally important, what is not known about them. Altogether, the book provides a rich account of new governance regimes from the point of view of political sociology. Such a book was overdue, given all the hopes (e.g. ‘value for money’, ‘critical mass and focus’, ‘world-class excellence’) and fears (‘the end of academic freedom’, ‘the ruin of unorthodox research’, ‘economic rationality rules’) that accompany the advent of research evaluation systems.

Most of the book is dedicated to national case studies addressing aspects of the governance of the sciences in Australia (Jochen Gläser and Grit Laudel), Germany (Stefan Lange) and Lower Saxony (Christof Schiene and Uwe Schimank), Japan (Robert Kneller), the Netherlands (Barend van der Meulen), Spain (Laura Cruz-Castro and Luis Sanz-Menéndez), Sweden (Lars Engwall and Thorsten Nybom), and the U.S. (Susan E. Cozzens). Altogether, they highlight national traditions and path dependencies as well as the quite divergent searches for new governance regimes for the sciences and universities, including the use of research evaluation systems. A few examples might suffice to illustrate the colorful international landscape:

  • Jochen Gläser and Grit Laudel analyze the impact of funding formulae on Australian university research. They demonstrate that the Australian research evaluation system probably had few direct steering effects but has contributed to a general shortage of recurrent funding and to a strong dependence of researchers on a small number of principal external funding sources. They argue that growing resource dependency and the concentration of research funding have led to adaptive behavior among academic researchers in favor of “less diverse, less fundamental, and less reliable” research. Universities and researchers engage in a ruinous competition that relies heavily on academics’ capability to fit external expectations regarding funding priorities while struggling for the survival of their self-selected research preferences.

  • Robert Kneller provides a rich account of the broader, traditional institutional context of the Japanese university research system. He shows that the (potential) effects of evaluation and funding procedures are influenced by other features of the Japanese science system, such as its traditionally strong institutional stratification, uneven resource distribution, and informal system of internal patronage for career promotion. He is skeptical that the effects of programmatic research funding and prospective peer review, as well as the advent of retrospective research evaluations in Japan, will go beyond a mere justification of budget cuts together with a reinforcement of the elite status of a few Japanese universities.

  • Lars Engwall and Thorsten Nybom analyze the allocation of research resources in Swedish universities. The authors look at the more recent history of governmental attempts to steer the field through institutional control (the entry of new universities and the allocation of research funds to them), input control (the appointment and promotion of academic staff, resource allocation procedures), and output control (internal/external and informal/formal evaluations). They conclude that quasi-markets, managerial practices and retrospective evaluations have gained ground. Evaluations are not directly linked to funding but play a growing role in research councils’ funding decisions as well as in resource allocation within universities and in tenure and employment procedures.

  • Susan E. Cozzens analyzes the instruments and effects of research evaluation in the U.S. within the overall system of results-oriented management and its consequences for adaptive behavior in the broader innovation system. Interestingly enough, the most successful contemporary national system of academic research has so far avoided strong national research evaluation systems. Instead, the pluralism of the U.S. research system, the variety of potential funders whose specific missions favor goal-specific evaluative management instruments, and the strategic autonomy of universities are identified as building blocks for the standing of U.S. fundamental as well as strategic research.

Whitley’s introduction to this volume provides an inspiring typological summary of the national case studies from a macro- and meso-sociological perspective as well as a gold mine for further hypothesis-led research in the field. He identifies the main underlying characteristics of contemporary national research evaluation systems (such as their frequency, formalization, standardization, transparency and, most importantly, their effect on funding) and the relevant context factors that mediate their functioning and impact (such as the variety of funders, the standing of scientific elites, and the degree of organizational autonomy). This typology leads to a number of research hypotheses on the possible effects of research evaluation systems on different national science systems (e.g. in terms of organizational stratification, reputational competition, or intellectual diversity and innovativeness) and, subsequently, on different kinds of scientific disciplines. Gläser’s conclusion completes the picture with a perspective on the mutual reinforcement of research evaluation systems and the rise of the university as a more autonomous and managerial actor. Concurrently, he analyzes the possible success and failure of research evaluation systems within the increasingly complex governance environment of hierarchies (including government failures) and quasi-markets (including market failures) in which scientific communities, as social networks, have to live.

The most important contribution of this inspiring volume is thus that it provides tools and hypotheses to investigate the frequently neglected question ‘Does governance matter?’ in a more systematic and comparative perspective. What actually are the effects of political steering and intervention on science systems, scientific communities and knowledge production? Has the English Research Assessment Exercise improved public trust in the academic research system as well as its performance? Or has trust actually been undermined and performance remained mediocre? This volume provides rich inspiration for future research into these academically and politically important questions. And it offers a rare account of the variety of ever-changing conditions under which academic research can survive and sometimes even prosper.