1 Introduction

In How Professors Think (2009), Michèle Lamont draws on in-depth analyses of five fellowship competitions in the United States to analyse the intersubjective understandings academic experts create and maintain in making collective judgments on research quality. She analyses the social conditions that lead panelists to an understanding of their choices as fair and legitimate, and to a belief that they are able to distinguish the best proposals from the less strong ones. The book contests the common notion that one can separate cognitive from non-cognitive aspects of evaluation and describes the evaluative process as deeply interactional, emotional and cognitive, and as mobilizing the self-concept of evaluators as much as their expertise. Studies of the internal functioning of peer review reveal various ‘intrinsic biases’, such as ‘cognitive particularism’ (Travis and Collins 1991), ‘favouritism for the familiar’ (Porter and Rossini 1985), or ‘peer bias’ (Chubin and Hackett 1990; Fuller 2002).

These effects show that peer review is not a socially disembedded, quality-assessing process in which a set of objective criteria is applied consistently by various reviewers. In fact, the particular cognitive and professional lenses through which evaluators understand proposals necessarily shape evaluation. It is in this context that the informal rules peer reviewers follow become important, as do the lenses through which they understand proposals and the emotions they invest in particular topics and research styles. Thus, instead of contrasting ‘biased’ and ‘unbiased’ evaluation, the book aims to capture how evaluation unfolds, as it is carried out and understood by emotional, cognitive and social beings who necessarily interact with the world through specific frames, narratives and conventions, but who nevertheless develop expert views concerning what defines legitimate and illegitimate assessments, as well as excellent and less stellar research.

How Professors Think concerns evaluation in multidisciplinary panels in the social sciences and the humanities. It examines evaluation in a number of disciplines and compares the distinctive ‘evaluative cultures’ of fields such as history, philosophy and literary studies with those of anthropology, political science and economics. This paper first describes some of the findings from this study. Second, summarizing Lamont and Huutoniemi (2011), it compares the findings of How Professors Think with a parallel study that considers peer review at the Academy of Finland. These panels are set up somewhat differently from those considered by Lamont—for instance focusing on the sciences instead of the social sciences and the humanities, or being unidisciplinary rather than multidisciplinary. Thus we discuss how the structure of panels affects their functioning across fields. Finally, drawing on Guetzkow et al. (2004), we revisit what is specific about evaluation in the humanities, and more particularly the assessment of originality in these fields. In this way, the paper contributes to a better understanding of the distinctive challenges raised by peer review in the humanities.

2 The Role of Informal Rules

Lamont interviews academic professionals serving on peer review panels that evaluate fellowship or grant proposals. During the interviews, panelists are asked to describe the arguments they made about a range of proposals, to contrast their arguments with those of other panelists, and to explain what happened in each case. Throughout the interviews, she asks panelists to put themselves in the role of privileged informant and to explain how ‘it’ works. They are encouraged to take on the role of the native describing to the observer the rules of the universe in which they operate. She also has access to the preliminary evaluations produced before panel deliberations by individual panelists and to the list of awards given.

Since How Professors Think came out, it has been debated within various academic communities, as it takes on several aspects of evaluation in multidisciplinary panels in the social sciences and humanities. It is based on an analysis of twelve funding panels organized by important national funding competitions in the U.S.: those of the Social Science Research Council, the American Council of Learned Societies, the Woodrow Wilson Fellowship Foundation, a Society of Fellows at an Ivy League university and a major foundation in the social sciences. It draws on 81 interviews with panelists and program officers, as well as on observation of three panels.

A first substantive chapter describes how panels are organized. A second one concerns the evaluative culture of various disciplines, ranging from philosophy to literary studies, history, political science and economics. A third chapter considers how multidisciplinary panels reach consensus despite variations in disciplinary evaluative cultures. This is followed by two chapters that focus on criteria of evaluation. One analyses the formal criteria of evaluation provided by the funding organization to panelists (originality, significance, feasibility, etc.) as well as informal criteria (elegance, display of cultural capital, fit between theory and data, etc.). The following chapter considers how cognitive criteria are meshed with extra-cognitive ones (having to do with diversity and interdisciplinarity), finding that institutional and disciplinary diversity loom much larger than gender and racial diversity in decision making. A concluding chapter considers the implications of the study of evaluation cultures across national contexts, including in Europe.

The book is concerned not only with disciplinary compromise, but also with the pragmatic rules that panelists say they abide by, which lead them to believe that the process is fair (this belief is shared by the vast majority of academics interviewed). How Professors Think details a range of rules, which include for instance the notion that one should defer to expertise, and that methodological pluralism should be respected.

3 The Impact of Evaluation Settings on Rules

In an article with Katri Huutoniemi, Lamont explores whether these customary rules apply across contexts, and how they vary with how panels are set up. Their paper, ‘Comparing Customary Rules of Fairness’ (Lamont and Huutoniemi 2011), is based on a dialogue between How Professors Think and a parallel study conducted by Huutoniemi of the four panels organized by the Academy of Finland. These panels concern: Social Sciences; Environment and Society; Environmental Sciences; and Environmental Ecology. This analysis is explicitly concerned with the effects of the mix of panelist expertise on how customary rules are enacted. The idea is to compare panels with varying degrees of specialization (unidisciplinary vs. multidisciplinary panels) and with different kinds of expertise (specialist experts vs. generalists). However, in the course of comparing results from the two studies, other points of comparison beyond expert composition emerge—whether panelists ‘rate’ or ‘rank’ proposals, have an advisory or decisional role, come from the social sciences and humanities fields or from more scientific fields, etc. The exploratory analysis points to some important similarities and differences in the internal dynamics of evaluative practices that have gone unnoticed to date and that shed light on how evaluative settings enable and constrain various types of evaluative conventions.

Among the most salient customary rules of evaluation, deferring to expertise and respecting disciplinary sovereignty manifest themselves differently based on the degree of specialization of panels: there is less deference in unidisciplinary panels, where the expertise of panelists more often overlaps. Overlapping expertise makes it more difficult for any one panelist to convince others of the value of a proposal when opinions differ; unlike in multidisciplinary panels, insisting on sovereignty would conflict with scientific authority. There is also less respect for disciplinary sovereignty in panels composed of generalists rather than experts specialized in particular disciplines and in panels concerned with topics such as Environment and Society that are of interest to wider audiences. In such panels, there is more explicit reference to general arguments and to the role of intuition in grounding decision-making.

While there is a rule against the conspicuous display of alliances across all panels, strategic voting and so-called ‘horse-trading’ appear to be less frequent in panels that ‘rate’ as opposed to ‘rank’ proposals and in those that have an advisory as opposed to a decisional role. The evaluative technique imposed by the funding agency thus influences the behaviour of panelists. Moreover, the customary rules of methodological pluralism and cognitive contextualism (Mallard et al. 2009) are more salient in the humanities and social science panels than they are in the pure and applied science panels, where disciplinary identities may be unified around the notion of scientific consensus, including the definition of shared indicators of quality. Finally, a concern for the use of consistent criteria and the bracketing of idiosyncratic taste is more salient in the sciences than in the social sciences and humanities, due in part to the fact that in the latter disciplines evaluators may be more aware of the role played by (inter)subjectivity in the evaluation process. While the analogy of democratic deliberation appears to describe well the work of the social sciences and humanities panels, the science panels may be best described as functioning as a court of justice, where panel members present a case to a jury.

The customary rules of fairness are part of ‘epistemic cultures’ (Knorr-Cetina 1999) and essential to the process of collective attribution of significance. In this context, considering reasons offered for disagreement, how those are negotiated, as well as how panelists interpret agreement is crucial to capture fairness as a collective accomplishment. Together, these studies demonstrate the necessity for more comparative studies of evaluative processes and evaluative culture. This remains a largely unexplored but promising aspect of the field of higher education, especially in a context where European research organizations and universities aim to standardize evaluative practices.

4 Defining Originality

We now turn to a closer examination of the forms of originality that scholars from different disciplines tend to favour, with a focus on contrasting the social sciences and the humanities. As described in Guetzkow et al. (2004), we construct a semi-inductive typology of originality. We use this typology to classify panelists’ statements about the originality of scholarship, whether in reference to a proposal, the panelists’ own work, their students’ work, or that of someone whose work they admire. The typology is anchored in five broad categories, each concerning which aspect of the work respondents describe as being original: the research topic, the theory used, the method employed, the data on which it is based, and the results of the research (i.e. what was ‘discovered’). The typology also includes two categories that have not been noted in previous research: ‘original approach’ (explained below) and ‘under-studied area’ (proposals set in a neglected time period or geographical region). As shown in Table 1, this yields seven mutually exclusive categories of originality: approach, under-studied area, topic, theory, method, data and results.

Each of these generic categories consists of more specific types of originality, which are included in Table 1. Whereas ‘Generic Types’ refer to which aspects of the proposal are original, ‘Specific Types’ describe the way in which that aspect is original. Where applicable, the first specific type we list under each generic category refers to the most literal meaning that panelists attribute to this generic category, followed by other specific types in order of frequency. For instance, the first specific type for the generic category ‘original approach’ is ‘new approach’ and the other specific types are more particular, such as asking a ‘new question’, offering a ‘new perspective’, taking ‘a new approach to tired or trendy topics’, using ‘an approach that makes new connections’, making a ‘new argument’, or using an ‘innovative approach for the discipline’. Table 1 also describes the distribution of the 217 mentions of originality we identify across the seven generic categories and their specific types.

Table 1 shows that the panelists we interviewed most frequently describe originality in terms of ‘original approach’. This generic category covers nearly one third of all the mentions of originality made by the panelists commenting on proposals or on academic excellence more generally. Other generic categories panelists often use are ‘original topic’ (15 %), ‘original method’ (12 %) and ‘original data’ (13 %). Originality that involves an ‘under-studied area’ is mentioned only 6 % of the time.

Table 1 Typology of originality

5 What Is an Original Approach?

Previous research on the topic of peer review has not uncovered the category we refer to as ‘original approach’, and yet it appears that panelists place the greatest importance on this form of originality. But what is it, and how does it differ from original theory or method? ‘Original approach’ is used to code the panelists’ comments on the novelty of the ‘approach’ or the ‘perspective’ adopted by a proposal, or on the innovativeness of the questions or arguments it formulates. In contrast to original theory or method, an ‘original approach’ refers to originality at a greater level of generality: the comments of panelists concern the project’s meta-theoretical positioning, or else the broader direction of the analysis rather than the specifics of method or research design. Thus in speaking of a project that takes a new approach in her discipline, an art historian applauds the originality of a study that is going to ‘deal with [ancient Arabic] writing as a tool of social historical cultural analysis’. She is concerned with the innovativeness of the overall project, rather than with specific theories or methodological details. Whereas discussions of theories and methods start from a problem or issue or concept that has already been constructed, discussions of new approaches pertain to the construction of problems rather than to the theories and methodological approach used to study them. When describing a new approach, panelists refer to the proposals’ ‘perspective’, ‘angle’, ‘framing’, ‘points of emphasis’, ‘questions’, or to their ‘take’ or ‘view’ on things, as well as their ‘approach’. Thus a scholar in Women’s Studies talks of the ‘importance of looking at [Poe] from a feminist perspective’; a political scientist remarks on a proposal that has ‘an outsider’s perspective and is therefore able to sort of have a unique take on the subject’; a philosopher describes his work as ‘developing familiar positions in new ways and with new points of emphasis and detail’; and an historian expresses admiration for an applicant because ‘she was asking really interesting and sort of new questions, and she was asking them precisely because she was framing [them] around this problem of the ethics of [empathy]’. That ‘original approach’ is used much more often than ‘original theory’ to discuss originality strongly suggests a need to expand our understanding of how originality is defined, especially when considering research in the humanities and history, because the original approach is much more central to evaluation of research in these disciplines than in the social sciences, as we will soon see.

6 Comparing the Humanities, History and the Social Sciences

Can we detect disciplinary variations in the categories of originality that reviewers use when assessing the quality of grant proposals? We address this question only at the level of generic categories of originality, because the specific types include too few cases to examine disciplinary variation. For the purpose of our analysis we compare the generic categories of originality referred to by humanists, social scientists and historians.

Table 2 shows aggregate differences in the use of generic types of originality across disciplines and disciplinary clusters. A chi-square test (\(\chi^{2}=34.23\) on 12 d.f.) indicates highly significant differences (p < 0.001) between the disciplinary clusters in the way they define originality. The main finding is that a much larger percentage of humanists and historians than social scientists define originality in terms of the use of an original approach (with respectively 33 %, 43 % and 18 % of the panelists referring to this category). Humanities scholars are also more likely than social scientists and historians to define originality in reference to the use of original ‘data’ (which ranges from literary texts to photographs to musical scores). Twenty-one percent of them refer to this category, as opposed to 10 % of the historians and 6 % of the social scientists. Another important finding is that humanists and historians are less likely than social scientists to define originality in terms of method (with 4 %, 8 % and 27 % referring to this category, respectively). Moreover, humanists and, to a greater extent, historians clearly privilege one type of originality—originality in approach—which they use 33 % and 43 % of the time, respectively. In contrast, social scientists appear to have a slightly more diversified understanding of what originality consists of, in that they privilege to approximately the same degree originality in approach (used by 18 % of the panelists in this category), topic (19 %) and theory (19 %), with a slight emphasis on method (27 %).
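The reported degrees of freedom can be read directly off the underlying contingency table: the test crosses the seven generic categories of originality with the three disciplinary clusters, so a standard test of independence has \((7-1)\times(3-1)=12\) degrees of freedom. As a sketch, assuming the test is the usual Pearson chi-square test on the mentions-by-cluster table (the notation below is ours, not drawn from the original studies):

\[
\chi^{2} \;=\; \sum_{i=1}^{7}\sum_{j=1}^{3}\frac{\left(O_{ij}-E_{ij}\right)^{2}}{E_{ij}},
\qquad
E_{ij} \;=\; \frac{n_{i\cdot}\,n_{\cdot j}}{n},
\qquad
\mathrm{d.f.} = (7-1)(3-1) = 12,
\]

where \(O_{ij}\) is the observed number of mentions of generic category \(i\) made by panelists in disciplinary cluster \(j\), \(E_{ij}\) is the count expected under independence, \(n_{i\cdot}\) and \(n_{\cdot j}\) are the row and column totals, and \(n\) is the total number of classified mentions. The observed value of 34.23 exceeds the 0.001 critical value for 12 degrees of freedom (approximately 32.9), consistent with the reported p < 0.001.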

Table 2 Generic definitions of originality by disciplinary cluster

This suggests clearly that the scholars from our three categories privilege different dimensions of originality when evaluating proposals: humanists value the use of an original approach and new data most frequently; historians privilege original approaches above all other forms of originality; while social scientists emphasize the use of a new method. But this comparison is couched at a level of abstraction that allows us to compare these disciplinary clusters according to categories like ‘approach’, ‘data’ and ‘methods’. This risks masking a deeper level of difference between the meaning of these categories for the social sciences, humanities and history. For example, when social scientists we interviewed refer to original ‘data’, they generally mean quantitative datasets; historians usually refer to archival documents and use the word ‘evidence’; humanities scholars typically refer to written texts, paintings, photos, film, or music and often use words like ‘text’ and ‘materials’ to refer to the proposal’s ‘data’.

Likewise, there are sometimes distinct ways in which humanists and social scientists talk about taking a new approach. For example, humanists will often refer to a canonical text or author that is being approached in a way that is not novel per se, but is novel because nobody has approached that author or text in that way (e.g. a feminist approach to Albert Camus). In contrast, social scientists rarely refer to novelty with regard to something that is ‘canonical’. Relatively few social scientists describe originality in terms of approach, and those who do so talk overwhelmingly in terms of ‘new questions’ (accounting for 8 out of 12 social science mentions of original approach). References to original approaches by historians and humanists are spread more evenly across the specific subtypes of ‘original approach’. One third of humanists (8 out of 27) define it in terms of taking a ‘new approach to an old/canonical topic’, but refer to all the other types with nearly equal frequency. And although historians mention ‘new questions’ more than any other specific type of approach (32 % or 9 out of 28), they often mention other specific types as well. Although we define ‘methods’ broadly to categorize the way that humanists, social scientists and historians describe original uses of data, this should not be taken to mean that ‘method’ means the same thing to all of them. Social scientists sometimes describe innovative methods as those which would answer ‘unresolved’ questions and debates (e.g. the question of why the U.S. does not have corporatism), whereas humanists and historians never mention this as a facet of methodological originality. Reviewers in the social sciences tend to refer to more methodological detail than others when discussing, say, a research design. For instance, a political scientist says that an applicant ‘inserted a comparative dimension into [his proposal] in a way that was pretty ingenious, looking at regional variation across precincts’. In contrast, an historian vaguely describes someone as ‘read[ing] against the grain of the archives’ and an English scholar enthuses about how one applicant was going to ‘synthesize legal research and ethnographic study and history of art’, without saying anything more specific about the details of this methodological mélange.

Arguably, the differences we find are linked to the distinct rhetorics (Bazerman 1981; Fahnestock and Secor 1991; Kaufer and Geisler 1989; MacDonald 1994) and epistemic cultures (Knorr-Cetina 1999) of the different disciplines. We do not wish to make sweeping generalizations about the individual disciplines that compose each cluster. However, research on the distinct modes of knowledge-making in some of their constituent disciplines can inform the patterns we find.

In her comparison of English, history and psychology, MacDonald (1994) shows that generalizations in English tend to be more text-driven than in the social sciences, which tend to pursue concept-driven generalizations. History is pulled in both directions (also see Novick 1988). In text-driven disciplines, the author begins with a text, which ‘drives the development of interpretive abstractions based on it’. In contrast, with conceptually driven generalization, researchers design research ‘in order to make progress toward answering specific conceptual questions’ (MacDonald 1994, p. 37). These insights map well onto our findings: original data excites humanities scholars because it opens new opportunities for interpretation. Social scientists most value original methods and research designs, because they hold the promise of informing new theoretical questions. The humanists’ and historians’ emphasis on original approaches is an indication that, while they are not as focused on the production of new generalized explanations (‘original theories’) or on innovative ways of answering conceptual questions (‘original methods’), they value an ‘original approach’ that enables the researcher to study a text or an archive in a way that will yield novel interpretations, but which does not necessarily aim at answering specific conceptual questions.

7 Conclusion

Together, the publications summarized in this paper suggest a research agenda for developing a better empirical understanding of the specific characteristics of peer review evaluation in the humanities as compared to other disciplinary clusters. More needs to be done in order to fully investigate how the composition of panels and the disciplines of their members influence the customary rules of evaluation, as well as the meanings associated with the criteria of evaluation and the relative weight put on them.

The comparative empirical study of evaluative cultures is a topic that remains in its infancy. Our hope is that this short synthetic paper, along with other publications which adopt a similar approach, will serve as an invitation to other scholars to pursue this line of inquiry further. More information is needed before we can draw clear and definite conclusions about the specific challenges of evaluating scholarship in the humanities. However, we already know that the role of connoisseurship and the ability to make fine distinctions are crucial given the centrality of ‘new approaches’ as a criterion for evaluating originality. Whether and how bibliometric methods can capture the real payoff of this type of original contribution is only one of the many pressing questions that deserve more thorough exploration.