
1 Introduction

We face a fascinating, yet strange contradiction in the humanities: On the one hand, they disapprove of any bibliometric assessment of academic performance; on the other hand, they cherish quotations as a core component of their academic culture. Their dissatisfaction with quantitative bibliometrics may seem to be a mere matter of principle: The humanities are supposed to avoid numbers wherever they can. But this explanation would be much too simple to account for the intricacies of the quotation culture in the humanities. What is odd is the fact that many disciplines in the humanities quote but do so very rarely. Literature in particular shows a strong dislike for a systematic compilation of references. Literature is an extreme case within the spectrum of the humanities but, as such, is characteristic of a specific academic condition. Literature’s aversion to bibliometrics seems partly legitimate because statistics can be meaningful only if they rely on sufficiently large numbers. But at the same time, this antipathy raises questions about the academic culture itself. The contradiction could be located in the self-perception of certain disciplines rather than in a conflict between citational practices and quantitative methods.

In the second section, I will develop a historical and systematic argument that follows the epistemic patterns of the humanities. I will outline the traditions of quoting other works in Literature, compare them to the practices in the sciences, and relate both to the common critique of quantitative methods. In the third section, I will present some statistical data; I do not create new data but simply use existing information. My focus will be on the small numbers involved, that is, I will show how few quotations actually occur in Literature.

Since I need to combine results from both sections, I will only then proceed to the discussion and reserve a section of its own for it. I will consider possible explanations, some that approve of the citational practices in the humanities and others that find fault with their academic culture. After all, if my initial claim about the intrinsic operational contradictions within the humanities proves true, more research must be undertaken to understand the current tensions.

2 Quotation Culture in the Humanities

2.1 Characteristics of Quotations in the Humanities

Quotations have always been part of the core techniques in Literature. Let me give a short historical overview (for a more detailed version and for references, see Bunia 2011b). Even before the surge of modern science, all philosophical disciplines quoted the ‘authorities’ and, thus, worshipped canonized authors. Book titles were even invented because Aristotle needed to quote himself (cf. Schmalzriedt 1970). With the advent of the rationalist and empiricist movements in the 17th century and their respective icons, René Descartes and Francis Bacon, novelty became prestigious in all disciplines, and both scholars and scientists started quoting their peers rather than the Ancient authorities. Not until the late 19th century did quoting that completely covers the field become a moral obligation. Before, it was sufficient to cite what lay at hand; it was not the researcher’s task to demonstrate conspicuously that he was up to date. The growth in the number of publications led to new worries and, finally, created the need for citation analysis as pioneered by Eugene Garfield.

In Literature, it has always been mandatory to quote as much as possible to prove that one is well read. In fact, ‘monster footnotes’ (Nimis 1984) are particularly popular in the humanities: they consist of lengthy enumerations of papers related to the topic of the citing paper (see also Hellqvist 2010, pp. 313–316). As Hüser (1992) notes, an impressively long list of references is one of the most important prerequisites for a doctoral dissertation to be accepted in Literature. These observations are not in conflict with the (very debatable) claim that the humanities, in general, do not aim to convey pieces of ‘positive’ knowledge (MacDonald 1994), since it does not matter whether one quotes to present knowledge or more obscure forms of excellence. Since the broad usage of references arose in the 19th century, when humanist disciplines tried to become ‘scientific’ (Hellqvist 2010, p. 311), the difference between the humanities and the sciences should not be taken to be very strong. In brief, literary scholars are usually expected to quote one another extensively, not to omit any possible reference, and to provide comprehensive lists of preceding publications.

Many disciplines limit the obligation to quote comprehensively to recent years and choose other forms of worship for their great minds (e.g. the names of theorems in mathematics, see Bunia 2013). Contrary to this practice, literary scholars often cite old canonical works, thus evoking the very roots of their approach. Even more frequent is the practice of using quotations to signal the in-group the scholar belongs to (see Bunia and Dembeck 2010). This is why publications in Literature (in fact, in all disciplines in the humanities) tend to include long lists of old texts.

Two practices challenge my short outline. First, literary scholars also quote the objects of their investigation, e.g. literary, philosophical, or other texts. These appear in the references, too, thus complicating the analysis (see Sect. 3.3). Second, in very conservative circles—and, fortunately, such circles are not numerous—highly established professors are no longer expected to quote unknown young scholars; they restrict their open quotations to peers equal in rank and to classic authors such as Aristotle (see Bunia 2013).

Reputation is highly important (see Luhmann 1990 [Reprint 1998], p. 247; Ochsner et al. 2013, pp. 83–84, in particular, item 14 ‘Research with reception’). As is the case in most disciplines, literary scholars hold intellectual impact on their own community in high esteem (Hug et al. 2013, pp. 374 and 382, for English Literature and German Literature). This is one of the criteria used to judge young researchers’ performance. Intellectual influence becomes manifest through quotations. In sum, citation analysis should be a method well suited to the disciplinary traditions of Literature.

2.2 Disapproval of Bibliometrics and of ‘Quantities’ Per se

The most widespread criticism advanced by scholars in the humanities attacks bibliometric analysis for its inability to measure quality. Unfortunately, this attack suffers from a basic misconception. First, it neglects the circumspection that fuels much of the bibliometric debate: bibliometric research papers are replete with doubts, questions and reservations about using bibliometric parameters to rate an individual researcher’s intellectual performance (e.g. Bornmann 2013). The central misapprehension, however, is the product of a more fundamental skepticism that asks: How can quantitative analysis account for qualitative evaluations? Consequently, bibliometric analyses are thought to be structurally inadequate to express qualitative judgments.

This deduction is a misconception of citation analysis because it ignores the abstract separation between qualitative judgments and their mapping onto quotations. When we look at the impact system prevalent in many disciplines, such as Medicine, we see that the qualitative assessment takes place in peer review. This process is not influenced, let alone compromised, by the impact factor culture (see also Bornmann 2013, p. 3). Of course, the impact factor culture produces, stabilizes and usually boosts the differentiation between journals. The effect is that some journals receive the most attention and the best submissions because they have the biggest impact, which eventually means that they can afford the most rigorous selection process. The decisive factors within the selection process remain ‘qualitative’, that is, they are not superseded by mathematical criteria. This is also why all peer review systems have repeatedly been demonstrated to be prone to failure (see the editorial by Rennie 2002; see also Bohannon 2013).

For review processes to warrant optimal evaluation, it is mandatory that they rely on accepted and mutually intelligible criteria. The problems with peer review result from the imperfections of the process: careless reviewers, practical limits of verifiability, or missing criteria. Slightly neglectful reviewers do not impair the review process to a dramatic degree; the review process must not, as has previously been done, be mistaken for a surrogate for replication. The combination of peer review and bibliometrics provides a suitable technique for mapping qualitative evaluations onto quantities.

However, the situation is the inverse if disciplinary standards of assessment are deficient. If shared criteria of evaluation are weak and if parochialism prevails, peer review can have negative effects on the average quality of evaluations (Squazzoni and Gandelli 2012, p. 273). As a consequence, the humanist disciplines that oppose bibliometrics might be right in doing so—but for the wrong reasons: The only sensible reason to object to bibliometric assessment is to admit an absence of qualitative criteria.

2.3 The European Reference Index for the Humanities

The disciplines in the humanities feel increasing pressure from funding agencies and governments to expose their strategies of evaluation (cf. Wiemer 2011). Owing to the widespread and virtually unanimous refusal to participate in common ranking systems such as those provided by bibliometric analysis, the European Science Foundation (http://www.esf.org) initiated the European Reference Index for the Humanities (ERIH) project. The project decisively dismisses all statistical approaches as inadequate for the humanities and replaces them with a survey conducted among highly distinguished scholars who were asked to name the most prestigious journals in their respective fields. The result is a list grouped into three categories: ‘INT1’, ‘INT2’ and ‘NAT’, in descending order of the importance of the journals in the respective category. Again, quite resolutely, the list is claimed to be no ranking: ‘[Question:] Is ERIH a ranking system? [Answer:] ERIH is not a billiometric [sic] tool or a reanking [sic] system. The aim of ERIH is to enhance the global visibility of high-quality research in the Humanities across all of Europe and to facilitate access to research journals published in all European languages; it is not to rank journals or articles’ (European Science Foundation 2014). Compiled by only four to six European scholars per discipline, the list is not undisputedly acknowledged; as far as I know, it is not even widely known.

2.4 Rigor and Quotations

Garfield himself has always pointed out that the citation analysis of journals refers only to the usage of a published text; it says nothing about approval or disapproval, nor does it assess the quality of a paper (Garfield 1979, p. 148). He further notes that the citation network allows its users to see which new developments emerge and thus enables them to focus on prevalent trends. This idea can be put differently: High quotation rates and dense subnets show a strong cohesion of the group.

There may be two main reasons for the cohesion that becomes visible in the quotation network. (1) First, it can derive from shared convictions about scientific rigor. Only publications that comply with the methodological demands of the respective discipline will have a chance to be cited. Regardless of the quality, originality and importance of the paper, cohesion makes the author belong to the specific group. Anecdotally, Kahneman reports that his success in Economics is due to a single improbable and lucky event: one of his articles being accepted in an important economic (rather than psychological) journal (Kahneman 2011, p. 271). In this first case, cohesion warrants at least minimal standards of scientific procedure. (2) Then again, cohesion can simply result from a feeling of mutual affection and enthusiasm. In this second case, the cohesion comes first and stabilizes itself. It relies on the well-known in-group bias, i.e. the preference for one’s own group. For example, members of pseudoscientific communities (such as followers of homeopathy) will cite one another. If such a group is large enough, it will produce high quotation levels.

As a consequence, impressive quotation rates do not say what kind of agreement or conformity a respective group chooses as its foundation. It can be scientific rigor; but it can also be anything else. This conclusion is not new and not important for my argument. However, its reverse is. If a group shows low quotation levels, it necessarily lacks cohesion. It possesses neither clear standards of methodological rigor nor a feeling of community.

3 Low Quotation Frequencies in Literature

3.1 Materials and Methods

To analyse citation rates in Literature, I am going to use citation indices provided by commercial services. Among the available databases, only the Scopus database (run by Elsevier B.V.) covers a sufficient number of Literature journals to calculate journal rankings. Therefore, this database is my only resource. For its ranking, Scopus uses the indicator SJR2, which reflects not only how frequently a journal’s articles are cited but also the prestige of the citing journals (Guerrero-Bote and Moya-Anegón 2012). Despite certain differences, this indicator is comparable to the Impact Factor. The indicator, however, will not play a major role in my argument; it will be used only to find journals that are supposed to be cited at an above-average rate.

Table 1 The five highest ranking publications in the subject category Literature and Literary Theory in 2012 (citation data by Scopus)

As of 2012, the ISI Web of Knowledge, provided by Thomson Reuters, does not include any journals that belong to the ‘hard-core’ disciplines within the humanities. Although the Web of Science (also operated by Thomson Reuters and the company’s main product, which includes the ISI Web of Knowledge) lists Literature journals, it does not provide any rankings or helpful statistics. Likewise, Google Scholar, run by Google Inc., does not allow any inferences from its data. Unlike its competitors (cf. Mikki 2009), Google Scholar browses all kinds of research publications (including books) and retrieves quotations by analyzing the raw text material. It thus covers books, which is an advantage over Elsevier and Thomson Reuters. However, Google Scholar is so unsystematic that its data contain artifacts and detect fewer quotations than its competitors’ data do (as of 2013).

My analysis focuses on two aspects. On the one hand, I am interested in the absolute numbers of citations. They are the cause of the methodological difficulties in citation analysis; but, at the same time, they are an important fact that deserves attention of its own. On the other hand, I concentrate on the ratios of cited to uncited articles across different disciplines. For the sake of simplicity, I limit the comparison to Medicine. I choose to compare these ratios (despite the problem of validity) because this is the only parameter that can at least be examined.
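The two measures just described are elementary; for clarity, they can be sketched in a few lines of code (a minimal illustration with invented counts, not data taken from Scopus):

```python
def citation_summary(citations_per_article):
    """Summarize per-article citation counts: the mean number of
    citations per article and the share of articles cited at least
    once (the cited/uncited ratio used in the comparison below)."""
    n = len(citations_per_article)
    mean = sum(citations_per_article) / n
    cited_share = sum(1 for c in citations_per_article if c > 0) / n
    return mean, cited_share

# Invented counts for a hypothetical eight-article volume:
mean, cited_share = citation_summary([0, 0, 1, 0, 2, 0, 0, 1])
# mean = 0.5 citations per article; cited_share = 0.375
```

The point of the sketch is merely that both figures can be read off the same list of per-article counts; the tables below report exactly these two quantities.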

3.2 Results

Let us examine the citation analysis provided by Scopus for the subject category Literature and Literary Theory and the year 2012 (see Table 1). The absolute numbers of the top five most influential journals are strikingly low. The top journal, Gema Online Journal of Language Studies, which, by the way, I had never heard of before, does not appear in the ERIH ranking at all (Sect. 2.3). This journal is ranked first with regard to the SJR2 indicator implemented by Scopus. The strange phenomenon is easily explained: The journal focuses on linguistics; in the respective ranking (‘Language and Linguistics’), it holds only position 82. Since it sometimes publishes articles in Literature, too, it is included in both lists; and since the SJR2 indicator does not detect disciplinary boundaries, a comparatively mild impact in Language and Linguistics can make it the most prestigious journal in Literature and Literary Theory. Presumably, only the small numbers involved in quotations in Literature and Literary Theory allow an interdisciplinary journal to move into first position in this way.

The second journal might be worth a closer look. New Literary History belongs to the highest ERIH category (‘INT1’); personally, I would have guessed it might be among the top journals. This prestigious periodical, however, does not seem to be quoted very often, if one inspects the numbers provided by Scopus (see Table 2). For the 142 articles published between 2009 and 2011, only 68 citations were found. If one takes the small ratios between cited and uncited documents into account, viz. 26 % for this time window, the hypothesis seems acceptable that these few citations concentrate on few articles. The only undisputable inference is the mean citation frequency: with 68 citations to 142 articles, we find roughly 0.5 citations per article on average, or about two citations per cited article.

It is possible to compare these numbers to those of the most influential journal in Medicine (as ranked by the SJR2 indicator again), the New England Journal of Medicine. In the same time window (i.e. 2009–2011), we find 5,479 articles and 65,891 citations; on average, an article garnered 12 citations, and 46 % of these articles were cited within the time window.
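For transparency, the per-article average just quoted can be recomputed from the stated totals (a quick sketch; the figures are those given above, as reported by Scopus):

```python
# Totals for the New England Journal of Medicine as quoted in the text.
nejm_articles = 5_479
nejm_citations = 65_891

# Mean citations per article in the time window.
mean_citations = nejm_citations / nejm_articles
print(round(mean_citations))  # 12
```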

Table 2 Development of citations between 2004 and 2012 for the high ranking international journal New Literary History (data by Scopus)

With New Literary History, I have discussed one of the journals that at least receive some attention (in terms of citation analysis). Let us turn to Poetica, one of the most prestigious German journals. Within the ERIH ranking, Poetica, too, belongs to the highest category, ‘INT1’. Yet it ranks only 313th in the Scopus list. The more detailed numbers are disconcerting (see Table 3). Between 2009 and 2011, the journal published altogether 48 articles, among which only three received at least one citation (within this time window). In the long run, the quotation ratio never exceeds 16 %; and the 6 % found in three columns (2006, 2007, 2012) is not an exception. More astonishingly, only four citations were found altogether. This is to say that two articles garnered exactly one citation each, and one article can be proud of having been cited twice.

Table 3 Development of citations between 2004 and 2012 for the high ranking German language journal Poetica (data by Scopus)

The problems that I mention apply to all entries in the ranking. On the one hand, the absolute numbers are so low that small changes affect the position of journals; on the other hand, interdisciplinary journals automatically move up (an effect that could be dubbed ‘cross-listing buoyancy’). The ranking does not reflect the ‘qualitative’ assessment of the European Science Foundation. These figures have significance only insofar as they show that quotations in Literature are rare.

3.3 Possible Objections

My approach may face three major objections. First, absolute numbers have limited value. They are not embedded in a statistical analysis, and, therefore, they cannot characterize the phenomenon in question. I will not deny the cogency of this objection. However, the point is that the low numbers themselves are the phenomenon to be explained. My analysis also includes the comparison of relative quantities. By contrasting the ratios of uncited to cited papers across disciplines, I can increase the plausibility of my claims. I am confident that the synopsis of all the data corroborates the hypothesis that literary scholars’ quotation rates are altogether marginal.

The second possible objection concerns the available data about research in the humanities. Currently, the most widespread attempt to remedy the tiny absolute numbers is the inclusion of books; the idea is that the databases are deficient—not the citation culture (e.g. see Nederhof 2011, p. 128). The inclusion of monographs is what Hammarfelt (2012, p. 172) recommends. In 2011, Thomson Reuters launched its Book Citation Index, covering books submitted by editors from 2005 onward, and has continuously worked on improving it ever since. However, the inclusion of monographs will not provide an easy solution. There are three obstacles:

(1) Primary versus secondary sources. In the humanities, some books are objects of analysis, and some provide supporting arguments. In the first case, we speak of primary, in the latter case of secondary sources. In many contexts, the distinction between the two types is blurry (see Hellqvist 2010, p. 316, for an excellent discussion). Hammarfelt’s (2012) most striking example, Walter Benjamin’s Illuminationen, which he says has spread across disciplines (p. 167), is a compilation of essays from the 1920s and 1930s. The book is cited for very different reasons. The quotations in computer science and physics (Hammarfelt 2012, p. 167) will probably have an ornamental character; Benjamin is a very popular supplier of chic epigraphs. Within the humanities, Benjamin is one of the authors whose works are analysed rather than used, that is, he is a primary source. So are other authors whom Hammarfelt (2012, p. 166) counts among the canonized: Aristotle, Roland Barthes, Jacques Derrida, etc. What is more, some of his canonized authors wrote fiction only (Ovid and James Joyce). Hence, these monographs must be primary sources.

An algorithm that distinguishes between primary and secondary sources is difficult to implement. The software would have to discriminate between different kinds of arguments, which requires semantic analysis. As is well known, we are far away from any sensible linguistic analysis of texts without a specific ontology (in the sense used in semantics); so any such effort would be futile. The only reliable possibility would be a systematic distinction between primary and secondary sources in the bibliographies, a practice common in many scholarly publications, but far from ubiquitous. Given this problem, an automatic analysis is difficult to implement.

Recent publications, of course, can be counted as secondary sources by convention. This would be reasonable and useful, even if we know that the transition from ‘secondary scholar’ to ‘primary author’ is what scholars in the humanities dream of and what they admire (cf. Ochsner et al. 2013, pp. 83–85). Quite often this happens late, often after the scholar’s death (and his reincarnation as ‘author’), as was the case with Benjamin, too, who was even refused a university position during his lifetime. Counting recent publications as secondary sources thus remains only a possibility.

The inclusion of books would not change the whole picture; the absolute numbers would remain low. In a more or less systematic case analysis, Bauerlein (2011) shows that scholars do not cite books either (p. 12). Indeed, Bauerlein (himself a professor of English Literature, by the way) concludes that the production of books is an economic waste of resources and should be stopped. Google Scholar confirms that literary scholars quote but do so rarely. As stated above, the service includes books. Since Google has scanned and deciphered a vast number of books, including those from the past decade, for its service Google Books (prior to the service’s restriction on account of massive copyright infringements), it has a pretty good overview of the names dropped in scholarly books. Nonetheless, Google’s services show that books are quoted as rarely as articles (if not even less frequently). Note that what we count here are the documents being cited, not the acts of citing. Scholars quote numerous sources; at least nothing indicates that lists of references are shorter in the humanities than they are in other disciplines. But all signs point to the possibility that only a few scholars can hope to be quoted by their peers. The fact remains that literary scholars quote each other but do so rarely.

(2) Reading cycles. Another remedy being discussed involves larger time windows. Literary scholars are supposed to have ‘slower reading cycles’ and to stumble upon old articles, and articles are supposed to unfold their impact long after their original publication. Unfortunately, there is little evidence for this myth. Of course, there are many ‘delayed’ quotations in the humanities. But the problem is that they do not change the whole picture. In the vast majority of cases, their distribution is as Poisson-like as that of the ‘instantaneous’ quotations, and they are just as rare. Again, the sparse data Google provides us with do not indicate any significant increase of citations caused by a need for long-lasting contemplation. Nor does Bauerlein find any hint of a boosting effect of prolonged intellectual incubation periods. Nederhof (1996) claims that in some humanist disciplines, the impact of articles reaches a peak in the third year; hence, the chosen citation window appears adequate and meaningful.

(3) What quotations stand for. The third obstacle is different in kind. Since the figures show small numbers, citations that do not refer to the content of the cited articles may distort the results of the statistical analysis to a significant extent. As recently demonstrated by Abbott (2011), a considerable percentage of citations does not relate in any conceivable way to the cited article, which could indicate that this article has never been actually read. Examples are easily at hand. In one of the top journals in Literature, Poetics Today (‘INT1’), the Web of Science records two citations of an article of mine. Unfortunately, these citations come from scholars who use my article to introduce a notion established by Plato around 400 B.C. With two citations, my text belongs to the very small cohort of highly cited articles, but the actual quotations are disastrously inappropriate. This problem cannot be ruled out in other disciplines either. There is no clue whatsoever indicating that inappropriate quotations occur more often in the humanities than in other disciplines. Nonetheless, we have to consider the possibility that even the small numbers found in the figures are not the result of attentive reading, but of the need to decorate an article with as many references as possible.
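The ‘Poisson-like’ characterization mentioned under obstacle (2) has a simple quantitative consequence worth sketching: if each article accrued citations independently at a small mean rate λ, the expected share of uncited articles would be e^(−λ). A minimal sketch (the rate is chosen for illustration only, not estimated from the tables):

```python
import math

def expected_uncited_share(mean_rate):
    """Expected share of articles with zero citations under a
    Poisson model with the given mean citations per article."""
    return math.exp(-mean_rate)

# At an illustrative mean of 0.5 citations per article, a Poisson
# model already leaves the majority of articles uncited:
share = expected_uncited_share(0.5)  # about 0.61, i.e. ~61 % uncited
```

This is only a back-of-the-envelope model, but it shows why small mean rates and large uncited shares go hand in hand: no special mechanism is needed to explain that most articles in such a field receive no citations at all.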

We eventually have to reconcile two apparently contradictory observations. On the one hand, scholars present us with long lists of references and are expected to quote as much as possible. On the other hand, each scholar can expect only little attention and very few (if any) citations from peers. This apparent miracle is easily resolved: Partly, scholars quote from other disciplines; partly, quotations cluster around a few ‘big names’, who are quoted abundantly. There is no contradiction between long lists of references and few citations, that is, between many incidents of citing and only a few of being cited.

4 Discussion

As we have seen, the disciplinary culture of Literature requires scholars to quote one another extensively, yet only few citations can actually be found. How can this be explained? Although I have expressed my doubts about the importance of coverage, first, more data must be obtained: Books must be extensively included in the analysis, and the citation windows must be enlarged, maybe up to decades. Such an improvement of the databases would not add to the bibliometric assessment of individual scholarly performance; instead, it would add to our understanding of the intellectual configuration of Literature and of related fields in the humanities. Before we can understand the criteria of excellence and develop a means of mapping qualitative judgments onto quantities, we must first understand why citations occur so rarely.

Perhaps publications in Literature do not contain pieces of positive information that can be used to support one’s own argument straightforwardly. Publications present the scholar with helpful or dubious opinions, useful theoretical perspectives, or noteworthy criticisms, but, possibly, a publication cannot be reduced to a single simple result. If this is the case, the question is which (societal) task Literature is committed to. If this is not the case, the lack of quotations raises the question of why so many papers are written and published that do not attract any attention at all.

I can conceive of two explanations. (1) The first explanation concerns a possible ‘archival function’ of Literature (and related fields in the humanities). As Fohrmann (2013) recently put it, these disciplines may be responsible for the cultural archive (pp. 616–617). Indeed, scholars count ‘fostering cultural memory’ among the most important factors that constitute excellence in the humanities (Hug et al. 2013, pp. 373, 382). Teaching and writing in the humanities do aim to increase knowledge and to stabilize our cultural memory. On this view, seminars and scholarly publications are costly and ephemeral, but still necessary, byproducts of society’s wish to uphold and to update its cultural heritage.

At first glance, this may sound sarcastic, but, in fact, this explanation would imply that the current situation might harm both the humanities and the universities’ sponsors (in Europe, these are mostly the governments and, therefore, the taxpayers). In the 1980s, the humanities had to choose whether to adapt to the institutional logic of the science departments or to move out of the core of academia and become cultural institutions, such as operas and museums. The humanities chose to remain at the heart of the university and thus accepted the slow adoption of mechanisms such as the competition for third-party funding and the numerical augmentation of publications. Now, the humanities produce texts that no one reads, that the taxpayer pays for and that distract the scholars from their core task: to foster the cultural archive, to immerse themselves in old books for months and years, to gain erudition and scholarship, and to promote the cultural heritage to young students and to society as a whole. (This is maybe why scholars are reluctant to cherish their own impact on society, as Hug et al. (2013, pp. 373, 382) also show. In the scholars’ view, their task is to expose the impact of the cultural heritage on society. In a way, giving too much room to the scholars themselves seems to be a kind of vanity at the expense of the actual object of their duties.) Maybe releasing the humanities from the evaluations and structures made for modern research disciplines would free them from their bonds, reestablish their self-confidence and decrease the costs that their current embedding in the universities imposes on the sponsors. It would be a mere question of labeling whether the remaining and hopefully prosperous institutions could still be called ‘academic’.

(2) The second explanation, however, is less flattering. It could also turn out that low citation frequencies indicate the moribund nature of the affected disciplines. When I recall that citations and debates have been core practices in the humanities for centuries, another conclusion pushes itself to the foreground: Scholars in the affected fields feel bored when they have to read other scholars’ publications.

In the 1980s and the early 1990s, there were fierce debates, and the questions at stake could be pinpointed (see Hüser 1992). Today, the very questions are vanishing; scholars have difficulties stating what they are curious about (Bunia 2011a). If no scholar experiences any intellectual stimulation from a peer’s publication, she will tend to read less, to turn her attention to other fields and to quote only marginally. With regard to cohesion (see Sect. 2.4), such a situation would also imply that the scholars in the affected fields no longer form a community that identifies itself as cohesive; one no longer feels responsible for the other or for the discipline’s future. If all debates have ended, the vanishing quotations simply indicate a natural death that no one has to worry about.

Both explanations will easily provoke contestation. As for the first one, one would have to ask why scholars have never realized that they had been cast in the wrong movie. As for the second one, there are only a few hints of a considerable change in the past 20 years. Did scholars cite each other more fervently in the 1970s and 1980s than today? I do not know. Therefore, we need more research on the scholars’ work. For instance, we need to know why they read their peers’ work and whether they enjoy it. It is good that researchers, namely Hug, Ochsner and Daniel, have begun asking scholars about their criteria in order to understand how scholars evaluate their peers’ performance. But we also have to take into account the deep unsettledness reigning in Literature and related fields (see Scholes 2011; see again Bauerlein 2011; Bunia 2011b; Lamont 2009; Wiemer 2011). We have to thoroughly discuss a ‘criterion’ such as ‘rigor’, a virtue scholars expect from others (Hug et al. 2013, pp. 373, 382). But ‘rigor’ is characterized by ‘clear language’, ‘reflection of method’, ‘clear structure’ and ‘stringent argumentation’: virtues the humanities are not widely acclaimed for, and qualities that may be assessed differently by different scholars. In brief, these self-reported criteria have to be compared to actual practice. It may be confirmed that a criterion such as rigor is applied consistently to new works; but it may equally well turn out that the criterion is a passe-partout that conceals a lack of intellectual cohesion in the field. Again, this means that we first must understand what the humanities actually do before we start evaluating the outcome of their efforts by quantitative means.