Introduction: Narratives of Neglect

Does quality in research affect how universities matter? Expectations of how universities should matter to society have increased over time. This is particularly true with regard to the so-called “knowledge society” of the 1980s and 1990s and its subsequent research policy (Benner & Widmalm, 2011; Gibbons, 1999), wherein knowledge production was increasingly recognized as the future basis of the economy. These ideas had a major impact on Swedish research policy, particularly during the 1990s, when the ideal of global competition in knowledge production as a recipe for economic growth (Benner & Holmqvist, 2023) led to increased funding for research and higher education. However, the increased funding came with strings attached. Within the new “policy regime” (Ekström & Sörlin, 2022) of the 1990s, research quality was increasingly perceived as something that could—and should—be constantly improved and assured (de Miranda, 2003; Gulbrandsen, 2000). Problem formulations of “quality and relevance” were connected to an idea about investments in research yielding specific returns (such as citations, innovation, or internationalization) (Ekström & Sörlin, 2022). Previous research has concluded that this period marked a shift in perceptions about research quality in general (Langfeldt et al., 2020; Schwach, 2022; Sörlin, 2018). The research policy of the knowledge society was occupied with, among other things, enabling quality, but we do not know much about how quality was articulated and how it changed over time.

Important to understanding how research quality articulations developed is that notions of quality already existed, notably within specific disciplinary cultures (Becher & Trowler, 1989), which meant that a situation of increasingly “coexisting” quality articulations (Langfeldt et al., 2020) was emerging. Such policy changes affected disciplines in different ways (Borlaug & Langfeldt, 2020; Söderlind & Geschwind, 2020).

According to previous studies on Swedish research policy, one particular area, the humanities, has been relatively disregarded in policy discussions (Ekström & Sörlin, 2012; Salö, 2021). A review of research bills in Sweden revealed a consistently low articulation of the humanities (Ekström & Sörlin, 2012, p. 42). Influences from the “entrepreneurial turn” and New Public Management (NPM) have been understood as particularly ill-suited to the humanities (Benner & Sörlin, 2007), and quantitative performance indicators, academic capitalism, and the “publish or perish” culture, often connected to neoliberal reforms, have also been recognized within the debate as having particularly severe impacts on the humanities (Benneworth et al., 2016; Hammarfelt & de Rijcke, 2015; Rider et al., 2013). This discursive neglect, combined with the normative accounts of neoliberal influences on the humanities, has resulted in a distorted account of the recent history of research policy and the humanities. Given that discussing how universities matter involves the whole university, why should scholarship not pay due attention to the humanities?

In public and academic debates, scholars have reacted to this neglect of the humanities (Bod, 2020; Holm et al., 2015; Nussbaum, 2010). This is illustrated, for example, through certain narratives on the so-called crisis of the humanities (Östh Gustafsson, 2022). A recurring issue of debate has been the inadequacy of evaluation procedures and, more specifically, how research quality is evaluated (Sörlin, 2018). Recent work on the Swedish history of humanities shows how marginalization in policy and a protective attitude have led humanities scholars to view themselves as being primarily engaged in “defensive and reactive modes of critique” (Ekström & Östh Gustafsson, 2022). Thus, to some extent, humanities scholars have been understood as outsiders to the system, uninterested in complying with the conditions of the policy regime (Ekström, 2022; Östh Gustafsson, 2020a, 2020b). Historical accounts of the humanities in Swedish policy have also largely been occupied with the reactive critiques of scholars, providing an understanding of the humanities as perceived by society (Tunlid, 2022).

The following study builds on a line of recent work on the history of humanities (see Bod et al., 2016; Ekström & Östh Gustafsson, 2022; Östling et al., 2022; Paul, 2022) but aims to contribute to this field with a new focus, namely, how research quality articulations circulate between various contexts and what sorts of articulations might emerge through this movement. The chapter thus offers a historical contribution to how particular articulations of quality came to matter as part of the reformed research policy regime that was starting to take shape in Sweden in the 1980s.

The aim is to understand the changes in research quality articulations related to more general developments in Swedish research policy, and how these developments in turn shaped research quality articulations in humanities policy spaces. The chapter concerns how research quality articulations of the humanities changed between 1980 and 2010. It focuses on the interactions of policy layers, and on the quality articulations these interactions generated. The chapter aims to answer the following questions: How have research quality articulations coexisted, interacted, and changed in Swedish research policy spaces between 1980 and 2010? Why did certain articulations of research quality become established and used from, within, and related to the humanities?

Co-produced and Responsive Research Quality Articulations

A premise of this study is that quality articulations are shaped through interaction and can therefore be studied by tracing their development and circulation between different contexts and over time (Lamont, 2009; Langfeldt et al., 2020; Wouters, 2019). This differs from the notion of research quality as an inherent trait, possible to detect by a sorting process.

Previous studies have primarily examined research quality as something that is articulated externally or internally (Langfeldt et al., 2020). While these approaches have been very useful in grasping such an elusive phenomenon as quality, this study takes a different approach. Recent developments in theorizing research quality, which are followed here, suggest that research quality should not be studied as something fixed but should instead be understood as contextually contingent (Langfeldt et al., 2020; Schwach, 2022). Internally and externally defined articulations of quality are thus always intimately co-developing, though with different relations of dominance. The approach of studying research quality as coexisting and conflicting, formulated by Langfeldt et al. (2020), is used in this chapter as a theoretical framework; however, the framework is also developed to empirically examine the interaction, coexistence, and development of quality articulations as responsive. A historical approach is therefore used to uncover changes over time.

The focus of the study is on changing quality articulations in separate but intrinsically interwoven layers of policy. In order to describe these relations, I make use of the concept of co-production, referring to how a new type of knowledge is generated in the meeting between policy and science (Jasanoff, 2004). A recent contribution to theorizing these interactions has suggested the term “co-production space” to describe these arenas of knowledge exchange, with their intricate networks (Thune et al., 2023). Salö et al. (2024, this volume) study a similar process that they label “knowledge brokering”; however, they focus on how science is taken up into policy.

The empirical analysis stems from documents and publications from Swedish research policy spaces. The central material used to study the overarching national developments in research quality articulations is the Swedish research bills, which provide an understanding of the research and higher education policy carried out by the government at the time (Bjare & Perez Vico, 2021). The research bills have been described as the place where discussions about knowledge are turned “into flesh,” since it is there that research policy discourse becomes reality through political decisions (Widmalm, 2016). Humanities research policy spaces have also been studied, through disciplinary evaluations initiated by the central university and higher education authorities and carried out by the Swedish Council for Research in the Humanities and Social Sciences (HSFR) and, since 2001, by the Swedish Research Council. I also draw on reports on the humanities and internal university evaluations of the humanities.

The research policy layers in focus in the present study serve as good examples of co-production spaces, and quality articulations become examples of what the knowledge exchange generates. Humanities policy spaces are captured primarily through policy documents from the HSFR and disciplinary evaluations that refer to the humanities. Research councils have been theorized as “semi-independent agencies” that are, on the one hand, closely linked to research communities, and on the other, to the government, leading to a situation where the loyalty might be bi-directional, having to balance the interests of both researchers and policymakers (Slipersæter et al., 2007, p. 401). Guston (2001) described research councils as “boundary organizations,” and Rip (2000) referred to them as “aggregation machines” due to the increased pressure on them to take in proposals and convert them into decisions. Thus, research councils are placed “at the nexus of contemporary demands of the NPM and growing expectations over the social and economic benefits of scientific research” (Sá et al., 2013, p. 106).

I make use of these descriptions in understanding the humanities policy spaces as a sort of “mid-layer,” where quality articulations are negotiated and adapted in order to work in a bi-directional way. These policy spaces thus work to co-produce the interests of scientists and policymakers, and this is how they create responsive quality articulations. Since quality is here taken to be both interactive and contextual, it is understood as articulated primarily neither within a particular scientific disciplinary community nor by policy governance. The co-production takes place in both spaces at the same time, constantly reconnecting and renegotiating internal and external values and criteria when articulating quality.

Responsive Research Quality Articulations in the Knowledge Society

In the following sections, I move back and forth between an analysis of the governmental research bills and an analysis of the humanities research policy spaces. I begin by tracing research quality articulations in Swedish governmental research policy during the 1980s, as expressed in the recurring research bills. I focus on how quality was articulated in response to surrounding changes in the research policy regime, drawing on previous descriptions of a policy regime as the priorities of research as stated in policy documents and practices such as the research bills (Ekström & Östh Gustafsson, 2022, p. 18; Ekström & Sörlin, 2022). I further trace how the articulations found in the research bills coexisted with and/or co-produced those found in the humanities research policy spaces, and follow the responsive research quality articulations in different policy layers of the Swedish knowledge society.

Research Quality as a Matter of Disciplinary Expertise

The first research bill was finalized in 1982, and in it, the concept of research quality was linked to evaluations. However, how the evaluations were to be carried out was not clearly specified; according to the instructions, it was up to the research councils to carry out evaluations within their areas of expertise.

Societal use of the knowledge produced was also underlined as a key component of research quality, and this would be guaranteed through quality evaluations organized by the researchers themselves. With regard to policy in the early 1980s, it was assumed that researchers knew best what type of knowledge would benefit society.

However, there were also attempts to introduce systems to standardize quality assessment, particularly of doctoral theses. Suggestions on how to standardize previously ad hoc methods for evaluating quality marked the turn toward a more elaborate research policy in terms of involvement in research (prop. 1981/82:106, p. 3). The succeeding bills during the 1980s followed similar patterns in terms of how research quality was articulated. There was, for example, a focus on strengthening basic research in order to improve quality, and on further strengthening quality control within the community of researchers (prop. 1983/84:107, pp. 1–2). Improving working conditions for researchers would enable them to produce research of quality, and continuing these efforts was described as necessary to ensure that the upcoming generation of researchers would have the support they needed (prop. 1986/87:80).

Translating scientific quality standards and ventures was recognized as the responsibility of the research society; more specifically, it was stated that this happened through international research collaborations and their continuous evaluation of earlier and ongoing research to assure its quality (prop. 1986/87:80, p. 35).

During the 1980s, the bills highlighted the agency of researchers, and the role of research policy was largely formulated so as to improve conditions for researchers in order to improve the quality of research. For humanities scholars, the bills favored the quality articulations of the Swedish Council for Research in the Humanities and Social Sciences (HSFR). However, there are many indications in the research bills of how the space for discipline-specific articulations started to change, such as the processes of standardizing the assessment of doctoral theses. The 1989 bill also brought up the contradiction of placing a significant measure of “social responsibility” on researchers while leaving them largely free to increase the “common knowledge” on their own (prop. 1989/90:90, p. 5). However, such a “division of labor and responsibility” was supposedly possible in a “genuine democracy”; thus, creating the right conditions for trust between research and the public was understood as one of the main tasks to be carried out in order to increase research quality (prop. 1989/90:90, p. 5). This indicates how research quality was increasingly becoming a matter of interest beyond researchers.

The research quality articulations changed in many ways over this early period of the research policy regime, connected as it was to the ideas of the knowledge society; however, they still maintained some of their previous meanings. The changes did not affect all policy spaces in the same way, which is why humanities policy spaces are studied further in order to better understand the changing, coexisting, and responsive research quality articulations in other layers of the Swedish policy landscape.

Responsive Evaluations in the Humanities

HSFR, the Swedish Council for Research in the Humanities and Social Sciences, was founded in 1977 through a merger of the Governmental Council for Societal Research (Statens råd för samhällsforskning) and the Governmental Council for Humanities Research (Statens humanistiska forskningsråd), with the main task of stimulating as well as financing basic research and scholarship within the humanities and social sciences. HSFR members interpreted the task as involving quality evaluation of research, and in its early days, this quality evaluation was primarily understood to be carried out in connection with the yearly applications for funds from the research community (HSFR-nytt, 1981). However, HSFR started to commission evaluations of its disciplines in 1985 in response to what had been stated in the governmental research bills in the early 1980s. The first disciplines that underwent this type of evaluation, starting in 1985, were history and sociology. Later, in 1989, economics, linguistics, and cognitive and biological psychology were also evaluated by HSFR (Härnqvist et al., 1997, p. 11). The disciplinary evaluations were contrasted with the continuous scrutinizing processes of research communities, which would usually concern individual scholars or research projects. At this time, the HSFR members described an emerging pressure to establish a “comprehensive overview of the state of the art of Swedish research in an international perspective” (Härnqvist et al., 1997, p. 5).

However, when the idea to carry out disciplinary research evaluations of quality was first raised, it was “not met with particular enthusiasm by the HSFR members” (Härnqvist et al., 1997). The reason for this skepticism was that such evaluations were understood to be difficult to carry out within the cultural disciplines. In the first issue of HSFR’s own journal, HSFR-nytt [HSFR news], in 1981, the chief secretary Pär-Erik Back wrote that the public debate indicated that something was wrong with the humanities but that a more precise diagnosis was lacking, which was why the government had commissioned HSFR to investigate the conditions for bringing about improvements in the field of research in the humanities and social sciences and research relating to cultural expressions and cultural issues (HSFR-nytt, 1981). HSFR was thereby assigned to carry out an analysis of the state of the humanities and social sciences in preparation for the next research bill, and to propose measures to improve the quality of research and the working conditions of researchers in the field (with an emphasis on research in the humanities).

While accepting the idea of disciplinary research evaluations suggested by the government, HSFR itself initiated a report in the early 1980s in which researchers in the humanities were invited to write personal observations from their own work environments. It was titled “Six voices about the everyday life in science,” and the aim of this initiative was to complement the more standardized forms of evaluation that had been initiated from a top-down perspective (Löfgren, 1982, p. 7). Thus, there were various attempts to formulate how and why research quality was to be evaluated.

Experimenting with “Top-Down” Humanities Quality Evaluation

In 1988, the report on history that had been initiated in 1985 was published; it set out to evaluate the state of historical research in Sweden but also to test the “opportunities and problems inherent to research evaluation at a national level” (Danielsen, 1988). This evaluation involved six historians giving their views on the state of the field while also reflecting on how and why evaluations of history and other humanities fields were to be carried out. In the preface, the HSFR director reflected on the process of evaluating research quality as something neither new nor uncommon—critical assessments of scientific practice were constantly present in academia. However, the usual forms of assessment focused only on individual scholars or individual research projects.

The report starts with reflections by the historian and principal secretary of the National Research Council Committee, Hans Landberg, under the headline “An Experiment in Evaluation.” There had been a desire to extend systematic evaluation efforts to the social sciences and humanities, since the Natural Sciences Research Council (NFR) had been making systematic, and what were considered successful, efforts of this kind since the late 1970s (Danielsen, 1988). The considerations of how to carry out an evaluation of historical research entailed discussions with a representative group of Swedish historians, particularly since experience was lacking—it was, after all, an “experiment.” In the case of the humanities, this was the first attempt at a major evaluation at the national level in the Nordic countries.

However, Landberg thoroughly problematized the very practice of evaluating research on this scale, stating that evaluations were never unproblematic, regardless of disciplinary area. He described how the humanities were compared to the natural sciences, where methodological, theoretical as well as “other quality criteria” were understood as both internationally established and “fairly unambiguous and well-defined,” with a majority of the scientific community broadly adhering to the same criteria. Altogether, it was perceived as more manageable to evaluate the quality of natural science research compared to humanities research. But even in the natural sciences, questions had been raised as to what the additional evaluations actually provided. With this in mind, Landberg felt compelled to raise the question of whether a discipline such as history would actually benefit from a “top-down” evaluation.

Despite this general critique of national evaluations, the evaluation group decided to evaluate history on their own terms, with the aim of stimulating a concrete and positively critical evaluation discussion at the collegiate level. The evaluation was based on analyses by Nordic historians in a few thematic areas, and the historians had been assigned to evaluate the general direction, development, and quality of research in a Nordic and, if possible, wider international perspective. The intention was not to “make a grading comparison between institutions or research groups, let alone to try to make an individual top ten list” (Danielsen, 1988, pp. 14–15). There was, in these statements, also a critique of making comparisons based on methods that were seen as too “simple.”

Bibliometric methods were, at this time, perceived as particularly ill-suited for the field of history. Historical research was understood as “too widely-branched in terms of content and method, the research groups too loosely-knit, and the institutions by their very smallness too susceptible to changes in personnel and other shifts in research conditions to make such exercises meaningful” (Danielsen, 1988, p. 15). The evaluation was therefore structured so that four historians from Norway and Denmark, as well as one historian of science and ideas from Sweden, evaluated Swedish historical research on their own terms in five thematically defined areas.

Production Results and Citations for International Comparisons

The discussions on making comparisons between research groups and between countries, including with bibliometric tools, were lively in the research bills during the 1990s. In the 1989 bill, it was stated that an analysis of research policy could not focus solely on the financial and organizational aspects; one also needed to learn about the results of the investments in research (prop. 1989/90:90). But how would the quality of results then be evaluated, as suggested by the bill? Two methods of evaluation were presented: one entailed regular assessments by international experts, and the other looked at the number of publications and citations of research results (prop. 1989/90:90).

The focus on results rather than on planning anticipated the restructuring of universities and higher education that took place in 1992, when the governing structure was changed in order to correspond to demands from the government for a more independent organization and increased power for each university to decide on the use of its resources (Lundberg, 2007). This happened in parallel with increasing demands for evaluation of the results, a direction that could be described as freedom under research quality evaluation.

Now, it was argued that, when evaluating the quality of results, the most important thing was that they be presented to other researchers internationally. This was best accomplished through publication in scientific journals. These arguments were considered to apply mainly to the natural sciences and medicine—even though such publication was “also important in some social sciences and humanities disciplines” (prop. 1989/90:90, p. 16). Despite the recognition that international journal articles were an uneven measure of research quality across disciplinary fields, bibliometric methods were introduced as a good way to get a picture of Swedish research quality in comparison to other countries. Citation numbers were thereby connected to national research policy articulations of quality, even though this was understood as unfavorable for the humanities.

The 1992 research bill was coupled with the introduction of performance management to research and higher education. The bill stated that the future of Sweden depended on investments in knowledge, and that “systems for resource allocation and evaluation must be designed to stimulate the emergence of creative research environments and promote high quality” (prop. 1992/93:170). It was no longer enough for research just to be of “scientific quality”—it had to be of high quality to have any value! This was the first bill where a focus on excellence—beyond solely research quality—was heavily pushed in order to ensure that Swedish research could compete in a global arena. One suggested way of achieving this quality was to create centers of excellence that could integrate research of the highest quality of a “different but complementary nature within a subject area, thereby generating synergies leading to better performance and use of resources” (prop. 1992/93:170, p. 35). It was hoped that this would contribute to the development and competitiveness of Swedish industry. Thus, the highest possible research quality was articulated as something that would make Swedish industry more competitive internationally. Research was thus formulated as a resource for economic growth in global competition; the excellence of these centers would be guaranteed through recurring quality evaluations with international participation, and the results would also guide the allocation of resources (prop. 1992/93:170, p. 35).

The following bill further elevated societal relevance as a criterion of research quality, for example, by highlighting the benefits of a funding structure based on criteria other than intra-scientific quality criteria (prop. 1996/97:5). The production of scientific articles was commonly used as an indicator of research quality, and Sweden was in second place among the OECD countries, “with more than 1500 published articles per million residents” during 1994 (prop. 1996/97:5, p. 32). Only Switzerland was ranked higher in this regard. In general, international auditing and competition were seen as key to achieving high quality—and quality assurance procedures would have to increase even further, including within areas where they were still uncommon (prop. 1996/97:5, p. 37). Now, it was stated that research quality could only properly be valued from an international comparative perspective, and when it came to quality, all research funders should base their decisions on reviews contributed by international experts (unless particular circumstances made a national evaluation better suited) (prop. 1996/97:5, p. 47).

A Range of Views on the Humanities and Quality

The government’s research advisory board launched a series of seminars in 1996 aiming to “provide an overview of the direction and quality of Swedish research” (Forskningsberedningen, 1997). The context was that the increasingly central role of research and knowledge in society, together with Sweden’s membership in the EU, brought new demands from society. The first seminar, held in May 1997, focused on the humanities. According to the instructions, the seminar addressed issues such as the development and quality of humanities research, the benefits of humanities research, and patterns of resource allocation (Forskningsberedningen, 1997).

The contributors were of quite diverse backgrounds, though the majority were professors in humanities disciplines. Inge Jonsson, professor of literature and chief secretary of HSFR in 1987–88, problematized what he saw as a fixation on the present in the humanities and in research policy. To exemplify this fixation, Jonsson referred to a recent doctoral thesis on a contemporary Swedish author; he observed how it used “foreign theories” that the doctoral student did not entirely comprehend, and he also noted that almost no doctoral student these days would go further back than the nineteenth century (Forskningsberedningen, 1997, p. 12). He concluded with the observation that, over his active years as a researcher, something had “changed in the very core of the valuation of the humanities” (Forskningsberedningen, 1997, p. 12). This statement referred to how the talk of research being of societal use had, over these years, come to be about the natural sciences to the exclusion of the humanities—which was not how it had been when he started out in academia.

Another contributor, Aant Elzinga, focused on summarizing an evaluation of the humanities from Switzerland. Elzinga described how the evaluation focused on research quality and made use of a wide range of sources for analyzing the state of the Swiss humanities. The Swiss evaluators had tried to make use of the Arts and Humanities Citation Index (A&HCI), but it proved not to work well for the humanities, and they stated that it should not be used except together with complementary instruments. Even then, the Swiss advice was that it should be used only as a “diagnostic tool to develop a dialogue between representatives of research fields” and not as a basis for deciding how to allocate resources (Forskningsberedningen, 1997, p. 16). The A&HCI was also discussed in a contribution by Olle Persson, a sociologist influential within the field of bibliometrics, in a text on Swedish publication patterns in international humanities journals. Despite all of the limitations of the A&HCI, Persson thought a study of the Swedish humanities would be useful, while stressing that publishing activity should not be perceived as a measure of quality. He argued that, rather than being related to quality, international publishing was mainly about contributing to a wider dissemination of results; in other words, the main purpose of publication was the exchange of information. The results of Persson’s study showed great variety between areas, but he argued that this had to do with varying publication cultures and therefore little to do with the “volume or quality of research activity.”

Research quality in the humanities was therefore not to be evaluated using citation measurements, according to these responsive quality articulations. Instead, citation measurements would only be suitable for information gathering. Thus, research quality at this point in time was articulated as unrelated to citation indexes within humanities research policy—but there was a general push to further engage with the possible uses of these databases.

Humanities Quality as Something Particularly Complex

The humanities faculty at the University of Gothenburg commissioned a Nordic group of evaluators to carry out a “strategic evaluation” of its research during the 1990s, which was finalized in 2000 (Sörlin et al., 2001). The evaluators’ initial understanding of research quality was that:

…quality is not a simple, one-dimensional and measurable thing, hardly in any field of knowledge, and certainly not in the humanities. At the same time, it is clear that humanities research, like other research, is subject to increasing demands to report on the results of its activities. Even if the most important long-term outcomes are “insight” and “knowledge”, the client, the state and citizens, have a right to know whether the resources, as used, really serve these purposes. (Sörlin et al., 2001, p. 18)

Research quality was here articulated as something complex, which was valid for all fields but particularly so when discussing the humanities. However, due to the increasing demands for reporting research results, this group of evaluators was not satisfied with explaining quality solely as something particularly complex. They described, for example, the abstract concepts of “insight” and “knowledge” as the most valuable long-term results when reporting on the results of research, but stated that this was not enough to explain the use of the resources to the non-academic world.

Because the prevailing quality articulations included accounting for the results of the resources spent, the evaluators highlighted that the “output” of humanities research was also of importance. One task was understood as learning more precisely how research with “high productivity and strong publication patterns” correlated with insight and knowledge (Sörlin et al., 2001, p. 18). The third chapter of the evaluation, for example, used a number of dimensions that were to be understood as central to research quality: the production of publications, the production of PhDs, external funding, and international contacts (Sörlin et al., 2001, p. 42).

In terms of methodology, however, the evaluators described their overarching work as focused on quality rather than quantity, and they used a combination of qualitative and quantitative means: interviews, questionnaires, peer review, bibliometrics, and more. The evaluators understood their task as a qualitative assessment of the existing work at the faculty, but at the same time also as an attempt to value the more general developments in humanities research “in the light of the transformation and changing needs of society” (Sörlin et al., 2001, p. 19). Their task was to evaluate humanities research “for the society we will live in tomorrow, not humanities for the time we have left behind,” though this would not discount the fact that “many of the meanings of the humanities have existed for a long time and will continue to exist as long as we can envision” (Sörlin et al., 2001, p. 19).

This group of evaluators had a more positive stance toward the relationship between citation indexes and quality than the seminar contributions described in the previous section, from a few years earlier. The evaluators stated that the quality of humanities research had to be evaluated with qualitative criteria but found that quantitative measurements of research productivity and quality could constitute a useful complement to the qualitative analysis (Sörlin et al., 2001, p. 7). Thus, compared with, for example, Olle Persson’s position in 1997, bibliometrics were here understood as a possible means of evaluating research quality.

The responsive research quality articulations expressed in this evaluation of the humanities in the 1990s at Gothenburg University were thus shaped in interaction with notions about how societal usefulness should be expressed, responding to the surrounding changes in knowledge politics.

Resources for Quality in National Research Policy

The 2000s were characterized by growing investments in research: governmental funding almost doubled during the decade, landing at about 4% of the total state budget (Vetenskapsrådet, 2018, p. 31). This was reflected in the following bills, from 2000 and 2004, where the policy goal was to encourage high quality in all areas of research and to make Sweden a “leading knowledge nation.” This demanded great investments by the government as well as by industry (prop. 2000/01:3, p. 10). In the 2004 bill, the discussion on research quality centered on international competition, with citations used to compare how frequently Swedish publications were cited and how different areas were “doing” internationally in terms of quality (prop. 2004/05:80, p. 24). Thus, quality was clearly articulated as something to be found in comparison with other countries, preferably by studying citation numbers. Since its EU presidency during the spring of 2001, Sweden had also been working for a European research council that would have scientific quality as its guiding star and would work to strengthen European research globally (prop. 2004/05:80, p. 11).

Achieving the highest scientific quality was the direction set out for the reorganization of research funding in 2001, when the Swedish Research Council (Vetenskapsrådet), Fas (later renamed Forte), Formas, and Vinnova were formed with the goal of making Swedish research interdisciplinary in order to make it successful, competitive, and “world-class.” Sweden would be one of the most research-intensive countries in the world—and all Swedish research was supposed to be of high quality—which would contribute to making Sweden Europe’s most “competitive, dynamic and knowledge-based economy” (prop. 2004/05:80, p. 9). However, in this context, this meant prioritizing medicine, technology, and sustainability (prop. 2004/05:80). In 2007, the governmental reports Resources for Quality and Career for Quality placed quality at the center, primarily articulating it as something to be further enhanced through competitiveness among Swedish institutions as well as in a global research landscape. According to Resources for Quality, increased international competition had been the strongest external factor in creating strategies for universities, helping to identify priorities and niches in competition within a global research landscape. This also meant that composing university-wide strategies was “generating resources for quality” (Resursutredningen, 2007, p. 21). Thus, quality here encompassed adaptation at the university level to an international research landscape.

A small proportion of the funds allocated to universities by the government was exposed to competition, and this was articulated as a way to further increase quality (Vetenskapsrådet, 2018, p. 31). Further, quality indicators were to be used for allocating resources, and the universities’ ability to attract external funds would also serve as an indicator of quality (prop. 2008/09:50, p. 1). This was described as a “quality-driven reform that strengthens research and facilitates the institutions’ internal quality work” (prop. 2008/09:50, p. 1).

Concluding Discussion

Over the period studied here, research quality shifted from being primarily a matter for the research society to articulate and for individual research communities to decide on, to something that became important to govern through policy tools such as bibliometric indicators and resource allocation. In the national research policy arena, quality articulations shifted from an emphasis on the self-defined quality standards of a scientific community to a generally acknowledged benchmark that was in turn supposed to generate other values—primarily economic gain and, more generally, what was understood to be of societal use. Research quality became an increasingly important matter for policymakers. Under the new research policy regime that developed in Swedish knowledge politics during the period studied here, previously tacit, ad hoc knowledge about research quality became increasingly articulated as part of a cross-university, research-policy-oriented regime focused on articulating quality. This is exemplified, for instance, by the standardization of doctoral education through the national research bills during the 1980s.

By studying the coexistence of research quality articulations in different research policy spaces, this study has contributed to an empirical examination of coexisting quality notions, something called for in a recent study by Langfeldt et al. (2020). But by also studying the historical developments and changes in research quality articulations in different spaces, the study has, in addition, shown empirically how research quality articulations develop over time. This is what the historical approach, combined with the concept of responsive research quality articulations, helps us see. The humanities research policy spaces proved responsive to the changes in research quality articulations observed in the national research policy spaces, while also contributing their own understanding of how research quality should be articulated. This then prompted new articulations. And responsive quality articulations were not only a matter for disciplinary research councils such as HSFR; the governmental research bills can also be understood as an arena where research quality was articulated in response to changing knowledge politics.

Responsive research quality articulations differ from the previously defined framework of coexistence in that they point to how quality articulations not only coexist, as suggested by Langfeldt et al. (2020), but also develop into something new—thus, they might both coexist and co-produce. Within science studies, a separation between internal and external research quality articulations has usually been assumed. Internal quality articulations are then understood as those emerging within particular fields over time, often tacitly, where knowledge is legitimized through one’s specific discipline, or “tribe,” which decides whether it passes quality approval (Becher & Trowler, 1989). These internal evaluation procedures have also been a central theme in the sociology of science, for example, in the well-known works of scholars such as Fleck (thought collectives), Kuhn (paradigms), and Bourdieu (scientific authority), and also within more recent work on quality cultures in academia, such as Lamont’s How Professors Think (Lamont, 2009; see also Bourdieu, 1988; Fleck, 1935/1979; Kuhn, 1962). Internal quality cultures or articulations have been relatively well investigated, including in Swedish academia (Ganuza & Salö, 2023; Gunvik-Grönbladh, 2014; Hammarfelt, 2017, 2021; Hylmö, 2018; Joelsson et al., 2020; Nilsson, 2009; Salö, 2017). A common trait of these studies is their examination of meriting processes, where peers articulate why someone should get a position, and an understanding of quality thus emerges through negotiation.

Externally articulated demands developed as a consequence of the coherent research policy of the postwar period, even though different forms of external quality articulation have, to some extent, always been present in research. These demands entailed describing quality more explicitly as something to be governed by means of organizational tools (Dahler-Larsen, 2012, p. 139; de Miranda, 2003; Gulbrandsen, 2000), as growing expectations of the societal use of research implied an increasing presence of quality articulations connected to the social contract between science and society (Gibbons, 1999).

Other studies have focused on how researchers behave in relation to changing external evaluation procedures, where a “misalignment between valuation regimes” has been recorded by Wouters (2017). The consequence of this misalignment is that researchers perceive that they have to behave in ways that do not match their internal quality articulations, which in practice might mean adapting their publications and overall disciplinary norms to meet what they understand as the expectations of research evaluation systems (Fochler & de Rijcke, 2017; Hammarfelt & de Rijcke, 2015; Nästesjö, 2021; Robinson-Garcia et al., 2023). These types of studies have all been much-needed additions to research on how governance structures shape research content. However, they have still focused on the actions of the individual scholar who is part of a disciplinary culture and is thus situated in the context of the “internal” being governed by the “external.”

Based on the findings, it is clear that both research policy spaces can be understood as affected by an overarching process of increased presence of quality articulations, but that the process had different consequences due to the responsive quality articulations. We learn, however, that quality became increasingly prevalent in research policy during this period and changed understandings of how knowledge would be valued. This adds to the historical knowledge about the Swedish knowledge politics of this period, c. 1980–2010, where it seems feasible to argue for an increased presence of quality articulations in research policy in general. A change in perceptions of the relationship between research quality and societal relevance similar to the one recently illustrated in the history of Norwegian research quality articulations (Schwach, 2022) also appears to have taken place in Sweden during this period. Quality went from being a result of societal relevance to being what was expected in order to result in societal relevance.

The findings presented here could only be detected through historical analysis, which provides tools to study change over longer periods of time, making it possible to detect both finished and ongoing trends and processes, in contrast to normative policy work. The approach used can, however, be compared to previous studies on Swedish research evaluation, in which hiring reports by individual universities have been studied to understand how quality practices correspond to the policy framework they are faced with—this also illustrates responsiveness, though on another level (Hammarfelt et al., 2020; Nilsson, 2009). A recent study of Norwegian sociologists’ understandings of their societal impact also uses a similar empirical entry point, drawing on evaluations created by the Norwegian Research Council, making it a study at the intersection of policy and researchers’ self-understanding (Tellmann, 2022). However, the focus of the present study has not been individual researchers and their practices of quality evaluation but the interactions between layers of policy, the “co-production spaces.” This study has focused on research quality articulations in policy documents, at the policy level. Thus, it does not aim to explain, for example, how individual researchers have reacted and responded to changing research quality articulations—that would require other methods, such as interviews or ethnographic approaches, as used by Mufic (2022), for example, when studying the micro-politics of quality in Swedish adult education.

Neoliberal influences on universities have been central to the prevailing narrative on changes in evaluation procedures since the 1980s, and there are many previous studies on policy and the neoliberalization of the university sector (Bulaitis, 2020; Nicholls et al., 2021; Rider et al., 2013). In this chapter, the aim has not been to show how the university was influenced by neoliberalism, but rather to highlight the history of how certain quality articulations came to matter, how they coexisted with other articulations, and how they interacted.

A recent study on the neoliberalization of the Swedish university sector observed that even if neoliberalism is a strong and central ideology, it is still “met and confronted by local practices that are fuzzy and eclectic and outcomes that do not satisfy the neoliberal maxim of ‘value for money’” (Benner & Holmqvist, 2023, p. 15). Another study has highlighted the interplay between the global and the local, looking at how global developments in research policy have been feeding into local configurations, creating new forms of variety while at the same time forcing local actors to respond (Simon et al., 2019, p. 475). In the context of this chapter, these observations can be compared with how the processes of quality articulation were not set in motion by humanities scholars, yet still may have led to counterintuitive effects. These effects become visible only when the research quality articulations are traced empirically through the historical developments.

Current descriptions of what has happened in the Swedish university sector since the 1980s have thus been insufficient to explain the multiple and changing logics of research quality. Attending to the complex and changing nature of the interaction between various layers of policy and internal logics has made it possible to understand changing research quality articulations as more complex than, for example, solely a result of neoliberal ideology and its auditing systems. It entails taking the agency of humanities policy spaces seriously while also acknowledging the shifting power relations within them.

Since this study encompasses the responsive quality articulations of humanities policy arenas, rather than primarily the reactive critique of what quality in the humanities was not, we now have a more nuanced understanding of how the perceived marginalization of humanities quality evaluation developed and might have changed the direction of humanities research. The focus on responsive quality articulations might also have implications for the understanding of the role of the humanities in research policy today, since this study has contributed to the history of humanities with examples of how humanities researchers have acted responsively rather than reactively. As argued by humanities scholars before me, critique is never enough (Ekström & Sörlin, 2022). In this sense, this chapter draws on the history of humanities to better understand how the humanities might contribute to the research policy discussions of today.

However, the findings also raise new questions for further research. If humanities scholars did not fully subordinate themselves to the quantitative or neoliberal regime, as the public debate might sometimes lead us to believe, the question remains of which quality articulations actually guide humanities scholars today.

The chapter has been driven by a desire to move away from the binary theoretical framework commonly used to understand the implications of research policy, particularly the framing of research quality as either (first) internal or (subsequently) performance-based and NPM-driven. This has enabled a shift in perspective, away from a narrative of the humanities as a field of neglect, crisis, or decline, toward a new narrative of the humanities based on historical analysis and agency. The shift in perspective thus also entails a reckoning with past debates on the crisis state of the reactive humanities, as well as opening up further consideration of the implications of drawing on humanities thinking in the history and future of research policy. What research quality is and how it develops thus proves to be more complex than previously thought and, above all, has an open and challenging future.