
Minerva

pp 1–23

Co-existing Notions of Research Quality: A Framework to Study Context-specific Understandings of Good Research

  • Liv Langfeldt
  • Maria Nedeva
  • Sverker Sörlin
  • Duncan A. Thomas
Open Access
Article

Abstract

Notions of research quality are contextual in many respects: they vary between fields of research, between review contexts and between policy contexts. Yet, the role of these co-existing notions in research, and in research policy, is poorly understood. In this paper we offer a novel framework to study and understand research quality across three key dimensions. First, we distinguish between quality notions that originate in research fields (Field-type) and in research policy spaces (Space-type). Second, drawing on existing studies, we identify three attributes (often) considered important for ‘good research’: its originality/novelty, plausibility/reliability, and value or usefulness. Third, we identify five different sites where notions of research quality emerge, are contested and institutionalised: researchers themselves, knowledge communities, research organisations, funding agencies and national policy arenas. We argue that the framework helps us understand processes and mechanisms through which ‘good research’ is recognised as well as tensions arising from the co-existence of (potentially) conflicting quality notions.

Keywords

Research quality notions; Research policy; Research fields; Research organisations; Knowledge communities

Introduction

In this paper, we revisit the contextual, complex and dynamic nature of research quality (Paradeise and Thoenig 2015). We mobilise a historical overview of the concept of, and concerns regarding, research quality, and how this developed alongside other major elements of the modern constitution of research. From this we develop a novel framework to study and understand research quality based upon three key dimensions. First, we distinguish between co-existing research quality notions that originate in research fields (F(ield)-type) and research spaces (S(pace)-type). Second, we draw upon existing studies of research quality (Polanyi 1962; Gulbrandsen 2000; Lamont 2009) to explicate its attributes. Third, we use contemporary studies of the science system and its dynamics (Whitley 2000; Whitley et al. 2018; Nedeva 2013) to identify the organisational contexts where notions of research quality emerge, are contested and institutionalised. This multi-dimensional framework and its components, we believe, afford opportunities to study issues around research quality as an empirical question beyond general, user-driven definitions. It also shifts the focus of study towards the processes and mechanisms through which ‘good research’ is recognised.

Issues around research quality have been around, in different guises, for as long as modern science. Debates on the demarcation between science and non-science dominated early twentieth-century logical positivism (Caldwell 2010). Later, debates on quality in science were anchored in notions of truth, pragmatic acceptance (Chalmers 2013) or the social and intellectual conditions of scientific knowledge (Merton 1942). These approaches to studying research quality, and we argue more recent ones as well, share one key feature: they regard judgements about the quality of science as an endogenous matter best left to the discretion of knowledge communities.

Here we argue that following the advent of research policy, and associated imperatives for increasing levels of accountability and legitimacy, mechanisms for constituting research quality notions that were once reserved for highly professionalised knowledge communities have extended to encompass notions generated within policy and funding domains. Most importantly, we see that research quality notions originating in research fields, or knowledge communities, and in policy and funding domains coexist.

This coexistence of research quality notions creates complex and multi-dimensional dynamics, implying that research quality cannot adequately be studied and understood as a unitary notion. Approaches allowing more nuanced, structured and multi-faceted investigation are required. Current approaches to studying research quality, we contend, fail to provide tools for such nuanced understanding. Hence, we propose to shift focus away from questions that essentially address what research quality is, and/or how to measure it, towards the mechanisms through which dominant notions of research quality become established and the social and intellectual tensions arising from the co-existence of (potentially) conflicting quality notions.

Our paper is structured as follows. First, we discuss the context of previous approaches to research quality. Second, we introduce in turn the three component dimensions of our framework, encompassing: (i) types of quality notions; (ii) attributes of quality; and (iii) sites where notions are established and institutionalised. Third, we combine these three dimensions into an overall framework to study and understand research quality notions. We then discuss ways in which this proposed approach stands to change the research agenda.

Context for (Re)framing Notion(s) of Research Quality

Throughout most of its history the science system has been composed of a multiplicity of research fields characterised by specific, and diverse, quality cultures (Feuer et al. 2002). Research field quality notions and standards were developed, to a large extent tacitly and without theoretical articulation, to meet the demands of the knowledge domain and, later, of users of knowledge results. This included more applied domains and communities, the achievements of which would be validated using primarily bureaucratic, military, and/or industrial procedures (as outlined by Shapin 2008).

Diverse knowledge communities worked within specific structures and communication forums, predominantly university departments, national and international conferences, and international journals and publishing houses. Even though research fields, in this sense, have always been predominantly global – a point stressed by Ben-David (1971) and Merton (1942) – domestic languages and local conditions functioned as important complements, evidenced in national modes and styles of work and communication, most notably in humanities and social sciences (Fourcade 2009), but to some degree visible in all fields (Salö 2015, 2017). Furthermore, trans-disciplinary communities have blended and transferred methods and theories across institutional and organisational settings, e.g. molecular biology and physics (Keller 1990), and humanities (Sörlin 2018).

This historical co-existence of diverse research fields with specific structures, communication systems and organisational arrangements implies that variable, context-dependent and flexible notions of research quality have also traditionally co-existed. Importantly, however, despite differing in their specifics, these quality notions remained predominantly intrinsic to the concerns of science.

A different type of research quality notions started to emerge in the mid-20th century. These were marked by the advent of (national) science policy, initially focused mainly on national security and competitiveness (Mukerji 1989) but later shifting attention to the efficiency of the science system on several scales, including regionally and globally (Nedeva and Boden 2006; Geuna and Martin 2003; Sörlin 2007; Vessuri et al. 2014; Lepori et al. 2018) and commercial application of results (Etzkowitz et al. 1998; Jacob et al. 2002; Howells et al. 1998).

Recently, and partly driven by the global scope of research, understanding of research quality by national governments has been somewhat converging. Notions of ‘research excellence’, building on expectations for international research influence, and hegemony, underpin many national evaluation regimes and systems (Flink and Peter 2018). Furthermore, this alignment of quality notions is enabled by the sophistication of techniques to monitor international linkages and visibility of research (cf. Larédo and Mustar 2001; Oreskes and Krige 2014), including the widespread application of indicators to summarise and often commodify performance and understandings of research quality (van Raan et al. 1989).

Rankings, and other instruments for institutional comparisons, draw heavily on international publications and their impact to present a more fine-grained analysis of scientific contributions (Piro and Sivertsen 2016). Journals have become proactive in cultivating their impact profiles, soliciting research with potentially high impact and enticing researchers to publish more ‘sensational’ results. Significant differences between countries and fields in their quality cultures have been challenged by global communication forums, mobility of scientific elites, and convergence pressures from international standard setters in different scientific areas (Douglass 2015; Sarewitz 2016; cf. Fourcade 2009 on the diminishing role of national quality cultures in economics).

A rapidly growing set of techniques for selection and assessment has developed, largely replacing the trust-based system of science. Funding for science and research has become subject to selectivity and competition. Universities, previously rigorously regulated but seldom or never compared or ranked, are now subjected to global quality standards largely outside their influence; and governments, funders, students and citizens pay attention to their performance in these ‘quality contests’ (see, for example, Paradeise and Thoenig 2015).

A seemingly global quality standard has emerged, against which individual researchers as well as scientific organisations, subject fields, and even countries can gauge their relative positioning. This has been emulated by funding agencies, which increasingly see themselves either as leaders in setting quality ‘gold standards’ (the European Research Council is an example, see Chou and Gornitzka 2014; Edler et al. 2014; Nedeva et al. 2012; Flink 2016), or as followers adopting these standards, like the Research Council of Norway (Benner and Öquist 2014). Measurable output catches attention, and countries (and institutions, and individuals) adapt to this.

Overall, this crude overview suggests a historical path that we frame as the gradual emergence of two separate but co-existing types of research quality notions. These represent different social contexts: the context of the research field and that of the research funding and policy space (after Nedeva 2013).

Framework Dimensions to Study and Understand Research Quality

The above review of background and context leads us to the first dimension of our framework: two co-existing types of research quality notions. One type originates within research fields and is negotiated and established by the specialised knowledge communities for which these notions are assumed to have validity. This we label F(ield)-type research quality notions. The other originates within research policy and funding spaces (i.e. research spaces, see note 1), is advanced by knowledgeable lay groups, and will often be assumed to have validity across different fields. We refer to it as S(pace)-type research quality notions. In this section we describe this first component in more detail. We then introduce the two further dimensions we assert are required to produce an overall framework for nuanced, multi-dimensional study and understanding of research quality: attributes of quality notions; and sites of contestation/institutionalisation.

First Dimension: Co-existing F(ield) and S(pace) Type Research Quality Notions

F(ield)-type notions of research quality originate in research fields. They are shaped by specialised, albeit fragmented, knowledge communities characterised by high level entry requirements, professional training, unified research practices and recognised bodies of knowledge (Cahan 2003; Höhle 2015). Hence, quality judgements are anchored in knowledge pools and/or conditions necessary to enhance these pools. For example, research is judged as extending and validating the existing knowledge or as pushing boundaries by developing theories, methods and approaches in the field.

This type of research quality notions may incorporate criteria, and standards, around properties of knowledge (e.g. original, reliable, relevant to the field, useful for further knowledge production, reproducible etc.), professional competence (reputation, ethics etc.) and intellectual and material conditions for research (method, theoretical grounding, instrumentation, experimental set up etc.).

Finally, F-type research quality notions are enforced predominantly through peer judgement and peer review practices. These are used at multiple selection points, including recruitment and promotion of research staff, publishing, conference participation, and access to resources in the field like instrumentation, materials and funding.

S-type notions of research quality, on the other hand, originate in policy and funding spaces (‘research spaces’, see Nedeva 2013).1 They are developed and established by knowledgeable lay groups. These may include policy groups, administrators, research organisation leaders and research funding agency staff. Notably, researchers from neighbouring and more epistemically distant research fields could also be considered to be knowledgeable but lay groups.2

Here judgement is anchored in considerations exogenous to the field’s specific knowledge pools. These considerations have historically been manifested as concerns for the social and economic contribution of, and from, science. When knowledgeable lay groups develop research quality notions, quality standards may come to rest on proxies and/or general reputation, as opposed to substantive standards. Using the reputation, and impact factor, of specific journals as a proxy for the quality of research papers is one example. Lastly, S-type research quality notions are enforced through evaluation regimes that may or may not involve some variant of peer review.

The main features of F-type and S-type research quality notions are summarised in Table 1. This separates out the subject of the notions, how judgement is anchored and enforced, and whether quality standards involve substantive (F-type) or proxy (S-type) based judgements. This provides us with some key differences in the origins, mechanisms and processes associated with these two ‘pure’ types.
Table 1

Types of research quality notions

|  | F-type quality notions | S-type quality notions |
|---|---|---|
| Subject: who forms quality notions | Specialised knowledge communities | Knowledgeable lay groups, incl. researchers in neighbouring fields |
| Judgement anchor | Knowledge pools and conditions to advance scientific knowledge | Exogenous considerations incl. social and economic concerns |
| Enforcement | Peer judgement and peer review practices | Regional, national and local evaluation regimes |
| Judgement standards | Substantive judgement of: properties of knowledge; professional competence; conditions for research | Proxy-based judgement of: properties of knowledge; professional competence; conditions for research |

From our earlier context review we see that F-type quality notions preceded S-type quality notions. With the distinction between the two types thus made, opening quality standards from research fields to scrutiny and influence of actors in research spaces would seem to be one of the key contemporary changes in the dynamics of the science system.

Most importantly, to study research quality we should also understand that the two types – F-type and S-type – co-exist. This co-existence, and the possibility for tensions generated by it, opens a novel, and exciting, research agenda on research quality, research quality standards and how these are constituted and established.

Second Dimension: Research Quality Attributes

An overview of the otherwise diverse literature on quality yields three attributes of research considered important in the consensus on what constitutes ‘good’ research. These are originality/novelty, plausibility/reliability, and the value or usefulness of the research. Notably, these are composite categories of attributes and may have very different content in different types of research.

Back in the 1960s, Michael Polanyi (1962/2000) outlined ‘standards of scientific merit accepted by the scientific community’ (Polanyi 2000: 4), including originality, plausibility/reliability and scientific value. In Polanyi’s terms, plausibility refers to rejecting fraud and conclusions which “appear to be unsound in light of current scientific knowledge” (Polanyi 2000: 5). Scientific value denotes the “systematic importance” of a contribution, “the intrinsic interest of the subject-matter”, as well as its accuracy, whereas originality is assessed by the degree of “unexpectedness of a discovery” (Polanyi 2000: 5–6).

Weinberg, on the other hand, argued the need for criteria to prioritise scientific fields and added external criteria such as technological and social merit (Weinberg 1963).3 Hence, still in the 1960s we find emphasis on basic scientific standards to describe research quality, more specifically scientific merit, and a clear distinction between criteria internal and external to science.

This key distinction outlined by Polanyi and Weinberg reappears in later empirical studies of researchers’ notions of research quality. A study based on interviews with merited senior researchers in ten fields of research explicated dimensions of research quality in line with Polanyi’s, namely solidity and originality, and split relevance/value into two: scientific as well as societal relevance/value (Gulbrandsen 2000; Gulbrandsen and Langfeldt 1997).

In a study of research grant application review, Lamont (2009) came up with more or less the same aspects in a list of key review criteria including: methods (another manifestation of plausibility), intellectual and/or social significance, and originality. Lee (2015: 1276), referring to Lamont (2009), briefly explained these three criteria as follows: “novelty promotes the discovery of new truths, methodological soundness assesses the likely truth of study conclusions by evaluating the reliability of data collection and analysis strategy, and determinations of significance tell us which novel truths are most interesting or important” (italics added).

The study of research quality has also been approached through a focus on norms. This has included perspectives from sociology and philosophy of science. For instance, Tranøy (1976, 1986) outlined general scientific norms, related to scientific methodology matters – like truth/probability, testability, coherence, simplicity/completeness, honesty, openness and impartiality/objectivity – as well as originality and relevance/fruitfulness/value (Tranøy 1986: 144ff). Merton approached the issue of research quality through formulating the social imperatives (norms) of science as a social system: communism (openness), universalism (impersonal criteria and reproducibility of results), disinterestedness (impartiality and imperviousness to interests exogenous to science) and organised scepticism (scrutiny and thoroughness) (Merton 1942/1973: 269). Merton also argued originality is one of the institutional norms of science (Merton 1957/1973: 293).

Here it suffices to note that Merton and Tranøy, whilst using conventional perspectives of sociology and philosophy of science, converged on similar norms. Empirical studies of researchers’ research quality notions (Hemlin 1991; Gulbrandsen 2000; Lamont 2009) have also found similar attributes. We now explore these three key attributes below, as part of the second component dimension of our framework.

Originality/Novelty

Originality or novelty refers primarily to providing new knowledge and innovative research. These are key attributes for scientific knowledge to become a legitimate contribution to the knowledge pools of research fields. Still, according to the literature, there are multiple ways in which research can be original (Lamont 2009: 171–174; Gulbrandsen and Langfeldt 1997: 87).

Lamont (2009) and Hemlin (1991) find that originality relates to different aspects of research, such as the research ideas, topics, approaches, theories, data, methods, or the outcomes/findings. Originality may be incremental or radical and there may be different views on whether radical originality is desirable and/or acceptable, and notions vary between fields of research (Gulbrandsen 2000: 116). Generally, originality is often linked to curiosity and creativity as beneficial properties of the researchers and/or the research environment (Bazeley 2010; Lamont et al. 2007; Gulbrandsen 2000: 138).

Plausibility/Reliability

Empirical studies of researchers’ notions of research quality have identified a number of notions around plausibility, or reliability, of research. These include correctness, rigor, sound methods, thoroughness and clarity, as well as research integrity and ethics. Different fields emphasise different kinds of reliability as the important ones (Lamont 2009: 167; Gulbrandsen 2000: 115; Hemlin 1991). Gulbrandsen found experimental fields are concerned with replicability, whereas engineers sometimes consider successful industrial implementation an important indicator of reliable research. In the humanities, researchers emphasise the importance of thorough arguments, whereas economists value well-specified models, consistency and testability (Gulbrandsen 2000: 114–115).

Lamont found clarity, rigor, methodological soundness and craftsmanship to be important (Lamont 2009). In Swiss humanities, Hug et al. identified stringent argumentation, presentation of relevant documents and evidence, clear language, clear structure, reflection of method, and adherence to standards of scientific honesty, to be standards of plausibility and reliability (Hug et al. 2013: 374). In an Australian survey of science, social science and humanities, ‘methodologically sound’ was a primary descriptor of research performance (Bazeley 2010: 895).

Mårtensson et al. identified credibility as one of four main characteristics of research quality in a multidisciplinary context, with sub-characteristics such as rigorous, reliable, coherent and transparent (Mårtensson et al. 2016: 599). Another main characteristic was conforming, including research ethics and basic conditions for plausibility/reliability, like avoiding plagiarism and fraud, and preventing harmful social consequences. This provides a novel way to describe key dimensions of research quality, more aligned with contemporary policy emphases on open and responsible science.

Value/Usefulness

We can distinguish two aspects of the value, or usefulness, of research: its scientific value and its value outside science. Scientific value/usefulness concerns how research progresses a research field and advances scholarly debate. Societal value/usefulness addresses multiple social domains and time horizons, e.g. environment, welfare, health, economy, equity, technological development, cultural heritage.

Lamont (2009) found research evaluation panels concerned with intellectual significance as well as the political and social importance of research topics. Impact on academia, knowledge and the field, as well as political and social impact, were considered important (Lamont 2009: 175). For the humanities, Hug et al. identified scholarly exchange, connecting to other research, and impact on the research community and on future research as consensus standards (Hug et al. 2013: 374–375). Researchers in fields oriented towards practical applications tend to place more emphasis on societal relevance, e.g. in engineering sciences and clinical medicine (Gulbrandsen and Langfeldt 1997; Hemlin 1991).

Overall, for this second dimension we find that ‘research quality’ is a distributed notion referring to multiple attributes. Decomposing the notion into attributes such as originality, plausibility and value, may reveal significant differences in what is understood as ‘good’ research in varied research fields and organisational contexts. From the empirical studies referred to above, we see that the various attributes have quite different contents in e.g. humanities and engineering. It also raises questions regarding inherent social dynamics in establishing dominant notions of research quality and (potential) tensions this may entail. This context dependency leads us to the third dimension of our framework: organisational sites.

Third Dimension: Organisational Sites

The third dimension of our framework addresses and identifies organisational contexts where we believe dominant notions of research quality are established and institutionalised. Research quality, as we have argued, is a multi-dimensional and context-dependent notion. To unpack the social processes through which dominant notions of research quality are established we must understand the organisational contexts where they occur and interactions between them.

Building on understanding the science system as a set of authority relationships (Whitley 2011) between research spaces and research fields (Nedeva 2010, 2013), we identify five organisational sites where notions of research quality are constituted, negotiated and institutionalised. At each site we posit there will be a number of key concerns – relating to our previous two dimensions of ‘types’ and ‘attributes’ – as well as specific authority and institutionalisation characteristics. Table 2 provides an overview of the five sites and their characteristics that we then discuss in turn below.
Table 2

Organisational sites

| Site | Key concerns/interests | Authority/control over | Mechanisms for institutionalisation |
|---|---|---|---|
| Individual researchers/groups | Individual (often intrinsic to science) | Expertise/peer judgement | Socialisation in graduate school/knowledge networks |
| Knowledge communities/networks (journals, conferences etc.) | Intrinsic to science | Dominant approaches/theories/methods; reputation | Peer review |
| Research organisations | Intrinsic and extrinsic (may vary by balance of block grants versus project funding) | Recruitment/organisational careers; local infrastructure/resources | Negotiated between researchers and organisational elites, e.g. recruitment criteria |
| Research funding agencies | Broader socio-economic and/or intrinsic to science | Funding/research resources (incl. reputation) | Criteria and review guidelines; negotiated between selected stakeholders (incl. users) |
| Regional/national policy | Broader socio-economic (incl. ideology) | Regional/national agendas; evaluation regimes | Negotiated between e.g. political elites, organisational elites and field elites (incl. users) |

Individuals and Research Groups

When assessing research quality, individual researchers use their scholarly ‘luggage’, resulting from e.g. socialisation during doctoral studies (Becher 1989: 25–27) and interaction with scholarly networks in their field, as well as influences from outside the field. Their key concerns can be expected to be intrinsic to science, with F(ield)-type notions. Still, as noted above, in applied fields this may include strong emphasis on societal relevance (Gulbrandsen and Langfeldt 1997; Hemlin 1991). Within a specific research field or research group, notions may vary considerably between individual researchers and be dynamic. They may cluster around a fairly stable core of professional standards acquired through the early stages of socialisation in academe, but vary, and change, depending on the context of assessments (Lamont 2009). For example, the underlying quality notions when assessing a PhD thesis may be different from those used to review proposals for large research grants.

Knowledge Communities/Networks

For knowledge communities across their various networks, quality notions would be constituted, negotiated and signalled through selection practices, e.g. conferences, journals, seminars, workshops and academic training arrangements. Knowledge communities tend to influence (or control) perceived dominant approaches, theories and methods in their research field (Whitley et al. 2010), with peer review being a decisive selection practice reflecting the key quality notions. The degree of codification (stability and explicitness) of these notions may vary, but we expect all research fields to have notions of research quality that underlie refereeing and reviewing criteria, training programmes, and rules for professional conduct.

Research Organisations

Research organisations are where research quality is negotiated between researchers (using quality notions from the ‘research field’) and organisational elites (translating policy pressures from the policy and funding-related ‘research space’). The institutionalisation processes for quality notions here would therefore include such aspects as criteria to recruit staff, the academic career system and allocation of research resources. Research organisations can attempt to affect research lines – individual and collective (Gläser 2016) – and send signals back to policymakers that can potentially transform funding/policy notions of research quality.

Research Funding Agencies

Funding agencies operationalise ‘research quality’ through their governance mechanisms (Hellström 2011; Borlaug 2015; Kuhlmann and Rip 2014). These include applying peer review to assess research proposals and to allocate funds. Here research quality is a ‘boundary object’ (Star and Griesemer 1989). Funding agencies provide a space for interactions between policy and research communities, and are arenas for constant negotiations of quality notions (Rip 1994; van der Meulen 1998; Jasanoff 1990). Consequently, funding agencies are – like research organisations – sites where S-type and F-type research quality notions co-exist, interact and are negotiated. They have quite different responsibilities – and purposes for assessing research/researchers – than research organisations and may set the terms for competition between research fields and topics far more explicitly.

Regional/National Policy

The policy site is not constrained to just ‘science policy’ but includes other essential relationships between research organisations and their regional (or indeed trans-national) and national funding/policy environment. Quality notions may here involve ‘system’ properties, such as functional institutions for resource allocation, procedures for priority setting, efficient collaboration between parts of the research and innovation system, and, not least, an intention of the system to assure performance by individual researchers and/or research organisations.

Overall Framework to Study Research Quality Notions

We now present our proposed overall framework to study research quality notions. Table 3 incorporates the three dimensions of the framework. For simplicity, dimension 1, F/S type notions, includes two key components: research quality judgement anchor and quality standard (substantive or proxy-based). For dimensions 2 (research quality attributes) and 3 (the organisational sites) all components are included.
Table 3

Overall framework: Research quality notions (dimension 1: anchors and standards), attributes (dimension 2) and organisational sites (dimension 3)

| Sites | Anchors; standards | Originality/novelty | Plausibility/reliability | Value/usefulness |
| --- | --- | --- | --- | --- |
| Individual researchers/groups | Anchored in: | Personal knowledge of knowledge pool in the field | Theory, method/methodology, infrastructure, facilities | Personal and field research agenda |
| | Quality standard: | Substantive judgement of novelty of knowledge | Robustness, professionalism, reputation/credibility | Value for own research programme; enabling further knowledge in field; citations, accolades |
| Knowledge communities/networks | Anchored in: | Collective research pool of field (sometimes out of field) | Dominant theory/approaches/methodology | Collective research goals and values |
| | Quality standard: | Substantive: novelty for research field; path-breaking (vs. mainstream) research | Robustness, professionalism, reputation/credibility | Epistemic properties of knowledge (varies across fields); enabling further research, pushing boundaries of field, expanding research field’s knowledge pool |
| Research organisations (universities, research institutes) | Anchored in: | Dual nature of judgement: according to personal and collective research lines plus exogenous expectations | Creating enabling conditions for research (possible tensions if focus is on outputs rather than conditions) | Dual nature of judgement: exogenous usefulness notions and/or according to organisation/department priorities |
| | Quality standard: | Dual nature of standards: substantive and proxy-based (potential for tensions) | Research culture and quality assurance processes (internal and external evaluations) | External judgement and proxies for scientific value and/or societal impact (potential tensions between substance and proxies) |
| Research funding agencies | Anchored in: | Individual and collective research lines and knowledge | Theory, method, facilities, research environment, resources | Dual nature of judgement (between research fields and policy context): dominance depends on structural position of funding agency |
| | Quality standard: | Path-breaking potential, novel approaches etc. | Feasibility; plausibility of producing reliable knowledge | Value for e.g. disciplinary/interdisciplinary domains of knowledge and/or society/specific programme objectives |
| National/regional policy space | Anchored in: | Potential for applications (may overlap with value/usefulness) | Applicability/robustness for use | Concerns for application, wealth creation and quality of life |
| | Quality standard: | Use of proxies (e.g. patents) | Proxies/(external) expert evaluation | Impact assessment, proxies and/or expert advice (immediate applications and/or expected impacts) |

This framework holds three potential analytical strengths. First, it enables us to formulate expectations about quality notions. Second, it helps us to explicate differences in meaning depending on organisational contexts (e.g. to highlight that ‘usefulness’ may mean very different things within knowledge communities as compared to policy sites). Third, it allows us to unpack possible tensions that might be developing in certain organisational contexts, for example, because of some specific interplay between F-type and S-type notions over time in particular fields, organisations, or national funding/policy settings.

Below, we address important points under each of the organisational sites.

Individual Researchers/Groups and Quality Notions

Notions of research quality become manifest at the level of researchers, individually and in groups, but may be hard to pinpoint because they are rarely codified. Researchers’ quality notions can be tacit, and derived from comparison (e.g. to peers, to past and present research). They may also be highly context-dependent and dynamic. Individual perceptions of quality, and criteria for judgement, can evolve, and norms can shift within short time spans. For example, when individual researchers are asked to serve on panels to assess research proposals, the norm(s) against which they judge ‘excellence’ (or quality) may change between batches of proposals (Lamont 2009). Assessing research ‘involves the making of a number of subtle, indeed tacit judgements’, and criteria are too specialised, and science too rapidly changing, for formal categories of research quality to be established (Ravetz 1971: 274).

Nevertheless, researchers are a useful empirical entry point to unpack issues around research quality and tensions that might arise when differing quality notions are applied and contested. Here we should keep in mind that notions are also personal, multi-dimensional and relate to the (sub)field of the researcher. Typically, a key aim of researchers is to advance knowledge in their specific field so their quality standards can be expected to be anchored in the substance of their personal knowledge pool and agenda, and address robustness as well as novelty and value of the research topics, issues and problems upon which they work.

Knowledge Communities and Quality Notions

Research fields include knowledge communities, such as journals and conferences that are important in (re)defining the field. Here quality notions are anchored in the collective knowledge pool of the field (when addressing originality/novelty and value/usefulness), and the dominant theory, approaches and methodology (when addressing plausibility/reliability).

Knowledge communities are key sites for the constitution of quality notions, but there appears to be little empirical research on how criteria and standards of knowledge communities are formed. The literature indicates that in assessing manuscripts submitted to scientific journals, importance and relevance for the audience of the particular journal are key criteria – and are thus very context/field-specific – and reviews sometimes fail to detect basic weaknesses in solidity of data, methods and analysis (Lee 2015). In other words, there is a possible tension to explore, for instance, in (some) knowledge communities when peer review might favour originality/novelty over plausibility/reliability.4

At this site, assessments and the formulation of criteria for ‘good research’ are ideally part of the ‘dynamic and critical self-reflection of the scientific community’ (Niiniluoto 1987: 22) and aim at the advancement of knowledge. However, the literature on journal peer review is often concerned with reviewer disagreement and bias (Weller 2001; Johnson and Hermanowicz 2017).5

Notably, the scope and focus of this site are genuinely different from the broader and policy-involved sites (research organisations, funding agencies, policy spaces, as discussed below). Assessment of single research works can address their value for the research goals of a specific field or research topic (and at a specific point in time). They are normally not intended for research policy purposes or for comparing/measuring research quality across fields.6

Research Organisations and Quality Notions

Research organisations are sites where F-type and S-type research quality notions most obviously meet (and quite possibly collide). Diverse F-type research quality notions are likely to permeate any given research organisation. At the same time, S-type quality notions can enter research organisations through the increasing professionalisation of management and leadership (Gornitzka and Larsen 2004; Sauder and Espeland 2009; Elken and Røsdal 2017).

Empirical study can also show the interplay of F- and S-type notions – for instance, the assessment of candidates for academic positions may involve assessments of the candidates’ productivity and international position (Hemlin 1991). In our terms these would be more S-type quality notions, reflecting concerns when assessing individual researchers in this institutional context, beyond simply a researcher’s contribution to a specific research field (F-type notions).

There may be differences between research organisations in the range of tensions and conflicts experienced in negotiating quality notions. This can vary by, for example, the organisation’s level of strategic and operational autonomy, and the balance between block grants and expectations of return on investment and performance (Gläser 2016). Research organisations host, and allocate resources internally to, a multitude of research fields. Fields may have very different (and conflicting) notions of value/usefulness, plausibility/reliability and originality/novelty, and the choice of criteria (and proxies) can have a large impact on resource allocation and on various local quality notions (e.g. Laudel and Weyer 2014). These issues might play out around the conditions for, and even the continued existence of, specific research fields within a research organisation, and tensions may be acute. Hence, when access to resources is limited, F-type quality notions may be contested by representatives of different knowledge communities.7 When resources are plentiful, different F-type quality notions may more easily co-exist.

Research Funding Agencies and Notions of Research Quality

As intermediate bodies, research funding agencies are expected to mediate and negotiate research quality notions. They are likely to embody both F- and S-type notions. Tensions experienced in trying to absorb such different notions may depend on the agency’s structural position in its regional/national research space. If it is an executive agency of government, tensions could be serious; if it is part of a ‘republic of science’, tensions might be minimal.

An empirical illustration from recent decades is the negotiation between F-type and S-type notions expressed through the terms ‘societal impact’ and ‘scientific excellence’ adopted by funding agencies. Numerous funding instruments have focused on the potential societal impacts of research, and on funding ‘excellent’ research and research with the highest potential for scientific breakthroughs (OECD 2014; Aksnes et al. 2012; Frodeman and Briggle 2012; Heinze 2008). There are potential tensions not only in negotiating F/S-type notions and the criteria setting the conditions for grant competition, but also at the micro-level. Some studies have, for instance, found biases and unwanted dynamics in grant panel decision-making (Arensbergen et al. 2014; Langfeldt 2001). And whilst researchers “tend by default to focus on scientific criteria in their judgements” (Nightingale and Scott 2007: 551), funding agencies may want to steer them to comply with S-type policy aims.8

There are multiple empirical entry points for studying the quality notions of research funding agencies. These include the kinds of stakeholders allowed to define programme objectives and criteria, the profile and objectives of an agency’s funding schemes, and its review guidelines and/or selection criteria. There are also micro-level activities, such as the rules and procedures agencies use to appoint researchers to grant proposal review panels, and to select reviewers more generally. At both levels, important decisions are taken regarding who is to mediate and resolve differing research quality notions.

National/Regional Policy and Quality Notions

Negotiation of quality notions at policy sites is a complex process involving various, and differing, interest groups. Tensions to explore here include any that emerge between policy elites, research organisation elites and research field elites. Overall concerns might relate to the value/usefulness of science for society and how to allocate public funding. Judgements of quality attributes (originality/novelty, plausibility/reliability and value/usefulness) can be anchored in concerns evidenced by white papers and other policy documents. Notions of quality in this context are also usually embodied in evaluation regimes and research evaluation systems.

Quality criteria and standards used in evaluation regimes might discriminate little between different research fields. Similarly, because some quality notions in policy arenas are developed and used by lay groups, these may rely on proxies, e.g. perceptions of the quality of journals as a proxy for the quality of research papers (Seglen 1997; Adler and Harzing 2009; Rafols et al. 2012; Nedeva et al. 2012).

A crucial point here is that notions of research quality are most often enforced through evaluation regimes. Hence to unpack the notions of research quality at this organisational site we may address: the ideology of the policy/funding research space (i.e. all the explicit and implicit assumptions of the value of science in society and how to support it); research funding modalities and flows; resource distribution; evaluation regimes and performance management approaches (e.g. the quality notions underpinning performance-based funding).

Discussion

In this paper we revisit notions and understandings of research quality and elaborate a novel framework for their study. As outlined in the second section, research quality notions and concerns regarding the quality of research can be traced back to the beginnings of modern science. Still, the concept of ‘quality’ itself was not in much use until relatively recently. A discourse on research quality appeared gradually, alongside other major elements of the modern constitution of research. One such element was the emergence of a competitive and pluralistic system of funding from (mostly) public sources. This demanded transparency, fairness and easy-to-use criteria, and expanded quality concerns beyond specific research fields.

Another element was work in the sociology of science repeatedly demonstrating that scientific inquiry yields highly differentiated results. Some research generated more interesting and useful results that were widely circulated and cited, and influenced other research more. This brought about a perception of distinctions between leaders and followers, ‘metropolis’ and ‘province’, and centres and peripheries, all linked to differences in research quality (Shils 1961a/1972; 1961b/1975; 1988). In turn, this led to a hierarchical understanding of the organisation of science, whereby some organisations, and indeed individuals, constituted the ‘elite’ again in an assumed relation to quality (Zuckerman 1977).

Hence, the sheer growth of the research system and the need for criteria to distribute funds and allocate prestige across this rapidly growing system propelled a demand to articulate an idea of research quality in the immediate post-WWII decades. This explains the surge of definitions in the early generation of seminal work in the philosophy and sociology of science by authors such as Merton, Polanyi, Ben-David, and others. However, these thinkers still operated very much in a classical paradigm and did not use a concept of quality much themselves.9 Notably, Merton talked about ‘norms’ or ‘imperatives’. Polanyi used the term ‘merit’. The terms suggest that what they had in mind was an understanding of research quality that was essentially rooted in method, appropriate procedure and virtuous application.

In the following decades, the conditions for a discussion of research quality changed fundamentally. Use of the concept of ‘quality’ started to grow rapidly in the 1980s and 1990s, when its meaning also widened to encompass quality processes and quality management, under influence from private industry and New Public Management. That was also when indicators first came into more widespread use, and even early attempts to analyse their emergence and application showed an awareness that they sometimes reflected a drift, or dilution, of research quality (van Raan et al. 1989).

It is now apparent that ‘research’ is a very diverse activity taking place in and across an equally diverse set of nation states, cultures, and organisations. New notions of research quality have emerged and supplement those of the 20th century foundational thinkers of Western sociology and philosophy of science. However, these notions have not been very well articulated and, above all, the growing diversity of notions of research quality has lacked a meaningful framework that can link the institutional conditions and diversity to the empirical manifestations of quality and its criteria.

Building such a framework also entails a new theory of research quality, better suited to explaining and organising the plethora of quality definitions currently in circulation. The backbone of such a theory is the increased diversity of research itself. The classical notion was predicated on the discipline as the singular, exclusive road to in-depth scientific knowledge. Empirical work on how science is conducted suggests that the discipline itself is increasingly becoming a phenomenon of the past (Weingart and Stehr 2000). Most of the classical disciplines have become so large that their internal diversity spans a very wide range of methodological approaches, making distinct definitions of quality within a disciplinary culture hard to uphold. In addition, hybrid areas develop, and there seems to be less concern among researchers themselves to articulate their disciplinary homes. Funding agencies and policies have clearly stimulated the growth of such hybrid areas over the last few decades, and notions of quality are correspondingly adrift.

These developments, along with the sheer growth of the societal research enterprise and its multiple mission orientations, have created a need for more pluralistic approaches and a new theorising of research quality. This is ultimately a discussion about the concept of ‘research’. A narrow definition of the concept is more compatible with the classical understanding of research quality, what we have called F-type notions, rooted in the dynamics of research fields and disciplinary cultures. A wider definition sits more comfortably with S-type notions, linked more closely to policy and societal applications.

Conclusion

In this paper we distinguish between two types of quality notions – F(ield)-type and S(pace)-type – and use these to elaborate a framework for the study and understanding of research quality. We outline three (potentially conflicting) attributes of research quality notions and the organisational sites where the notions emerge and get contested and institutionalised.

In short, the framework provides: a) empirical entry points and access through research fields; b) wider empirical coverage for analytical comparison; and c) an overall structure for information collection and analysis. The attributes of research quality derived from the literature – originality, plausibility, and value – serve as analytical devices, helping us understand differences in emphasis between research fields, funding and policy spaces, and organisational sites (e.g. research journals and conferences, research organisations, funding agencies etc.). Key aims of such studies would be to understand the role and interaction of F-type and S-type notions of research quality in defining good research, and in developing, and contesting, criteria and indicators. Furthermore, this raises a set of questions about the ways in which research quality criteria affect research practice and content.

Studying research quality notions implies trying to capture diverse and tacit notions that are expressed through context-dependent assessments of which projects are most worth funding, which papers are publishable, and which researchers should be employed or promoted, or that are expressed in e.g. national evaluation regimes. Such formal assessments are triggered by the need to allocate resources, not to define quality as such, and their conclusions result from the combination of the selected reviewers, the evaluation objects, and the organisational sites and their quality notions. In other words, the co-existence of quality notions also depends on the purpose of evaluation. However, we know little about how formal peer review and research evaluation interact with more general notions of research quality, or how notions are affected by the increasing availability of quantitative indicators of research performance.

The framework has implications for the study and understanding of research quality and of how research fields organise themselves around research quality notions. It re-focuses attention to include: research on the social processes through which dominant notions of quality are established and institutionalised; research on the organisational and institutional tensions that different notions of research quality generate in research organisations, research funding agencies and research evaluation regimes; comparative study of research quality notions specific to different research fields and the ways these ‘travel’ across research fields and to research spaces; study of assessment of individual researchers with regard to their research profile, or career stage, or whether they try to adapt to F- or S-type quality notions. Methodologically, applying the proposed framework implies that comparative studies of notions of research quality take research fields as entry points, and extend to national policy and funding spaces, rather than compare the assumptions of blanket, national evaluation and quality regimes. Last but not least, using this framework makes it possible to formulate expectations about tensions at different junctures of the science system and the ways in which these can be alleviated and resolved.

This framework also has two important practical applications. First, members of research fields can use it to understand, and if necessary change, the ways in which structures using quality notions are organised; e.g. all structures and arrangements demanding selection. Second, there are implications for policy in signalling the necessity for the development and implementation of more nuanced evaluation systems that account for the specific research quality attributes and notions in diverse research fields, and hedge against irresponsible, intended or unintended, use of proxies for research quality.

The primary strength and usefulness of our approach, we believe, is that it brings the concept of research quality into contact with the multiple domains of activity where research takes place. We also believe that our framework may offer a navigation tool, both for scholars reflecting on the interrelationship of science and policy and for practitioners in policy, funding, and evaluation who have so far had little systematic and conceptual support in their work to identify and reward research quality. We hope that future work, by ourselves and others, will be able to go deeper into the empirical and operational manifestations of research quality in these domains. Investigating research quality empirically and theoretically is, to a large extent, work that lies ahead.

Footnotes

  1.

    This paper uses a conceptualisation of the science system that incorporates two kinds of dynamics: research space dynamics and research field dynamics (Nedeva 2010, 2013). In this context, ‘research spaces’ are funding and policy environments outlined by the key relationships of research organisations within which the rules of knowledge production and knowledge use are negotiated. ‘Research fields’, on the other hand, can be empirically accessed through converging knowledge communities (networks), coherent bodies of knowledge and research organisations.

  2.

    In some contexts, the pool of qualified peers may be very small (Chubin and Hackett 1990). Hence, for evaluation of specialised and small fields most of the research community would be considered non-peers.

  3.

    Weinberg’s external criteria also include ‘scientific merit’ as assessed from neighbouring fields (Weinberg 1963/2000: 259).

  4.

    This may be a conscious priority and/or because of limited reviewer ability for, or even interest in, controlling plausibility/reliability.

  5.

    A comment in Nature, summing up conclusions from the literature, states that “how and whether peer review identifies high-quality science is unknown” (Rennie 2016: 31). Concerns in later years address the inability of journal peer review to detect invalid results, fraud and misconduct.

  6.

    Nevertheless, assessments made by peers and knowledge communities can have impact far beyond a specific context and site, or at least are perceived so: “We give people more credit for publications in prestigious journals. We think more highly of people who have received grants, fellowships, awards, memberships in prestigious organizations – all based on the evaluation of others” (Cole 1983: 137).

  7.

    As, for example, in Sweden during the economic crisis years in the 1990s (Sörlin 2005; Benner and Sörlin 2007).

  8.

    This can be, for example, by introducing separate criteria and ratings of societal impact (Langfeldt and Scordato 2016). Studies of grant review indicate that the feasibility of projects is often highly emphasised, and grant review is accused of being ‘conservative’ and inhibiting unconventional projects (Lee 2015; Laudel and Gläser 2014; Luukkonen 2012). This suggests a counter-emphasis in relation to review of manuscripts for publication (noted above). According to Lee (2015), matters should be the other way around: more emphasis on significance/value when assessing grant proposals, and more emphasis on methodological soundness/solid methods when assessing manuscripts for publication.

  9.

    Even if Merton did not use the term quality much, he covered the topic well, in e.g. “Recognition and Excellence” (Merton 1960).

Acknowledgements

The research was funded by the Research Council of Norway, Grant Number 256223 (the R-QUEST centre). We are thankful to Professor Mats Benner, Professor Paul Wouters and the rest of the R-QUEST team for their input and comments on earlier versions of the paper. We are also indebted to the anonymous reviewers whose thoughtful, and thought-provoking, comments helped us improve the paper.

References

  1. Adler, Nancy J., and Anne-Wil Harzing. 2009. When Knowledge Wins: Transcending the Sense and Nonsense of Academic Rankings. Academy of Management Learning and Education 8(1): 72–95.Google Scholar
  2. Aksnes, Dag, Mats Benner, Siri Brorstad Borlaug, Hanne Foss Hansen, Egil Kallerud, Ernst Kristiansen, Liv Langfeldt, Antti Pelkonen, and Gunnar Sivertsen. 2012. Centres of Excellence in the Nordic Countries: A Comparative Study of Research Excellence Policy and Excellence Centre Schemes in Denmark, Finland, Norway and Sweden. Oslo: NIFU Working Paper 4/2012.Google Scholar
  3. Bazeley, Pat. 2010. Conceptualising Research Performance. Studies in Higher Education 35(8): 889–903.Google Scholar
  4. Becher, Tony. 1989. Academic Tribes and Territories: Intellectual Enquiry and the Cultures of Disciplines. Buckingham: Open University Press.Google Scholar
  5. Ben-David, Joseph. 1971. The Scientist’s Role in Society. Englewood Cliffs, NJ: Prentice Hall.Google Scholar
  6. Benner, Mats, and Gunnar Öquist. 2014. Room for Increased Ambitions? Oslo: Research Council of Norway.Google Scholar
  7. Benner, Mats, and Sverker Sörlin. 2007. Shaping Strategic Research: Power, Resources, and Interests in Swedish Research Policy. Minerva 45(1): 31–48.Google Scholar
  8. Borlaug, Siri Brorstad. 2015. Moral Hazard and Adverse Selection in Research Funding: Centres of Excellence in Norway and Sweden. Science and Public Policy 43: 352–362.Google Scholar
  9. Cahan, David. 2003. Institutions and Communities. In From Natural Philosophy to the Sciences: Writing the History of Nineteenth-Century Science, ed. David Cahan, 291–328. Chicago: University of Chicago Press.Google Scholar
  10. Caldwell, Bruce J. 2010. Beyond Positivism: Economic Methodology in the Twentieth Century. London: Routledge.Google Scholar
  11. Chalmers, Alan F. 2013. What is This Thing Called Science, 4th ed. Indianapolis: Hackett Publishing Company.Google Scholar
  12. Chou, Meng-Hsuan, and Åse Gornitzka (eds.). 2014. Building the Knowledge Economy in Europe: New Constellations in European Research and Higher Education Governance. Cheltenham: Edward Elgar.Google Scholar
  13. Chubin, Daryl, and Edward J. Hackett. 1990. Peerless Science: Peer Review and U.S. Science Policy. New York: State University of New York Press.Google Scholar
  14. Cole, Stephen. 1983. The Hierarchy of the Sciences? The American Journal of Sociology 89(1): 111–139.Google Scholar
  15. Douglass, John A. (ed.). 2015. The New Flagship University: Changing the Paradigm from Global Ranking to National Relevancy. Basingstoke: Palgrave Macmillan.Google Scholar
  16. Edler, Jakob, Daniela Frischer, Michaela Glanz, and Michael Stampfer. 2014. Funding Individuals—Changing Organisations: The Impact of the ERC on Universities. In Organizational Transformation and Scientific Change: The Impact of Institutional Restructuring on Universities and Intellectual Innovation, eds. Richard Whitley and Jochen Gläser, 77–109. Bingley: Emerald Group Publishing Limited.Google Scholar
  17. Elken, Mari, and Trude Røsdal. 2017. Professional Higher Education Institutions as Organizational Actors. Tertiary Education and Management 4(23): 376–387.Google Scholar
  18. Etzkowitz, Henry, Andrew Webster, and Peter Healey (eds.). 1998. Capitalizing Knowledge: New Intersection Between Industry and Academia. Albany: State University of New York Press.Google Scholar
  19. Feuer, Michael J., Lisa Towne, and Richard J. Shavelson. 2002. Scientific Culture and Educational Research. Educational Researcher 31(8): 4–14.Google Scholar
  20. Flink, Tim. 2016. Die Entstehung des Europäischen Forschungsrates: Marktimperative—Geostrategie—Frontier Research. Weilerswist-Metternich: Velbrück Wissenschaft.Google Scholar
  21. Flink, Tim, and Tobias Peter. 2018. Excellence and Frontier Research as Travelling Concepts in Science Policymaking. Minerva 56(4): 431–452.Google Scholar
  22. Fourcade, Marion. 2009. Economists and Societies. Princeton: Princeton University Press.Google Scholar
  23. Frodeman, Robert, and Adam Briggle. 2012. The Dedisciplining of Peer Review. Minerva 50(1): 3–19.Google Scholar
  24. Geuna, Aldo, and Ben R. Martin. 2003. University Research Evaluation and Funding: An International Comparison. Minerva 41(4): 277–304.Google Scholar
  25. Gläser, Jochen. 2016. German Universities on Their Way to Performance-Based Management of Research Portfolios. Sociologia Italiana 8(October): 151–176.Google Scholar
  26. Gornitzka, Åse, and Ingvild Marheim Larsen. 2004. Towards Professionalisation? Restructuring of Administrative Work Force in Universities. Higher Education 47: 455–471.
  27. Gulbrandsen, J. Magnus. 2000. Research Quality and Organisational Factors: An Investigation of the Relationship. Trondheim: Department of Industrial Economics and Technology Management, Norwegian University of Science and Technology.
  28. Gulbrandsen, Magnus, and Liv Langfeldt. 1997. Hva er forskningskvalitet? En intervjustudie blant norske forskere [What is Research Quality? An Interview Study Among Norwegian Researchers]. Oslo: NIFU-rapport 9/97.
  29. Heinze, Thomas. 2008. How to Sponsor Ground-Breaking Research: A Comparison of Funding Schemes. Science and Public Policy 35(5): 302–318.
  30. Hellström, Tomas. 2011. Homing in on Excellence: Dimensions of Appraisal in Center of Excellence Program Evaluations. Evaluation 17: 117–131.
  31. Hemlin, Sven. 1991. Quality in Science: Researchers’ Conceptions and Judgements. Doctoral dissertation. Göteborg: Department of Psychology, University of Göteborg.
  32. Höhle, Ester. 2015. From Apprentice to Agenda-Setter: Comparative Analysis of the Influence of Contract Conditions on Roles in the Scientific Community. Studies in Higher Education 40(8): 1423–1437.
  33. Howells, Jeremy, Maria Nedeva, and Luke Georghiou. 1998. Industry-Academic Links in the UK. Bristol: HEFCE.
  34. Hug, Sven E., Michael Ochsner, and Hans-Dieter Daniel. 2013. Criteria for Assessing Research Quality in the Humanities: A Delphi Study Among Scholars of English Literature, German Literature and Art History. Research Evaluation 22(5): 369–383.
  35. Jacob, Merle, Tomas Hellström, Niclas Adler, and Flemming Norrgren. 2002. From Sponsorship to Partnership in Academy-Industry Relationships. R&D Management 30(3): 255–262.
  36. Jasanoff, Sheila. 1990. The Fifth Branch. Cambridge: Harvard University Press.
  37. Johnson, David R., and Joseph C. Hermanowicz. 2017. Peer Review: Sacred Ideals and Profane Realities. In Higher Education: Handbook of Theory and Research, ed. Michael B. Paulsen, 485. Dordrecht: Springer.
  38. Keller, Evelyn Fox. 1990. Physics and the Emergence of Molecular Biology: A History of Cognitive and Political Synergy. Journal of the History of Biology 23(3): 389–409.
  39. Kuhlmann, Stefan, and Arie Rip. 2014. The Challenge of Addressing Grand Challenges: A Think Piece on How Innovation Can Be Driven Towards the “Grand Challenges” as Defined Under the European Union Framework Programme Horizon 2020. Report to ERIAB. https://doi.org/10.13140/2.1.4757.184.
  40. Lamont, Michèle. 2009. How Professors Think: Inside the Curious World of Academic Judgment. Cambridge, MA: Harvard University Press.
  41. Lamont, Michèle, Marcel Fournier, Joshua Guetzkow, Grégoire Mallard, and Roxane Bernier. 2007. Evaluating Creative Minds: The Assessment of Originality in Peer Review. In Knowledge, Communication and Creativity, eds. Arnaud Sales and Marcel Fournier, 166–181. London: SAGE Publications Ltd.
  42. Langfeldt, Liv. 2001. The Decision-Making Constraints and Processes of Grant Peer Review, and Their Effects on the Review Outcome. Social Studies of Science 31(6): 820–841.
  43. Langfeldt, Liv, and Lisa Scordato. 2016. Efficiency and Flexibility in Research Funding: A Comparative Study of Funding Instruments and Review Criteria. Oslo: NIFU Report 9/2016.
  44. Larédo, Philippe, and Philippe Mustar (eds.). 2001. Research and Innovation Policies in the New Global Economy. Cheltenham: Edward Elgar.
  45. Laudel, Grit, and Jochen Gläser. 2014. Beyond Breakthrough Research: Epistemic Properties of Research and Their Consequence for Research Funding. Research Policy 43: 1204–1216.
  46. Laudel, Grit, and Elke Weyer. 2014. Where Have All the Scientists Gone? Building Research Profiles at Dutch Universities and its Consequences for Research. In Organizational Transformation and Scientific Change: The Impact of Institutional Restructuring on Universities and Intellectual Innovation, eds. Richard Whitley and Jochen Gläser, 111–140. Bingley: Emerald Group Publishing Limited.
  47. Lee, Carole J. 2015. Commensuration Bias in Peer Review. Philosophy of Science 82(5): 1272–1283.
  48. Lepori, Benedetto, Emanuela Reale, and Andrea Orazio Spinello. 2018. Conceptualising and Measuring Performance Orientation in Research Funding Systems. Research Evaluation 27(3): 171–183.
  49. Luukkonen, Terttu. 2012. Conservatism and Risk-Taking in Peer Review: Emerging ERC Practices. Research Evaluation 21(1): 48–60.
  50. Mårtensson, Pär, Uno Fors, Sven-Bertil Wallin, Udo Zander, and Gunnar H. Nilsson. 2016. Evaluating Research: A Multidisciplinary Approach to Assessing Research Practice and Quality. Research Policy 45(3): 593–603.
  51. Merton, Robert K. 1942. The Normative Structure of Science. Reprinted in Merton, Robert K. 1973. The Sociology of Science. Chicago: University of Chicago Press.
  52. Merton, Robert K. 1957. Priorities in Scientific Discoveries. Reprinted in Merton, Robert K. 1973. The Sociology of Science. Chicago: University of Chicago Press.
  53. Merton, Robert K. 1960. “Recognition and Excellence”: Instructive Ambiguities. Reprinted in Merton, Robert K. 1973. The Sociology of Science. Chicago: University of Chicago Press.
  54. Mukerji, Chandra. 1989. A Fragile Power: Scientists and the State. Princeton: Princeton University Press.
  55. Nedeva, Maria. 2010. Public Sciences and Change: Science Dynamics Revisited. In Society, Culture and Technology at the Dawn of the 21st Century, eds. Janusz Mucha and Katarzyna Leszczynska, 19–38. Cambridge: Cambridge Scholars Publishing.
  56. Nedeva, Maria. 2013. Between the Global and the National: Organising European Science. Research Policy 42(1): 220–230.
  57. Nedeva, Maria, and Rebecca Boden. 2006. Changing Science: The Advent of Neo-liberalism. Prometheus 24(3): 269–281.
  58. Nedeva, Maria et al. 2012a. Understanding and Assessing the Impact and Outcomes of the ERC and its Funding Schemes. EURECIA Final Synthesis Report. http://erc.europa.eu/sites/default/files/document/file/eurecia_final_synthesis_report.pdf.
  59. Nedeva, Maria, Rebecca Boden, and Yanuar Nugroho. 2012b. Rank and File: Managing Individual Performance in University Research. Higher Education Policy 25(3): 335–360.
  60. Nightingale, Paul, and Alister Scott. 2007. Peer Review and the Relevance Gap: Ten Suggestions for Policy-Makers. Science and Public Policy 34(8): 543–553.
  61. Niiniluoto, Ilkka. 1987. Peer Review: Problems and Prospects. In Evaluation of Research: Nordic Experiences. Nordic Science Policy Council, FPR-publication No. 5 (NORD 1987:30).
  62. OECD. 2014. Promoting Research Excellence: New Approaches to Funding. Paris: OECD Publishing. https://doi.org/10.1787/9789264207462-en.
  63. Oreskes, Naomi, and John Krige (eds.). 2014. Science and Technology in the Global Cold War. Cambridge, MA: MIT Press.
  64. Paradeise, Catherine, and Jean-Claude Thoenig. 2015. In Search of Academic Quality. Houndmills, Basingstoke: Palgrave Macmillan.
  65. Piro, Fredrik Niclas, and Gunnar Sivertsen. 2016. How can Differences in University Rankings be Explained? Scientometrics 109(3): 2263–2278.
  66. Polanyi, Michael. 1962/2000. The Republic of Science: Its Political and Economic Theory. Minerva 1(1): 54–73 (Reprinted in Minerva 38: 1–32).
  67. Rafols, Ismael, Loet Leydesdorff, Alice O’Hare, Paul Nightingale, and Andy Stirling. 2012. How Journal Rankings Can Suppress Interdisciplinary Research: A Comparison Between Innovation Studies and Business and Management. Research Policy 41(7): 1262–1282.
  68. Ravetz, Jerome R. 1971. Scientific Knowledge and Its Social Problems. Oxford: Clarendon Press.
  69. Rennie, Drummond. 2016. Make Peer Review Scientific. Nature 535: 31–33.
  70. Rip, Arie. 1994. The Republic of Science in the 1990s. Higher Education 28(1): 3–23.
  71. Salö, Linus. 2015. The Linguistic Sense of Placement: Habitus and the Entextualization of Translingual Practices in Swedish Academia. Journal of Sociolinguistics 19(4): 511–534.
  72. Salö, Linus. 2017. The Sociolinguistics of Academic Publishing: Language and the Practices of Homo Academicus. New York: Palgrave Macmillan.
  73. Sarewitz, Daniel. 2016. Saving Science. New Atlantis 49: 4–40.
  74. Sauder, Michael, and Wendy N. Espeland. 2009. The Discipline of Rankings: Tight Coupling and Organizational Change. American Sociological Review 74(1): 63–82.
  75. Seglen, Per O. 1997. Why Impact Factors of Journals Should not be Used for Evaluating Research. BMJ: British Medical Journal 314(7079): 498–502.
  76. Shapin, Steven. 2008. The Scientific Life. Chicago: University of Chicago Press.
  77. Shils, Edward. 1961a. Metropolis and Province in the Intellectual Community. Reprinted in Shils, Edward. 1972. The Intellectuals and the Powers and Other Essays. Chicago and London: The University of Chicago Press.
  78. Shils, Edward. 1961b. Center and Periphery. Reprinted in Shils, Edward. 1975. Center and Periphery: Essays in Macrosociology. Chicago and London: The University of Chicago Press.
  79. Shils, Edward. 1988. Center and Periphery: An Idea and Its Career, 1935–1987. In Center: Ideas and Institutions, eds. Liah Greenfeld and Michael Martin, 250–282. Chicago and London: The University of Chicago Press.
  80. Sörlin, Sverker (ed.). 2005. “I den absoluta frontlinjen”: En bok om forskningsstiftelserna, konkurrenskraften och politikens möjligheter [“On the Absolute Frontline”: A Book on Research Foundations, Competitiveness, and What Politics Can Do]. Stockholm: Nya Doxa.
  81. Sörlin, Sverker. 2007. Funding Diversity: Performance-based Funding Regimes as Drivers of Differentiation in Higher Education Systems. Higher Education Policy 20(4): 413–440.
  82. Sörlin, Sverker. 2018. Humanities of Transformation: From Crisis and Critique Towards the Emerging Integrative Humanities. Research Evaluation 27(4): 287–297.
  83. Star, Susan Leigh, and James R. Griesemer. 1989. Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–1939. Social Studies of Science 19(3): 387–420.
  84. Tranøy, Knut Erik. 1976. Norms of Inquiry: Methodologies as Normative Systems. In Contemporary Aspects of Philosophy, ed. Gilbert Ryle, 1–13. London: Oriel Press.
  85. Tranøy, Knut Erik. 1986. Vitenskapen—samfunnsmakt og livsform [Science: Social Power and Way of Life]. Oslo: Universitetsforlaget.
  86. van Arensbergen, Pleun, Inge van der Weijden, and Peter van den Besselaar. 2014. The Selection of Talent as a Group Process: A Literature Review on the Social Dynamics of Decision Making in Grant Panels. Research Evaluation 23(4): 298–311.
  87. van der Meulen, Barend. 1998. Science Policies as Principal-Agent Games: Institutionalization and Path Dependency in the Relation Between Government and Science. Research Policy 27(4): 397–414.
  88. Van Raan, Anthony F.J., Antonius J. Nederhof, and Henk F. Moed. 1989. Science and Technology Indicators: Their Use in Science Policy and Their Role in Science: Select Proceedings of the First International Workshop on Science and Technology Indicators. Leiden: DSWO Press.
  89. Vessuri, Hebe, J.C. Guédon, and A.M. Cetto. 2014. Excellence or Quality? Impact of the Current Competition Regime on Science and Scientific Publishing in Latin America and Its Implications for Development. Current Sociology 62(5): 647–665.
  90. Weinberg, Alvin M. 1963/2000. Criteria for Scientific Choice. Minerva 1(2): 159–171 (Reprinted in Minerva 38(3): 255–266).
  91. Weingart, Peter, and Nico Stehr (eds.). 2000. Practising Interdisciplinarity. Toronto: University of Toronto Press.
  92. Weller, Ann C. 2001. Editorial Peer Review: Its Strengths and Weaknesses. Medford, NJ: Information Today.
  93. Whitley, Richard. 2000. The Intellectual and Social Organization of the Sciences, 2nd ed. Oxford: Oxford University Press.
  94. Whitley, Richard. 2011. Changing Governance and Authority Relationships in the Public Sciences. Minerva 49(4): 359–385.
  95. Whitley, Richard, Jochen Gläser, and Lars Engwall (eds.). 2010. Reconfiguring Knowledge Production: Changing Authority Relationships in the Sciences and Their Consequences for Intellectual Innovation. Oxford: Oxford University Press.
  96. Whitley, Richard, Jochen Gläser, and Grit Laudel. 2018. The Impact of Funding and Authority Relationships on Scientific Innovations. Minerva 56(1): 109–134.
  97. Zuckerman, Harriet. 1977. Scientific Elite: Nobel Laureates in the United States. New York: Basic Books.

Copyright information

© The Author(s) 2019

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Nordic Institute for Studies in Innovation, Research and Education (NIFU), Oslo, Norway
  2. Alliance Manchester Business School, University of Manchester, Manchester, UK
  3. Department of Business Administration, Lund University, Lund, Sweden
  4. History of Science, Technology and Environment, KTH Royal Institute of Technology, Stockholm, Sweden
