
Introduction

In this chapter, we explore how the introduction of performance-based research funding systems (PRFSs) in Denmark, Sweden, Norway and Finland is influencing the perception of research within universities. Here, performance-based resource allocation constitutes a new way of distributing institutional research funding, and its establishment is related to the general development of the increasing quantification of the higher education sector (Hicks 2012). Various performance measures are currently used to inform internal and external actors about organisational activities and to govern and control higher education institutions (HEIs) (see Chap. 2; de Rijcke et al. 2016). On the one hand, this has been propelled by demands from within the education sector. Academics have always been keen on evaluating and comparing the work of colleagues, and the development of quantitative tools to describe academic work has a long history (Garfield 1955; Nelhans 2013). With advances in information technology, quantification and performance indicators have become more refined, precise and complex but also more accessible to, and used by, professionals and amateurs alike (Gläser and Laudel 2007; Leydesdorff et al. 2016; van Raan 2005).

On the other hand, a number of external pressures have also been suggested as drivers of the increasing quantification of academic work. According to Portnoi, Rust and Bagley (2010), there is a clear trend towards global competition in the higher education sector. This is related to the advent of academic capitalism (Slaughter and Leslie 1997) but also to a global knowledge economy and a neoliberal paradigm in higher education governance (Olssen and Peters 2005). The increasing size and costs of the sector during the twentieth century have also created demands for greater efficiency, transparency and accountability. Responses have often comprised the introduction of new public management reforms, including marketisation, a strengthening of management structures and a focus on performance measurement (Paradeise et al. 2009). Thus, performance measures are used in various ways to assess institutional activities but also to incentivise universities and academics to increase their performance.

Although similar in many ways, the Nordic countries display considerable differences in university governance policies (Gornitzka and Maassen 2012, 124; Pinheiro et al. 2014). This also includes how metrics are used to assess, evaluate and reward academic work. Although all the Nordic countries have implemented PRFSs in recent years, the design of these systems varies. The systems have furthermore come to influence institutional resource allocation practices because local PRFSs are often established at institutional or subinstitutional levels. However, recent research has found that local implementations of PRFSs vary greatly and rarely reflect the configuration of national systems (Aagaard 2015; Hammarfelt et al. 2016). Aagaard (2015, 736) suggests that these findings ‘only can be explained by including local conditions and personal perceptions at lower levels of the institutions’. Therefore, it is imperative to study not only the local resource allocation systems but also the nonsystematic and informal use of metrics in the organisation and execution of research activities.

This is the aim of the present chapter; we study how the varying use of performance indicators in the national PRFSs of four Nordic countries is reflected within universities. Our intention is to explore how national performance metrics affect local perceptions of research as organisational actors make sense of these novel forms of resource allocation. As suggested by Weick (1995), an organisation is not only a formal structure, but it also includes the way people interpret and categorise their daily experiences to make sense of a more or less disorderly reality. How the metrics that are used in national PRFSs are understood and acted upon within universities is thus likely to be of major importance for the local organisation of research. An investigation of these issues allows for a deeper comparative analysis of the qualitative aspects of the ways in which indicators influence research practices. It also contributes to the ongoing debate on the design, use and effects of performance-based funding of university research (e.g., European Commission 2018). Thus, taking a closer look at the perceptions and uses of research metrics within universities may provide important insights into how external performance measures structure everyday thought and action.

Because national PRFSs vary in their design, we expect the influence of research metrics at the institutional level to vary as well. Therefore, we compare the national PRFSs in four Nordic countries and ask how they affect the way university actors perceive and make sense of research activities at the institutional level. To study this, we conduct a comparative study of the four countries, exploring how national macro-level funding systems are linked to organisational behaviour within the universities. We identify three factors highlighted in previous research on performance metrics that have been suggested as being instrumental in influencing organisational action. Through interviews with academics and managers at eight universities in Sweden, Norway, Denmark and Finland, we explore how these factors inform the perception of research in Nordic universities.

The chapter is structured as follows: first, we review previous studies that analysed the effects of performance measures. Based on this, we develop our analytical framework. The framework identifies three major ways in which research metrics influence HEIs: their ability to enable action, to enhance legitimacy and to solidify taken-for-granted representations of reality. Second, we describe the methods used for the analysis in the present study. We then turn to the design of the national PRFSs in Denmark, Finland, Norway and Sweden. Next, we present the empirical analysis of our interview data. The final section contains a comparative discussion of the results.

The Roles and Effects of Performance-Based Funding Systems

Performance measures are tools that describe organisational activity and are constructed and applied with the intention to direct organisational attention (see Chap. 2). When introduced to incentivise actors, to support and facilitate decision-making and to enhance accountability, they perform these functions in new ways, thus complementing or replacing previous practices (Dahler-Larsen 2014; Espeland and Stevens 1998). As incentives, they measure and monitor everyday work in very precise and compartmentalised ways, neglecting undefined aspects and introducing the risk of displacing holistic assessments. As support for decision-making, they may constitute a transparent basis for decisions, counteracting personal biases and fraudulent behaviour, but they may also substitute for qualitative assessments, peer review and professional judgement. To account for organisational activities, indicators easily replace trust between people and may cause a myopic concern for numerical comparisons (Porter 1995). In some respects, metrics are superior to alternative ways of describing organisational activity, but in other ways, they are inadequate. The most immediate benefit of metrics is their ability to enable clear comparisons and induce action, but some notable side effects are that they decontextualise the measured phenomenon and structure reality in ways that may not always be desirable (Dahler-Larsen 2014; Espeland and Stevens 1998; Rottenburg et al. 2015). Thus, research on the role and effect of performance measures points out several ways in which metrics may influence organisational action. Drawing on these insights, we identify three factors that cause metrics to affect organisations: actionability, legitimacy and institutionalisation.

Actionability

Actionability refers to the ability of indicators to induce action. This may occur either in decision-making processes, where indicators arbitrate between alternative courses of action, or where incentives are tied to the indicators, motivating the subjects of measurement to act in certain ways. Regarding decision-making, actionability is one reason behind the popularity of rankings: they transform differences in raw scores, which may be negligible, into clearly ordered alternatives ranging from less to more or from best to worst, thus facilitating decision-making (Espeland and Sauder 2007). Actionability has been identified in several studies as important for the influence of indicators. Aagaard (2015, 735), for example, shows how a publication indicator ‘functions as a potent instrument of managerial decision-making’. Even when the accuracy of indicators is questioned, they may be seen as useful. This has been shown to be the case for citation metrics (Aksnes and Rip 2009), the journal impact factor (Rushforth and de Rijcke 2015), journal lists (Mingers and Willmott 2013) and business school rankings (Wedlin 2007).

As noted by Espeland and Sauder (2007), measurement also alters the behaviour of the individuals being measured. Incentives combined with performance indicators are powerful tools for structuring action because measurement causes reactivity among the subjects being measured. Incentives may be remunerative or normative, positive or negative, and more or less formalised. Remunerative incentives imply the conditioning of material resources on some indicator. Here, PRFSs are instructive because funding is allocated based on performance, which is often measured using quantitative indicators. Normative incentives, in contrast, concern the symbolic gains and losses related to an indicator. Institutional reputation is an example: it is a critical resource for universities that is often thought to be related to various indicators, such as university rankings. PRFSs have also been suggested to contribute heavily to gains and losses in institutional reputation (Hicks 2012).

Legitimacy

Legitimacy is another factor that has been suggested to be important for the ability of performance measures to exert influence over organisations. Because metrics highlight various aspects of organisations and their activities, they can also impart legitimacy to the organisation by demonstrating its performance to internal and external actors. Whether metrics can perform this function depends on the legitimacy of the indicators because they must be accepted as valid. Here, we can distinguish between technical and normative legitimacy, where the former is conferred by a perceived correspondence between the indicator and its object, while the latter arises when the indicator is seen as appropriate to use. Regarding technical legitimacy, Bowker and Star (2000, 245) demonstrate the importance of designing indicators that resonate with people’s idea of the described phenomenon. Without a reasonable correspondence between the indicator and its object, there is a risk that people will reject the indicator as a valid representation of reality, rendering it unable to affect the organisation. This has been a major concern for research metrics, and the debate about their validity continues (Donovan 2007; Gläser and Laudel 2007; van Raan 2005).

However, normative legitimacy may be conferred on an indicator even though it has low technical legitimacy. Here, it is instead a matter of the perceived appropriateness of measuring at all, even though accurate metrics may be missing. Power (2004, 769) notes that ‘specific measurement systems may be defective and fail, but they also constantly reproduce and reinvent an institutional demand for numbers’. The desire to measure, hence, trumps the ability to do so accurately. A prominent example is university rankings, which have been criticised as invalid measures of scientific excellence (Harvey 2008; van Raan 2005; van Vught and Westerheijden 2010). External actors may, however, consider the limited information provided by rankings better than the alternative, which is often overwhelming and impenetrable. The rankings thus gain normative legitimacy and provide an ostensible transparency of university excellence. In a similar way, Rushforth and de Rijcke (2015) show that researchers see the journal impact factor as useful for various purposes, despite knowing its limitations. Aksnes and Rip (2009) likewise note that researchers doubt the ability of citation metrics to indicate scientific quality, but the metrics are still seen as useful because they convey academic prestige. The normative legitimacy of these metrics thus makes them influential, even though they may represent reality in a unidimensional or inaccurate manner.

Institutionalisation

While actionability and legitimacy are effects that organisational actors are more or less conscious of, institutionalisation refers to the process whereby metrics become taken for granted (Scott 1987; Zucker 1987). When indicators solidify and become firmly established, people come to accept the indicator as representative of reality. Once accepted as real, the metrics’ limitations and flaws are easily forgotten, and they become more likely to influence decision-making and organisational activity. The institutionalisation of indicators may occur through a number of processes, including habituation, reification and reconstitution. Habituation implies that an indicator gains increasing acceptance over time as people get used to it. Sauder and Espeland (2009) note how the novelty of rankings initially made universities dismiss them, but, in due time, the rankings came to be very influential. Reification implies the solidification of an indicator as it is built into the practical organisation of labour and resources. This may take place as offices are established to handle issues relating to the indicator, an example being bibliometric offices dealing with rankings (Espeland and Stevens 1998). Finally, reconstitution occurs as indicators alter the notion of the indicated objects. Dahler-Larsen (2014) describes this as the constitutive effects of indicators, and Woolgar (1991, 319) notes how ‘the very system of measuring and manipulating citations redefines the phenomenon it is supposed to measure’. Because bibliometrics emphasise publication in international peer-reviewed journals, they may alter the perception of publication quality to the detriment of publications in alternative outlets. How quality in research is understood may thus change to align with the indicator. The constitutive effects of the indicator cause institutional lock-in as the indicator and object converge.

The Analytical Framework

Summarising these insights, performance measures have been noted to influence organisational action in three ways. First, metrics induce action because numerical indicators are able to rank and clearly order alternatives for decision-makers; this also occurs because the subjects of measurement adapt their behaviour as they are being measured. Second, performance measures can impart organisational legitimacy. This is contingent on the technical and normative legitimacy of the metrics, which reflect the accuracy of the measures and the perceived usefulness of measuring performances. Third, performance measures can influence the organisation as they become institutionalised and are taken for granted as valid descriptions of reality. This occurs over time as people grow accustomed to indicators, as indicators are built into the practical organisation of activities and as people alter their idea of the measured object to better fit the indicator. These three ways in which performance measures can influence universities are summarised in Table 4.1. They constitute the analytical framework applied in the current study as we explore how the metrics used in national PRFSs influence Nordic universities and how this in turn affects the way academics make sense of research activities.

Table 4.1 Analytical framework: the influence of metrics

A caveat to note is that performance measures are not seen as unambiguously imposing actionability, legitimacy or institutionalisation. Instead, these effects may emerge as academics interpret performance measures in relation to the measured activities. Therefore, the influence of indicators depends on the perception and understanding of organisational actors. As academics experience performance measures as novel tools to describe research, they may then use these tools to reconstruct the meaning of research. It is the perception and interpretation of performance measures made by university actors that enables the metrics to be actionable, enhance legitimacy or become institutionalised.

Methods

In this chapter, we address how university actors perceive research activities in light of the performance measures used in national PRFSs. Because the purpose of the chapter is to reach a deeper understanding of these processes, we adopt a qualitative approach and apply a comparative case study method (Yin 2009). The study may furthermore be described as a mix of congruence analysis and causal process tracing (Blatter and Haverland 2012). In our efforts to explore the influence of PRFSs on local perceptions of research, we incorporate previous theoretical insights into our analytical framework. Some of these insights are likely to be more influential than others and hence may provide more explanatory power. The current study therefore performs a congruence analysis, in which the applicability of earlier theoretical accounts is tested. With the analytical focus on the influence of performance measures on university actors, however, there is also a strong interest in the causal configurations of these processes. Thus, the analysis contains a significant element of causal process tracing because we want to trace the way national PRFSs influence local perceptions of research.

A desktop study was conducted to map the national PRFSs. The sources include earlier research, as well as official reports from governments and government agencies. To study how research metrics implemented in national PRFSs affect perceptions of research at the institutional level, we conducted 93 semi-structured interviews with academics, managers and administrators at eight Nordic universities. The universities chosen include one flagship and one regional university per country. The interviews sought to illuminate organisational reactions as numerical indicators are used to describe and incentivise organisational action through national PRFSs. Although the perspectives varied among the respondents, they were all interviewed in their role as academic professionals and were considered to represent the organisation and culture in which they were situated.

To perform the analysis, the interviews were recorded and transcribed verbatim with the approval of the respondents. The transcriptions were systematically analysed with the aid of computer software to code the data and structure the findings. Initially, the analysis was inductive and attentive to the material, exploring how performance measures influence perceptions of academic work. In later stages of the analysis, a refined coding was applied to categorise the findings according to the analytical framework, where we explored whether national PRFSs create actionability, legitimacy and institutionalisation that in turn affect how the informants understand research activities. The results have subsequently been analysed and compared across the countries.

Before moving on, some terminology will be discussed to enable an informed comparison between the countries. The funding system terminology used has been adopted from the EU report ‘Performance-Based Funding of University Research’ (European Commission 2018, 27–29). The term institutional funding is used to denote government resources provided to universities, which they may spend more or less as they wish. However, a notable exception is that institutional funding in some countries is provided separately for teaching and research. In these cases, the term institutional research funding will be used to specifically indicate the institutional funding allocated for research activities. Institutional funding is furthermore separated into block grants and performance-based funding. Performance-based funding is allocated depending on the outcome of various performance measures, which may be related to teaching, research, societal interaction or other activities. A block grant denotes the rest of the institutional funding and is often contingent on historical allocations. External funding denotes revenue from public and private organisations that normally is designated for particular purposes and won by individual researchers in competition with others. Some countries use performance contracts between HEIs and the government’s ministry. As long as these do not contain a funding formula, such as those found in a PRFS, these contracts are considered to inform the allocation of the block grant.

The Nordic Performance-Based Research Funding Systems

Although the four Nordic countries in the current study have implemented PRFSs in recent years, the systems differ in their configurations. The PRFSs are designed in different ways and include different indicators. In the following, the four PRFSs are presented and compared.

Denmark

In Denmark, a PRFS has been in place since the end of the 1990s, and it has distributed a small part of the institutional research funding based on student throughput, external research funding and PhD production, while the larger part has been constituted by block grants. Because of dissatisfaction with the absence of output measures of research quality, a fourth indicator was added to the Danish PRFS in 2010: the Bibliometric Research Indicator (BRI). The BRI took its inspiration from the Norwegian bibliometric indicator, measuring publication activity in peer-reviewed journals and books and awarding points to universities depending on their relative performance in a zero-sum game. Hence, the BRI covers the breadth of publishing patterns across scientific areas, including monographs, conference proceedings and so forth, so as to be relevant to all disciplines.

Panels in each scientific discipline evaluate the journals and book publishers in their field and place them on either level 1 or level 2 (Schneider and Aagaard 2012). The evaluation of journals is done according to a quality criterion (originality and novelty) and a relevance criterion (that the journals are of interest to, and accessible to, Danish researchers). However, other than these very basic guidelines, it is very much up to the panels to decide how the assessment is conducted. All Danish researchers can suggest changes to the list that the panels will have to consider. Every year, the results of the panels’ work on placing journals on the authorised list are made publicly available.

The total funding distributed from the PRFS depends on how much new money is put into the system from year to year. In 2010, the PRFS distributed 4 per cent of the institutional research funding of Danish HEIs, but this amount increased to 19 per cent in 2017 (Aagaard 2016).

Finland

The Finnish funding system changed in the early 1990s when the first performance-based elements were introduced in the form of performance agreement negotiations. The new system was intended to offer incentives for increased efficiency and effectiveness, but it remained very input oriented. It was not until 2010 that performance-based funding was introduced, which is now used to allocate resources to universities in a zero-sum game. Currently, roughly 70 per cent of the institutional funding of universities is performance based. The current PRFS consists of a model where education performance accounts for 39 per cent, research performance for 33 per cent and other education and science policy considerations for 28 per cent. The research indicators used include doctoral degrees, scientific publications and external funding, which are about equally weighted. In addition, universities have strategy-based funding that is agreed upon between the university and the government as part of their negotiations. The funding scheme aims at strengthening the quality, impact and performance of universities. The institutional funding is thus largely performance based because the funding is allocated according to the performance results of the previous four years (for a current analysis, see Seuri and Vartiainen 2018).
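As a rough, purely arithmetical illustration of these shares (the assumption that each research indicator carries about a third of the research component follows from the statement above that they are about equally weighted; the official formula is not reproduced here):

\[
\underbrace{0.39}_{\text{education}} + \underbrace{0.33}_{\text{research}} + \underbrace{0.28}_{\text{other considerations}} = 1.00,
\qquad
w_{\text{PhD}} \approx w_{\text{publ}} \approx w_{\text{ext}} \approx \frac{0.33}{3} \approx 0.11 .
\]

Each research indicator thus steers roughly a tenth of the total institutional funding of a Finnish university.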

For the bibliometric indicator, scientific outlets are given a rating by the publication forum, a classification system created by the Federation of Finnish Learned Societies. The evaluation of publication outlets is conducted by expert panels that consider the typical publication practices of the specific research fields, the existing appreciation of the particular publication channel within the scientific community and the balanced presence of various disciplines at the higher quality levels. In this system, each scientific outlet is placed on a level between 1 and 3. Nonrefereed journals are also included at level 0, and publication in these outlets provides very low rewards.

Norway

In Norway, a PRFS was introduced in 2005, allocating institutional funding based on both teaching and research indicators. The purpose of the PRFS has been to provide a neutral framework for assigning funds between universities and scientific fields but also to stimulate better performance and reward successful research environments. In 2014, 24 per cent of the funds were distributed based on teaching indicators and 6 per cent based on research indicators (Kvaal 2014).

There are four research indicators: the number of PhDs awarded, the allocation of EU research funding, the allocation of funding from the Norwegian Research Council and bibliometrics. Regarding the bibliometric indicator, a national, non-commercial bibliographical database has been established to classify different types of scholarly and peer-reviewed literature from the whole sector, including journal articles, book chapters and monographs. Scientific outlets are classified at two levels, and publications in these outlets are rewarded with publication points fractionalised according to the number of authors. The data are used to allocate funding but also to enhance transparency across institutions. This transparency is also supposed to increase the quality of research in the sector. The database is available online and is open to the public.
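A stylised sketch of how such author-fractionalised publication points can be computed is given below; the notation, and the idea of a single weight per publication type and level, are illustrative assumptions rather than a reproduction of the official Norwegian point values:

\[
p = w_{\text{type, level}} \times \frac{a_{\text{inst}}}{a_{\text{tot}}},
\qquad
P_{\text{inst}} = \sum_{\text{publications}} p ,
\]

where \(w_{\text{type, level}}\) is the weight for the publication type (article, chapter or monograph) at level 1 or level 2, \(a_{\text{inst}}\) the number of authors affiliated with the institution, \(a_{\text{tot}}\) the total number of authors and \(P_{\text{inst}}\) the institution’s total publication points entering the funding formula.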

Sweden

In 2009, a performance-based dimension was introduced to the institutional research funding of Swedish HEIs, sending a clear signal from decision-makers of their desire to increase the quality of research performed at Swedish HEIs (Swedish Government Bill 2008/09:50). By conditioning part of the institutional research funding on performance indicators, incentives were created for the HEIs to increase their research output, but this system has changed several times in its short lifespan.

The system reallocates 20 per cent of the institutional research funds based on the outcome of two indicators: bibliometrics, which is composed of publication counts and citation counts, and the amount of external funding acquired. The resources are allocated based on the relative performance of each HEI compared with the others in a zero-sum game. Any new research funds granted by the government from one year to another are also allocated according to the model. The bibliometric data are collected from Thomson Reuters and are field normalised and fractionalised according to the number of authors. External funding is measured as a running three-year average and is weighted by discipline. The effects of the model have been moderated by various decisions throughout its existence. The continuous increase of the total institutional research funds has also left the worst performers with at least as much institutional research funding as the previous year. In a few cases, special allocations have been made to guarantee that no HEI experiences decreasing institutional research funding, with the result being that the redistributive effects of the model are modest (Universitetskanslersämbetet 2015, 2017, 19f.).
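A minimal sketch of how such a zero-sum reallocation can be expressed (the notation and the indicator weights are illustrative assumptions, not the official Swedish formula) is:

\[
F_i = B \left( w_{\text{bib}} \frac{b_i}{\sum_j b_j} + w_{\text{ext}} \frac{e_i}{\sum_j e_j} \right),
\qquad
w_{\text{bib}} + w_{\text{ext}} = 1 ,
\]

where \(B\) is the total performance-based pot, \(b_i\) the field-normalised and author-fractionalised bibliometric score of HEI \(i\) and \(e_i\) its discipline-weighted three-year average of external funding. Because the shares sum to one across all HEIs, one institution’s gain is necessarily another’s loss, which is what makes the allocation zero sum.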

Similarities and Differences in the Nordic Performance-Based Research Funding Systems

Table 4.2 summarises the main components of the PRFSs in the four countries, showing a number of similarities but also some notable differences. The systems were all introduced at roughly the same time, with Norway acting as a forerunner and an inspiration for the Danish BRI and the Finnish bibliometric model. The Swedish system, however, utilises data from an already existing infrastructure, while the other three countries established completely new databases. Furthermore, the reasons for implementing the PRFSs have been similar across the four countries. Allocating research funds through a PRFS in a zero-sum game is intended to provide universities with incentives to increase their performance. Higher competition is supposed to enhance both research quality and productivity. In Norway, the PRFS is also intended to improve the equity of the resource allocation system.

Table 4.2 Main components of the PRFSs in Denmark, Sweden, Finland and Norway

The amount of funds allocated through the PRFSs is similar in Denmark and Sweden, where about 20 per cent of the institutional research funds are performance based. Because HEIs in Denmark and Sweden receive separate institutional funding for teaching and research, the percentages of resources allocated by the PRFSs are not directly comparable with those in Norway (6 per cent) and Finland (33 per cent), where institutional funding also includes teaching funds. However, as noted in the EU report ‘Performance-Based Funding of University Research’ (European Commission 2018, 37), the use of PRFSs and external funding from the state affects how contested research funding is. The report notes that Norway and Sweden are restrained in their use of performance-based funding and rely more heavily on external funding. Finland, on the other hand, has high competition for funds, with the PRFS as an integral component, thus creating strong incentives for universities to perform.

The indicators used differ somewhat between the countries. All countries use publication counts, but Finland differs somewhat because its PRFS does not fractionalise the publication counts. This makes it beneficial for researchers to coauthor their publications because the number of authors does not dilute the publication points awarded. This also implies a bias towards fields such as the natural and health sciences, where the tradition of copublication is strong and the number of coauthors is high compared with the social sciences (Muhonen and Pölönen 2016). Sweden also includes a measure of citation counts that enables an assessment of the impact of individual publications. In the other three countries, publication outlets are instead given different weightings, so that all publications in the same outlet have the same value in the PRFS. Denmark, Finland and Norway do not use citation counts because they have opted for systems with their own bibliometric databases, while Sweden relies on the already existing database of Thomson Reuters. The latter includes citations but does not have the same coverage of publication outlets as the databases created in Denmark, Finland and Norway.

Furthermore, all countries have indicators for external research funding, though what is counted differs somewhat. Although Norway also has a specific indicator for EU funding, this is accounted for in the measures of external funding in the other countries. Additionally, it can be noted that in Norway and Denmark, non-competitive funding is included as well (European Commission 2018, 50). All countries except Sweden have indicators for the number of PhDs awarded. In Denmark, there is also a connection to teaching performance because student throughput informs the institutional research funding. Teaching metrics are, however, also used in Norway and Finland, though the connection to research is hard to assess because universities receive their institutional funding for teaching and research activities together.

The Influence of Metrics on the Perceptions of Research

Actionability

For all four countries, the research metrics utilised in the national PRFSs are clearly actionable. Primarily, they facilitate managerial decision-making at different levels of the universities, but the degree of formalisation in the use of metrics for this purpose differs. The perceived incentives provided by the PRFSs also differ. In some cases, the PRFSs provide clear and substantial incentives for universities and individual researchers, while in other cases the incentives are perceived as weaker or not directly related to the PRFSs.

In Denmark, the BRI has affected both the organisation of academic practices and the academic practices themselves. The most prominent example of changes in the organisation of academic practices is how the BRI has been used locally by universities in their budget models for allocating resources to lower organisational levels. Whether, and in what way, the BRI has been used depends greatly on the context, however. At the flagship university, the BRI has not been used in the budget model at the university level because international publishing was already seen as the norm. This was different at the regional university, where the BRI was interpreted as very actionable because it could be used as a management tool for boosting performance. Thus, the regional university implemented the national PRFS locally for allocating funding to the faculty members and even made it apply to all the funding for research, in contrast to the approximately 20 per cent at the national level. Therefore, the PRFS, and especially the BRI, is seen as an extremely disciplining remunerative incentive at the lower levels, affecting such things as publication practices. A manager stated, ‘What has pushed the publication activities mostly is the BRI system’ (Flagship, manager, DK).

The inclusion of the BRI in the budget models has also spurred changes in academic practices. Hence, it is mostly at the regional university that we see researchers reacting to the BRI. In the sociology department, the budget model was experienced as extremely disciplining: ‘There was money on each BRI point earned, and you could see it directly on the budget of the department’ (Regional, manager, DK). Therefore, management started to demand that researchers produce BRI points within a period of two years. The researchers reacted by putting much more emphasis on making sure their outlets were on the sanctioned BRI list. Some reported that this led to fewer Danish-language research outputs, less broad dissemination and more stress among faculty.

Also in Finland, we note how the PRFS affects decision-making and provides incentives for the universities and individual researchers. At an institutional level, the PRFS has provided an actionable and predictable way of improving the chances of receiving the required resources. The PRFS has pushed universities to make strategic choices regarding how they allocate funding internally and prioritise scientific fields. Seen from a manager’s point of view, the PRFS is also a way to support academic work and the development of science within the university more broadly. The incentives of the PRFS also clearly affect research practices: ‘The publication forum classification has steered our publication activities in social sciences and the humanities towards more international fora’ (Regional, manager, FI). The PRFS is thus seen as increasing the pressure on academics to strive for high-quality and impactful science. Many academics have seen this result in positive career developments at a personal level and hence have come to accept these changes as something that drives science forward.

In the previous Finnish system, where performance was tracked to a much lesser degree, problems of academic units and departments could, according to the interviewees, also be overlooked. In the current PRFS, this is no longer the case because universities now have the ability to see problems before they become too large to manage. Issues behind low performance are becoming visible, which encourages managers to provide the academic leadership necessary to overcome the situation; this provides managers with the support they need to bring out the best in their staff: ‘Once a year we have a performance discussion with the rector and go through the main indicators of how well the faculty has done. We look at the state of the faculty and its development prospects’ (Regional, manager, FI). As such, the PRFS aids managerial decision-making because it highlights underlying problems, such as poor human resources management, weak leadership and favouritism, which in a more transparent system become a call for action.

In Norway, the PRFS is also seen as a potent instrument, providing actionability at both the organisational and individual levels. Organisationally, it facilitates decision-making, for instance, because universities have implemented local variations of performance-based funding. These local systems also provide incentives for the researchers, though their influence is often considered to be limited. For example, some departments have established systems that reward researchers with a type of bonus earmarked for attending international conferences. These rewards are given for publications at levels 1 and 2, but also for popular science publications, the completion of master’s theses and PhD dissertations, and external funding. Those who work in units where metrics result in the allocation of bonuses find this to be an important part of the freedom to attend international conferences. Still, the amount of money is not large, so the influence on motivation is limited, as exemplified by a researcher: ‘It is clear, there are other things that drive what you are doing than money. It is … kind of not the reason why you are sitting down to write your articles, to get 5000 NOK’ (Regional, academic, NO). However, regardless of any connection to rewards, publication points and citations are highly valued by many academics. Other types of metrics, such as citation indexes and journal impact factors, are also important to academics, despite being unrelated to direct financial rewards. The metrics are instead regarded as symbols of success, which is interpreted as important for being invited to networks and research projects and for obtaining new positions.

Performance metrics are also used to assign (and refuse) sabbaticals, a practice that is used at both case universities in Norway: ‘[Publication points] are presented as statistics to all of us … and this is used to assign sabbaticals, so this is a strong guiding principle for our institution’ (Regional, manager, NO). Metrics can also be used by managers to inspire and motivate academics and are often brought up in the annual appraisal meetings. Publication points are used to follow up on academics who are not publishing very much, not to punish, but rather to offer support and facilitation. A manager explains, ‘Actually, it is more like I am saying; “Is there anything we can do?” It is not like; “We are expecting you to publish five articles next year.” It is not on that level, we are not a factory’ (Flagship, manager, NO).

In Sweden, there is less emphasis on the actionability of performance measures compared with the other countries. There is broad agreement that performance measures are to some extent necessary to enable decision-making, but also that they are inevitable because others use them. Academics do, for instance, acknowledge the accountability relationship between the university and the ministry and how this results in requirements to report organisational activities in standardised ways. The dependence on external funders and other stakeholders is also evident, as is the fact that they sometimes prefer simplified metrics to assess research. Thus, the actionability created by metrics is appreciated and accepted because it enables necessary accountability relations and resource allocation flows.

As an incentive, the PRFS is most notable within the social sciences, where the increasing emphasis on bibliometrics has implied a shift in publication patterns. As explained by a manager, ‘Everyone is moving towards scientific articles. Not exclusively, but it is what people talk about and what we are supposed to aim for’ (Regional, Manager, SE). In the natural sciences, publication and citation counts are instead described as traditional measures of research performance. For researchers, the incentives provided by research metrics are, however, rarely related to the national PRFS. Instead, these indicators are important for other reasons. External funding is essential because it provides resources for the individual researchers, and bibliometrics are vital because of the reputational gains for researchers who are well published and well cited. Whether research metrics are effective motivational tools is an issue where opinions vary. Some express the notion that they make researchers increase their output: ‘If you measure things, if you look at things and take notice of things, more things happen’ (Regional, Manager, SE). However, others doubt the necessity of creating stronger incentives because academia is already rife with incentives, emphasising that academics are primarily motivated by their own initiatives. The establishment of local PRFSs is thus challenged: ‘The question is whether we need to make yet another assessment to distribute the government grant’ (Flagship, Manager, SE). The transparency that indicators create is also emphasised because metrics may provide clear and indisputable grounds for decision-making. Although neither of the two Swedish case universities uses a PRFS at the institutional level, such systems exist at both universities at the faculty level. However, the local PRFSs are rarely strict implementations of the national system but often include a variety of components, such as PhDs awarded and teaching performance. The indicators of the national PRFS are thus applied in the local PRFSs because they are seen as useful for allocating resources between organisational units, but they are not the only metrics used here.

Legitimacy

Research metrics are largely seen as important for legitimising organisations and their activities. It is generally acknowledged that metrics are important in demonstrating performance to external actors in simple and understandable ways. Equity issues are also brought forward because metrics enhance transparency and thwart arbitrary decision-making. Although some critique is voiced against the necessity of measuring research so closely, measurement is mostly seen as just and appropriate. Regarding the technical legitimacy of the PRFSs, there is more variation. In particular, we note that academics, primarily from the natural sciences, are sceptical of the PRFSs. They often perceive these systems as crude and unable to accurately gauge the value of scientific publications.

In Denmark, the BRI is a new measure of publication performance; it has, to varying degrees, challenged the status quo of the existing methods for assessing the value of different kinds of scholarly publications and outlets. Within the social sciences, the BRI constitutes a new indicator that reflects the field’s publication patterns. For faculty members in the natural sciences, the situation was different. Here, the impact factor of the journal in which research was published had for decades been the standard measure of journal quality. Hence, the BRI was seen as a crude measure because it only differentiated between two levels. In the eyes of natural science scholars, the BRI had low technical legitimacy and was competing against a well-institutionalised and entrenched measurement system. A similar logic differentiates the flagship university from the regional university. Although the BRI was understood as an appropriate tool to boost performance at the regional university, it was seen as unnecessary at the flagship university, where researchers were already publishing in international fora. Therefore, the BRI has never been fully accepted as a proper measurement tool by various groups and universities, thus suffering in both technical and normative legitimacy. This is especially the case in the natural sciences, where researchers simply do not know the BRI or reject it as faulty. As one researcher replied when asked if they take notice of the BRI, ‘No, I don’t think so. Because it is a bit wrong’ (Flagship, academic, DK).

In Finland, on the other hand, the PRFS generally enjoys high normative legitimacy but suffers from somewhat lower technical legitimacy. Although there is some concern over how well the PRFS actually increases the quality of research, most academics and managers see it as a constructive, forward-looking system. Measuring academic performance is perceived to be an inseparable part of a modern university. However, the normative legitimacy is strongly coupled with the transparency of the indicators: ‘The more there is fair competition where rules are open, the better we do. But if there is competition where the rules of the game are not known by those who compete, it is simply an arbitrary use of power’ (Flagship, manager, FI). From a managerial perspective, measuring performance is a tool for the smooth running of a complex expert organisation but also for ensuring the fair treatment of personnel. For the academics, the situation is more complex. They value the openness and transparency of the PRFS but do not necessarily feel they can trust the administration to uphold these standards as university managers adopt and use these metrics. In the eyes of academics, the legitimacy of the system is, hence, coupled with a fair and open application of the performance measures throughout.

Regarding technical legitimacy, the metrics included in the PRFS are largely seen as established indicators of research performance and hence as technically legitimate. The use of bibliometric indicators is perceived to follow the logic of academia and to align well with academic conventions. However, a concern is that the system does not serve the interests of high-quality research: ‘Measuring performance can have a side effect that if the demands are too low or too quantitative we start to count how many publications to do, and so you start to produce lower quality publications because their quality is not measured, only quantity’ (Flagship, academic, FI). The volume of publication is seen as being stressed at the expense of quality, posing a threat to scientific integrity. This is the main reason for the mistrust towards the use of metrics in the evaluation of academic performance.

In Norway, performance measures are used to increase transparency between and within universities. However, there are large variations within the universities in how this is practised. Some departments share individual-level information with all employees, while others use the data for comparisons at the department and faculty levels. The practice of sharing data at the individual level raises critical voices among both academics and managers because of the shaming of academics with few publications: ‘I believe it feels personally more uncomfortable, because it is so visible now. It is more apparent’ (Regional, academic, NO).

Generally, research metrics may be said to hold normative legitimacy as tools for indicating success. However, there seem to be differences in the legitimacy of the national PRFS among the academic fields. Within the natural sciences, the system of quantification was not questioned, but it was noted that it provided an increased focus. As illustrated by a researcher: ‘There is a larger focus on symbols, for instance in relation to highly ranked journals. To get an article in Nature or Science or others has larger significance now. This is almost immediately reported to the rector and on the web site. The flagging and use of status symbols … have changed dramatically, I think’ (Regional, academic, NO). Research performance, as indicated by metrics, is thus used more often to demonstrate achievements and acquire legitimacy for the university as an organisation.

There are also critical voices, mainly within the social sciences, where academics emphasise the problem of turning the values of research into measurable points, the tension between quality and quantity, and the fact that not everything is countable. Furthermore, these voices question how the role of the university as an independent research institution is affected by the close connection between funding and metrics. The social scientists were also highly critical of what they perceived as the new public management influence in the sector, as one academic expressed: ‘We are a kind of counter culture … many of the most prominent critics to the leadership of the university come from our department’ (Flagship, academic, NO).

In Sweden, the various components of the PRFS are fairly well established as indicators of research performance and may be considered to have a high level of technical legitimacy. External funding is ‘the accepted method of measurement when it comes to research performances’ (Flagship, administrator, SE). It aligns well with the idea that external research grants are awarded to the most prominent applicants after a rigorous peer-review process; therefore, the acquisition of grants is an acknowledgement of academic merit. This is also a notion that is well represented within Swedish universities: ‘If you are rewarded and get a lot of grants you will be perceived as successful’ (Flagship, administrator, SE). The bibliometric indicators used in the PRFS also align well with academic conventions, though differences exist between the disciplines. Some sections of academia are more familiar with bibliometrics and the publication practices they refer to, while others have been less so. However, a shift is underway, making research metrics increasingly common within the social sciences.

Although generally accepted, the metrics of the PRFS are not exempt from critique. On the contrary, both researchers and managers emphasise the difficulties of measuring research. The critique is, however, mostly levelled at measurement in general rather than at specific problems with the existing indicators. An example is provided by a manager who states that fulfilling performance criteria ‘does not necessarily imply that the performance has high quality’ (Flagship, Manager, SE). There is a general awareness of the limitations of performance measures and of the fact that academic work often produces benefits that are not easily captured by performance metrics. The level at which metrics are applicable is also noted. Here, a manager states that most metrics are unfit to assess individual performance: ‘Your performance is not a result of your own efforts alone, it is largely collective’ (Flagship, manager, SE).

The research metrics of the Swedish PRFS are generally seen as normatively legitimate because they legitimise research activities. Still, this is contingent on their relatively high technical legitimacy. It is, however, generally stressed that the research metrics will not benefit the universities if they come to define and control academic work internally. As expressed by a manager, ‘We need to make room for the fact that research can occur in various ways’ (Regional, manager, SE). Swedish academics thus hold a quite pragmatic view of these research metrics, one in which their benefits and limitations are acknowledged.

Institutionalisation

The research metrics of the national PRFSs have been institutionalised to varying degrees in the four studied countries. In some respects, they are now deeply institutionalised because they have been reified in organisational structures and people are becoming increasingly habituated to them. On the other hand, there is variation regarding how much they are taken for granted. In some cases, they clearly affect how people make sense of research activities. However, there are also findings indicating that these metrics are not internalised and taken for granted; rather, people relate to them in attentive and deliberate ways.

In Denmark, the BRI is by far the element in the PRFS with the largest but also the most differentiated effects on the organisation and practice of academic work. Because the other elements of the PRFS (external funding, student throughput and PhD production) have been in use for almost two decades, they are already institutionalised in the organisation of academic work. Furthermore, they are also important measures in themselves outside of the PRFS. Hence, the importance of securing external funding is not tied so much to its inclusion in the PRFS but rather stems from the necessity to acquire external funding to enable research activities. Although researchers emphasise that the acquisition of external funding has become increasingly important and that they experience pressure from management, no one ties this specifically to external funding being included in the PRFS. However, it cannot be ruled out that the processes of reification and habituation have made external funding even more important because of its inclusion in the PRFS.

On the other hand, the BRI is clearly being institutionalised. We have already described how it is reified in the budget models at the regional university. Its effects on how research results are disseminated are also noted. As a manager states, ‘Another perverse effect is what we have felt strongly for, because we originally were created by the surrounding society: To disseminate to the surrounding society […]. You stopped doing that’ (Regional, manager, DK). Introducing the BRI has thus led to a reconstitution of what a ‘quality publication’ is. However, despite the BRI leaving its mark in various places, it has not been broadly institutionalised as a taken-for-granted measure of research performance. This is related to the low legitimacy of the BRI among some groups within Danish universities, preventing the full acceptance of the metrics. Moreover, most actors at the university level act under the impression that the BRI only distributes a small fraction of the total funding for research. As one top manager notes, ‘If you look at how much it [the PRFS] has redistributed, then I think you will see that it has redistributed next to nothing’ (Regional, manager, DK). Hence, it seems that some institutionalisation of the BRI has taken place, though a general and taken-for-granted type of lock-in effect is lacking.

In contrast, in Finnish universities, performance measurement is becoming well institutionalised. It is now perceived both as a control mechanism for keeping track of and ensuring the accountability of academic staff and as a transparency instrument allowing those who perform well to be rewarded. The internal application of PRFSs to allocate funding also indicates an increasing institutionalisation of the national PRFS. With institutional funding being highly performance based and competition for external funding increasing, it has become sensible for universities to focus on strong and rising fields of research and to build incentive systems to reward high-achieving departments. Therefore, the logic of the PRFS has been internalised within Finnish universities. A manager exemplifies this when stating that ‘our revenue generation logic leans clearly on performance […] and results have to be somehow measurable’ (Regional, manager, FI). Although there is criticism of the performance indicators and the way they are designed, the indicators have also influenced the way people understand research activities: ‘Also in research, people have started to speak that way, that research activities need to be effective and efficient, that they must be measurable and that the system is a kind of steering mechanism for how good research is’ (Regional, manager, FI). This indicates that reconstitution has started to occur because research indicators have influenced how academics perceive the meaning of everyday activities.

In Norway, too, there is general agreement on the influence metrics have over the organisation of research. In particular, it is noted that the performance measures of the PRFS are institutionalised in several ways. The local use of performance measures derived from the national PRFS constitutes an institutionalisation of these metrics, both as they are reified in organisational decision-making structures and as people become habituated to an increasing measurement of academic performance. There are also signs of reconstitution: the increasing measurement of performance alters the notion of research activities among academics. There is now an increasingly widespread notion that research needs to be measurable so that academics can demonstrate their performance quantitatively. A manager notes how this influences the notion of sabbaticals as a reward rather than a precondition for research achievements: ‘Of course, there is more focus on that people have to deserve sabbaticals’ (Flagship, manager, NO). Thus, the use of metrics is influential as an organisational principle, and it affects the way people think about research:

It [publication metrics] means a lot today, even… It is almost comical, right? I can see what it does to my head. I mean, there are far too many journals, too much focus on publication points, because it is not saying anything about the quality, either this is level 1 or 2. Still, it messes with your head as you are measured and weighed, so you are in a way searching for… It means a lot. Therefore, this is an incredibly strong organisational principle. (Regional, manager, NO)

In Sweden, the metrics of the PRFS are quite well institutionalised. Although academics within the natural sciences are more familiar with them, social scientists are now well acquainted with these measures, making the habituation ubiquitous. The measures are, along with other measures of academic work, reified in the decision-making structures at various places in the two universities, albeit not at the highest level.

The reconstitution of research by these metrics is relatively weak in Sweden. Although a general acceptance of the indicators of the PRFS has implications for the way university actors perceive research activities, this does not seem to stem from the PRFS itself. The PRFS is not understood to be of particular importance to academics in organising their research activities when compared with other instances where research metrics appear. The way academics describe the relation between performance indicators and research activities instead alludes to a wider context in which these metrics are seen as important. That the PRFS does not have a major influence on the way academics perceive research can be explained by the fact that the construction of the PRFS proceeded from measures already institutionalised as indicators of research performance. However, the specific measures included in the PRFS are often the ones that academics refer to when describing research and the ways in which it is measured. A manager states, ‘We measure performance in external funding, publication and citations; those are the tools we have’ (Flagship, manager, SE). This indicates that the metrics included in the PRFS are institutionalised and that the PRFS aligns well with established conventions of how to measure research. Although the PRFS is not the origin of these metrics, its implementation creates yet another source of pressure on universities, reinforcing the power of these research indicators. A reconstitution of research in line with the prevailing performance measures nonetheless seems to be absent, which can be explained by the relatively weak actionability and incentives of the Swedish PRFS compared with those of the other three countries.

Concluding Discussion: What Role Do Performance Metrics Play in Research?

In the present study, we have sought to illuminate how the PRFSs of Sweden, Norway, Denmark and Finland affect the way university actors understand research activities at the institutional level. The PRFSs have all been introduced in recent years, but the ways in which they are configured differ somewhat. This is true for the indicators used, as well as for the amount of funds the systems distribute. Our results indicate that the establishment of these PRFSs has had notable effects within Nordic universities. The performance measures of the PRFSs are implemented as formal structures for resource allocation and decision-making, but they are also used informally and in nonsystematic ways to organise and perform research activities. In particular, they contribute subtly to the institutionalisation and consolidation of research metrics as descriptions and organising principles of research and to the notion that all scientific contributions can be compared with each other.

However, it is not only the metrics of the four PRFSs that are used within the universities. A number of performance measures are applied by university actors to make sense of research activities and to navigate a context of ever more measurement, evaluation and competition. The PRFSs should therefore be seen in this wider context, where they may be understood as expressions of government intentions to promote quantitative evaluation that allows measurable evidence to be used to describe and compare a complex situation. Even though questions are raised within the universities about the various uses of performance measures, the metrics are generally accepted and often appreciated as valuable tools for enhancing transparency. The introduction of the PRFSs can thus be seen as an important contribution to the quantification of research and as effective in establishing an all-encompassing research evaluation regime.

When the empirical findings are analysed against our analytical framework, the different ways in which performance measures have been noted to influence organisations in previous studies all possess explanatory power in the present study. Regarding the actionability of the performance measures (Espeland and Sauder 2007), they are instrumental in supporting decision-making within the universities. This is emphasised in all the studied countries, though the ways in which metrics are used for this purpose differ somewhat. Although there are examples of local PRFSs at the institutional or subinstitutional level in all countries, our results indicate that the metrics in Norway are also used to allocate funding for conferences or sabbaticals. In Denmark, there is large variation between universities depending on the presence of local PRFSs, which are used at regional universities to improve organisational performance. This is also the main use of the metrics emphasised in Finland, where they are seen as enhancing transparency and thus the general development of Finnish universities. Performance measures are therefore used to assist universities in setting priorities and to aid managers in providing support to researchers. In Sweden, the metrics are described as aiding decision-making at a higher level, where the actionability is mostly related to external accountability relationships and resource allocation flows.

Regarding the incentives, the picture is more consistent across the countries, despite the fact that the preconditions differ among the countries and universities. Most notable, perhaps, is that publication practices are perceived to be heavily influenced in all four countries, at least within the social sciences. Researchers consider the implications of where they choose to publish their research, as defined by the prevailing performance measures. However, even when remunerative rewards are coupled with the achievement of measurable performances, it is mainly symbolic rewards, such as reputational gains, that researchers desire. The reason is that the motivation of researchers to perform is commonly found elsewhere: in the respect of peers and in more traditional academic merits. Thus, the introduction of remunerative incentives is seen as superfluous. Instead, it is the visibility of performance created by the metrics that operates as a motivational tool, because the metrics allow researchers to show transparent evidence of their labour. There are, however, some differences regarding the importance of the remunerative incentives. At the regional university in Denmark, the remunerative incentives are observed to be extremely disciplining, and in Finland, it is noted that the PRFS increases the pressure on academics to produce impactful, high-quality research.

We have seen several examples of metrics that are perceived as important even though they are not tied to remunerative rewards. Our interpretation is that the establishment of national PRFSs contributes to the legitimisation of metrics as indicators of research performance, which can then be used to convey success. Examples of this include the findings from Norway, where publications in prestigious journals are immediately reported to the rector and published on the university website. This brings us to the next concept in our analytical framework: the legitimacy that indicators can confer on researchers and universities. Previous research has indicated that this process is contingent, in part, on the technical legitimacy of the performance measures (Bowker and Star 2000), as well as on the normative legitimacy of measuring performances (Power 2004). Our results indicate that the technical legitimacy of the various performance measures of the four PRFSs is generally high because the metrics are largely seen as capturing research performance in an accurate manner. There are some differences between the countries, but primarily, we can see that interviewees from the natural sciences are often sceptical towards bibliometric measures. Their critique is often levelled against the crudeness of the measures, such as the ones used in Norway, Denmark and Finland, where publications are categorised on a scale with just a few levels. This is also understood as a risk to high-quality research because it is seen as promoting the production of more publications of lesser quality. This is not experienced as a problem in Sweden, where citations are also included to define the value of publications.

Although there are some concerns about the ability of performance measures to capture the relevant aspects of research, as well as about the necessity of measuring research performance as it is currently done, there is a general acceptance of performance measurement. This may be most strongly emphasised in Finland, where it is understood to be part and parcel of a modern university organisation and an important tool for promoting transparency and better (human resource) management of the university. There, it is also understood as an essential tool for university managers to identify and handle internal issues, as well as to hold academics accountable. In contrast, the Swedish results indicate that the performance measures acquire normative legitimacy because of their ability to facilitate relations with external actors. The strongest criticism of performance measurement is found in Norway, where it is considered to challenge the independence of the universities.

The measures of the PRFSs have also been more or less institutionalised (Scott 1987; Zucker 1987). As already noted, bibliometrics have been used for quite some time within the natural sciences, but less so in the social sciences. Although some opposition has been noted, our results indicate that, within the social sciences, people are becoming more habituated to the performance measures and are coming to act in accordance with the incentives they provide. There are also clear signs of reification (Espeland and Stevens 1998) because the measures of the PRFSs are used locally in various ways to make decisions and allocate resources. Our results indicate that reification occurs mainly where performance measures are less institutionalised. As noted in Denmark, the PRFS was thoroughly implemented at the regional university, but at the flagship university, bibliometrics were already institutionalised, which made the PRFS seem superfluous.

Perhaps most interesting are the differences regarding the reconstitution of research (Dahler-Larsen 2014; Woolgar 1991) as a result of the PRFSs. There are clear examples of this in Norway, where the interviewees mention the importance of the publication outlet levels and how this affects the way they make sense of research. It also shows in the way that sabbaticals are perceived as something a person deserves rather than has a right to. Also, in the Finnish interviews, there are indications that the measurement logic embodied by the PRFS has reconstituted the perception of research activities within universities. The efficiency and measurability of results are now considered to be important aspects of research. Finally, the Danish case shows that the PRFS has led to fewer Danish publications, indicating a reconstitution of what quality publications are. These aspects are not as prevalent in the Swedish case. In Sweden, we have instead noted scattered voices of criticism against the implementation of local PRFSs, pointing mainly to the homogenising force of metrics and their inability to measure individual-level performance. This opposition seems to prevent reconstitution, while a pragmatic approach accepts the use of metrics for other purposes, such as external relations.

Taken together, it appears that the reactions from Sweden differ somewhat from those of the other three countries. In general, the Swedish interviewees display less concern about the use of PRFSs than the interviewees from Denmark, Finland and Norway. A possible explanation for this finding is that the bibliometric models used in the other three countries are experienced as more actionable than the one used in Sweden. The Danish, Finnish and Norwegian systems create clear incentives for researchers and enable decision-making based on publication points. The inclusion of citations in the Swedish system, however, makes it harder to assess the value of individual performance before some time has passed and the work has been cited. It is also clear that the novelty of the metrics is greater in Denmark, Finland and Norway, where completely new databases have been constructed. These have been large endeavours for the scientific communities in these countries and have also made a large impact on the researchers measured by the systems. The Swedish PRFS, though, is built on an already existing database, which includes well-known metrics that many researchers were already relating to.

Going back to the original question of this chapter, we have sought to illuminate how the varying use of performance-based research funding is reflected within universities across the Nordic countries. We have looked at the formal resource allocation systems at the national level and studied the effects they have had on perceptions of research at local levels. All of the studied countries have adopted PRFSs, and over the course of roughly two decades, they have modified these systems to suit their national contexts and their roles in a changing global environment. The increasingly competitive environment and the systems put in place to monitor the research performance of Nordic universities have been internalised locally to varying degrees, partly based on differences in disciplinary practices and on the divergence between the traditions of flagship and regional universities.

In the current study, actionability, legitimacy and institutionalisation have functioned as useful factors for analysing how metrics affect university organisations. According to our analysis, an additional temporal dimension could be taken into account when looking deeper into the ways in which these three factors influence the use of metrics. Across the case universities, aspects of actionability, decision-making and incentive systems seem to have been somewhat more straightforward to implement as managerial tools because their use is more under the control of formal management structures. Legitimacy and institutionalisation, however, require a longer temporal perspective because their success depends more on mutual trust and appreciation being built between the academic, managerial and administrative professions.