Introduction

Year after year, the higher education sector in the Nordic countries continues to enjoy the highest levels of public investment among all the OECD countries. As in other European countries, these investments have put higher education institutions (HEIs) under increased scrutiny, with an obligation to explain their behaviour and performance. This trend is further intensified by the fact that the higher education sector competes for public funds with other sectors, namely primary and secondary education, public health, social services and defence. At the same time, Nordic HEIs face increasing expectations to become more ‘entrepreneurial’ and to strengthen their ability to compete in a more globalised market. All this means an increasing focus on cost efficiency and productivity, as well as on quality.

The need for greater efficiency, productivity and quality in the higher education sector has triggered increased governmental interest in different mechanisms of accountability, especially evaluation and performance measurement. This interest has developed over a relatively long period, but in many ways it has now reached a culmination point. For instance, advances in citation tracking, in the collection of performance data and databases, and in the professionalisation of evaluative practices and methods have opened new avenues for verifying accountability.

This chapter offers definitions of the key concepts used throughout the book: accountability, evaluation, and performance measurement and management. Each section is followed by a short contextualisation of the concept in Denmark, Finland, Norway and Sweden. The chapter ends with a short discussion of the policy convergence between the Nordic countries and the reasons for it.

Accountability

The concept of accountability has always been a topical question in higher education. Over time, academics and their institutions have had relationships with various stakeholders (church, states and local communities) in which some sort of ‘answerability’ has continuously played an important role. In the modern world, such answerability relates to universities’ accounting for public money spent, as well as academics explaining their professional work and its outcomes (Huisman 2018). The concept of accountability, however, is multifaceted and ambiguous, allowing a range of understandings and definitions (Christensen and Lægreid 2017). Often, the concept is used in a broad sense, making it difficult to maintain clear distinctions from related concepts like transparency, responsiveness, responsibility, answerability and liability (Bovens 2007; Dubnick 2014). The essential questions for accountability are as follows: who is to be held accountable, for what, to whom, and through what means? (Huisman and Currie 2004; Trow 1996). In general, however, accountability can be considered a relational principle that attaches the expectations of one party to the actions and performance of another, thereby making the performing party responsible for its actions. The concept can be studied from a personal and a structural perspective (Sinclair 1995). The personal viewpoint relates to internal virtues that guide actors’ actions independently of formal rules, while the structural perspective is linked to mechanisms between an actor and a forum for justifying actions (Bovens 2007). According to the latter view, accountability is a relational concept providing a link between those held accountable and those who have a right to claim the accountability of others (Bovens et al. 2014). For our analytical purposes, we find Bovens’ (2007, 450) definition especially useful, where accountability is generically seen as a ‘relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences’.

The main purposes behind the need for accountability vary. For instance, accountability is needed to discourage fraud and manipulation, to strengthen the legitimacy of institutions, to enhance the quality of performance and to work as a regulatory device through the criteria made explicit in the various reports requested from the reporting institutions (Huisman and Currie 2004). As such, it can be understood as ‘a constraint on arbitrary power, and on the corruptions of power, including fraud, manipulation, malfeasance and the like’ (Trow 1996, 311). Much of the discussion on accountability is geared towards economic or financial aspects. In addition, in the context of higher education, discussion on accountability is often paired with discussion on efficiency, effectiveness and performance evaluation. In this sense, the process of verifying accountability calls for proving, by effective means, that higher education has attained the predetermined results and performance. Correspondingly, accountability in higher education includes elements such as the rational use of resources, the provision of evidence, the evaluation of evidence, attaching importance to costs and effectiveness, and improving the education process (Dressel 1980; Kai 2009).

Accountability regimes in higher education systems tend to be combinations of different types of accountability principles and processes (King 2015). Among these, professional and political accountability are often considered especially important in the context of higher education (cf. Huisman and Currie 2004; see also Bovens et al. 2014; Romzek 2000). The difference between the two relates to the source of the standards for performance. Professional accountability involves a high degree of autonomy for individual academics, whose decisions are based on internalised norms of what is considered appropriate action and performance. Especially in research, professional accountability standards are formulated within the academic community on the basis of internal professional norms, which are enforced by academics themselves. Due to the strong emphasis on professional authority, such standards are also more difficult to steer or manage in formal organisational settings.

Political accountability refers to political expectations for HEIs’ performance. In this sense, demands for accountability are a safeguard to protect the interests of various stakeholders and interest groups, as well as the public. In the widest sense, political accountability also includes an element of social accountability, which means HEIs’ answerability to wider society, not just the constituencies and political actors involved in the governing of HEIs. In more narrow terms, political accountability illustrates the governance relationship between the state and state-funded universities. In this context, a further distinction can be made between legal and financial accountability on the one hand, and academic accountability on the other (Trow 1996). Legal and financial accountability highlight the universities’ obligation to report how public resources have been used and to what effect. This side of accountability clarifies whether the university is doing what is required of it by law and whether its resources are being used for the purposes for which they were provided.

Discussion on accountability is often accompanied by discussion about the limits of the self-regulative capacity of institutions (autonomy) and individuals (academic freedom); the emergence of various accountability mechanisms can be interpreted as a signal of a lack of trust in academic work and the functioning of universities (Gornitzka et al. 2004; Kivistö 2007; Schmidtlein 2004). Institutions universally desire to uphold their rights and capacities of self-governance, to maintain substantial autonomy and to shield themselves from excessive interference by the government and other institution-external entities. However, accountability in all its forms implies outside interference, and the intensification of accountability is often at odds, at least to some extent, with different aspects of institutional autonomy. As the notion of accountability features more explicitly on stakeholders’ agendas than in the past, the balance between accountability and autonomy often tilts towards an overemphasis on accounting for performance (Huisman 2018; Kai 2009).

Contextualising Accountability in the Nordic Countries

Denmark

Universities in Denmark are met with several accountability demands. Professional accountability is important in relation to the quality of educational programmes and, especially, the quality of research. However, to some extent, professional accountability has been challenged by political accountability, especially in the wake of the mergers of former governmental research institutes into universities.

Not surprisingly, political accountability regimes are well developed in welfare states where higher education is fully funded through taxation and, to a certain extent, research is too. Over the last 15 years, reforms of higher education policy have aimed at enhancing not only political but also social accountability. External stakeholders have become members of advisory councils and university boards. A corporate-like governance structure has been introduced, including boards with a majority of external members and a politically approved chairman. The formerly elected leaders have been replaced by top-down appointed leaders. All in all, political accountability has been enhanced through intensified managerial accountability, as well as through the introduction of New Public Management (NPM) instruments like contracts and performance-based funding. In recent years, however, these instruments have gone hand in hand with more traditional legal and bureaucratic forms of accountability, for example, the dimensioning of educational programmes that do not match labour market demands.

Finland

Emphasising different aspects of accountability has played a substantial role in shaping the contents of Finnish higher education policy over the past 25 years. After the introduction of block grants and the performance-based funding model in the mid-1990s, financial accountability in particular has dominated the discussion about the accountability of universities. Currently, the Finnish university funding model is one of the most performance-oriented models in the world: over 70% of core state funding is based on success in performance criteria (de Boer et al. 2015). At the same time, the role of legal accountability in Finnish higher education policy has weakened since the new Universities Act came into effect in 2010. This legislative reform changed the legal status of universities from being part of the state administration to being independent legal entities. Legislative regulation of central aspects like staffing policies (especially regulation of staff qualifications, recruitment and remuneration) and the internal governance of universities was significantly changed; at present, Finnish universities enjoy a relatively high level of autonomy compared with universities in many other European countries, including the other Nordic countries (see Bennetot Pruvot and Estermann 2017).

In Finland, the role of universities in developing the economy has been supported and actively managed by successive governments since the 1960s. This policy has continued to the present day, with universities seen as central actors in the Finnish knowledge-based economy and as core parts of the Finnish innovation system, expected to contribute to sustainable economic growth, employment and national competitiveness (Biggar Economics 2017). At the same time, Finnish higher education policy recognises the importance of higher education’s social and civic responsibilities, for example, in reducing poverty, inequality and social exclusion. Year after year, Finland is among the three OECD countries with the highest level of public expenditure on HEIs relative to GDP (see, e.g. OECD 2017). This has kept political expectations, and therefore political accountability, at a high level. Higher education in general and universities specifically continue to be at the core of educational policies, and thus, of political interest. At the concrete level, this has been evident in the ‘Government Programmes’ and ‘Action Plans’ of past ruling cabinets (see, e.g. Prime Minister’s Office 2017). At the same time, important stakeholders, such as several trade unions, student unions and employer organisations (e.g. the Confederation of Finnish Industries), have continued to keep universities and higher education high on their political agenda.

Professional accountability in Finland has remained strong alongside the other forms of accountability. For instance, various scientific associations operating under the Federation of Finnish Learned Societies are actively exercising their gatekeeping role, especially in publishing. Scientific associations are often responsible for publishing scientific journals and other publications, and they appoint the editorial boards and editors of these journals. In addition, the various trade unions, such as the Finnish Union of University Professors and the Finnish Union of University Researchers and Teachers, continue to play a role in upholding and safeguarding the professional norms and values of the Finnish academic profession.

Norway

Accountability has been in focus in Norwegian higher education for the last three decades. Managerial structures were changed through the ‘Quality Reform’ of 2003–2004, which involved an effort to enhance political and social accountability by including politically appointed stakeholders on the boards of the universities. The Ministry of Education introduced a model in which the board appointed the chair, as well as the rector. This model replaced the traditional one, in which the rector was elected by the university and chaired the board (Gornitzka and Larsen 2004). Still, each institution could choose which model to follow, resulting in a hybrid version in many universities, with both appointed and elected leaders. The aim of giving universities the possibility of choosing their governance model was to increase autonomy (Stensaker 2014).

A performance-based funding system was introduced through the same reform, and this can be considered an important part of the accountability programme (Frølich 2011). Such a system offers a neutral framework for allocating funds across universities and scientific fields. The shares of funding tied to performance-based indicators are much smaller than, for example, in the Finnish system: in Norway, 30% of the funding is assigned according to performance indicators for teaching and research, while the basic funding (70%) provides long-term and stable financing for the sector (Kvaal 2014). Most Norwegian HEIs are state owned, but private institutions are granted the same state funding as public ones. As for professional autonomy, there has been an increased focus on the quality of teaching and the alignment of educational programmes, as well as on research quality and quantity. This focus on both quality and quantity has challenged professional autonomy via bureaucratic and political forms of accountability.

Sweden

As in the other Nordic countries, Swedish universities are accountable to many stakeholders. Legal accountability in Sweden has changed over the last two decades. The country has a long tradition of central state steering based on planning. However, this changed during the 1980s and 1990s across many sectors, including higher education. During the 1990s, following a ground-breaking reform in 1993, the higher education sector was fundamentally deregulated, with a reduction in central laws and ordinances and increased formal autonomy for HEIs. Although most universities remained state agencies, with the autonomy (or freedom) reform, two HEIs, namely University College Jönköping and Chalmers University of Technology, became private foundations upon application to the government. The main differences concerned internal organisation and the regulation of hiring academic staff. Academic positions had thus far been centrally regulated, but from then on, professorships could be initiated by each HEI.

An important aspect of the accountability context in Sweden is the funding system. The 1993 reform also introduced performance-based funding in education. The system is based on the number of students starting education (input) and the number of students graduating (output). The government also holds HEIs accountable in annual dialogues. Each year’s ‘production’ is reported against the appropriations laid out by the government. The main aspects of state accountability lie within the realm of evaluation (details are given below). As in Denmark, external stakeholders are represented on university boards.

Professional accountability remains strong, both as a standalone aspect of academic work and as intertwined with political accountability. As in Finland, university teachers’ and researchers’ unions are a strong voice for the academic profession. Peer review is an ever-growing activity, for example, in conferences, research proposals, academic publications and the hiring and promotion of academic staff. Senior academics spend a significant amount of time assessing colleagues.

Evaluation

Evaluation is closely related to accountability, as it is often considered an action used to verify accountability. For this and other reasons, evaluation has been a key theme in the public policy and higher education literature for at least three decades. It is obvious that evaluation can be used for control, aiming to hold individuals, groups, departments and organisations accountable. However, evaluation can also be used for many other purposes, including learning, enhancement and enlightenment. Evaluation, therefore, is not limited to summative (retrospective) assessments; it can also be formative (during the process) or diagnostic (prior to the process). Moreover, evaluation can be used in strategic and tactical ways when actors try to pursue specific interests, as well as in symbolic ways when they wish to signal aspects like novelty. A more recent discussion concerns the constitutive effects of evaluation procedures and performance indicators (Dahler-Larsen 2014). The idea is that evaluation creates a new reality, influencing and changing interpretations of the world and thereby enabling shifts in social relations and practices.

The literature is rich in definitions of ‘evaluation’. The North American literature is mostly concerned with aspects related to programme evaluation. Michael Quinn Patton, for example, defines (programme) evaluation as involving ‘the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgements about the program, improve program effectiveness, and/or inform decisions about future programming’ (Patton 1997, 23). In the Nordic context, evaluation has been defined in a much broader way, including evaluative procedures for assessing the effectiveness of public organisations. An example can be found in the work of Evert Vedung, who defines evaluation as a ‘careful retrospective assessment of the merit, worth and value of administration, output and outcome of government interventions (in Swedish: offentlig verksamhet), which is intended to play a role in future, practical action situations’ (Vedung 1997, 3). This broader definition can be interpreted as resonating with the ideals of the Nordic institutional welfare state.

As evaluative thinking has increasingly become integrated into regulative and managerial practices, the distinction between evaluation on the one hand and concepts such as quality assurance, accreditation and performance measurement on the other has become increasingly blurred. In the higher education sector, we find an array of evaluative systems and procedures performed at different levels and directed towards different activities, especially teaching and research (Geschwind 2016). At the national level, evaluative procedures are part and parcel of several governmental accountability mechanisms, such as performance-based funding and various external quality assurance instruments, most notably accreditation and auditing systems (e.g. Gover and Loukkola 2018; Santiago et al. 2008). In the Nordic countries, these evaluative procedures are performed by national, autonomous organisations with their own boards, management and staff (Smeby and Stensaker 1999). Their various evaluation practices are typically based on peer review panels, including members of academic staff, students and stakeholders from working life, supported by project managers from the evaluation body. These national bodies have an umbrella organisation, the European Association for Quality Assurance in Higher Education (ENQA), which has presented European Standards and Guidelines (ESG) to be followed by all national bodies. There is an ongoing and recurrent process of accreditation of quality assurance bodies (Stensaker et al. 2011).

Intra-institutional evaluative procedures play a critical role in shaping teaching and research activities in universities. These procedures are built into educational programmes, for example, in the monitoring of student satisfaction, and at many universities, peer review–based evaluations of departments and programmes (‘audits’) are organised and carried out. Since the 1990s, HEIs in the Nordic countries have been expected to take responsibility for their own evaluation activities. Depending on the focus of the national systems, these institutional evaluations have either mirrored or complemented the national ones (Karlsson et al. 2014). This development of intra-institutional evaluation has also implied that HEIs invest in internal evaluation capacity, establishing designated evaluation units and hiring professional staff with evaluation experience.

At the level of individual academics, peer review–based evaluative procedures are a standard precondition for scholars to be appointed and promoted, as well as to have their research projects funded and findings published. Conferences, too, have increasingly become based on peer review. As a whole, the higher education sectors in most European countries are saturated with evaluation to the extent that one can refer to ‘evaluation overload’. For instance, a recent study showed that senior academics can spend around a month per year evaluating other researchers’ work (Langfeldt and Kyvik 2010).

Evaluation focuses on assessing quality, comprising both education quality and research quality. The concept of quality is ambiguous, and both education quality and research quality are multifaceted and multidimensional phenomena. Quality can be judged, among other things, as exceptionality, consistency, fitness for purpose, value for money and transformation (Harvey and Green 1993). Originally, this traditional categorisation was an attempt to deconstruct the rather abstract concept of quality in the context of higher education, focussing on its various dimensions to reconcile different ways of thinking about quality (Santiago et al. 2008; Stensaker 2004). Over the years, it has undoubtedly become the most influential framework for understanding and discussing quality in the context of HEIs. Although almost 25 years old, its position remains unchallenged in the field of higher education research (Kivistö and Pekkola 2017).

In more concrete terms, quality in education can include aspects like preconditions (staff competence, a talented student body and infrastructure), content (a relevant curriculum), process (pedagogical arrangements carried out by trained teachers) and the achievement of learning outcomes, retention and student employability (see, e.g. Gibbs 2010). Quality in education can be further contextualised to include the views and expectations of relevant stakeholders (Jongbloed and Benneworth 2010). The emphasis on different phases of education shifts over time, and evaluation systems are usually readjusted according to the requirements of the operating environment. For example, some systems place great emphasis on teacher competences, whereas others rely heavily on an assessment of the final thesis (Lindberg-Sand 2011).

Although they differ slightly across scientific fields and methodologies, the characteristics of research quality often relate to aspects like objectivity, validity (internal and external), reliability, open-mindedness, honesty and thorough reporting (e.g. Miles and Huberman 1994; Steinke 2004). As in education, not only has research output been under scrutiny, but so have the preconditions for undertaking research, that is, the research environments. Research quality evaluations have increasingly included assessments of the influence of research, both within academia and beyond, in society at large. The latter can be evaluated by using patent and licensing data and counting the number of new companies, as well as by asking research environments to submit more qualitative ‘impact cases’ (Karlsson 2017).

The latest trend has been to evaluate the administrative operations at the institutional level as well. These ‘administrative assessment exercises’ have been undertaken with the same methodology as the evaluations of education and research, making use of panels of experts, both academics and professional support staff. The balance between central and local administrative support, digitalisation, efficiency and effectiveness and new roles and competency needs for administrative staff have been recurrent themes in these evaluations (Karlsson and Ryttberg 2016).

Contextualising Evaluation in the Nordic Countries

Denmark

Evaluative procedures are widespread in Danish higher education. External evaluation of educational programmes was adopted in the late 1980s and institutionalised in the 1990s, at first as a soft national system supporting local quality development, but from 2007 onwards as a hard, control-oriented accreditation system in which every bachelor’s and master’s programme, new and established, had to be approved (Hansen 2011). Currently, the system is being changed into one based on approval of institutions’ internal quality systems. If approval is refused, institutions are not allowed to establish new educational programmes, and existing programmes must be accredited. At the institutional level, student satisfaction evaluation is a routine exercise. Evaluations of educational programmes in the light of stakeholder and labour market requirements are carried out on an ad hoc basis.

Compared with education, research evaluation is less standardised. There is no national system for the evaluation of departments, disciplines or scientific fields. Some universities have developed institutional procedures aiming to take all departments through research evaluations based on international peer review, while others evaluate on an ad hoc basis or appoint advisory councils to give advice on how to improve research quality. However, in connection with the basic funding of research in universities, a performance-based funding system works as an evaluation tool for research. While this metrics-based evaluation tool is meant to be an accountability and quality improvement tool at the national level, giving universities incentives to improve research, the system has been used internally at universities in budget models and for setting performance demands (see Chap. 4). In addition, evaluation is also linked to competitive research funding: funders of research, public and private, evaluate the quality of research proposals.

Finland

Finnish universities, units and academics are subject to several types of evaluative procedures. The most important of these are institutional audits (complying fully with the previously mentioned ESG), which form the core of the national quality assurance system. The Finnish Education Evaluation Centre (FINEEC) and its predecessor, the Finnish Higher Education Evaluation Council (FINHEEC), have conducted audits of the universities’ internal quality assurance systems since 2005. According to legislation, all HEIs must regularly (every six years on average) participate in external audits of their operations and internal quality assurance systems (Eurydice 2018). The main emphasis of the audits is to ensure that institutions have properly functioning internal quality assurance systems; the audits do not evaluate the quality of education, research or other institutional activities per se. The nature of external audits is primarily one of enhancement and improvement rather than control; failing an audit does not result in any sanctions but only initiates a mandatory re-audit process. This development rather than control orientation in evaluation can partly be explained by the rather extensive use of performance-based funding in providing core funding to universities. An accreditation-type evaluation system could be considered to add another layer of control, thereby making quality improvement a process of mandatory compliance rather than actual development.

Compared with the evaluation of education, the evaluation of research in Finland is a more multifaceted process, and it is more strongly driven by the need to secure accountability. The Academy of Finland, the national funding agency for research, is responsible for financing research and, therefore, for evaluating research quality (in the form of applications). In addition, most universities regularly conduct internal research assessment exercises based on international peer review. However, unlike in some other European countries, there is no comprehensive and centralised national-level evaluation procedure for research. The quality of research, however, is considered in the funding model through the following: (1) a bibliometric indicator awarding universities ‘points’ for publications (13% weighting) based on their coefficient (‘JUFO’ levels 0–3), which is expected to reflect the quality of publication outlets, and (2) the amount of competitive research funding (9% weighting).
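
As an illustration of how such a coefficient-based bibliometric indicator can work, the following minimal sketch computes a university’s publication ‘points’ across JUFO levels 0–3. The coefficient values, function name and publication counts are hypothetical placeholders, not the official Finnish weights.

```python
# A minimal sketch of coefficient-based publication scoring in the spirit of
# the JUFO indicator described above. Coefficients are hypothetical, chosen
# only to reward higher-level outlets more strongly.
JUFO_COEFFICIENTS = {0: 0.1, 1: 1.0, 2: 3.0, 3: 4.0}

def publication_points(publication_counts):
    """Sum coefficient-weighted publication counts across JUFO levels 0-3."""
    return sum(JUFO_COEFFICIENTS[level] * count
               for level, count in publication_counts.items())

# Example: 10 level-0, 40 level-1, 15 level-2 and 5 level-3 publications.
print(publication_points({0: 10, 1: 40, 2: 15, 3: 5}))  # 106.0
```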

Norway

By law, individual Norwegian HEIs are responsible for maintaining the quality of the education they offer through systematic evaluations of quality, but the institutions are allowed to choose how to organise this work. Such evaluations are supposed to cover the quality aspects of education, students’ learning processes and practical studies, as well as the relevance of the education to society. In addition, the Norwegian Agency for Quality Assurance in Education (NOKUT) supervises the institutions and evaluates how the quality assurance work is performed. NOKUT’s mission is to supervise and provide information used to develop the quality of higher education in Norway, as well as to evaluate and control the quality of study programmes and institutions. NOKUT performs periodic control of accredited higher education programmes and institutions; such controls are supposed to occur at least every eight years. The standards and guidelines recommended by NOKUT comply with the ESG as far as possible.

The follow-up on research quality depends on several stakeholders acting as funders of research. The Norwegian Research Council is the main actor providing funding for research in Norway, but smaller public and private agencies also play a role. For accountability and competitive reasons, the quality of research applications is evaluated through peer-review processes. The Nordic Institute for Studies in Innovation, Research and Education (NIFU) is an independent research institute that aims to deliver data on how Norwegian research and innovation are developing and on their importance for society. Another central actor is the Norwegian Centre for Research Data (NSD), which evaluates the quality of research projects prior to their initiation to secure the anonymity of participants; the NSD also acts as a national agent for securing and storing collected data. Information from the Database for Statistics on Higher Education (DBH) is also distributed by the NSD.

Several databases have been established in Norway to secure usable data for following up on evaluations as actions to verify accountability. Data related to teaching, as well as research-related activities, are collected and made publicly available by the NSD, DBH, NIFU and Statistics Norway (SSB). Norway is one of the few countries to offer a national, non-commercial bibliographical database, the ‘Current Research Information System in Norway’ (CRISTIN), which is publicly available for recording scholarly and peer-reviewed literature. Individual researchers are supposed to report their publications, and data from CRISTIN are used as background material for assigning performance-based funding to the universities. In the Norwegian Publication Indicator, publication channels are separated into level 1 (the lowest level) and level 2. The split of publication channels into two levels is based on peer reviews by academic associations, and the ratings of the different scientific journals and publishers are published on the NSD webpage.

Sweden

Evaluation activities in Swedish higher education are performed at the national level, at the HEI level and by individuals. Starting with education, as in the other countries, a national system of evaluation has been in place since the early 1990s as part of the NPM-inspired reforms. The first system can be described as light touch, providing evaluations of each institution’s quality assurance system. These so-called institutional audits were undertaken in two rounds, with small adjustments. The system that followed (2001–2006) emphasised subject and programme reviews across the system; all subjects and programmes leading to a degree were included. Since the 1990s, accreditation of programmes, scientific areas and HEIs has also been implemented, as well as thematic evaluations. The emphasis has shifted over the years; currently, there is again more focus on institutional audits.

The evaluation of research has been the responsibility of several actors. Through the Swedish Research Council, the state has initiated comprehensive subject evaluations. All funding bodies evaluate the research they fund, and there has been a development from ex ante assessments of proposals only, towards mid-term and final evaluations of funded projects and programmes. Many HEIs have also initiated evaluations of research on their own. These follow a similar basic model, including panels, bibliometrics, self-evaluations and site visits, with slight variations in scope and emphasis (Geschwind 2017).

Performance Measurement and Management

As is the case with evaluation, performance measurement and management can be understood as instruments for exercising accountability. In the context of higher education, performance can refer to all actions, tasks and processes carried out in HEIs (teaching, research and third mission activities), as well as to the outputs and outcomes resulting from these actions. Given this high level of ambiguity, what is meant by performance is very much subject to different conceptions and definitions.

To determine its level (good vs. bad, low vs. high), performance needs to be measured. As an activity, measurement requires objective ‘measures’ that can be utilised to determine performance (cf. Neely et al. 1995). In this sense, the selection of measures and the way in which they are utilised (weighting, measurement methodology, etc.) defines what, at any point in time, is considered performance. Thus, performance measurement is an evaluative act of quantification (of performance). By nature, performance measurement is always instrumental, as it is done for a certain purpose, whether symbolic or real. These purposes are often related to management and are manifested in a set of instruments, such as ‘management by objectives’, ‘total quality management’, ‘knowledge management’ or ‘strategic management’, aimed at achieving organisational goals. Thus, performance management in higher education can be defined as an activity in which universities use the information acquired through performance measurement to achieve and demonstrate progress towards a predetermined set of goals (e.g. Wholey 1999).
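
To make the idea of measurement as weighted quantification concrete, the sketch below combines a few indicators into a single composite score. The indicator names, weights and reference values are hypothetical illustrations rather than any actual funding model.

```python
# A hedged sketch of performance measurement as weighted quantification.
# All indicator names, values, weights and reference points are invented.
indicators = {"degrees_awarded": 1200, "publications": 850, "external_funding_meur": 40}
weights = {"degrees_awarded": 0.5, "publications": 0.3, "external_funding_meur": 0.2}

# Normalising against reference values makes indicators with different units
# (degrees, publications, millions of euros) comparable before weighting.
reference = {"degrees_awarded": 1000, "publications": 1000, "external_funding_meur": 50}

score = sum(weights[k] * indicators[k] / reference[k] for k in indicators)
print(round(score, 3))  # composite performance score: 1.015
```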

Performance measurement, however, is not only a tool to verify accountability; it is also a means of directing organisational attention and focus. This is done by translating the institutional strategy into a set of goals reflected in performance measures that make success (and failure) more concrete for everyone (Melnyk et al. 2004; Vasikainen 2014). The goal of this approach to management is to shift the focus from inputs and bureaucratic rules and procedures to outputs, using goal setting and performance information so that public organisations also focus on economic performance (Christensen et al. 2007; Hvidman and Andersen 2013). These techniques tend to be cyclical, incorporating the formulation of objectives, performance, evaluation and adjustment, and the resulting information is used to make managerial decisions.

There is a generic assumption that ‘management is management’ (Hvidman and Andersen 2013, 37) and that the same managerial techniques can be applied in both the private and the public sector. However, three organisational characteristics that differ between public and private organisations may theoretically mitigate the effectiveness of performance management in the public sector: incentives, capacity and clarity. Regarding incentives, managers in the public sector are presumably motivated less by pay and other financial incentives than managers in the private sector and are steered instead by a public service motivation, where the value of doing something of importance for society is a personal incentive. Regarding capacity, public managers often have lower autonomy and face higher levels of bureaucracy, which affects their capacity to take advantage of collected information for decision-making. The clarity of goals is also more problematic in public organisations, as there are many stakeholders, multiple goals and different expectations of political responsiveness and social equity (see Boyne 2002).

Often, performance management is utilised simultaneously with performance-based funding, where funds are allocated by a formula or algorithm based on the achievement of certain predefined measures of performance. In a higher education context, most performance indicators measure the progression or completion of final outputs related to teaching and research, such as study credits, the number of degrees awarded, publications, citations, patents, the level of competitive/external research funding, or student satisfaction (Kivistö and Kohtamäki 2016). Performance-based funding is believed to incentivise institutions to improve or maintain their level of performance in exchange for higher revenue (Dougherty and Reddy 2011). By reformulating incentives so that institutions are rewarded or punished primarily according to actual performance, performance-based funding mechanisms stimulate a shift in institutional behaviour towards greater efficiency. However, whether this is accomplished in real terms is another matter (Kivistö and Kohtamäki 2016; Kivistö et al. 2017; Rutherford and Rabovsky 2014).
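
A simple way to picture such a formula is a fixed pot of money divided among institutions in proportion to their measured performance, as in the sketch below; the pot size, institution names and scores are hypothetical.

```python
# A hedged sketch of formula-based allocation: a fixed pot divided in
# proportion to composite performance scores. All figures are invented.
pot = 100_000_000  # total performance-based funding, e.g. in euros

scores = {"University A": 1.015, "University B": 0.870, "University C": 1.240}

total_score = sum(scores.values())
allocations = {name: pot * score / total_score for name, score in scores.items()}

for name, amount in allocations.items():
    print(f"{name}: {amount:,.0f}")
```

Because the pot is fixed, an institution’s revenue depends not only on its own performance but also on how the others perform, which is part of the competitive logic described above.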

Performance management and performance-based funding are often associated with the use of performance contracts/agreements, both at the system level and in institutions’ internal arrangements. Performance agreements are contracts (see Gornitzka et al. 2004) between the government and individual HEIs that set out specific goals the institutions will seek to achieve in a given period. They specify intentions to accomplish given targets, measured against pre-set, known standards (Claeys-Kulik and Estermann 2015; de Boer et al. 2015). Furthermore, performance management increasingly takes place at the level of individual academics (Andersen and Pallesen 2008; Kivistö et al. 2017). This is especially the case for research performance, where measurement by publication points has become commonplace in the Nordic countries, especially Norway, Denmark and Finland (see, e.g. Aagaard et al. 2015; Pölönen 2015). In some institutional contexts, direct financial rewards can even be allocated to individual academics for research achievements, for instance, in the form of publications in high-status journals (Opstrup 2014). These rewards can be paid as one-time bonuses, top-ups of salaries and/or a maximum percentage of the individual’s total salary (Arnhold et al. 2018).

Contextualising Performance Measurement and Management in Nordic Countries

Denmark

Performance measurement and performance management have been increasingly important principles in higher education governance in Denmark for more than 30 years. However, performance management has been criticised for encouraging the production of quantity at the expense of quality. This criticism has recently been followed by a political request to incorporate quality criteria into performance management approaches.

In the 1980s, performance management was introduced in educational funding. In today’s funding system, educational programmes are funded solely according to a performance principle. Funding is based on the number of students passing exams, with bonuses given if students complete their studies in due time. The system is based on a real-time principle, implying that universities do not know the exact amount of resources available for education in a given year until the autumn of that year. The real-time principle can be said to have been an advantage for the universities in a period of considerable growth in student numbers, but uncertainty about budgets due to variations in student behaviour has posed challenges for the institutions. Recently, it has been decided to develop the funding system further, including employability criteria and quality aspects that will probably be linked to student assessments. Over the years, the performance-based funding formula has thus become increasingly complex and ever more tightly politically governed. Since 2009, an increasing share of the funding for basic research, currently amounting to 20%, has been performance based. The formula includes the number of graduates from master’s and PhD programmes, the ability to attract external funding and the counting of publications. A quality aspect is included in the counting of publications, as publication channels are divided into two groups, one releasing more points and resources than the other.
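
To illustrate the output-based principle described above, the sketch below computes an education grant from exams passed plus a completion bonus; the rates, counts and function name are hypothetical, not the official Danish tariffs.

```python
# An illustrative sketch of 'taximeter'-style funding: money follows exams
# passed, with a bonus for timely completion. Rates are hypothetical.
RATE_PER_PASSED_FTE = 50_000   # currency units per full-time equivalent of passed exams
COMPLETION_BONUS = 20_000      # per student completing within nominal time

def education_grant(passed_fte, on_time_graduates):
    return passed_fte * RATE_PER_PASSED_FTE + on_time_graduates * COMPLETION_BONUS

# The grant is only known once the year's exam results are in, which mirrors
# the real-time principle described above.
print(education_grant(passed_fte=2_500, on_time_graduates=600))  # 137000000
```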

Funding from the Ministry of Higher Education and Science is given to the institutions as a lump sum, meaning that the universities decide how to distribute the resources between faculties and departments. In relation to education, the performance-based principle is typically implemented all the way down the hierarchy, whereas there are only a few examples of this in relation to funding for basic research. Universities also negotiate performance contracts with their parent ministry. Hitherto, contracts have not been related to funding allocations, but the institutions must document goal attainment. Recently, it was decided to link goal attainment to funding from 2019. In Denmark, salaries are only marginally linked to performance, although this aspect is gaining importance.

Finland

In Finland, performance measurement and performance management have been guiding principles in higher education governance, both at the system and institutional levels, for over 20 years. Originally, performance management and measurement arrived in the university sector as part of the general reform of state administration, which was, to a large extent, implemented following ideals derived from NPM. Today, even after the reform of 2010, which made universities legally independent from the state hierarchy, the university sector can be considered one of the state-governed and state-financed administrative sectors in which the ideals of NPM are most comprehensively applied (see, e.g. Kauko and Diogo 2011; Salminen 2003). Recent empirical studies have also provided evidence of the effectiveness of performance-based funding in increasing the performance of Finnish universities (see Seuri and Vartiainen 2018).

Although the execution of performance management by the Finnish Ministry of Education and Culture has been highly structured, its further application in individual universities’ internal management and strategies is not controlled by the Ministry. In fact, individual universities, and in many cases their subunits, such as faculties, have developed their own internal variations of performance management (Kallio and Kallio 2014). The extensiveness of performance-based funding is most visible in allocation practices for providing resources to universities, in the professionalisation of academic and administrative management positions, in the use of contractual arrangements (performance agreements), and in the outsourcing and centralisation of support and administrative services in universities. Furthermore, as in many other European countries, old and new management trends, such as strategic management, quality management and knowledge management, have also been applied in universities.

One important aspect of performance measurement is the salary system for university personnel. Since 2008, the salary system of universities, covering both academic and administrative staff, has been based on performance measurement, with a maximum of around one-third of the salary being performance based. Although salary and other performance-based financial incentives have not proven to be the main motivation for Finnish academics to work harder (see Kivistö et al. 2017), they are applied as a means of translating system- and institutional-level incentives to the individual level, thereby drawing attention to what is considered valuable (and what is not).

Norway

The funding system for HEIs in Norway provides a more stable budget than the Danish system, as 70% of the funding is allocated as block grants. Still, the 30% allocated through performance-based indicators increasingly functions as a policy tool to stimulate improvement in both teaching and research, as well as a managerial tool within the institutions. Teaching indicators constitute the largest share (24%), focussing on student throughput and internationalisation. The research indicators (the remaining 6%) relate to the throughput of PhD students, external research funding (e.g. from the EU and the Norwegian Research Council) and, finally, metrics related to publications. The Norwegian Publication Indicator was introduced as a measurement system in 2004. As a policy and performance management tool, such research indicators are meant to stimulate excellence and productivity, as well as to increase the accountability of public research. Another important aspect is aligning research with societal and economic needs (Aagaard et al. 2015). Despite these broad objectives, the financial role of the indicator is marginal, as it distributes only 2% of the funding to the sector (Aagaard et al. 2015).
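
The arithmetic of this split can be sketched as follows; the total budget figure is an invented illustration, while the 70/24/6 shares follow the description above.

```python
# A sketch of the Norwegian budget composition described above: 70% block
# grant, 24% teaching indicators, 6% research indicators. The sector total
# is an invented figure; in practice, each institution's indicator-based
# share follows its own results on the indicators.
sector_total = 1_000_000_000  # e.g. NOK, illustrative

block_grant = 0.70 * sector_total     # long-term, stable financing
teaching_based = 0.24 * sector_total  # student throughput, internationalisation
research_based = 0.06 * sector_total  # PhD throughput, external funding, publications

print(block_grant, teaching_based, research_based)  # 700m, 240m, 60m
```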

This funding system, based on metrics and a market model, has, on the one hand, increased the autonomy of the universities, as the boards are responsible for setting priorities within the allocated financial frames and aligning their activities to meet the goals for the sector. On the other hand, ex post control has increased, and a contractual relationship between universities and the state based on performance metrics is replacing the trust-based foundational pact (Stensaker 2014). The increased autonomy is counteracted by controlling instruments, reporting systems and the financial incentive systems that follow students and research activities (Christensen 2011). Individual academics are still autonomous regarding teaching and research, but their autonomy is limited or steered by incentive and reporting systems, which can be experienced as a decrease in professional autonomy (Christensen 2011).

Sweden

Generally, performance and performance measurement have become ever more important over time in Sweden as well. These phenomena have also increasingly ‘trickled down’ and been reflected across organisational levels. The developments in education and research described below have affected HEIs significantly, and various responses have emerged.

As mentioned above, one of the most dramatic changes in Swedish higher education was the introduction of performance-based funding in education, based on the inflow and throughput of students. The previous system was criticised for being too rigid, based on central planning and not driving quality enough; the latter argument has also been used against the current system. Since funding is so closely related to student success, there have been discussions about decreased demands for passing students. The system is based on the idea that different educational areas bear different costs: a student in the humanities is supposed to cost far less than an engineering student, for instance. Another effect of this system has been increased marketing activity by HEIs. An important aspect of the system is the use of a ‘ceiling’: the allocation of funds is capped and linked to a maximum number of students recruited. The throughput of students has been a controversial quality indicator. Whereas there have been occasional discussions on the risk of lowering demands on students, there are also examples where student throughput has been linked to incentives. Overall, this has affected not so much individual academics as organisational units and HEIs.
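
A minimal sketch of this logic, assuming invented ‘price tags’ per educational area and a monetary interpretation of the ceiling, might look as follows:

```python
# A hedged sketch of per-student funding with area-specific price tags and a
# capped total. All prices, volumes and the ceiling value are hypothetical.
PRICE_PER_ENROLLED_FTE = {"humanities": 32_000, "engineering": 60_000}
PRICE_PER_COMPLETED_FTE = {"humanities": 21_000, "engineering": 50_000}

def education_allocation(enrolled_fte, completed_fte, ceiling):
    raw = sum(PRICE_PER_ENROLLED_FTE[a] * n for a, n in enrolled_fte.items())
    raw += sum(PRICE_PER_COMPLETED_FTE[a] * n for a, n in completed_fte.items())
    return min(raw, ceiling)  # the ceiling caps the total allocation

print(education_allocation(
    enrolled_fte={"humanities": 400, "engineering": 300},
    completed_fte={"humanities": 320, "engineering": 270},
    ceiling=50_000_000,
))  # 50000000: the raw sum (51,020,000) exceeds the ceiling
```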

In research, the traditional model was block funding based on historical principles rather than performance. Direct state funding constituted the bulk of total research funding. Lately, there has been a development towards more competitive external funding; as of 2018, external funding made up slightly more than half of the total. A milestone in Swedish research policy was the introduction of performance-based funding as part of the direct state funding. Since its introduction in 2009, 10–20% of the total funding has been allocated to HEIs based on performance as shown in publications and external funding.

Converging Higher Education Policies

Organisational fields with their specific institutions, such as universities, show similarities in organisational design and activities all over the world. In many countries, universities have experienced a shift towards ‘academic capitalism’ (Slaughter and Leslie 1999) and operate as ‘entrepreneurial universities’ (Clark 1998; Etzkowitz et al. 2008). The rationalisation of universities as organisational actors through the introduction of more formal structure, with a stronger emphasis on quality assurance, evaluation, accountability measures and incentive systems, can be considered a transnational process linked to NPM-type governance reforms (Ramirez and Christensen 2013; Seeber et al. 2015). The social mechanisms by which these ideas of rationalisation spread can be highlighted from the perspective of institutional isomorphism (DiMaggio and Powell 1983). The literature on isomorphism concentrates on the increasing similarity of organisational and institutional structures and cultures, whereas studies on policy convergence focus on changes in national policy characteristics. Policy convergence, that is, the development of similar or identical policies across countries over time (Knill 2005), seems especially evident in the Nordic countries, which show similar types of policy development in many significant areas of higher education policy, predominantly those related to governance.

One of the most important reasons behind policy convergence, although not the only one, is international policy promotion, where an actor with expertise in a policy field promotes certain policies. International (or supranational) organisations specialised in a certain policy field are the main actors for inducing the convergence of policies by actively promoting certain policies and defining objectives and standards in an international setting. Countries diverging from the promoted policy models may feel pressure to comply with the policies (Holzinger and Knill 2005; Knill 2005).

There are two overarching international political processes relating to higher education in Europe that presumably have a significant effect on policy convergence: the higher education ‘Modernisation Agenda’ (European Commission 2006, 2011), promoted under the auspices of the EU institutions (especially the European Commission), and the intergovernmental Bologna Process (Moisio 2014). Many NPM ideals implemented in Nordic universities, such as promoting the accountability and autonomy of higher education institutions and improving the governance, funding, quality and relevance of higher education, are directly in line with the Commission’s Modernisation Agenda. Interestingly, the Modernisation Agenda chiefly presents the American higher education system and its universities as an important point of comparison in developing European higher education (see also Slaughter and Cantwell 2012; Slaughter and Taylor 2016).

The Bologna Process seems to increase policy convergence at the European level, although the research evidence for this is not yet entirely clear (see, e.g. Witte 2008). However, Voegtle et al. (2011) have found that the higher education policies of Bologna participants converge more strongly and that the Bologna Process has made a crucial difference in increasing the similarity of higher education policies. Especially in the area of quality assurance, most Bologna countries had, by 2008, implemented most of the measures and involved all the actors required for quality assurance according to Bologna standards (Voegtle et al. 2011).

International and intergovernmental organisations, such as the OECD, the World Bank and UNESCO, are highly influential actors in higher education policy convergence (see, e.g. Shahjahan 2012; Shahjahan and Madden 2015). At the European and Nordic levels, the OECD most notably has had a high level of influence on policy convergence. Nation states, including the Nordic countries, often rely on the OECD to provide them with the latest data on trends, current issues and policy options. The OECD uses conferences, trend and review reports and the mediation of policy language to influence the thinking of national-level policymakers within and outside its member countries (Shahjahan and Madden 2015). For instance, the OECD’s thematic reviews can provide strong legitimisation or justification to national governments for initiating policy reforms, as has happened in Finland (Kallo 2009).

In addition to the influence of international organisations, cross-national policy convergence may simply result from similar but independent responses to the same types of policy problems (Bennett 1991; Knill 2005). At the same time, convergence in policies is more likely for countries characterised by high institutional similarity, as policies tend to be implemented insofar as they fit the existing culture, socioeconomic structures and institutional arrangements. In the search for relevant policy models, states are expected to look to the experiences of countries with which they share an especially close set of cultural similarities and ties (Knill 2005). In many ways, this is the case with the Nordic countries, which are characterised by a welfare-state ideology and public-sector development within this framework. Moreover, they are relatively similar in population size, geographically proximate, and share the same types of political systems and values. In terms of policy challenges, all the Nordic countries have to deal with the financial, social and political sustainability of the Nordic welfare model, which, as mentioned before, has triggered government-led reform efforts under the label of NPM, especially in the higher education sector. In all the Nordic countries, universities are expected to play an increasingly important role in local and national economic development and innovation, which has further intensified government-led efforts to modernise the higher education sector.

Although policy convergence is clearly observable across the Nordic countries, it is important to note that similar policies have been introduced at different points in time and with important variations in the details. For instance, all the Nordic countries have introduced performance-based funding systems linked to the distribution of resources for basic research. However, performance is measured using different indicators and redistribution potentials, and the effects of the measurement are therefore quite likely different. Other examples of divergence are found in relation to overall governance and management structures, as well as the national quality assurance systems linked to education. Overall, there seems to be more convergence in policy ideas and policy rhetoric than in actual policy implementation.