
Performance in Higher Education Institutions and Its Variations in Nordic Policy

  • Jussi Kivistö
  • Elias Pekkola
  • Laila Nordstrand Berg
  • Hanne Foss Hansen
  • Lars Geschwind
  • Anu Lyytinen
Open Access
Chapter

Abstract

The need for greater efficiency, productivity and quality in the higher education sector has triggered increased governmental interest towards different mechanisms of accountability, especially evaluation and performance measurement. This interest has developed over a relatively long period of time, but it has now reached its culmination point in many ways. For instance, advances in citation tracking, performance data collection and databases and the professionalisation of evaluative practices and methods have opened new avenues for verifying accountability. This chapter offers definitions for the key concepts used throughout the book, as follows: accountability, evaluation, and performance measurement and management. Each section is followed by a short contextualisation of the concept in Denmark, Finland, Norway and Sweden. The chapter ends with a short discussion about the policy convergence between Nordic countries and the reasons for it.

Keywords

External funding · Authority relations · Institutional theory · Power · Research freedom · Budget-maximization logic

Introduction

Year after year, the higher education sector in the Nordic countries continues to enjoy the highest level of public investment among all the OECD countries. As in other European countries, these investments have put higher education institutions (HEIs) under increased scrutiny, with an obligation to explain their behaviour and performance. This trend is further intensified by the fact that the higher education sector competes for public funds with other sectors, namely primary and secondary education, public health, social services and defence. At the same time, Nordic HEIs face increasing expectations to become more ‘entrepreneurial’ and to improve their ability to compete in a more globalised market. All this means that there is an increasing focus on cost efficiency and productivity, as well as quality.

The need for greater efficiency, productivity and quality in the higher education sector has triggered increased governmental interest towards different mechanisms of accountability, especially evaluation and performance measurement. This interest has developed over a relatively long period of time, but it has now reached its culmination point in many ways. For instance, advances in citation tracking, performance data collection and databases and the professionalisation of evaluative practices and methods have opened new avenues for verifying accountability.

This chapter offers definitions for the key concepts used throughout the book, which are as follows: accountability, evaluation and performance measurement and management. Each section is followed by a short contextualisation of the concept in Denmark, Finland, Norway and Sweden. The chapter ends with a short discussion about the policy convergence between Nordic countries and the reasons for it.

Accountability

The concept of accountability has always been a topical question in higher education. Over time, academics and their institutions have had relationships with various stakeholders (church, state and local communities) in which some sort of ‘answerability’ has continuously played an important role. In the modern world, such answerability relates to universities’ accounting for public money spent, as well as academics explaining their professional work and its outcomes (Huisman 2018). The concept of accountability, however, is multifaceted and ambiguous, allowing a range of understandings and definitions (Christensen and Lægreid 2017). Often, the concept of accountability is used in a broad sense, making it difficult to maintain clear distinctions from related concepts such as transparency, responsiveness, responsibility, answerability and liability (Bovens 2007; Dubnick 2014). Essential questions for accountability are as follows: who is to be held accountable, for what, to whom, and through what means? (Huisman and Currie 2004; Trow 1996). In general, however, accountability can be considered a relational principle that attaches certain expectations of one party to the actions and performance of another, thereby making the performing party responsible for its actions. The concept can be studied from both a personal and a structural perspective (Sinclair 1995). The personal viewpoint relates to internal virtues that guide actors’ actions independently of formal rules, while the structural perspective is linked to mechanisms between an actor and a forum to justify actions (Bovens 2007). According to this latter view, accountability is a relational concept providing a link between those held accountable and those who have a right to claim the accountability of others (Bovens et al. 2014).
For our analytical purposes, in defining accountability, we find Bovens’ (2007, 450) definition especially useful, where accountability is generically seen as a ‘relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences’.

The main purposes behind the need for accountability vary. For instance, accountability is needed to discourage fraud and manipulation, strengthen the legitimacy of institutions, enhance the quality of performance and work as a regulatory device through the criteria made explicit in the various reports requested from the reporting institutions (Huisman and Currie 2004). As such, it can be understood as ‘a constraint on arbitrary power, and on the corruptions of power, including fraud, manipulation, malfeasance and the like’ (Trow 1996, 311). Much of the discussion on accountability is geared towards economic or financial aspects. In addition, in the context of higher education, discussion on accountability is often paired with discussion on efficiency, effectiveness and performance evaluation. In this sense, the process of verifying accountability calls for proving, by effective means, that higher education has attained the predetermined results and performance. Correspondingly, accountability in higher education includes elements such as the rational use of resources, provision of evidence, evaluation of evidence, attaching importance to costs and effectiveness and improving the education process (Dressel 1980; Kai 2009).

Accountability regimes in higher education systems tend to be combinations of different types of accountability principles and processes (King 2015). Of these types, professional and political accountability are often considered especially important in the context of higher education (cf. Huisman and Currie 2004; see also Bovens et al. 2014; Romzek 2000). The difference between the two relates to the source of standards for performance. Professional accountability involves a high degree of autonomy for individual academics, whose decisions are based on internalised norms of what is considered appropriate action and performance. In research especially, professional accountability standards are formulated in the academic community based on internal professional norms, which are enforced by academics themselves. Due to this strong emphasis on professional authority, such standards are also more difficult to steer or manage in formal organisational settings.

Political accountability refers to political expectations for HEIs’ performance. In this sense, demands for accountability are a safeguard to protect the interests of various stakeholders and interest groups, as well as the public. In the widest sense, political accountability also includes an element of social accountability, which means HEIs’ answerability to wider society, not just the constituencies and political actors involved in the governing of HEIs. In more narrow terms, political accountability illustrates the governance relationship between the state and state-funded universities. In this context, a further distinction can be made between legal and financial accountability on the one hand, and academic accountability on the other (Trow 1996). Legal and financial accountability highlight the universities’ obligation to report how public resources have been used and to what effect. This side of accountability clarifies whether the university is doing what is required of it by law and whether its resources are being used for the purposes for which they were provided.

Discussion on accountability is often accompanied by discussion about the limits of the self-regulative capacity of institutions (autonomy) and individuals (academic freedom); the emergence of various accountability mechanisms can be interpreted as a signal of a lack of trust in academic work and the functioning of universities (Gornitzka et al. 2004; Kivistö 2007; Schmidtlein 2004). Institutions universally desire to uphold their rights and capacities of self-governance, maintain substantial autonomy and exempt themselves from excessive interference by the government and other institution-external entities. However, accountability in all its forms implies outside interference, and the intensification of accountability is often at odds, at least to some extent, with different aspects of institutional autonomy. As the notion of accountability is highlighted more explicitly on stakeholders’ agendas than in the past, the balance between accountability and autonomy often tilts towards an overemphasis on accounting for performance (Huisman 2018; Kai 2009).

Contextualising Accountability in the Nordic Countries

Denmark

Universities in Denmark face several accountability demands. Professional accountability is important in relation to the quality of educational programmes and especially the quality of research. However, to some extent, professional accountability has been challenged by political accountability, especially in the wake of the mergers of former governmental research institutes into universities.

Not surprisingly, political accountability regimes are well developed in welfare states where higher education is fully funded through taxation and, to a certain extent, research is too. Over the last 15 years, reforms in higher education policy have aimed at enhancing not only political but also social accountability. External stakeholders have become members of advisory councils and university boards. A corporate-like governance structure has been introduced, including boards with a majority of external members and a politically approved chairman. The former elected leaders have been replaced by top-down appointed leaders. All in all, political accountability has been enhanced through intensified managerial accountability, as well as through the introduction of New Public Management (NPM) instruments like contracts and performance-based funding. In recent years, however, these instruments have come hand in hand with more traditional legal and bureaucratic forms of accountability, for example, the dimensioning of educational programmes that do not match labour market demands.

Finland

Emphasising different aspects of accountability has played a substantial role in shaping the contents of Finnish higher education policy over the past 25 years. Since the introduction of block grants and the performance-based funding model in the mid-1990s, financial accountability in particular has dominated the discussion about the accountability of universities. Currently, the Finnish university funding model is one of the most performance-oriented models in the world: over 70% of core state funding is based on success in performance criteria (de Boer et al. 2015). At the same time, the role of legal accountability in Finnish higher education policy has weakened since the new Universities Act came into effect in 2010. This legislative reform changed the legal status of universities from being part of the state administration to being independent legal entities. Legislative regulation on central aspects like staffing policies (especially regulation on staff qualifications, recruitment and remuneration) and the internal governance of universities was significantly changed; at present, Finnish universities enjoy a relatively high level of autonomy compared with universities in many other European countries, including other Nordic countries (see Bennetot Pruvot and Estermann 2017).

In Finland, the role of universities in developing the economy has been supported and actively managed by successive governments since the 1960s. This policy has continued to the present day, with universities seen as central actors in the Finnish knowledge-based economy and core parts of the Finnish innovation system, expected to contribute to sustainable economic growth, employment and national competitiveness (Biggar Economics 2017). At the same time, Finnish higher education policy recognises the importance of higher education’s social and civic responsibilities, for example, in reducing poverty, inequality and social exclusion. Year after year, Finland is among the top three OECD countries with the highest level of public expenditure (relative to GDP) on HEIs (see, e.g. OECD 2017). This has kept political expectations, and therefore political accountability, at a high level. Higher education in general and universities specifically continue to be at the core of educational policies, and thus, political interests. At the concrete level, this has been evident in the ‘Government Programmes’ and ‘Action Plans’ of past ruling cabinets (see, e.g. Prime Minister’s Office 2017). At the same time, important stakeholders, such as several trade unions, student unions and employer organisations (e.g. the Confederation of Finnish Industries), have continued to keep universities and higher education high on their political agendas.

Professional accountability in Finland has remained strong alongside the other forms of accountability. For instance, various scientific associations operating under the Federation of Finnish Learned Societies actively exercise their gatekeeping role, especially in publishing. Scientific associations are often responsible for publishing scientific journals and other publications, and they appoint the editorial boards and editors of these journals. In addition, the various trade unions, such as the Finnish Union of University Professors and the Finnish Union of University Researchers and Teachers, continue to play a role in upholding and safeguarding the professional norms and values of the Finnish academic profession.

Norway

Accountability has been a focus of Norwegian higher education over the last three decades. Managerial structures were changed through the ‘Quality Reform’ of 2003–2004, which involved an effort to enhance political and social accountability by including politically appointed stakeholders on the boards of the universities. The Ministry of Education introduced a model in which the board appointed the chair, as well as the rector. This model replaced the traditional one, in which the rector was elected by the university and chaired the board (Gornitzka and Larsen 2004). Still, individual institutions could choose which model to follow, resulting in a hybrid version in many universities, with both appointed and elected leaders. The aim in giving the universities the possibility of choosing their governance model was to increase autonomy (Stensaker 2014).

A performance-based funding system was introduced through the same reform, and this can be considered an important part of accountability programmes (Frølich 2011). Such a system offers a neutral framework for allocating funds among universities and scientific fields. The shares of funding related to performance-based indicators are much smaller than they are, for example, in the Finnish system. In Norway, 30% of the funding is assigned according to performance-based indicators for teaching and research, while the basic funding (70%) provides long-term and stable financing for the sector (Kvaal 2014). Most Norwegian HEIs are state owned, but private institutions are granted the same state funding as public ones. As for professional autonomy, there has been an increased focus on the quality of teaching and alignment in educational programmes, as well as on research quality and quantity. This focus on both quality and quantity has challenged professional autonomy through bureaucratic and political forms of accountability.

Sweden

As in the other Nordic countries, Swedish universities are accountable to many stakeholders. Legal accountability in Sweden has changed over the last two decades. The country has a long tradition of central state steering based on planning. However, this changed during the 1980s and 1990s across many sectors, including higher education. During the 1990s, following a ground-breaking reform in 1993, the higher education sector was fundamentally deregulated, with a reduction in central laws and ordinances and increased formal autonomy for HEIs. Although most universities remained state agencies, with the autonomy (or freedom) reform, two HEIs, namely University College Jönköping and Chalmers University of Technology, became private foundations upon application to the government. The main differences concerned internal organisation and the regulation of academic staff hiring. Academic positions had thus far been centrally regulated, but from then on, professorships could be initiated by each HEI.

An important aspect of the accountability context in Sweden is the funding system. The 1993 reform also introduced performance-based funding in education. The system is based on the number of students starting education (input) and the number of students graduating (output). The government also holds HEIs accountable in annual dialogues. Each year’s ‘production’ is reported against the appropriations laid out by the government. The main aspects of state accountability lie within the realm of evaluation (details are given below). As in Denmark, external stakeholders are represented on university boards.

Professional accountability remains strong, both as a standalone aspect of academic work and as intertwined with political accountability. As in Finland, university teachers’ and researchers’ unions are a strong voice for the academic profession. Peer review is an ever-growing activity, for example, in conferences, research proposals, academic publications and the hiring and promotion of academic staff. Senior academics spend a significant amount of time assessing colleagues.

Evaluation

Evaluation is closely related to accountability, as it is often considered an action used to verify accountability. For this and other reasons, evaluation has been a key theme in the public policy and higher education literature for at least three decades. It is obvious that evaluation can be used for control, aiming to hold individuals, groups, departments and organisations accountable. However, evaluation can also be used for many other purposes, including learning, enhancement and enlightenment. Evaluation, therefore, is not limited to summative (retrospective) assessments; it can also be formative (during the process) or diagnostic (prior to the process). Moreover, evaluation can be used in strategic and tactical ways when actors try to pursue specific interests, as well as in symbolic ways when they wish to signal aspects like novelty. A more recent discussion related to use concerns the constitutive effects of evaluation procedures and performance indicators (Dahler-Larsen 2014). The idea is that evaluation creates a new reality, influencing and changing interpretations of the world, thereby enabling shifts in social relations and practices.

The literature is rich in defining the concept of ‘evaluation’. The North American literature is mostly concerned with aspects related to programme evaluation. Michael Quinn Patton, for example, defines (programme) evaluation as involving ‘the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgements about the program, improve program effectiveness, and/or inform decisions about future programming’ (Patton 1997, 23). In the Nordic context, evaluation has been defined in a much broader way, including evaluative procedures for assessing the effectiveness of public organisations. An example can be found in the work of Evert Vedung, who defines evaluation as ‘careful retrospective assessment of the merit, worth and value of administration, output and outcome of government interventions (in Swedish: offentlig verksamhet), which is intended to play a role in future, practical action situations’ (Vedung 1997, 3). This broader definition can be interpreted to resonate with the ideals of the Nordic institutional welfare state.

As evaluative thinking has increasingly become integrated into regulative and managerial practices, the distinction between evaluation on the one hand and other concepts, such as quality assurance, accreditation and performance measurement, on the other has become increasingly blurred. In the higher education sector, we find an array of evaluative systems and procedures performed at different levels and directed towards different activities, especially teaching and research (Geschwind 2016). At the national level, evaluative procedures are part and parcel of several governmental accountability mechanisms, such as performance-based funding and various external quality assurance instruments, most notably accreditation and auditing systems (e.g. Gover and Loukkola 2018; Santiago et al. 2008). In the Nordic countries, these evaluative procedures are performed by national, autonomous organisations with their own boards, management and staff (Smeby and Stensaker 1999). Their evaluation practices are typically based on peer review panels, including members of academic staff, students and stakeholders from working life, supported by project managers from the evaluation body. These national bodies have an umbrella organisation, the European Association for Quality Assurance in Higher Education (ENQA), which has presented European standards and guidelines (ESG) to be followed by all national bodies. There is an ongoing and recurrent process of accreditation of quality assurance bodies (Stensaker et al. 2011).

Intra-institutional evaluative procedures play a critical role in shaping the teaching and research activities of universities. These procedures are built into educational programmes, for example, in monitoring student satisfaction, and at many universities, peer review–based evaluations of departments and programmes (‘audits’) are organised and carried out. Since the 1990s, HEIs in the Nordic countries have been expected to take responsibility for their own evaluation activities. Depending on the focus of the national systems, these institutional evaluations have either mirrored or complemented the national ones (Karlsson et al. 2014). This development of intra-institutional evaluation has also implied that HEIs invest in internal evaluation capacity, for example, by establishing designated evaluation units and hiring professional staff with evaluation experience.

At the level of individual academics, peer review–based evaluative procedures are a standard precondition for scholars to be appointed and promoted, as well as for having their research projects funded and their findings published. Increasingly, conferences have also become based on peer review. As a whole, higher education sectors in most European countries are saturated with evaluation to the extent that one can refer to an ‘evaluation overload’. For instance, a recent study showed that senior academics can spend around a month per year evaluating other researchers’ work (Langfeldt and Kyvik 2010).

Evaluation focuses on assessing quality, comprising both education quality and research quality. The concept of quality is ambiguous, and both education quality and research quality are multifaceted and multidimensional phenomena. Quality can be judged, among other things, as exceptionality, consistency, fitness for purpose, value for money and transformation (Harvey and Green 1993). Originally, this traditional categorisation was an attempt to deconstruct the rather abstract concept of quality in the context of higher education, focussing on its various dimensions to reconcile different ways of thinking about quality (Santiago et al. 2008; Stensaker 2004). Over the years, it has undoubtedly become the most influential framework for understanding and discussing quality in the context of HEIs. Although almost 25 years old, its position remains unchallenged in the field of higher education research (Kivistö and Pekkola 2017).

In more concrete terms, quality in education can include aspects like preconditions (staff competence, a talented student body and infrastructure), contents (a relevant curriculum), process (pedagogical arrangements carried out by trained teachers) and the achievement of learning outcomes, retention and student employability (see, e.g. Gibbs 2010). Education quality can be further contextualised by including the views and expectations of relevant stakeholders (Jongbloed and Benneworth 2010). The emphasis on the different phases of education differs over time, and evaluation systems are usually readjusted slightly according to the requirements of the operating environment. For example, in some systems, great emphasis can be placed on teacher competences, whereas other systems can rely heavily on an assessment of the final thesis (Lindberg-Sand 2011).

Although they differ slightly across scientific fields and methodologies, the characteristics of research quality often relate to aspects like objectivity, validity (internal and external), reliability, open-mindedness, honesty and thorough reporting (e.g. Miles and Huberman 1994; Steinke 2004). As in education, not only has research output been under scrutiny, but so have the preconditions for undertaking research, that is, the research environments. Research quality evaluations have increasingly included assessments of the influence of the research, both within academia and beyond, in society at large. The latter can be evaluated by using patent and licensing data and counting the number of new companies, as well as by asking research environments to submit more qualitative ‘impact cases’ (Karlsson 2017).

The latest trend has been to evaluate the administrative operations at the institutional level as well. These ‘administrative assessment exercises’ have been undertaken with the same methodology as the evaluations of education and research, making use of panels of experts, both academics and professional support staff. The balance between central and local administrative support, digitalisation, efficiency and effectiveness and new roles and competency needs for administrative staff have been recurrent themes in these evaluations (Karlsson and Ryttberg 2016).

Contextualising Evaluation in the Nordic Countries

Denmark

Evaluative procedures are widespread in Danish higher education. External evaluation of educational programmes was adopted in the late 1980s and institutionalised in the 1990s, at first as a soft national system supporting local quality development, but from 2007 onwards, as a hard control-oriented accreditation system where every bachelor’s and master’s programme, new and established, had to be approved (Hansen 2011). Currently, the system is being changed into one based on approval of the internal quality systems at the institutions. If approval is refused, institutions are not allowed to establish new educational programmes, and existing programmes must be accredited. At the institutional level, student satisfaction evaluation is a routine exercise. Evaluations of educational programmes in the light of stakeholder and labour market requirements are carried out on an ad hoc basis.

Compared with education, research evaluation is less standardised. There is no national system for the evaluation of departments, disciplines or scientific fields. Some universities have developed institutional procedures aimed at taking all departments through research evaluations based on international peer review, while others conduct evaluations on an ad hoc basis or appoint advisory councils to give advice on how to improve research quality. However, in connection with the basic funding of research in universities, a performance-based funding system works as an evaluation tool for research. While this metrics-based evaluation tool is meant to be an accountability and quality improvement tool at the national level, giving universities incentives to improve research, the system has also been used internally at universities in budget models and for setting performance demands (see Chap. 4). In addition, evaluation is also linked to competitive research funding: funders of research, public and private, evaluate the quality of research proposals.

Finland

Finnish universities, units and academics are subject to several types of evaluative procedures. The most important of these are institutional audits (complying fully with the previously mentioned ESG), which form the core of the national quality assurance system. The Finnish Education Evaluation Centre (FINEEC) and its predecessor, the Finnish Higher Education Evaluation Council (FINHEEC), have conducted audits of the universities’ internal quality assurance systems since 2005. According to legislation, all HEIs must regularly (every six years on average) participate in external audits of their operations and internal quality assurance systems (Eurydice 2018). The main emphasis of the audits is to ensure that institutions have properly functioning internal quality assurance systems; the audits do not evaluate the quality of education, research or other institutional activities per se. External audits are primarily oriented towards enhancement and improvement rather than control; failing an audit does not result in any sanctions but only initiates a mandatory re-audit process. This development rather than control orientation in evaluation can partly be explained by the rather extensive use of performance-based funding in providing core funding to universities. An accreditation-type evaluation system could be considered to add another layer of control, thereby making quality improvement a process of mandatory compliance rather than actual development.

Compared with that of education, the evaluation of research in Finland is a more multifaceted process, and it is driven more by the need to secure accountability. The Academy of Finland, the national funding agency for research, is responsible for financing research and, therefore, for evaluating the quality of research funding applications. In addition, most universities regularly conduct internal research assessment exercises based on international peer review. However, unlike in some other European countries, there is no comprehensive, centralised national-level evaluation procedure for research. The quality of research, however, is considered in the funding model through the following: (1) a bibliometric indicator (13% weighting) awarding universities ‘points’ for publications based on the level of the publication outlet (‘JUFO’ levels 0–3), which is expected to reflect the quality of publication outlets, and (2) the amount of competitive research funding (9% weighting).

Norway

By law, individual Norwegian HEIs are responsible for maintaining the quality of the education offered through systematic quality evaluations, but the institutions are allowed to choose how to organise this work. Such evaluations are supposed to cover quality aspects of education, students’ learning processes and practical studies, as well as the relevance of the education to society. In addition, the Norwegian Agency for Quality Assurance in Education (NOKUT) supervises the institutions and evaluates how quality assurance work is performed. NOKUT’s mission is to supervise and provide information used to develop the quality of higher education in Norway, as well as to evaluate and control the quality of study programmes and institutions. NOKUT performs periodic control of accredited higher education programmes and institutions; such controls are supposed to occur at least every eight years. The standards and guidelines recommended by NOKUT comply with the ESG as far as possible.

The follow-up on research quality depends on several stakeholders acting as funders of research. The Norwegian Research Council is the main actor in providing funding for research in Norway, but smaller public and private agencies also play a role. For accountability and competitive reasons, the quality of research applications is evaluated through peer-review processes. The Nordic Institute for Studies in Innovation, Research and Education (NIFU) is an independent research institute that aims to deliver data on how Norwegian research and innovation are developing and on their importance for society. Another central actor is the Norwegian Centre for Research Data (NSD), which evaluates the quality of research projects prior to their initiation to secure the anonymity of participants; the NSD also acts as a national agent for securing and storing collected data. Information from the Database for Statistics on Higher Education (DBH) is also distributed by the NSD.

Several databases have been established in Norway to secure usable data for following up on evaluations as a means of verifying accountability. Data related to teaching, as well as research-related activities, are collected and made publicly available by the NSD, DBH, NIFU and Statistics Norway (SSB). Norway is one of the few countries offering a national, non-commercial bibliographic database, the Current Research Information System in Norway (CRISTIN), which is publicly available for recording scholarly and peer-reviewed literature. Individual researchers are supposed to report their publications, and data from CRISTIN are used as background material for assigning performance-based funding to the universities. In the Norwegian Publication Indicator, publication channels are divided into level 1 (the lowest level) and level 2. The split of publication channels into two levels is based on peer assessments by academic associations, and the ratings of the different scientific journals and publishers are published on the NSD webpage.

Sweden

Evaluation activities in Swedish higher education are performed at the national level, the HEI level and the individual level. Starting with education, as in the other countries, a national system of evaluation has been in place since the early 1990s as part of the NPM-inspired reforms. The first system can be described as a light-touch system, providing evaluations of each institution’s quality assurance system. These so-called institutional audits were undertaken during two rounds, with small adjustments. The system that followed (2001–2006) placed an emphasis on subject and programme reviews across the system; all subjects and programmes leading to a degree were included. Since the 1990s, accreditation of programmes, scientific areas and HEIs has also been implemented, as well as thematic evaluations. The emphasis has shifted over the years; currently, there is again more focus on institutional audits.

Evaluation of research has been the responsibility of several actors. Through the Swedish Research Council, the state has initiated comprehensive subject evaluations. All the funding bodies evaluate the research they fund, and there has been a shift from solely ex ante assessment of proposals to mid-term and final evaluations of funded projects and programmes. Many HEIs have also initiated evaluations of their research on their own initiative. These follow a similar basic model, including panels, bibliometrics, self-evaluations and site visits, with slight variations in scope and emphasis (Geschwind 2017).

Performance Measurement and Management

As is the case with evaluation, performance measurement and management can be understood as instruments for exercising accountability. In the context of higher education, performance can refer to all actions, tasks and processes carried out in HEIs (teaching, research, and third mission activities), as well as outputs and outcomes resulting from these actions. Given this high level of ambiguity, what is meant by performance is very much subject to different conceptions and definitions.

To determine its level (good vs. bad, low vs. high), performance needs to be measured somehow. As an activity, measurement requires objective ‘measures’ that can be utilised in the process of measurement to determine the performance (cf. Neely et al. 1995). In this sense, the selection of measures and the way in which they are utilised (weighting, measurement methodology, etc.) defines what is, at any point in time, considered performance. Thus, performance measurement is an evaluative act of quantification (of performance). By nature, performance measurement is always instrumental, as it is done for a certain purpose, whether symbolic or real. These purposes are often related to management and manifested around a set of instruments, such as ‘management by objectives’, ‘total quality management’, ‘knowledge management’, or ‘strategic management’, aimed at achieving organisational goals. Thus, performance management in higher education can be defined as an activity where universities use the information acquired through performance measurement to achieve and demonstrate progress towards a predetermined set of goals (e.g. Wholey 1999).

Performance measurement, however, is not only a tool for verifying accountability; it is also a means of directing organisational attention and focus. This is done by translating the institutional strategy into a set of goals reflected in performance measures that make success (and failure) more concrete for everyone (Melnyk et al. 2004; Vasikainen 2014). The goal of this approach to management is to shift the focus from inputs, bureaucratic rules and procedures towards outputs, goal setting and the use of performance information, with public organisations also attending to economic performance (Christensen et al. 2007; Hvidman and Andersen 2013). These techniques tend to be cyclical, incorporating the formulation of objectives, performance, evaluation and adjustment, and the resulting information is used to make managerial decisions.

There is a generic assumption that ‘management is management’ (Hvidman and Andersen 2013, 37) and that the same managerial techniques can be applied in both the private and the public sector. Against this assumption, three organisational characteristics that differ between public and private organisations may theoretically moderate the effectiveness of performance management across the sectors: incentives, capacity and clarity. Regarding incentives, managers in the public sector are presumably less motivated by pay and other financial incentives than managers in the private sector; they are instead steered by a public service motivation, where the value of doing something of importance for society is a personal incentive. Regarding capacity, public managers often have lower autonomy and face higher levels of bureaucracy, which affects their capacity to take advantage of collected information for decision-making. The clarity of goals is also more problematic in public organisations, as there are many stakeholders, multiple goals and differing expectations of political responsiveness and social equity (see Boyne 2002).

Often, performance management is utilised together with performance-based funding, where funds are allocated by a formula or algorithm for achieving certain predefined measures of performance. In a higher education context, most performance indicators measure the progression or completion of final outputs related to teaching and research, such as study credits, the number of degrees awarded, publications, citations, patents, the level of competitive/external research funding, or student satisfaction (Kivistö and Kohtamäki 2016). Performance-based funding is believed to incentivise institutions to improve or maintain their level of performance in exchange for higher revenue (Dougherty and Reddy 2011). By reformulating incentives so that institutions are rewarded or punished primarily according to actual performance, performance-based funding mechanisms stimulate a shift in institutional behaviour towards greater efficiency. However, whether this is accomplished in real terms is another matter (Kivistö and Kohtamäki 2016; Kivistö et al. 2017; Rutherford and Rabovsky 2014).
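The allocation logic of such a formula can be sketched in a few lines. The indicators, weights and figures below are hypothetical and do not correspond to any particular national model.

```python
# Hypothetical sketch of formula-based funding: each indicator carries a
# fixed weight, and each institution receives that indicator's slice of the
# pot in proportion to its share of sector-wide performance. The indicators,
# weights and figures are illustrative assumptions.

WEIGHTS = {"degrees": 0.5, "publications": 0.3, "external_funding": 0.2}

def formula_allocation(pot, performance):
    """performance: {institution: {indicator: value}} -> {institution: funding}"""
    sector_totals = {
        ind: sum(p[ind] for p in performance.values()) for ind in WEIGHTS
    }
    return {
        inst: sum(pot * w * p[ind] / sector_totals[ind] for ind, w in WEIGHTS.items())
        for inst, p in performance.items()
    }

performance = {
    "Uni A": {"degrees": 1200, "publications": 800, "external_funding": 30.0},
    "Uni B": {"degrees": 800, "publications": 1200, "external_funding": 10.0},
}
allocation = formula_allocation(100.0, performance)  # pot of, say, 100 million euros
```

Because the shares are relative, an institution’s allocation depends not only on its own output but on the whole sector’s: the pot is fixed, so one institution’s gain is another’s loss, which is part of the competitive dynamic such systems are designed to create.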

Performance management and performance-based funding are often associated with the use of performance contracts or agreements, both at the system level and in institutions’ internal arrangements. Performance agreements are contracts (see Gornitzka et al. 2004) between the government and individual HEIs that set out specific goals the institutions will seek to achieve in a given period. They specify intentions to accomplish given targets, measured against pre-set, known standards (Claeys-Kulik and Estermann 2015; de Boer et al. 2015). Furthermore, performance management increasingly takes place at the level of individual academics (Andersen and Pallesen 2008; Kivistö et al. 2017). This is especially the case for research performance, where measurement by publication points has become commonplace in the Nordic countries, especially Norway, Denmark and Finland (see, e.g. Aagaard et al. 2015; Pölönen 2015). In some institutional contexts, direct financial rewards may even be allocated to individual academics for research achievements, for instance, in the form of publications in high-status journals (Opstrup 2014). These rewards can be paid as one-time bonuses, as top-ups of salaries and/or as a maximum percentage of the individual’s total salary (Arnhold et al. 2018).

Contextualising Performance Measurement and Management in Nordic Countries

Denmark

Performance measurement and performance management have been increasingly important principles in higher education governance in Denmark for more than 30 years. However, performance management has been criticised for encouraging production of quantity at the expense of quality. This criticism has recently been followed by a political request to incorporate quality criteria in the performance management approaches.

In the 1980s, performance management was introduced in educational funding. In today’s funding system, educational programmes are funded solely according to a performance principle. Funding is based on the number of students passing exams, supplemented by bonuses given if students complete their studies in due time. The system is based on a real-time principle, implying that the universities do not know the exact amount of resources available for education in a given year until the autumn of that year. The real-time principle can be said to have been an advantage for the universities in a period of considerable growth in student numbers, but uncertainty about budgets due to variations in student behaviour has posed challenges for the institutions. Recently, it has been decided to develop the funding system further, including employability criteria and quality aspects that will probably be linked to student assessments. Over the years, the performance-based funding formula has thus become increasingly complex and ever more tightly politically governed. Since 2009, an increasing part of the funding for basic research, currently amounting to 20%, has been performance based. The formula includes the number of graduates from master’s and PhD programmes, the ability to attract external funding and the counting of publications. A quality aspect is included in the counting of publications, as publication channels are divided into two groups, one releasing more points and resources than the other.

Funding from the Ministry of Higher Education and Science is given to the institutions as a lump sum, meaning that the universities decide how to distribute the resources between faculties and departments. In relation to education, the performance-based principle is typically implemented all the way down in the hierarchy, whereas there are only a few examples of this in relation to funding for basic research. Universities also negotiate performance contracts with their parent Ministry. Hitherto, contracts have not been related to funding allocations, but the institutions must document goal attainment. Recently, it was decided to link goal attainment to funding from 2019. In Denmark, salaries are only marginally linked to performance, although this aspect is increasingly gaining importance.

Finland

In Finland, performance measurement and performance management have been guiding principles in higher education governance, both at the system and institutional levels, for over 20 years. Originally, performance management and measurement arrived in the university sector as part of the general reform of state administration, which was, to a large extent, implemented following ideals derived from NPM. Today, even after the reform of 2010, which made universities legally independent of the state hierarchy, the university sector can be considered one of the state-governed and state-financed administrative sectors where the ideals of NPM are most comprehensively applied (see, e.g. Kauko and Diogo 2011; Salminen 2003). Recent empirical studies have also provided evidence that performance-based funding has been effective in increasing the performance of Finnish universities (see Seuri and Vartiainen 2018).

Although the execution of performance management on behalf of the Finnish Ministry of Education and Culture has been highly structured, its further application in individual universities’ internal management and strategies is not controlled by the Ministry. In fact, individual universities, and in many cases their subunits, such as faculties, have developed their own internal variations of performance management (Kallio and Kallio 2014). The extensiveness of performance-based funding is most visible in the allocation practices for providing resources to universities, in the professionalisation of academic and administrative management positions, in the use of contractual arrangements (performance agreements) and in the outsourcing and centralisation of support and administrative services in universities. Furthermore, as in many other European countries, older and newer management trends, such as strategic management, quality management and knowledge management, have also been applied in universities.

One important aspect of performance measurement is the salary system for university personnel. Since 2008, the salary system of universities, comprising both academic and administrative staff, has been based on performance measurement, where a maximum of around one-third of the salary is performance based. Although the salary or other performance-based financial incentives have not proven to be the main motivation for Finnish academics to work harder (see Kivistö et al. 2017), they are applied as means of translating system- and institutional-level incentives to the individual level, thereby drawing attention to what is considered valuable (and what is not).

Norway

The funding system for HEIs in Norway provides a more stable budget than the Danish system, as 70% of the funding is allocated as block grants. Still, the 30% allocated through performance-based indicators increasingly functions as a policy tool to stimulate improvement in both teaching and research, as well as a managerial tool within the institutions. Teaching indicators constitute the largest share (24%), focussing on the throughput of students and internationalisation. The research indicators (the remaining 6%) relate to the throughput of PhD students, external funding of research (e.g. from the EU and the Norwegian Research Council) and, finally, metrics related to publications. The Norwegian Publication Indicator was introduced as a measurement system in 2004. As a policy and performance management tool, such research indicators are meant to stimulate excellence and productivity, as well as to increase the accountability of public research. Another important aspect is aligning research with societal and economic needs (Aagaard et al. 2015). Despite these broad objectives, the financial role of the indicator is marginal, as it distributes only 2% of the funding to the sector (Aagaard et al. 2015).

This funding system, based on metrics and a market model, has, on the one hand, increased the autonomy of the universities, as the boards are responsible for setting priorities within the allocated financial frames and for aligning their activities with the goals for the sector. On the other hand, ex post control has increased, and a contractual relationship between universities and the state based on performance metrics is replacing the trust-based foundational pact (Stensaker 2014). The increased autonomy is counteracted by controlling instruments, reporting systems and the financial incentive systems that follow students and research activities (Christensen 2011). Individual academics remain autonomous regarding teaching and research, but this autonomy is limited, or steered, by incentive and reporting systems, which can be experienced as a decrease in professional autonomy (Christensen 2011).

Sweden

Generally, performance and performance measurement have become ever more important over time in Sweden as well. These phenomena have also increasingly ‘trickled down’ and been reflected across organisational levels. The developments of education and research described below have affected HEIs significantly, and various responses have emerged.

As mentioned above, one of the most dramatic changes in Swedish higher education was the introduction of performance-based funding in education, based on the inflow and throughput of students. The previous system was criticised for being too rigid, based on central planning and insufficiently quality driven. The latter argument has also been used against the current system: since funding is so closely related to student success, there have been discussions about lowered demands for passing students. The system is based on the idea that different educational areas bear different costs; a student in the Humanities is supposed to cost far less than an Engineering student, for instance. Another effect of this system has been increased marketing activity by HEIs. An important aspect of the system is the use of a ‘ceiling’ for the number of students recruited: the allocation of funds is capped and linked to a maximum number of students. The throughput of students has been a controversial quality indicator. Whereas there have been occasional discussions about the risk of lowering the demands on students, there are also examples where student throughput has been linked to incentives. Overall, this has affected organisational units and HEIs rather than individual academics.

In research, the traditional model was block funding based on historical principles rather than performance, and direct state funding constituted the bulk of total research funding. Lately, there has been a shift towards more competitive external funding; as of 2018, external funding made up slightly more than half of the total. A milestone in Swedish research policy was the introduction of performance-based funding as part of the direct state funding. Since its introduction in 2009, 10–20% of the total funding has been allocated to HEIs based on performance, as shown in publications and external funding.

Converging Higher Education Policies

Organisational fields and their specific institutions, such as universities, show similarities in organisational design and activities all over the world. In many countries, universities have experienced a shift towards ‘academic capitalism’ (Slaughter and Leslie 1999) and operate as ‘entrepreneurial universities’ (Clark 1998; Etzkowitz et al. 2008). The rationalisation of universities as organisational actors through the introduction of more formal structure, with a stronger emphasis on quality assurance, evaluation, accountability measures and incentive systems, can be considered a transnational process linked to NPM-type governance reforms (Ramirez and Christensen 2013; Seeber et al. 2015). The social mechanisms through which these ideas of rationalisation spread can be highlighted from the perspective of institutional isomorphism (DiMaggio and Powell 1983). The literature on isomorphism concentrates on the increasing similarity of organisational and institutional structures and cultures, whereas studies on policy convergence focus on changes in national policy characteristics. Policy convergence, that is, the development of similar or identical policies across countries over time (Knill 2005), seems especially evident in the Nordic countries, which show similar types of policy development in many significant areas of higher education policy, predominantly those related to governance.

One of the most important reasons behind policy convergence, although not the only one, is international policy promotion, where an actor with expertise in a policy field promotes certain policies. International (or supranational) organisations specialised in a certain policy field are the main actors for inducing the convergence of policies by actively promoting certain policies and defining objectives and standards in an international setting. Countries diverging from the promoted policy models may feel pressure to comply with the policies (Holzinger and Knill 2005; Knill 2005).

There are two overarching international political processes relating to higher education in Europe that presumably have a significant effect on policy convergence: the higher education ‘Modernisation Agenda’ (European Commission 2006, 2011), promoted under the auspices of the EU institutions (especially the European Commission), and the intergovernmental Bologna Process (Moisio 2014). Many NPM ideals implemented in Nordic universities, such as promoting the accountability and autonomy of higher education institutions and improving the governance, funding, quality and relevance of higher education, are directly in line with the Commission’s Modernisation Agenda. Interestingly, the Modernisation Agenda chiefly presents the American higher education system and its universities as an important point of comparison in developing European higher education (see also Slaughter and Cantwell 2012; Slaughter and Taylor 2016).

The Bologna Process seems to increase policy convergence at the European level, although the research evidence for this is not yet entirely clear (see, e.g. Witte 2008). However, Voegtle et al. (2011) found that the higher education policies of Bologna participants converge more strongly than those of non-participating countries and that the Bologna Process has made a crucial difference in increasing the similarity of higher education policies. Especially in the area of quality assurance, most Bologna countries had implemented most of the measures, and included all the actors required for quality assurance according to Bologna standards, by 2008 (Voegtle et al. 2011).

International/intergovernmental organisations, such as the OECD, World Bank and UNESCO, are highly influential actors in higher education policy convergence (see, e.g. Shahjahan 2012; Shahjahan and Madden 2015). At the European and Nordic level, most notably, the OECD has had a high level of influence on policy convergence. Nation states, including Nordic countries, often rely on the OECD to provide them with the latest data on trends, current issues and policy options. The OECD uses conferences, trend and review reports and the mediation of policy language to influence the thinking of national-level policymakers within and outside of its member countries (Shahjahan and Madden 2015). For instance, the OECD’s thematic reviews can provide a strong legitimisation or justification to national governments for initiating policy reforms, as has happened in Finland (Kallo 2009).

In addition to the influence of international organisations, cross-national policy convergence may simply result from similar but independent responses to the same types of policy problems (Bennett 1991; Knill 2005). At the same time, convergence in policies is more likely for countries characterised by high institutional similarity, as policies tend to be adopted insofar as they fit the existing culture, socioeconomic structures and institutional arrangements. In the search for relevant policy models, states are expected to look to the experiences of those countries with which they share especially close cultural similarities and ties (Knill 2005). In many ways, this is the case with the Nordic countries, which are characterised by a welfare-state ideology and by public-sector development within this framework. Moreover, they are relatively similar in population size and geographically proximate, and they share the same types of political systems and values. In terms of policy challenges, all the Nordic countries have to deal with the financial, social and political sustainability of the Nordic welfare model, which, as mentioned before, has triggered government-led reform efforts under the label of NPM, especially in the higher education sector. In all these countries, universities are expected to play an increasingly important role in local and national economic development and innovation, which has further intensified government-led efforts to modernise the higher education sector.

Although policy convergence is clearly observable across the Nordic countries, it is important to note that similar policies are introduced at different points in time and with important variations in the details. For instance, all the Nordic countries have introduced performance-based funding systems linked to the distribution of resources for basic research. However, performance in the Nordic countries is measured using different indicators and redistribution potentials, and therefore, the effects of the measurement are also quite likely different. Other examples of divergence are found in relation to overall governance and management structures, as well as the national quality assurance systems linked to education. Overall, there seems to be more convergence in policy ideas and policy rhetoric than in actual policy implementation.

Acknowledgements

The data presented in the current volume and its individual chapters emanate from a comparative study funded by the Norwegian Research Council under its FINNUT flagship, a long-term programme for research and innovation in the educational sector. The project number was 237782, and the project was titled ‘Does it matter? Assessing the performance effects of changes in leadership and management structures in Nordic Higher Education’.

References

  1. Aagaard, Kaare, Carter Bloch, and Jesper Schneider. 2015. Impacts of Performance-Based Research Funding Systems: The Case of the Norwegian Publication Indicator. Research Evaluation 24 (1): 106–117.Google Scholar
  2. Andersen, Lotte B., and Thomas Pallesen. 2008. ‘Not Just for the Money?’ How Financial Incentives Affect the Number of Publications at Danish Research Institutions. International Public Management Journal 11: 28–47.Google Scholar
  3. Arnhold, Nina, Elias Pekkola, Vitus Püttmann, and Andrée Sursock. 2018. World Bank Support to Higher Education in Latvia: Volume 3. Academic Careers. Washington: World Bank. https://openknowledge.worldbank.org/handle/10986/29738.Google Scholar
  4. Bennetot Pruvot, Enora, and Thomas Estermann. 2017. University Autonomy in Europe III. The Scorecard 2017. Brussels: European University Association.Google Scholar
  5. Bennett, Colin. 1991. What Is Policy Convergence and What Causes It? British Journal of Political Science 21: 215–233.Google Scholar
  6. BiGGAR Economics. 2017. Economic Contribution of the Finnish Universities. Penicuik, Midlothian: BiGGAR Economics. http://www.unifi.fi/wp-content/uploads/2017/06/UNIFI_Economic_Impact_Final_Report.pdf.Google Scholar
  7. Bovens, Mark. 2007. Analysing and Assessing Accountability: A Conceptual Framework. European Law Journal 13 (4): 447–468.Google Scholar
  8. Bovens, Mark, Thomas Schillemans, and Robert Goodin. 2014. Public Accountability. In The Oxford Handbook of Public Accountability, ed. Mark Bovens, Thomas Schillemans, and Robert Goodin, 1–20. Oxford: Oxford University Press.Google Scholar
  9. Boyne, George A. 2002. Public and Private Management: What’s the Difference? Journal of Management Studies 39 (1): 97–122.Google Scholar
  10. Christensen, Tom. 2011. University Governance Reforms: Potential Problems of More Autonomy? Higher Education 62 (4): 503–517.Google Scholar
  11. Christensen, Tom, and Per Lægreid. 2017. Introduction. Accountability and Welfare State Reforms. In The Routledge Handbook to Accountability and Welfare State Reforms in Europe, ed. Tom Christensen and Per Lægreid, 1–11. Oxon: Routledge.Google Scholar
  12. Christensen, Thomas, Per Lægreid, and Inger Marie Stigen. 2007. Performance Management and Public Sector Reform: The Norwegian Hospital Reform. Public Management Journal 9 (2): 113–139.Google Scholar
  13. Claeys-Kulik, Anna-Lena, and Thomas Estermann. 2015. DEFINE Thematic Report: Performance-Based Funding of Universities in Europe. Brussels: European University Association.Google Scholar
  14. Clark, Burton R. 1998. Creating Entrepreneurial Universities: Organizational Pathways of Transformation. Oxford: Pergamon.Google Scholar
  15. Dahler-Larsen, Peter. 2014. Constitutive Effects of Performance Indicators: Getting Beyond Unintended Consequences. Public Management Review 16 (7): 969–986.Google Scholar
  16. de Boer, Harry, Ben Jongbloed, Paul Benneworth, Leon Cremonini, Renze Kolster, Andrea Kottmann, and Hans Vossensteyn. 2015. Performance-Based Funding and Performance Agreements in Fourteen Higher Education Systems. Enschede: Center for Higher Education Policy Studies, University of Twente.Google Scholar
  17. DiMaggio, Paul J., and Walter W. Powell. 1983. The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. American Sociological Review 48 (2): 147–160.Google Scholar
  18. Dougherty, Kevin, and Viskash Reddy. 2011. The Impacts of State Performance Funding Systems on Higher Education Institutions: Research Literature Review and Policy Recommendations. CCRC Working Paper No. 37, Teachers College, Columbia University, New York, NY.Google Scholar
  19. Dressel, Paul L., ed. 1980. The Autonomy of Public Colleges. San Francisco: Jossey-Bass.
  20. Dubnick, Melvin. 2014. Accountability as a Cultural Keyword. In The Oxford Handbook of Public Accountability, ed. Mark Bovens, Thomas Schillemans, and Robert E. Goodin, 23–28. Oxford: Oxford University Press.
  21. Etzkowitz, Henry, Marina Ranga, Mats Benner, Lucia Guaranys, Anne Marie Maculan, and Robert Kneller. 2008. Pathways to the Entrepreneurial University: Towards a Global Convergence. Science and Public Policy 35 (9): 681–695. https://doi.org/10.3152/030234208X389701.
  22. European Commission. 2006. Delivering on the Modernisation Agenda for Universities: Education, Research and Innovation. Communication from the Commission to the Council and the European Parliament. Brussels: European Commission.
  23. ———. 2011. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on “Supporting Growth and Jobs—An Agenda for the Modernisation of Europe’s Higher Education Systems”. Brussels: European Commission.
  24. Frølich, Nicoline. 2011. Multi-layered Accountability. Performance-Based Funding of Universities. Public Administration 89 (3): 840–859.
  25. Geschwind, Lars. 2016. Academic Core Values and Quality: The Case of Teaching-Research Links. In Att ta utbildningens komplexitet på allvar. En vänskrift till Eva Forsberg, ed. Maja Elmgren, Maria Folke-Fichtelius, Stina Hallsén, Henrik Román, and Wieland Wermke, 227–238. Uppsala: Uppsala Studies in Education No. 138.
  26. ———. 2017. Reflections on Q&R17 by a Researcher on Research. In KoF17 Quality and Renewal 2017. Research Environment Evaluation at Uppsala University, ed. Anders Malmberg, Åsa Kettis, and Camilla Maandi, 34–38. Uppsala: Uppsala University.
  27. Gibbs, Graham. 2010. Dimensions of Quality. Helsington, York: The Higher Education Academy. https://www.heacademy.ac.uk/system/files/dimensions_of_quality.pdf.
  28. Gornitzka, Åsa, and Ingvild M. Larsen. 2004. Towards Professionalization? Restructuring of Administrative Workforce in Universities. Higher Education 47: 455–471.
  29. Gornitzka, Åsa, Bjørn Stensaker, Jens-Christian Smeby, and Harry de Boer. 2004. Contract Arrangements in the Nordic Countries: Solving the Efficiency–Effectiveness Dilemma? Higher Education in Europe 29 (1): 87–101. https://doi.org/10.1080/03797720410001673319.
  30. Gover, Anna, and Tiia Loukkola. 2018. Enhancing Quality: From Policy to Practice. Brussels: EQUIP. http://www.eua.be/Libraries/publications-homepage-list/equip-publication_final.
  31. Hansen, Hanne F. 2011. University Reforms in Denmark and the Challenges for Political Science. European Political Science 10: 235–247.
  32. Harvey, Lee, and Diana Green. 1993. Defining Quality. Assessment and Evaluation in Higher Education 18 (1): 9–34.
  33. Holzinger, Katharina, and Christoph Knill. 2005. Causes and Conditions of Cross-national Policy Convergence. Journal of European Public Policy 12 (5): 775–796. https://doi.org/10.1080/13501760500161357.
  34. Huisman, Jeroen. 2018. Accountability in Higher Education. In Encyclopedia of International Higher Education Systems and Institutions, ed. Pedro Teixeira and Jung Cheol Shin. Dordrecht: Springer. https://doi.org/10.1007/978-94-017-9553-1_156-1.
  35. Huisman, Jeroen, and Jan Currie. 2004. Accountability in Higher Education: Bridge over Troubled Water? Higher Education 48: 529–551.
  36. Hvidman, Ulrik, and Simon Calmar Andersen. 2013. Impact of Performance Management in Public and Private Organizations. Journal of Public Administration Research and Theory 24 (1): 35–58.
  37. Jongbloed, Ben, and Paul Benneworth. 2010. Who Matters to Universities? A Stakeholder Perspective on Humanities, Arts and Social Sciences Valorization. Higher Education 59: 567–588. https://doi.org/10.1007/s10734-009-9265-2.
  38. Kai, Jiang. 2009. A Critical Analysis of Accountability in Higher Education. Chinese Education & Society 42 (2): 39–51. https://doi.org/10.2753/CED1061-1932420204.
  39. Kallio, Kirsi-Mari, and Tomi J. Kallio. 2014. Management-By-Results and Performance Measurement in Universities—Implications for Work Motivation. Studies in Higher Education 39 (4): 574–589.
  40. Kallo, Johanna. 2009. OECD Education Policy. A Comparative and Historical Study Focusing on the Thematic Reviews of Tertiary Education. Doctoral diss., University of Turku. Jyväskylä: Finnish Educational Research Association (FERA).
  41. Karlsson, Sara. 2017. Evaluation as a Travelling Idea: Assessing the Consequences of Research Assessment Exercises. Research Evaluation 26 (2): 55–65.
  42. Karlsson, Sara, and Malin Ryttberg. 2016. Those Who Walk the Talk: The Role of Administrative Professionals in Transforming Universities into Strategic Actors. Nordic Journal of Studies in Educational Policy 2016 (2–3): 315–337. https://doi.org/10.3402/nstep.v2.31537.
  43. Karlsson, Sara, Karin Fogelberg, Åsa Kettis, Stefan Lindgren, Mette Sandoff, and Lars Geschwind. 2014. Not Just Another Evaluation: A Comparative Study of Four Educational Quality Projects at Swedish Universities. Tertiary Education and Management 20 (3): 239–251. https://doi.org/10.1080/13583883.2014.932832.
  44. Kauko, Jaakko, and Sara Diogo. 2011. Comparing Higher Education Reforms in Finland and Portugal: Different Contexts, Same Solutions? Higher Education Management and Policy 23 (3): 115–133.
  45. King, Roger. 2015. Institutional Autonomy and Accountability. In The Palgrave International Handbook of Higher Education Policy and Governance, ed. Jeroen Huisman, Harry de Boer, David D. Dill, and Manuel Souto-Otero, 485–505. Basingstoke: Palgrave.
  46. Kivistö, Jussi. 2007. Agency Theory as a Framework for the Government-University Relationship. Doctoral diss., Higher Education Finance and Management Series, Tampere University Press, Tampere.
  47. Kivistö, Jussi, and Vuokko Kohtamäki. 2016. Does Performance-Based Funding Work? Reviewing the Impacts of Performance-Based Funding on Higher Education Institutions. In Positioning Higher Education Institutions: From Here to There, ed. Rosalin Pritchard, Attila Pausits, and James Williams, 215–226. Rotterdam: Sense Publishers.
  48. Kivistö, Jussi, and Elias Pekkola. 2017. Quality in Administration of Higher Education. Stockholm: Sveriges universitets- och högskoleförbund (SUHF).
  49. Kivistö, Jussi, Elias Pekkola, and Anu Lyytinen. 2017. The Influence of Performance-Based Management on Teaching and Research Performance of Finnish Senior Academics. Tertiary Education and Management 23 (3): 260–275.
  50. Knill, Christoph. 2005. Introduction: Cross-National Policy Convergence: Concepts, Approaches and Explanatory Factors. Journal of European Public Policy 12 (5): 764–774. https://doi.org/10.1080/13501760500161332.
  51. Kvaal, Torkel Nybakk. 2014. Finansieringssystem for universiteter og høyskoler [Funding System for Universities and University Colleges]. Oslo: Kunnskapsdepartementet.
  52. Langfeldt, Liv, and Svein Kyvik. 2010. Researchers as Evaluators: Tasks, Tensions and Politics. Higher Education 62 (2): 199–212. https://doi.org/10.1007/s10734-010-9382-y.
  53. Lindberg-Sand, Åsa. 2011. Koloss på lerfötter? Utveckling av metodik för ett resultatbaserat nationellt kvalitetssystem i svensk högre utbildning [Colossus with Feet of Clay? Developing a Methodology for a Results-Based National Quality System in Swedish Higher Education]. Lund: Centre for Educational Development, Lunds universitet.
  54. Melnyk, Steven, Douglas Stewart, and Morgan Swink. 2004. Metrics and Performance Measurement in Operations Management: Dealing with the Metrics Maze. Journal of Operations Management 22: 209–217.
  55. Miles, Michael, and Matthew Huberman. 1994. Qualitative Data Analysis: An Expanded Sourcebook. 2nd ed. Thousand Oaks, CA: Sage Publications.
  56. Moisio, Johanna. 2014. Understanding the Significance of EU Higher Education Policy Cooperation in Finnish Higher Education Policy. Doctoral diss., Tampere University Press, Tampere.
  57. Neely, Andy, Mike Gregory, and Ken Platts. 1995. Performance Measurement System Design: A Literature Review and Research Agenda. International Journal of Operations & Production Management 15 (4): 80–116.
  58. OECD. 2017. Education at a Glance 2017. OECD Indicators. Paris: OECD.
  59. Opstrup, Niels. 2014. Causes and Consequences of Performance Management at Danish University Departments. Doctoral diss., Syddansk Universitet, Det Samfundsvidenskabelige Fakultet.
  60. Patton, Michael. 1997. Utilization-Focused Evaluation. 3rd ed. Thousand Oaks: Sage Publications.
  61. Pölönen, Janne. 2015. Suomenkieliset kanavat ja julkaisut Julkaisufoorumissa [Finnish Language Publication Channels and Publications in the Publication Forum]. Media & viestintä 38 (4): 237–252. https://journal.fi/mediaviestinta/article/view/62073.
  62. Prime Minister’s Office. 2017. Finland, A Land of Solutions. Mid-term Review: Government Action Plan 2017–2019. https://valtioneuvosto.fi/documents/10184/321857/Government+action+plan+28092017+en.pdf.
  63. Ramirez, Francisco, and Tom Christensen. 2013. The Formalization of the University: Rules, Roots, and Routes. Higher Education 65 (6): 695–708.
  64. Romzek, Barbara S. 2000. Dynamics of Public Sector Accountability in an Era of Reform. International Review of Administrative Science 66: 21–44.
  65. Rutherford, Amanda, and Thomas Rabovsky. 2014. Evaluating Impacts of Performance Funding Policies on Student Outcomes in Higher Education. Annals of the American Academy of Political and Social Science 655: 185–208.
  66. Salminen, Ari. 2003. New Public Management and Finnish Public Sector Organisations: The Case of Universities. In The Higher Education Managerial Revolution? ed. Alberto Amaral, Vincent L. Meek, and Ingvild M. Larsen, 55–69. Dordrecht: Kluwer Academic.
  67. Santiago, Paulo, Karine Tremblay, Ester Basri, and Elena Arnal. 2008. Tertiary Education for the Knowledge Society. Volume 1: Special Features: Governance, Funding, Quality. Paris: OECD.
  68. Schmidtlein, Frank A. 2004. Assumptions Commonly Underlying Government Quality Assessment Practices. Tertiary Education and Management 10 (4): 263–285.
  69. Seeber, Marco, et al. 2015. European Universities as Complete Organizations? Understanding Identity, Hierarchy and Rationality in Public Organizations. Public Management Review 17 (10): 1444–1474. https://doi.org/10.1080/14719037.2014.943268.
  70. Seuri, Allan, and Hannu Vartiainen. 2018. Yliopistojen rahoitus, kannustimet ja rakennekehitys [University Funding, Incentives and Structural Development]. Helsinki: Talouspolitiikan arviointineuvosto. https://www.talouspolitiikanarviointineuvosto.fi/wordpress/wp-content/uploads/2018/01/Seuri_Vartiainen_2018-1.pdf.
  71. Shahjahan, Riyad A. 2012. The Roles of International Organizations (IOs) in Globalizing Higher Education Policy. In Higher Education: Handbook of Theory and Research 27, ed. John C. Smart and Michael B. Paulsen, 369–407. Dordrecht: Springer.
  72. Shahjahan, Riyad A., and Meggan Madden. 2015. Uncovering the Images and Meanings of International Organizations (IOs) in Higher Education Research. Higher Education 69: 705–717. https://doi.org/10.1007/s10734-014-9801-6.
  73. Sinclair, Amanda. 1995. The Chameleon of Accountability: Forms and Discourses. Accounting Organizations and Society 20 (2–3): 219–237.
  74. Slaughter, Sheila, and Brendan Cantwell. 2012. Transatlantic Moves to the Market: The United States and the European Union. Higher Education 63 (5): 583–606. https://doi.org/10.1007/s10734-011-9460-9.
  75. Slaughter, Sheila, and Larry Leslie. 1999. Academic Capitalism: Politics, Policies, and the Entrepreneurial University. Baltimore: Johns Hopkins University Press.
  76. Slaughter, Sheila, and Barret J. Taylor, eds. 2016. Competitive Advantage: Stratification, Privatization and Vocationalization of Higher Education in the US, EU, and Canada. Dordrecht: Springer.
  77. Smeby, Jens-Christian, and Bjørn Stensaker. 1999. National Quality Assessment Systems in the Nordic Countries: Developing a Balance Between External and Internal Needs? Higher Education Policy 12 (1): 3–14.
  78. Steinke, Ines. 2004. Quality Criteria in Qualitative Research. In A Companion to Qualitative Research, ed. Uwe Flick, Ernst von Kardorff, and Ines Steinke, 184–190. London: SAGE.
  79. Stensaker, Bjørn. 2004. The Transformation of Organisational Identities. Interpretations of Policies Concerning the Quality of Teaching and Learning in Norwegian Higher Education. Doctoral thesis, CHEPS/University of Twente, Enschede.
  80. ———. 2014. Troublesome Institutional Autonomy: Governance and the Distribution of Authority in Norwegian Universities. In International Trends in University Governance: Autonomy, Self-Government and the Distribution of Authority, ed. Michael Shattock, 34–48. New York: Routledge.
  81. Stensaker, Bjørn, Liv Langfeldt, Lee Harvey, Jeroen Huisman, and Don Westerheijden. 2011. An In-Depth Study on the Impact of External Quality Assurance. Assessment & Evaluation in Higher Education 36 (4): 465–478. https://doi.org/10.1080/02602930903432074.
  82. Trow, Martin A. 1996. Trust, Markets and Accountability in Higher Education: A Comparative Perspective. Higher Education Policy 9 (4): 309–324.
  83. Vasikainen, Soili. 2014. Performance Management of the University Education Process. Doctoral diss., University of Oulu, Oulu.
  84. Vedung, Evert. 1997. Public Policy and Program Evaluation. New Brunswick: Transaction Publishers.
  85. Voegtle, Eva M., Christoph Knill, and Michael Dobbins. 2011. To What Extent Does Transnational Communication Drive Cross-National Policy Convergence? The Impact of the Bologna-Process on Domestic Higher Education Policies. Higher Education 61: 77–94. https://doi.org/10.1007/s10734-010-9326-6.
  86. Wholey, Joseph S. 1999. Performance-Based Management: Responding to the Challenges. Public Productivity and Management Review 22: 288–307.
  87. Witte, Johanna. 2008. Aspired Convergence, Cherished Diversity: Dealing with the Contradictions of Bologna. Tertiary Education and Management 14 (2): 81–93. https://doi.org/10.1080/13583880802051840.

Copyright information

© The Author(s) 2019

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Jussi Kivistö (1), Email author
  • Elias Pekkola (1)
  • Laila Nordstrand Berg (2)
  • Hanne Foss Hansen (3)
  • Lars Geschwind (4)
  • Anu Lyytinen (1)

  1. Faculty of Management and Business, Tampere University, Tampere, Finland
  2. Department of Social Science, Western Norway University of Applied Sciences, Sogndal, Norway
  3. Department of Political Science, University of Copenhagen, Copenhagen, Denmark
  4. School of Industrial Engineering and Management, KTH Royal Institute of Technology, Stockholm, Sweden
