Introduction

In this last chapter, we summarize the main findings from this extensive comparative study, draw conclusions, and discuss possible implications for research, policy, and practice. The starting point for the project FINNUT-PERFACAD (consult Chap. 1 of the current volume for details) was that the environmental conditions under which Nordic higher education institutions (HEIs) operate have changed dramatically during the last decade. Policy efforts aimed at modernizing the sector have paid considerable attention to the way in which public universities operate. A privileged focus has been given to aspects such as efficiency, effectiveness, and accountability (Fägerlind and Strömqvist 2004; Gornitzka and Larsen 2004). In addition to managing their internal operations in a more cost-efficient manner, public universities in the Nordic countries and elsewhere are increasingly expected to respond adequately to the needs of various external stakeholder groups (Jongbloed et al. 2008; Neave 2002). One of the mechanisms used to achieve these goals lies in enhancing the rationalization of internal structures and activities (Ramirez 2006, 2010) by, inter alia, promoting professional management (Amaral et al. 2003; Paradeise et al. 2009). As a result, most Nordic universities have developed extended administrative structures (at central and unit levels) capable of strategically supporting their primary activities (cf. Aarrevaara et al. 2014), and some have introduced recent changes in the nomination of formal leaders, such as filling the positions by appointment rather than election (Hansen 2017).

Yet, in spite of these trends, few studies have investigated, in a systematic and comparative manner, the effects such strategic measures have on the actual performance of individual institutions. This study has addressed this knowledge gap by investigating the impact of rationalization processes—with a focus on the rise of professional management (managerialism) and the strengthening of leadership structures—on the teaching and research performance of public universities in Norway, Finland, Denmark, and Sweden in the period 2003–2013. The research problem driving the project is the following:

  • To what extent are changes in leadership and management structures related to shifts in teaching and research performance in public universities across the Nordic countries in the last decade?

To address this question, we focused on three key dimensions: drivers, actors, and effects. The study adopted a mixed-methods design based on desktop research (a comparative database), a survey questionnaire, and interviews with staff at selected public universities (for details, consult Chap. 1 of the current volume).

Before moving on to the main findings of the project, we revisit a selection of earlier studies of the Nordic higher education systems as well as the conceptual backdrop.

The PERFACAD Project in Context: Earlier Studies on Nordic Higher Education

In many contexts, in particular from the outside, the Nordic countries are discussed as one system. This volume also contributes to that discussion with its explicit comparative approach. Over the years, Nordic higher education has been the focus of a number of studies. A decade and a half ago, Fägerlind and Strömqvist (2004) published an edited volume with contributions from all Nordic countries: Reforming higher education in the Nordic countries. Studies of change in Denmark, Finland, Iceland, Norway and Sweden. They write in their concluding chapter that the global economy has had a substantial influence on higher education in the Nordic countries, where the social function of education has shifted from welfare state social engineering to globalized market features. Further, they conclude that the academic oligarchy has lost power and that the role of the state is not as straightforward as it was before these reforms. During the 1990s, all Nordic countries increased their student participation rates, Finland in particular. All countries have had traditions of strict centralization of their higher education systems. However, more recent decentralization reforms have shifted steering from normative legislation towards funding and evaluation systems and have introduced external members on university boards. Performance-based funding systems were in place in all countries, based on the number of students and their achievements in the form of degrees or credits. By the early 2000s, all five Nordic countries had introduced a management-by-results governance model. HEIs have been given more autonomy with respect to programmes, internal organization, and finances. In all countries, designated organizations were created for the evaluation of higher education, and in all cases except Sweden, these organizations are somewhat autonomous from the Ministry.

At the beginning of the volume, Fägerlind and Strömqvist also ask whether the Nordic countries are similar or different. The answer they give is that it is “complex.” On the one hand, the countries have become increasingly similar due to, for instance, the Bologna system, globalization, and governance trends. On the other hand, they remained distinct regarding the organization of the tertiary education landscape: Finland, in particular, had chosen the most explicit binary sector, Norway was a front runner in the implementation of the Bologna degree structure, and Sweden’s higher education system was considered by the authors as too uniform and based on an ideology of “sameness.”

In a comparative project from the 1990s covering the United Kingdom, Norway, and Sweden, a number of similar conclusions were drawn. The final volume (Kogan et al. 2000) summarizes, “We noted how all three governments urged universities to adopt explicit quality assurance practices, market behaviour, stronger vocational missions and public accountability, but the policies came out differently” (200). The United Kingdom and Sweden moved in essentially opposite directions, with Sweden’s tradition of state planning giving way to self-regulation at the university level, while Norway was hesitant to “insinuate nationally devised practices.” The researchers also identified different national policy styles, where the English style was described as “heroic,” the Norwegian as “incremental,” and the Swedish as “adversarial.”

A more recent study of the Nordic countries was undertaken by Ahola et al. (2014). Regarding governance, they concluded that all national systems had strengthened institutional autonomy and introduced a new governance regime based on the delegation of state authority to HEIs through performance-based funding and evaluations. Managerial forms of governance have largely replaced collegial modes. This is particularly the case in Denmark and Finland, where the most “extreme” versions of the reforms are found and where universities have become more autonomous institutions. Other organizational aspects mentioned include the introduction of tuition fees, centres of excellence, doctoral schools, and mergers within and between universities. The authors interpret this as a transformation of the Nordic model of higher education as part of the larger transition from welfare state to welfare society, where “the state no longer solely takes the role as a protector, while to a greater extent expecting the higher education institutions to operate as entrepreneurs in a global market” (Ahola et al. 2014, 8).

Recently, based on research evidence from 12 European flagship universities, including universities in the Nordic countries, Maassen (2017) discussed why the outcomes of reforms are generally not in line with reform intentions. One explanation for this “governance paradox” is the neglect of the institutional trajectories of universities, commonly known as “path dependencies” (cf. Krücken 2003), given that universities are among modern society’s oldest still-existing organizations.

The comparative analyses mentioned above not only tell us something about the individual countries, but they also shed light on historical developments in relation to neighbouring countries. Our results build upon these empirical and theoretical insights about the Nordic higher education systems. Before we discuss the findings, we will briefly revisit the theoretical backdrop and the methodology used in the project.

Revisiting the Conceptual Backdrop

The theoretical approach taken in this project, discussed at some length in Chap. 1, was inspired by a typology developed by Norwegian scholar Johan P. Olsen (2007). This typology focused on various aspects of governance of universities and also stressed the ability of universities—as institutions—to resist, adapt, and respond to change initiatives from external and internal actors. It emphasizes the resilience of universities and their capacity to fight back against unwanted and perceived intrusive policy and management initiatives. Olsen suggested four visions, or typologies (along two dimensions, autonomy vs. conflict), for the modern university based on different assumptions about what the university is for as well as the circumstances under which it will operate appropriately. At the heart of Olsen’s inquiry is the question, what type of university and for what type of society?

Olsen’s neo-institutional model (Table 9.1) captures various dimensions of modern universities: external–internal, change–stability, market–collegiality–bureaucracy. Universities are highly institutionalized organizations laden with rules, norms, and regulations. Traditionally, they have been described as loosely coupled and bottom-heavy (Clark 1983), with an impressive capacity to resist, delay, and simply not do what is expected of them by external stakeholders. This picture has changed in the last decades, and present-day universities are increasingly described as “strategic actors” (Krücken and Meier 2006), more tightly coupled, rational, and even “complete” organizations (Seeber et al. 2015), yet still heavily dependent on the external environment for resources, legitimacy, and power (Bleiklie et al. 2015). Bleiklie and colleagues have recently introduced the concept of “penetrated hierarchies” for understanding universities as organizations. The authors stress the introduction of more hierarchical bureaucratic governance of universities, the conflict between leadership and academic staff, and the relationship between members of the organization and key external audiences who penetrate their organization by influencing the legitimacy of control models and resource decisions.

Table 9.1 Visions of the European university

As outlined in Chap. 1 of this volume, the theoretical framework being adopted resulted in an operationalization comprising six organizational/management mechanisms, listed below and related to organizational performance:

  • Strategy

  • Decision-making structures

  • Organizational structures

  • Accountability measures

  • Funding arrangements

  • Cultural climate

These mechanisms were further operationalized in a number of themes in the interviews and survey discussed in Chap. 1. We also formulated a few basic assumptions in light of the research problem and building on Olsen’s work:

Strategy

  • H0: An overarching and penetrating institutional strategy boosts performance.

  • H1: An overarching and penetrating institutional strategy alienates staff and negatively affects performance.

  • H2: Strategies that are developed through participation boost performance.

Decision-Making Structures

  • H0: More hierarchical decision-making structures stimulate increased performance.

  • H1: More hierarchical decision-making structures negatively affect performance.

  • H2: Participatory decision-making structures stimulate increased performance.

Organizational Structure

  • H0: Larger, more interdisciplinary structures boost performance.

  • H1: Larger, more interdisciplinary structures negatively affect performance.

  • H2: Diverse structures are best fitted to the diversity found in universities, and diversity boosts performance.

Accountability Measures

  • H0: More systematic and regular (intense) reporting boosts performance.

  • H1: More systematic and regular (intense) reporting negatively affects performance.

  • H2: It is the way and form of reporting that affects performance.

Funding Arrangements

  • H0: More incentives and results-oriented funding boost performance.

  • H1: More incentives and results-oriented funding negatively affect performance.

  • H2: A mixed funding arrangement is the best way to boost performance.

Cultural Climate

  • H0: Systematic training and competence building in the organization boost performance.

  • H1: Systematic training and competence building (which takes time away from primary activities) negatively affect performance.

  • H2: Cultural change through participatory and trust-based processes drives performance.

As mentioned in Chap. 1, these hypotheses have not been tested in each chapter, but they have been instrumental in the operationalization of the study. We will now return to these mechanisms, themes, and assumptions and discuss them in relation to results presented in the empirical chapters composing Part II of the current volume.

Comparative Thematic Findings

Strategy

Starting with the strategy theme, earlier research has shown how strategies have become part and parcel of modern universities, used for planning and steering as well as for organizational identity formation (Fumasoli et al. 2015). Chapter 7, in particular, sheds light on two critical aspects of strategies: who gets involved in strategic processes and to what extent these processes affect behaviour across the organization. The results show that participation in strategy work varies across cases and is often low, which in turn affects the legitimacy of the strategic process itself. The data show that some academic staff are not involved in strategic processes at all, which alienates them from their own institution’s goals and values. Furthermore, the authors show that strategies at lower levels are considered more relevant by academic staff: whereas less than 10% of survey respondents were involved at the university level, around half of the academic staff reported participation at the unit level.

These findings suggest a growing gap between the values, practices, and priorities expressed in strategies by university managers and administrators and those held by floor-level academics (Pinheiro and Stensaker 2014; Ramirez 2010). Thus, when we talk about universities as strategic actors, not all employees are necessarily included, but rather only a small portion of the total staff (Pekkola et al. 2017). Strategies have the capacity to reshape the university’s power relationships, engagement, legitimacy, and organizational values. However, where academic staff define strategy in terms of the benefit to individuals or units, there is no common understanding of what the strategy is, either within or across the four Nordic countries.

It is difficult to assess how strategies have affected performance in teaching and research. That said, the so-called strategic turn seems to be associated with a new culture of performativity and accountability (Hansen et al. 2019). Our data show that assistant professors and lecturers are least influential in decision-making processes for institutional strategies. Instead, they play a significant role in unit-level strategy work and especially in the grass-roots implementation, or localization or translation (Sahlin and Wedlin 2008), of institutional strategies. On the basis of survey results and interviews, the main observation made is that no single group is fully dominant in strategy formulation, and there seems to be no common arena where the strategy dialogue takes place (Battilana 2006). The findings regarding strategy also indicate that the process is as important as the outcome. Without dialogue and buy-in from internal stakeholders, the content of the strategies will remain irrelevant and the effects minimal.

Decision-Making and Organizational Structures

Regarding decision-making and organizational structures, some important changes have taken place in the Nordic countries. External stakeholders have become members of advisory councils and university boards. A corporate-like governance structure, including boards with a majority of external members and a politically approved chair, has been introduced (Benner and Geschwind 2016). In Denmark, this corporate-like governance structure has been mandatory for all universities since 2003, while political approval of board chairs has only recently been introduced. Here, the former autonomy of universities has been restricted. In Denmark and Finland, formerly elected leaders have been replaced by appointed leaders. The new Universities Act that went into effect in Finland in 2010 changed the legal status of universities from being part of the state administration to independent legal entities. Legislative regulation of central aspects such as staffing policies (in particular, regulation of staff qualifications, recruitment, and remuneration) and the internal governance of universities was significantly changed; currently, Finnish universities enjoy a relatively high level of autonomy compared to many other European countries (see Pruvot and Estermann 2017).

In Norway, managerial structures were changed through the “Quality reform” of 2003–2004, with an effort made to enhance political and social accountability by including politically appointed stakeholders on university boards. The Ministry of Education introduced a model in which the board appointed its chair and also appointed the rector. This model replaced the traditional one in which the rector was elected by the university and also chaired the board (Gornitzka and Larsen 2004). Still, despite the Ministerial preference for the appointment model, institutions can voluntarily choose which model to follow (or they can follow a combination of the two). This has resulted in a hybrid version in many universities, with both appointed and elected leaders in key roles. The aim in giving universities the possibility of choosing their own governing model was twofold: to increase autonomy (Stensaker 2014), on the one hand, and to respect the traditions of universities as collegial entities, on the other (Olsen 2007).

The decision-making structures in Sweden have changed during the last two decades. The country has a long tradition of central state governance based on planning. However, this changed during the 1980s and 1990s across many sectors, higher education included. During the 1990s, following a groundbreaking reform in 1993, the higher education sector was fundamentally deregulated through a reduction in central laws and ordinances and increased formal autonomy for HEIs. Although most universities remained state agencies, the autonomy (or “freedom”) reform transformed two HEIs, Jönköping University and Chalmers University of Technology, into private foundations upon application to the government. The main differences concerned internal organization and the regulations around the hiring of academic staff. Academic positions had until then been centrally regulated, but from that point on, professorships could be initiated by each HEI. In 2011, another autonomy reform was implemented, deregulating the internal organization of HEIs and academic positions. However, an even more far-reaching autonomy bill suggesting that Swedish HEIs become private foundations was rejected by the sector a couple of years later (Geschwind 2017).

In Chap. 6, academic leadership is in focus. The pre–New Public Management (NPM) state-regulated system meant detailed centralized decision-making about, for instance, the hiring of professors and the introduction of new educational programmes. The findings from the survey and interviews reveal that the roles of academic leaders are changing, most dramatically in Denmark and Finland, but also in Norway and Sweden, which have been the target of more evolutionary reforms. The perceived decision-making power of leaders differs significantly between countries, with Danish managers reporting the lowest degree of power. This finding is, in itself, rather interesting since the rationale for implementing NPM-inspired unitary management models (with centralization of decision-making) is to empower specific individuals (formal managers) (Berg and Pinheiro 2016). Accordingly, such empowerment should be reflected in the views of academic staff, who in the interviews stressed what they perceived as increasing managerial power.

The traditional professional, collegial academic leadership, based on rotating systems, election among peers, and collegial decision-making, has been complemented with, and in some places replaced by, a “managerial logic” (cf. Deem and Brehony 2005) grounded in order-giving, performance measurement, and appointed managers as a new academic profession (managerialism). A related trend is the greater focus on individual leaders and managers (leaderism) (Ekman et al. 2017). This development is met with different opinions by HEI employees, ranging from deep concern in Denmark to moderate appreciation in Finland and Norway and occasional frustration expressed by Swedish managers with regard to the influence of external stakeholders. Hence, as with the other themes, reforms have not been implemented to the same depth and at the same pace across and within universities. The ability and willingness to follow a stricter, more corporate-like management style are unevenly distributed.

Accountability Measures

As public organizations dependent on the support of several stakeholders (Benneworth and Jongbloed 2010), universities in the Nordic countries face a number of accountability demands. In recent decades, a number of reforms have been implemented in order to increase the accountability of universities (Hazelkorn et al. 2018; Hansen et al. 2019). Professional accountability is important in relation to the quality of educational programmes and particularly the quality of research. However, professional accountability has, to some extent in all countries, been challenged or at least complemented by political and social accountability. Political accountability has been enhanced through the introduction of New Public Management instruments such as performance-based funding, contract governing, and evaluation “machines” (Dahler-Larsen 2012).

This has kept political expectations, and thus also political accountability, at a high level. Higher education in general and universities in particular continue to be at the core of educational policies and, therefore, of political interest. At the concrete level, this has been evident in the government programmes and action plans of past ruling cabinets, but also in the prominent role of the European Commission (the “modernization agenda”) and the importance of skills and research to the Europe of Knowledge more generally (Maassen and Stensaker 2011; Pinheiro 2015). At the same time, important stakeholders such as trade unions, student unions, and employer organizations (such as the Confederation of Finnish Industries) have continued to keep universities and higher education at the forefront of their political agendas (Klemenčič 2018).

Professional accountability in Finland has remained strong alongside the other forms of accountability. For instance, various scientific associations operating under the Federation of the Finnish Learned Societies are actively exercising their gatekeeping role, especially in publishing. Scientific associations are often responsible for publishing scientific journals and other publications and appoint the editorial boards and editors for these journals. Also, the various trade unions, such as the Finnish Union of University Professors and the Finnish Union of University Researchers and Teachers, continue to play a critical role in upholding and safeguarding professional norms and values of the Finnish academic profession.

The majority of Norwegian HEIs are state owned, but private institutions are granted the same state funding as the public ones. As for professional autonomy, there has been an increased focus on the quality of teaching and alignment in educational programmes, but also on research quality as well as quantity. This increased focus on quality and quantity associated with a bureaucratic and political form of accountability is challenging the professional autonomy of academics. That being said, professional accountability remains strong, both as a stand-alone aspect of academic work and as intertwined in political accountability. As in Finland, university teachers and researchers’ unions are strong voices for the Norwegian academic profession. Peer review is an ever-growing activity, for example, in conferences, research proposals, academic publications, and hiring and promotion of academic staff, and senior academics spend a significant amount of time assessing colleagues.

One specific aspect of accountability measures is evaluation, highlighted in some detail in Chap. 8. Evaluative procedures have become widespread in Nordic higher education since the 1990s.

As shown in Table 9.2, there are different evaluation practices within the Nordic region, although the ideas behind developing evaluation practices are similar. In relation to educational tasks, the Nordic countries adopt slightly different approaches than those of the Bologna process, including different indicators for their performance-based funding systems. In relation to research, Finland and Norway have developed national evaluation systems that are driven by the Finnish Academy and the research council, respectively. In Denmark and Sweden, there are no national systems, and the universities have more autonomy to organize evaluations themselves. Hence, with regard to evaluations, we find evidence of yet another case of policy convergence combined with diversity when it comes to implementation (Pollitt 2002; Maassen 2017).

Table 9.2 Evaluation models and procedures

A general pattern across the Nordic countries is that policy-driven evaluations have been institutionalized and expanded, and management-oriented schemes—sometimes mirroring the national systems—have gained importance. Last but not least, the academically driven evaluations have proliferated as well. Our findings indicate that evaluations with similar evaluands lack coordination. This has created a feeling of evaluation overload among academics, although, generally speaking, many academics still regard national evaluations as legitimate tasks.

Another important finding concerns the usefulness of evaluations. Although the policy-driven evaluation schemes in all the Nordic countries seem to be largely accepted, academic staff do not consider these effective as tools for improving performance, either in research or in education. There seems to be a mismatch between academic and managerial conceptions of what constitutes and supports quality and performance. This is particularly the case in Denmark, where domestic academics stand out as the most negative. Explanations that are discussed in Chap. 8 include the fact that the evaluations have been perceived as intrusive managerial instruments adding extra workloads without tangible returns or rewards for academic staff.

Evaluations have taken on a central role in the changed governance of higher education in all four countries, which in itself reflects intensified accountability demands due to the growth of the higher education sectors and the corresponding increases in the public resources allocated to the sector (see Chap. 3). It also shows that decentralization, in the form of increased institutional autonomy, occurs in tandem with centralization initiatives (managerialism and leaderism), as detected in earlier studies in the Nordics (Torjesen et al. 2017).

Performance Measurement and Management

The empirical evidence provided throughout this volume shows that performance measurement and management have become important, and increasingly prominent, principles in higher education governance in the Nordic countries. There are many common features in the actions taken by the respective governments, but also important differences. Performance management has been criticized for encouraging quantity at the expense of quality, and this criticism has recently been followed by a political request to incorporate quality criteria into the performance management approaches.

Performance management was introduced in educational funding in Denmark and Sweden already in the 1980s and 1990s, and in today’s system, educational programmes are funded solely according to a performance principle, where funding is based on the number of students passing exams as well as on bonuses given if students complete their studies on time. In Denmark, it has been decided to further develop the funding system to include employability criteria as well as quality aspects, possibly linked to student assessments. Since 2009, an increasing part of the funding for basic research, in recent years amounting to 20%, has been performance based. The formula includes the number of graduates from master’s and PhD programmes, the ability to attract external funding, and the counting of publications. A quality aspect is included in the counting of publications, as publication channels are divided into two groups, one releasing more points and resources than the other. Universities also negotiate performance contracts with the Ministry. Hitherto, contracts have not been related to funding, but the institutions have to document goal attainment, and it has recently been decided to link goal attainment to funding, starting in 2019. In Denmark, salaries are only marginally linked to performance, although this link is gradually gaining importance.
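
To make the activity-based funding principle just described more concrete, the following minimal sketch (in Python) computes a teaching allocation that follows the number of students passing exams plus a bonus for on-time completion. The tariff and bonus values are hypothetical placeholders for illustration only, not the official Danish or Swedish rates.

    def education_funding(passed_student_fte, on_time_completions,
                          rate_per_fte=50_000.0, completion_bonus=10_000.0):
        """Stylized annual teaching allocation in a notional currency."""
        # Funding follows activity: each passed student full-time equivalent
        # releases a tariff, and each on-time completion releases a bonus.
        return passed_student_fte * rate_per_fte + on_time_completions * completion_bonus

    # Example: 1,200 passed student FTEs and 300 on-time completions.
    print(education_funding(1_200, 300))  # 63000000.0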

In Finland, after the reform of 2010 making universities legally independent from the state hierarchy, the university sector can be considered one of the administrative sectors governed and financed by the state in which the ideals of NPM are most comprehensively applied (Kauko and Diogo 2012). Some recent empirical studies have also demonstrated the effectiveness of performance-based funding in increasing the performance of Finnish universities (see Seuri and Vartiainen 2018). Although the execution of performance management by the Finnish Ministry of Education and Culture has been highly structured, its further application within individual universities, in their own internal management and strategies, is not controlled by the Ministry. As a matter of fact, individual universities, and in many cases also their subunits, such as faculties, have developed their own internal variations of performance management (Kallio and Kallio 2014). The extensiveness of performance-based funding in providing resources to universities, the professionalization of academic and administrative management positions, the use of contractual arrangements (performance agreements), the outsourcing and centralization of support and administrative services in universities, and the use of various types of competitive funding are examples of where the influence of performance management is most visible. One important aspect of performance measurement is the salary system for university personnel. Since 2008, the salary system at universities, encompassing both academic and administrative staff, has been based on performance measurements, with a maximum of one-third of the salary being performance based. Even though salary and other performance-based financial incentives have not proven to be the main motivation for Finnish academics to work harder (see Kivistö et al. 2017), they are applied as a means of imposing system- and institutional-level incentives at the individual level, thereby drawing attention to what is considered valuable.

The funding system in Norway provides a more stable budget than the Danish and Finnish systems, as 70% of the funding comes in the form of a block grant. Still, the 30% that is performance based increasingly functions as a policy tool used to stimulate improvement in both teaching and research, but also as a managerial tool at the institutions. Teaching indicators constitute the largest share (24%), focusing on the throughput of students and internationalization. The research indicators (the remaining 6%) are related to the throughput of PhD students, external funding of research (e.g. from the EU and the Norwegian Research Council), and, lastly, metrics related to publications. The Norwegian Publication Indicator was introduced in 2004 as a system to measure publication activity. As a policy and performance management tool, such research indicators are meant to stimulate excellence and productivity, but also to increase the accountability of public research. Another important aim is to align research with societal and economic needs (Aagaard et al. 2015). Despite these broad objectives, the financial role of the Indicator is marginal, as it only distributes 2% of the funding to the sector (ibid.). This funding system, based on metrics and a market model, has, on the one hand, increased autonomy within the universities, as the boards are responsible for setting priorities within the allocated financial frames and for aligning their activities with the goals set for the sector. On the other hand, ex-post control has increased, and contractual relationships between universities and the state based on performance metrics are replacing the trust-based foundational pact (Stensaker 2014). The increased autonomy is counteracted by controlling instruments, reporting systems, and the financial incentive systems that follow students and research activities (Christensen 2011).
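
Read purely as an illustration of the proportions cited above (roughly 70% block grant, 24% teaching-related indicators, and 6% research-related indicators), the sketch below decomposes a notional total allocation; the budget figure is invented, and the split is a simplification rather than a reproduction of the actual Norwegian funding model.

    def decompose_allocation(total):
        """Split a total state allocation into the broad components described above."""
        return {
            "block_grant": 0.70 * total,          # stable basic funding
            "teaching_indicators": 0.24 * total,  # student throughput, internationalization
            "research_indicators": 0.06 * total,  # PhD throughput, external funding, publications
        }

    # Example with a notional total budget of 1 billion (any currency unit).
    for component, amount in decompose_allocation(1_000_000_000).items():
        print(component, round(amount))
    # block_grant 700000000
    # teaching_indicators 240000000
    # research_indicators 60000000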

In Sweden as well, performance measurement has become more important over time (Geschwind 2017). As mentioned above, one of the most dramatic changes in Swedish higher education was the introduction of performance-based funding in education, based on the inflow and throughput of students. The previous system was criticized for being too rigid, based on central planning, and not driving quality sufficiently. The latter argument has also been used against the current system. Since funding is so closely tied to student success, there have been discussions about the risk of lowering the demands for passing students. The system is based on the idea that different educational areas bear different costs; a student in the humanities is supposed to cost far less than an engineering student, for instance. Another effect of this system has been increased marketing activity by HEIs. An important aspect of the system is the use of a “ceiling” for the number of students recruited: the allocation of funds has a limit, linked to a maximum number of students. Throughput of students has been a controversial quality indicator, and whereas there have been occasional discussions on the risks of lowering demands on students, there are also examples where student throughput has been linked to incentives. Generally speaking, though, this has not affected individual academics but rather organizational units and HEIs.

In research, the traditional model was block funding based on historical principles rather than performance, and direct state funding made up the bulk of total research funding. Lately, there has been a development towards more competitive external funding rather than direct state funding, and as of 2018, external funding makes up slightly more than half of the total. A milestone in Swedish research policy was the introduction, in 2009, of performance-based allocation as part of the direct state funding. Since its introduction, 10–20% of the total funding has been allocated to HEIs based on performance, as reflected in publications and external funding.

The national systems of performance measurement and management are described in Table 9.3. So what can be said about the actual effects of these systems, both for universities and for individual academics? The in-depth empirical studies in this volume (Chaps. 4 and 5) focus on research rather than education, which is no coincidence: the performance management systems are primarily used for research, although other academic activities are also discussed, to varying degrees, in terms of performance. Following Dahler-Larsen (2014), it can be concluded that research performance measurement has had the greatest constitutive effects on academic staff.

Table 9.3 Main components of the performance-related research funding systems in Denmark, Sweden, Finland, and Norway

The results discussed in these two chapters show that performance-based research funding systems have had notable effects on Nordic universities. Performance indicators are implemented for resource allocation and decision-making in ways that shape how university actors understand and perceive research activities. Not only do they contribute to a rationalization of formal university structures (Ramirez and Christensen 2013), but they also subtly contribute to an institutionalization and consolidation of research metrics as organizing principles of research (Geschwind and Pinheiro 2017). Even though there are concerns within universities, metrics are generally accepted and even appreciated as a means of enhancing transparency and of assisting university leaders in their efforts to set priorities and improve performance. In terms of incentives, publication practices are heavily influenced in all countries: researchers weigh the implications of where to publish as defined by the performance measures. Most important are “reputational factors” (Kwiek 2016) rather than remunerative incentives such as bonuses and direct salary consequences. However, at some universities in Finland and Denmark, remunerative incentives have become very important tools, putting pressure on academics to publish high-quality research. The use of metrics is important nevertheless, and the establishment of national research metrics also influences how success is communicated internally in universities. The technical legitimacy of the measures is generally high, meaning that metrics are perceived as accurately assessing research performance. There are some interesting differences between the countries, however, with more criticism aimed at the crudeness of measures in Norway, Denmark, and Finland, where publications are categorized on a scale with few levels. This is even seen as a threat to high-quality research, as it might prompt the production of more publications of lesser quality. The institutionalization of performance measures was also found to vary across scientific fields and institutions. The results from this study show that researchers in the social sciences, which were later to adopt bibliometrics, now also act in accordance with the measures. An interesting aspect is also the reconstitution of research as a result of performance measurement: the importance of the publication outlet affects how researchers make sense of research. Again, in Denmark, Finland, and Norway, with their respective systems of publication levels, this is clearly evident, whereas in Sweden, it was not discussed.
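
The behavioural pull of such level-based schemes can be illustrated with a small, hypothetical publication-points sketch in the spirit of the two-level systems discussed above; the point values and the simple author-share rule are invented for illustration and do not reproduce the official Norwegian, Danish, or Finnish weights.

    # Assumed, illustrative weights: a "level 2" outlet releases more points
    # than a "level 1" outlet (the actual national weights differ).
    LEVEL_POINTS = {1: 1.0, 2: 3.0}

    def publication_points(level, local_authors, total_authors):
        """Points credited to an institution for one article, shared among co-authors."""
        return LEVEL_POINTS[level] * (local_authors / total_authors)

    print(publication_points(2, 1, 1))      # 3.0: a sole-authored level-2 article
    print(2 * publication_points(1, 1, 1))  # 2.0: two sole-authored level-1 articles
    print(publication_points(2, 2, 4))      # 1.5: level-2 credit diluted by external co-authors

Under these assumed weights, a single level-2 article counts for more than two level-1 articles, which illustrates the kind of steering effect, and the crudeness concerns, reported by respondents.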

Funding Arrangements

Among the OECD countries, the Nordic countries are, year after year, in the group with the highest levels of public expenditure (relative to GDP) on HEIs. Compared with other countries in the Western world, the higher education and research sectors of the four countries studied here have remained largely unaffected by the latest financial crisis. One issue affecting the role of higher education in Nordic societies has been the introduction of fees for non-EU students, in Denmark in 2006 and in Sweden in 2011. However, that issue lies beyond the scope of the empirical studies in this volume. Another topic recently discussed in the Nordic countries is the relationship between external and internal funding for research and, in turn, its consequences for performance. In an often-cited report, the Swedish scholars Öquist and Benner (2012) argued that systems with more direct state funding perform better in research. One of the benchmarks in this study was Denmark (the others were the Netherlands and Switzerland). This has led to a political debate in Sweden on the balance between direct state funding and competitive external funding (Öquist and Benner 2012). As shown in Fig. 9.1, there are significant differences between the Nordic countries in this respect. In Norway, only about 30% of total research funding is external, whereas the corresponding figure in Sweden and Finland is over 50%.

Fig. 9.1 Development in external funding as a percentage of total funding for research at Nordic HEIs, 1981–2015 (Denmark, Finland, Norway, and Sweden; all countries show an increasing trend). Source: Chap. 5 in this volume

The relationship between internal and external funding also influences power relations within HEIs. Chapter 5 includes a discussion about how the increasing proportion of external funding affects authority relations surrounding research activities (Whitley 2011; Whitley and Gläser 2014). It is concluded that, as a result of these developments, authority over research has decreased for managers and increased for funders. In addition, successful researchers (i.e. those who win grants) gain more freedom in relation to their managers. This is also discussed in Chap. 6.

Concluding Discussion

Having presented the main findings across the core categories and themes investigated in the study, we are left with a critical question: how does this volume contribute to our knowledge about performance, leadership reforms, and universities as organizations? The richness of the data and our comparative approach have made a number of conclusions possible. These were discussed thematically above, although, admittedly, the initial project question on the relationship between leadership reforms and actual performance, in all its crudeness, turned out to be more complicated to assess than initially anticipated, not least because of the challenges we faced in finding appropriate indicators for comparison, not to mention the different definitions (e.g. what counts as student or staff categories) across the Nordic countries. In fact, one of the major conclusions made by the research team is that there are, indeed, four distinct Nordic higher education systems, each with its own dynamics and peculiarities as well as sets of interrelated (nested) variables, which makes any comparative or causal assessment a challenging task. That being said, and guided by Olsen’s visions of the European university, we can unequivocally conclude that the environmental conditions under which Nordic HEIs operate have changed dramatically during the last decade. With reference to our initial hypotheses, it can be concluded that the H0s, based on a generally rationalist view of universities, have guided the policies and reforms in the Nordic countries. However, our survey and interview data reveal more nuanced and multifaceted experiences, more closely related to the H1s and H2s, which emanate from an institutionalist view of universities.

Policy efforts aimed at modernizing the sector have paid considerable attention to the way in which public universities operate. A privileged focus has been given to aspects such as efficiency, effectiveness, and accountability. Most Nordic universities have developed extended administrative structures (at central and unit levels) capable of strategically supporting their primary activities, and some have introduced recent changes in the nomination of formal leaders, moving to appointed rather than elected positions. Here, a clear distinction emerges between Denmark and Finland, on the one hand, and Norway and Sweden, on the other. The more radical reforms in the former two countries have brought with them a development towards managerialism and leaderism that can be traced in the other two countries as well, albeit not to the same degree. As expected, aspects of all four of Olsen’s visions appear in the findings. As a general conclusion, though, we find signs of movement towards universities becoming more hierarchical, bureaucratic organizations in which the modus operandi associated with the “community of scholars” has gradually been replaced by a market-driven logic built around entities that are, formally speaking, more “autonomous” but also highly dependent on the external environment (Sahlin 2012). There are also signs of change in what Olsen calls the “representative democracy,” where the role of elected collegial bodies has gradually shifted. We therefore find the concept of “penetrated hierarchies” introduced by Bleiklie et al. (2015) useful for unpacking and explaining the complex structures of Nordic universities. The authors identify a new institutional template for organizational control, stressing the virtues of a hierarchical bureaucratic model that creates pressures within universities. These pressures are mediated by actors at different levels of the organizational field. This conclusion is, indeed, also valid in our analysis. Furthermore, we found empirical support for the second main conclusion of Bleiklie et al. (2015), namely that control models are associated with ongoing power struggles between leadership and professionals, which, in turn, are partly contingent on their respective control of external resources. Stated differently, our findings reveal that social standing (Battilana 2006), legitimacy (Deephouse and Suchman 2008), and resource dependencies (Pfeffer and Salancik 2003) do matter within the context of change dynamics in universities as modern organizations. What is more, these dimensions reinforce (and are tightly nested in) one another, thus making any causal claims about the link between structural change and performance a daunting task.

Following ongoing evidence of the transformation of universities into strategic actors (Whitley 2008), another important conclusion arising from this research project is that universities are active entities, not only through the collective efforts of their employees but also as organizations. Contrary to what was the case in the past, it has become rather important for university actors (at multiple levels) to initiate and show activity (Karlsson 2016; Geschwind 2018). One explanation for this is found in the need for legitimacy in the eyes of external stakeholders and, ultimately, taxpayers (Suchman 1995). The formulation of strategies (Chap. 7) is one such example; the launch of evaluations (Chap. 8) is another; and the introduction of management and measurement systems (Chap. 6) is a third. Yet another is the pressure from governmental agencies to respond to demands for accountability, efficiency, and effectiveness (cf. Hazelkorn et al. 2018). In combination with more ambitious professional leaders and managers, this has created a sector packed with initiatives, some of which are aligned, some overlapping, some co-existing, and some conflicting (Geschwind 2018). Evaluation provides an interesting example in this regard, where there is now a combination of policy-initiated, managerial, and professional evaluations making up a wide array of initiatives.

It is also clear that reforms have similar aims and primary rationales across the Nordic countries. The close collaboration between the countries and the “travelling of ideas” (Sahlin and Wedlin 2008) between them has created convergence at the policy level, similar to trends found elsewhere in Europe (Witte 2008). That said, it is worth pointing out that the operationalization of these ideas—such as stronger, professionalized management; the use of metrics and strategies; and the roles of external stakeholders like funding bodies—differs significantly, and there is plenty of room for governments and university leaders to manoeuvre. One distinctive difference between the four countries is the introduction of increased formal autonomy for universities in Denmark and Finland and the accompanying changes in the recruitment and appointment of academic managers. Another distinct difference is the use of publication points in all countries except Sweden. The discussion about “level 1” or “level 2” publications seems to have become institutionalized in these three countries and has been found to affect researchers’ behaviour, although the effects are deeper in some scientific fields than in others.

Performance measurement and management have proliferated as well, albeit with important differences. First, it should be noted that performance is discussed more in relation to research than to education. The metrics used deeply affect researchers in all countries, but particularly so in Denmark, Finland, and Norway, where each publication is assigned a number and thus made easily measurable. This complements other available metrics, such as the h-index, the impact factor, and discipline-specific lists of prestigious journals, which are still the most common ways of measuring excellence in some scientific areas. Although performance-based funding has been used in education in all countries, with slight variations, the performance measures used there—basically input/output—are less directly related (or even questionably related) to quality and performance. This, in turn, might have consequences for the quality of education, an issue that needs further research.

Publication statistics show that performance has been high in the Nordic countries and also that performance is, today, more transparent, measurable, and comparable. However, there is also a growing critical discussion on the concept of performance and its relationship to the quality and impact of research. Some of the findings in this project indicate that performance management systems encourage researchers to publish too much and that researchers are more eager to apply various strategies in order to add “points” to their résumés than to pose challenging and meaningful research questions (Seeber et al. 2019). Not least in the social sciences, this has been increasingly debated (Alvesson et al. 2017). Future studies should look into how this agenda develops and how the initiatives implemented influence university performance in relation to researcher behaviour. A similar effect is found in applications for competitive grants, particularly in systems such as the Nordic ones, where external funding is an important part of the incentive structure and has become a goal in itself rather than a means. Both HEIs and individual academics apply for ever more research grants, not only to sustain a perceived optimal level of activity but also for the merit it confers, leading to growth at all levels and to the casualization of academic staff.

As mentioned earlier, national differences and similarities have appeared in our project. Our case universities included both flagship universities and so-called regional universities. The latter term is, not least, controversial, and our impression is that it is considered pejorative and is not necessarily used (at least in Denmark and Sweden). Being linked to the region is important, but “being regional” is less attractive, as identified in earlier inquiries (Pinheiro 2012). We have found few differences between these two types of universities. Some trends worth exploring in future studies include the relatively greater importance of education (and thus its stakeholders) and the more managerial type of steering, with appointed managers even in Norway and Sweden. We also selected soft and hard scientific fields in order to control for differences across the sciences. Here, too, we found few significant differences. Worth mentioning, however, are the greater dependence on external funding and the greater acceptance of the use of metrics for measuring quality in the hard sciences.

Performance measurement and management have, indeed, created universities different from those that existed before the implementation of NPM. For many senior academics who experienced academic life prior to the NPM reforms, the changes have been rather dramatic. Some of these voices have been heard in this project. In contrast, for younger academics, this world of performance indicators is part and parcel of being an academic in the twenty-first century. The development over time and generational shifts are important. Further (longitudinal) studies of early career researchers’ perceptions of current developments within universities are necessary. Finally, we need to continuously discuss how evaluation, measurement, and management systems affect academic life and its core activities of research and education. We surely hope this volume has encouraged colleagues across the social sciences to pursue these and other related inquiries in the near future.