National Evaluation Systems

  • Christina Segerholm
  • Joakim Lindgren
Open Access
Part of the Evaluating Education: Normative Systems and Institutional Practices book series (ENSIP)


This chapter describes and analyses the national reform context for evaluation and quality assurance (EQA) in Swedish higher education, along with the designs of the different national EQA systems during 1995–2014. This historical account establishes the background for the coming chapters in the book. Our aim is to scrutinise the relationship between the EQA systems and governing. We used government bills, official reports, research reports, articles, and books to describe and analyse the different policy contexts and EQA systems. In the analysis, we observe shifts and continuities as institutional reproduction and change. In this chapter, we also discuss governing in terms of the different expectations raised by the national EQA systems and liken the historical expansion of the systems to an evaluation machinery.


Keywords: Design · Expectations · Evaluation systems · Governing


This chapter, in addition to forming the background for the coming chapters in this book, contextualises the Swedish national evaluation and quality assurance (EQA) systems and explicates their national reform contexts.1 Higher education is an area that European and global policy efforts have greatly influenced in recent years. The Bologna declaration and the joint work on developing common indicators for assessing quality in higher education are far-reaching examples of such policy work. In parallel with other policy areas, contemporary education policy strongly promotes the idea of systematic EQA and is part of what Power (1997) has discussed as the “audit society”, Dahler-Larsen (2012) as the “evaluation society”, and Neave (1998) as the “evaluative state”. These European policy efforts have also influenced the Swedish national context. Thus, we recognise the importance of global and European influences (see, e.g. Ozga et al. 2011; Grek and Lindgren 2015), but in this chapter, we concentrate on exploring the relationship between national EQA systems and governing in the Swedish context. This account, we believe, can also provide insight into how changes in other countries’ EQA systems are part of the contemporary governing of higher education.

Over the last few decades in Swedish higher education, various national EQA systems have been decided upon, developed, and put into effect. Over time, these systems have displayed diverse political purposes and directions and exhibited different designs. The ramifications that emerged from the policy context and the design of these systems form part of the complex and comprehensive work of governing. Here, our objective is to provide a historical account: to describe and analyse the national EQA systems for higher education, their designs from 1995 to the 2011–2014 system, and their relation to governing. We explore the political process leading up to the 2016 system in the chapter “Hayek and the Red Tape: The Politics of Evaluation and Quality Assurance Reform – From Shortcut Governing to Policy Rerouting” and scrutinise this system in detail in the chapter “Re-launching National Evaluation and Quality Assurance: Expectations and Preparations”.

In this chapter, the following questions directed our analysis and also organised our presentation of the national EQA systems:
  • What is being evaluated? Why? By whom? How? What are the consequences in terms of expectations?

  • What are the implications for higher education governing?

In the following, we present a theoretical frame for our understanding of the different national EQA systems that have evolved over the past 25 years. Thereafter, we describe the major reforms in Swedish higher education as a context for the design and use of the EQA systems. We divide this description into the periods during which the respective designs were in operation. Finally, we discuss the EQA systems in relation to governing by expectations, drawing inductively on Dahler-Larsen’s (2012) idea of evaluation machines (see next section).

A Theoretical Approach to Evolving National Evaluation Systems

As stated in the first chapter, we understand governing as a verb and emphasise the work carried out in different ways by different actors in diverse places and spaces and by various means to reach certain aims (Clarke 2015). Policy intentions and aims are expressed in several policy documents and reforms, and for EQA systems, such intentions are also embedded in their designs or “infrastructures of rules” (Fourcade 2010, pp. 571–572). The designs prescribe what is to be done (what is to be evaluated, how, by whom, and for what reasons [what is hoped to be achieved]), orient the attention of those taking part in these processes, and influence their behaviours, activities, and perceptions of their enterprises (Dahler-Larsen 2014; Segerholm 2001).

For the purpose of this chapter, we recognise the dynamic relationship underlying both institutional reproduction and change (Mahoney and Thelen 2010). Change does not necessarily need to encompass the whole institution but can target and influence parts of it. EQA systems may change gradually or more dramatically, and these dynamics hold implications not only for governing but also for our understanding of expectations of what should count as, for instance, “good quality” (Hopmann et al. 2007). Thelen (2003, p. 213) argues that models of path dependency, in which change appears as rapid and drastic punctuations, need to be complemented by other tools that enable us to account for a more gradual and dynamic relationship, as these processes may be more incremental than usually proposed. Starting from such an understanding of stability versus change, we use a set of concepts that capture this dynamic in different ways.

First, conversion marks a change originating from within the institution itself, when existing frameworks come to be enacted in various ways, resulting in the institution’s reorientation towards new goals or missions. When new rules and procedures originating from outside the institution are put into place, the concept of displacement applies; it refers to more radical exogenous alterations as well as to slow, incremental processes of change. Institutional layering marks a gradual process of change involving revisions, additions, and modifications, in which new elements are added to and placed alongside old ones. Each revision may be small, but placed alongside one another, they accumulate and result in fundamental change. Institutions also face the risk of drift, that is, erosion stemming from an incapacity to respond to the external context. Another such risk is exhaustion through slow-moving breakdown, resulting in self-destruction from within (Mahoney and Thelen 2010; Thelen 2000, 2003). We use these concepts as analytical descriptors of the evolving national EQA systems.

We relate to the idea of a governing – evaluation – knowledge nexus in that the notion of “evaluation machines” (Dahler-Larsen 2012, pp. 176–182) is used as a basis to discuss the expansion of the EQA systems over time. Evaluation systems may be conceived as evaluation machines, since they are based on “distinctive epistemological perspectives”, “organisational responsibility”, “permanence”, and a “focus on the intended use of evaluations” (Dahler-Larsen 2012; Leeuw and Furubo 2008, pp. 159–160). We will elaborate on the evaluation machine metaphor further in the discussion.

Our account is based on documentary materials, such as government bills, official reports, and descriptions of the EQA systems by the two national agencies responsible for higher education during the periods concerned. We also use secondary sources such as research reports, articles, and books that describe and analyse the different policy contexts and EQA systems in higher education in Sweden.

Shifting Policy Contexts: Continuities and Shifts in Evaluation Designs

In this section, we present continuities and shifts in Sweden’s higher education policies, along with the variety of national EQA system designs that emerged from 1995 to the system decided in 2016. Descriptions of the major reforms provide contextual information that is necessary for understanding the EQA systems’ designs. The periods overlap slightly, since the termination of one system coincides with the introduction of a new one. Before we go into this presentation, we offer a short background on the history of EQA in Sweden.


External evaluation is a rather new phenomenon within the landscape of Swedish higher education institutions (HEIs). From 1477, when the first university was founded in Uppsala, until the 1960s, institutionalised forms of external evaluation did not exist. According to Gröjer (2004; see also Neave 1998), external evaluation was a response to problems related to the expansion of higher education and to the transformation from an elite institution to a mass university. From 1950 to 1960, the higher education system expanded from 16,400 students to 37,000, and by 1970, the number had increased to 130,000 (Gröjer 2004, p. 50). This expansion involved not only new groups of students but also new HEIs, staff, programmes, etc., and external evaluation was perceived as a state instrument for retrieving knowledge that could be used to plan, steer, and improve the sector according to contemporary utopian and rationalistic ideas. Notably, the first national agency (Universitetskanslersämbetet, UKÄ), which was established in 1964 and was responsible for planning and sizing, focused its first evaluations on pedagogical issues such as the improvement of teaching and examination. The 1970s saw the emergence of the idea that evaluation could be used at a national system level in order to control whether national goals were attained. Not only examination frequencies, i.e. output, were used; data on the inner workings of HEIs in terms of teaching, examination, students’ previous knowledge, study habits, teachers’ working conditions, etc. were also acknowledged. As Gröjer (2004) noted, “effective development could only be achieved if the entire education system was scanned and evaluated” (p. 64). Thus, evaluation originally served the purpose of making the inner world of higher education “visible” to the state.
The evaluations were conducted by external experts and agency staff who used implicit professional standards and indicators, and methods were adapted contextually without any overall agency policy governing the assessments. Information from the evaluations did not follow any explicit official plan but was still distributed to those who were affected (Gröjer 2004). In 1977, a higher education reform informed by ideas on decentralisation and management by objectives called for further national evaluations. The new national agency (Universitets- och högskoleämbetet, UHÄ) engaged independent researchers and started building networks to facilitate exchanges of experiences and ideas. Conferences, seminars, and information were also used to develop evaluation practices.

According to Gröjer (2004), notions of inefficiency within the higher education sector in the 1980s made evaluations increasingly important. Components like site visits, peer reviews, and criteria for international comparisons were implemented, and methods were developed. However, HEIs still held responsibility for teaching quality and education results, whereas EQA served to control quality at the national level. At the end of the 1980s, the concept of “quality” began to manifest itself within the language of EQA, and the perspectives of new groups of actors, including students, student unions, and representatives from working life (such as potential employers), were introduced in the evaluations. The purpose of the evaluations was both to improve and to control; however, EQA was still based on implicit criteria and on a group of directly involved experts who, based on their professional discretion, designed and implemented each evaluation (Gröjer 2004). Gröjer (2004) describes a continuous process of professionalisation of the evaluative activities during this period. In this process, specific knowledge was developed, for example, during site visits where actors had the opportunity to learn from each other. Increasingly, the national agency (UHÄ) also argued that the HEIs must use the evaluation results for the purpose of improvement.


The overall aims of the higher education reform of 1993 were to increase the freedom of HEIs, to establish incentives for quality development, and to improve efficiency in the use of resources in HEI activities. This reform also dramatically changed the preconditions for HEIs, since the entire state allocation system was altered to a governing system based on economic incentives and productivity (Bauer et al. 1999; Government Bill 1992/1993:1). A performance-based funding system (Herbst 2009) based on per-student state grants was introduced, with one sum for each registered student and a larger sum for each student passing the course requirements. This meant that the previous system of applying for state grants in relation to the number of students was abandoned, as were the applications for funds to install professorships and senior lectureships, which, when approved, were granted by royal letters. Another novelty compared to the central planning of the 1960s was the local freedom for HEIs to decide on the educational content of their courses and programmes. Specific national/state requirements were, however, still in operation for professional programmes (e.g. teacher education, physicians). Internal quality assurance (IQA) systems at the HEIs were also introduced as a mandatory requirement, along with a demand for obligatory course evaluations (Bauer et al. 1999; Government Bill 1992/1993:1).

The design of the national EQA system during this period was, it was argued, intended to stimulate internal quality work in order to uphold and enhance quality. A new national agency, the Swedish National Agency for Higher Education (SNAHE, Högskoleverket), was established with the commission to push and control the HEIs’ work with quality issues (Government Bill 1992/1993:1). In line with these motives, the design of the EQA system was directed at the HEIs’ internal quality assurance work, leaving it to the HEIs to decide how this was to be carried out. The SNAHE carried out two cycles of these types of evaluations (SNAHE 1998). Another part of the EQA system was directed towards accreditation for awarding magister degrees. Both these evaluation types were performed in a similar way: a so-called self-evaluation based on a particular national template, a peer review with a site visit carried out by external colleagues, plus a public report. The process as a whole was administered by the SNAHE. All HEIs were evaluated in 3-year cycles. During this period, with the 1993 reform as a starting point, recurring external control of higher education through a national EQA system was introduced for the first time in Sweden.


No major national quality assurance reform was decided during this period, but some substantial changes were nevertheless made to the design of the national EQA system. The system was now said to be a means to guarantee a minimum standard in the education provided, to enhance trust in HEIs, to increase student influence, and to provide students with information so that they could make informed choices (Government Bill 1999/2000:28).

The design of the EQA system shifted its focus to quality in education, that is, evaluation of the quality of academic subject courses and programmes (Government Bill 1999/2000:28; Franke and Nitzler 2008). Another part of the design was to include thematic evaluations, which were directed at, for example, student influence and diversity. Accreditation for awarding degrees and certificates and for so-called scientific areas (e.g. the right for a university college to establish PhD programmes and award doctoral degrees) was another part. All types of evaluations were carried out in line with a local evaluation model developed in the 1980s by Sigbrit Franke-Wikberg (1990), who at this time was the director general of the SNAHE. The model was adapted to serve a national perspective and consisted of a self-evaluation based on a national template. The template for the subject and programme evaluations asked the responsible department for information about three aspects: the preconditions for courses and programmes, the education processes, and the outcomes of the education processes (SNAHE 2001, 2003). The template emphasised preconditions such as the number of faculty with PhDs, their positions, and the number of enrolled students. The three aspects were then to be related to one another in an analysis and an assessment of the education the departments provided. A self-evaluation report was sent to the SNAHE, and the work with it was supposed to engage the whole faculty in an internal discussion of their work. A group of subject/programme experts carried out a peer review, and for the first time, students were part of this external evaluation group. The peer-review group made site visits and conducted interviews with department heads and managers, teachers/researchers, and students. The group produced a written public report in which the SNAHE included its decision on whether the education was assessed to be of sufficient quality.
Hence, the reviewers had to provide a cut score that encapsulated their judgement and formed the basis for the SNAHE’s decision. Sanctions were also introduced with this EQA system, meaning that if the department/HEI did not improve, the right to award degrees or certificates could be revoked. A follow-up was therefore performed a year later in cases where quality had been judged inadequate. This happened very rarely (Wahlén 2012), but the entire basis for an HEI’s provision of a certain level of education (course or programme) could be in jeopardy, because the state grants regulated in the 1993 reform were (and still are in 2019) coupled to the right to award degrees. This EQA system was run in a 6-year cycle, and the intention was to include all academic subjects and programmes.

During this period, the HEIs had to evaluate their educational quality in line with the national template and had to submit to assessments conducted by an external group of “colleagues” that included students. To adapt to these new circumstances and support departments in their work with self-evaluations, many HEIs expanded their administrations with new functions, such as “quality officers” at the faculty level and deputy vice-chancellors with education quality as a particular responsibility at the central level (Segerholm and Åström 2007). The number of evaluations that the universities and departments had to engage in increased substantially with this rather extensive system, and there were signs that previously existing internal evaluation models gave way to the national model and its templates (Segerholm and Åström 2007). There were also signs of evaluation influence before an actual evaluation process had even started; for example, the attention of the HEIs was directed at what was asked for in the national template, and not at what had previously been prioritised locally (Segerholm and Åström 2007).


The major change in the preconditions for higher education during this period was the so-called Bologna reform of 2007. The entire structure of Swedish higher education was altered to include three levels (undergraduate, advanced, and graduate) rather than two (undergraduate and graduate). This system introduced a new order for degrees and certificates that required students to achieve learning outcomes to obtain a degree/certificate. Subsequently, specified learning objectives for all individual courses were now also required. Another novelty was the establishment of the term “subject areas” (i.e. either an academic subject or a composite of related academic subjects), compared to the previous, stricter division into academic subjects (e.g. political science, sociology, and psychology). At the end of this period, the government decided on a new national agency that was to exclusively supervise and evaluate higher education: the Swedish Higher Education Authority (SHEA, Universitetskanslersämbetet).

The design of the national EQA system remained largely the same, but with some stress on the relationship between the learning objectives (i.e. the requirements for passing an individual course) and the learning outcomes (i.e. the requirements for a degree/certificate) (SNAHE 2007). Evaluations of the IQA systems at the HEIs were reintroduced and followed the recommendations of the European Association for Quality Assurance in Higher Education (ENQA) (Ministry of Education 2009; SNAHE 2007). The subject and programme evaluations were to be proportionate, based on a simplified national self-evaluation template. Those that seemed to live up to the quality requirements were to be evaluated less extensively (i.e. no site visits). The introduction of rewards (a distinction for an eminent educational environment) to departments that delivered “good quality” education was a new feature in this period (ibid.).

The design of the EQA system continued to emphasise the constant presence of external control but also directed some attention to the general idea of the relationship between (learning) objectives and outcomes. This system also represents a mix of sticks and carrots (i.e. threats to withdraw degree-awarding rights, and quality rewards) as a way to stimulate, or force, the HEIs to adapt to the new conditions.


Swedish higher education in the last period covered in this chapter was influenced by all the previous reforms, which were layered on top of each other, since none of them had been dramatically challenged or altered (Thelen 2003). There was a kind of incremental process towards a higher education system that became more and more characterised by New Public Management (see Pollitt 1995; Pollitt and Bouckaert 2017). As an additional step in this direction, the government decided in 2010 on what has come to be called the “autonomy reform” (Government Bill 2009/2010:149). This reform concerned local freedom for the HEIs to organise internally, make decisions on types of positions and requirements for employment, and allocate resources internally at their own discretion. Just before this autonomy reform, the government, after a tense conflict with the SNAHE, decided to reform the design of the national EQA system (Government Bill 2009/2010:139). Its results-oriented design was thereafter severely critiqued (see, e.g. Kettis and Lindberg-Sand 2013); we will return to some of the reasons for this in the coming chapters. The SHEA’s membership in the ENQA was revoked because this system did not fully adhere to the ENQA statutes, and the system was finally terminated. The last evaluations in the system were carried out in spring 2014, and the final public reports were published in 2016.

When the design of this EQA system was decided upon, it was justified in the policy texts by the need to increase quality in higher education (Government Bill 2009/2010:139). Sweden, it was argued, also needed to strengthen its international position in the global economy and education market. A third motive was the need to clarify education quality in relation to the students and to society at large. As in the 2007–2011 system, it was quality in education, that is, in subject areas (the new term introduced earlier), that was to be evaluated. Also, as in previous systems, accreditation for the right to award degrees and certificates was part of the design. The dramatic change concerned how quality in education should be evaluated: from a model in which the relations between the preconditions for education, the process, and the outcomes/results were the basis for assessing quality, to a new evaluation model decided by the government. (This decision was in itself quite unique, because the responsible national authority normally makes such detailed decisions.) This design was product oriented (Franke-Wikberg and Lundgren 1980, 1981; House 1978) and mainly directed at assessing student outcomes as measured through the indicator of students’ independent projects (in the social sciences, often limited empirically based studies presented as a small thesis/report). As before, a mandatory self-evaluation in line with a national template was required, asking for student grades, the share of students that passed, etc. Quality assessment was delegated to an expert panel of peers, students, and representatives from areas external to the HEIs, such as private companies or the public sector. A sample of students’ independent projects for the bachelor, magister, and master degrees, the self-evaluation, and video interviews with department representatives, teachers, and students (instead of site visits) formed the basis for the assessments (SNAHE 2010, 2012, 2013).
The external panel produced a public report that included the SHEA’s decision. If the SHEA decided that the quality was insufficient, a plan for improvement was requested and a follow-up conducted. Sanctions were the same as before, but the carrots now included resource allocation by state grants partly related to the assessment of quality (Government Bill 2009/2010:139). The political focus on “quality” during this period can be observed in the government bill (Government Bill 2009/2010:139) in which the design was proposed: the term “quality”, alone or in connection with other terms, appears on average eight times per page (Segerholm 2010), without being given any substantial meaning apart from student outcomes as measured by assessing students’ independent projects for the bachelor, magister, and master degrees.

As we will see in the chapter “Hayek and the Red Tape: The Politics of Evaluation and Quality Assurance Reform – From Shortcut Governing to Policy Rerouting”, the design of the national EQA system in this period marks a turn in evaluation ideology in two important ways. Firstly, the system was deliberately based on ideas regarding autonomy, in the sense that it steered evaluation away from preconditions and education processes in an attempt not to interfere with the internal work of HEIs. Secondly, it involved a mode of governing by evaluation in which the government, by rather harsh means, forced a totally different model on the HEIs: a design based essentially on the relationship between expected learning outcomes for a particular degree and student outcomes as measured by assessing students’ independent projects. The stress was clearly on what has been labelled evaluation of effects (results), quite independent of education and/or learning processes and preconditions. There are examples, however, where the self-evaluation reports were given more weight in the evaluations. This slightly increased stress on self-evaluations occurred during the period, largely as a response to the criticism from the HEIs, as we understand it. The emphasis on evaluation of effects is also visible in the composition of the external panels, in which representatives external to the HEIs (e.g. potential employers) now for the first time acted as evaluators.


In this chapter, we have described the major reforms in Swedish higher education and how the national EQA systems have been designed and used, and how they have influenced HEIs. We have noted reproduction as well as change in the relatively short history of EQA in Sweden. Many components of the systems’ designs (e.g. peer reviews, self-evaluations, site visits, public reports) were introduced in the 1980s and 1990s, as EQA was amplified on a broad national scale, and these were later complemented by additional components (e.g. thematic evaluations, accreditation) that were combined in different ways over time, producing new aggregate system designs. We thus identify an overall process of expansion and change in the comprehensiveness of the various EQA system designs over time. The designs developed from a rather limited scope as late as the 1990s, including evaluation of the HEIs’ IQA systems and accreditation, through a very comprehensive design including at a minimum quality evaluations, thematic evaluations, and accreditation, to an intermediate state in the 2011–2014 period. The result of these developments, in which the designs of the different EQA systems are layered over one another, mixing and blending new parts with the old, is a reorientation (Mahoney and Thelen 2010).

Moreover, ways of organising designs by way of technologies like visibility, comparability, economic rewards, and sanctions that foster and trigger certain modes of behaviour have been added over time. Such technologies are productive from an organisational perspective in the sense that they make things happen, and they also have a radical impact on the ways individual components are employed. For example, if the quality of HEIs is to be comparable on the basis of public reports, these must be reliable and comparable to all others of a similar kind. A public report that is part of an EQA system and explicitly serves the purpose of comparability thus places increasing demands on methods and ways of writing in terms of validity and reliability. Such issues related to changes in EQA designs and the implications of such changes for practices and behaviours will be explored further in the upcoming chapters of the book.

Governing by Changing Designs

The issue of change is interesting in different ways. First of all, changes in national EQA systems do not come about by themselves; each reform involves a political process and agency work to plan, design, and implement new system parts and ways of organising them. Change thus produces fundamental shifts in the work of actors directly involved in evaluations. Our results therefore raise questions about whether the governing potential of the EQA systems in the Swedish case partly relies on the shifts themselves. By constantly changing the systems, expectations are also changed and form one important part of the work of governing. While change in EQA systems serves to produce change (i.e. improvement) within HEIs as new aspects of practices are evaluated over time, this change arguably also produces social acceleration (Rosa 2013), since each new national system brings about time-consuming efforts within the HEIs in terms of interpretation and translation (Ball et al. 2012) – efforts that are related not only to concrete practices of dealing with evaluations but also to core activities such as organisation, planning, teaching, and examination.

Overall, our account confirms Hopmann’s (2008) thesis that higher education over the last two decades has moved from being an internally managed “ill-defined problem” (evaluated by professionals themselves who needed leeway to define their own practice) to a “well-defined problem” that is managed and controlled by external (and internal) “expertise” by way of using indicators and standards. According to Hopmann (2008): “Expectation management changes [higher education] dramatically. The core focus shifts to more or less well-defined expectations of what has to be achieved by whom” (p. 424).

Although we recognise that European (and global) EQA policy interacts with Sweden’s national policy, the results also show that references regarding EQA design are formed rather endogenously through the social and institutional contexts in which the interactions are established. One observation is that the overall picture of the changes in the national EQA systems, in the terms of Mahoney and Thelen (2010; Thelen 2000, 2003), can be identified as institutional layering. However, over time there has also been an ongoing process of displacement that has changed the entire direction of Swedish higher education through the different designs of the EQA systems and particularly the various kinds of evaluation models. Displacement here involves fundamental change through more active interventions in prior arrangements in terms of democratic ideals and the creation of new market-oriented alternatives in their place. Overall, this development implied a relative state suspension of ambitions to guarantee equivalence within the higher education sector. Instead, students were increasingly seen as consumers, and diversification in terms of education quality was seen as a problem that could be targeted through competition. This displacement is particularly visible in the 2011–2014 system, in which increased stress was put on providing information to students to facilitate informed choices, on informing society at large of the accomplishments of higher education in general (accountability), and on including representatives of potential employers. This system was imposed on the HEIs, and traditional ideas on evaluation were displaced in favour of new components and technologies associated with new behavioural logics – overall, an approach alien to the higher education sector at that time.

A second observation is that the change in evaluation models displays a successive advancement of comparability among HEIs. Comparability as a technology hence serves the purpose of establishing common standards and agreements and of organising the HEI sector within an international and national market. Taken together, external demands have increasingly, and bit by bit, resulted in a displacement whereby HEIs become more consumer oriented, a movement provoked by the evaluation exercises. In these processes, HEIs need to spell out precisely what it is the students need to learn and also the HEIs' degree of success in that respect (a declaration in advance of the quality of the service the student is "buying", similar to customer charters). Rider et al. (2013) describe the profound impacts of universities' transformation from public and democratic institutions into marketised networks. These changes in the higher education system are similar to some of the changes observed in education more generally, in Sweden and elsewhere (see, e.g. Ozga et al. 2011).

Governing by Expectations

When considering the expectations these different EQA systems may give rise to and the governing role they fill, we would like to emphasise the following:

After the 1993 reform, and particularly from 1995 onwards, there has been a constant presence of some kind of national evaluation of higher education. This constant presence of external control also creates the expectation that it will be there and continue to be present. In turn, this is part of making higher education "auditable", in Power's (1996) terms, or evaluable (see also Sahlin-Andersson 1995).

In the later periods, when the designs of the EQA systems first included sanctions and then rewards, the HEIs also developed expectations of such sticks and carrots. The possibility of having the right to award degrees and certificates revoked leads the HEIs to expect such sanctions to be used (which they are, albeit rarely). The consequences of expectations of such high-stakes evaluations are well known in educational research, particularly concerning widespread testing, where phenomena such as teaching to the test and window-dressing develop (see, e.g. Linn 2000). Compliance is another consequence, one that easily makes educational considerations give way to juridical or managerial ones in order to avoid criticism (Lindgren et al. 2012; Solbrekke and Englund 2011).

From the implementation of the Bologna system, with Sweden's stress on a rationale based on objectives and results – particularly manifested in the design of the 2011–2014 EQA system, with its emphasis on student outcomes and attainment – the state expects the HEIs to deliver students who produce independent projects that are assessed as good enough. Known consequences of this include, for example, changes in resource allocation so that supervisors in courses for independent projects get more time for supervision while teachers in other courses get less time for teaching (cf. Sørenssen and Mejlgaard 2014, pp. 26–27). Overall, such strategic responses to EQA raise critical questions. Do the designs of national EQA systems provoke the desire to improve and comply, but at the same time distort the performance itself? As Hopmann (2008) points out: "only those results which can be 'verified' according to the stakes given and do not meet expectations become problematic, and only those outcomes which meet the pre-defined criteria are considered a success" (p. 424).

As noted above, accreditation has been part of all EQA systems and has stayed much the same over the different periods – an important continuity to acknowledge despite the simultaneous and ongoing processes of change. The different national agencies (the SNAHE and the SHEA) have had more or less the same expectations for what the HEIs have to show in order to get permission to offer PhD programmes or to get the right to award degrees and certificates. This creates stability in what the HEIs expect these accreditation processes to direct attention to, which is foremost certain preconditions, such as the share of teachers with a doctorate. One known consequence of this is that HEIs intensify their efforts to increase their share of faculty with a doctoral degree when they apply for the right to start a PhD programme. In general terms, the consequence of these reciprocal expectations is that the HEIs direct resources towards living up to the different requirements in the evaluations.

The successive additions to the external panels of students, and, in the 2011–2014 system, of future employers, teach the HEIs, and develop the expectation, that parts of society outside the higher education sector have legitimate interests in the scrutiny of higher education. Expectations are also raised that these external actors have sufficient knowledge and competence to evaluate higher education. Extended influence by external stakeholders is by no means an exclusively Swedish higher education phenomenon, as Deem et al. (2007) and Magalhães et al. (2018) show in their studies of managerialism and higher education governing boards.

The state, on the other hand, expects the HEIs to accept all sorts of evaluators and to acknowledge that the expertise of the external panels is sufficient when it comes to higher education. A plausible consequence of these last two sets of expectations is a shift in the mindset of HEI managers and teachers/researchers towards being more receptive to external demands on the direction of their work, that is, towards making higher education and research better adapted to market needs. This may, for example, mean increased efforts to produce more "useful" (applied) education and research. This is an example of what Dahler-Larsen (2012) labels constitutive effects, pointing to the potential of evaluations to influence not only behaviours but also our perceptions of the phenomenon, activity, or programme being evaluated.

The final kind of expectation we bring forward is based on the descriptions of the different designs of the national EQA systems. The shifts in designs themselves lead the HEIs to expect change. A consequence is that it has become necessary for the HEIs to always keep an eye on national policy developments and on what is required of them. They thus accept constant change, constant pressure, and constant control, and must stay alert, thereby possibly avoiding the risk of drift (Mahoney and Thelen 2010). Depending on the shifting but also stable requirements in the different designs of the EQA systems, governing by expectations is both about what the HEIs expect the state (the national agencies) to do and about what the state and national agencies (through decisions and policy) expect to happen at the HEIs.

Building an Evaluation Machinery?

Overall, we contend that the historical process of establishing national EQA systems in Sweden resembles what Dahler-Larsen (2012) describes in terms of evaluation machines. In this book, we use the evaluation machine analogy in our explorations of evaluation as a practice in governing the Swedish higher education case. As shown in this chapter, the EQA systems change constantly, leading us to use the notion of an "evaluation machinery" to denote the assemblage of elements that we have identified during the period covered. We equate an evaluation machinery with Dahler-Larsen's characterisation of evaluation machines as an ideal-typical concept that draws attention to developments within the audit society, where evaluation has become institutionalised and professionalised so that "arbitrariness and subjectivity" are eliminated (Dahler-Larsen 2012, p. 176). They are:

[m]andatory procedures for automated and detailed surveillance that give an overview of organizational activities by means of documentation and intense data concentration. (Dahler-Larsen 2012, p. 176)

Similar to evaluation machines, the Swedish national EQA machinery has become permanent and repetitive over time and functions as a producer of "streams of information" rather than occasional reports (Dahler-Larsen 2012, p. 177). It has become increasingly embedded in HEIs' organisational procedures of verification and resource allocation; EQA is thus framed by ideas of "organizational responsibility" (Dahler-Larsen 2012, p. 177). As such, EQA has also become a prospective rather than just a summative form of evaluation. Broad ranges of activities related to EQA are "planned in advance so they can be intentionally linked to decision and implementation process" (Dahler-Larsen 2012, p. 177). EQA is hence increasingly reciprocal and has become a natural condition for HEIs. Over time, EQA has come to rest on "distinctive epistemological perspectives" and increasingly "relies on a number of tools or scripts such as definitions, indicators, handbooks, procedures, guidelines, etc., to support fairly standardized operationalisations" (Dahler-Larsen 2012, pp. 177–178). Finally, as an evaluation machinery, EQA covers "phenomena that have broad scope in time and space" (Dahler-Larsen 2012, p. 178). Higher education involves extensive and complex activities that are detailed "in a systematic and integrated way that permits comparison among areas of activities" (Dahler-Larsen 2012, p. 178).


The notion of an evaluation machinery harbours a range of aspects that will be explored further in the upcoming chapters of this book. It draws attention to the role of documentation, and to specific forms of documentation such as self-evaluations, that have become institutionalised over time. An evaluation machinery also requires distinctive roles and knowledge for its functioning: it must be designed, engineered, and operated (Dahler-Larsen 2012). For example, what are the implications of increasing demands on external assessment panels and site visits in terms of forms of knowledge, expertise, experience, and social competence?

The issue of constitutive effects (Dahler-Larsen 2012) will also be pursued as an important theme in the book. In some of the following chapters, we will look more closely into this and explore some of the national EQA systems and processes described above, their consequences, and the ways they influence and govern higher education.

In the next chapter, however, we turn to the wider international context and situate the Swedish example within Europe in order to understand developments in higher education policy and EQA systems.


  1. This text is a revision and extension of Segerholm et al. (2014).


  1. Ball, S. J., Maguire, M., & Braun, A. (2012). How schools do policy: Policy enactments in secondary schools. London: Routledge.
  2. Bauer, M., Askling, B., Gerard Marton, S., & Marton, F. (1999). Transforming universities. Changing patterns of governance, structure and learning in Swedish higher education. London: Jessica Kingsley Publishers.
  3. Clarke, J. (2015). Inspection: Governing at a distance. In S. Grek & J. Lindgren (Eds.), Governing by inspection (pp. 11–26). London: Routledge.
  4. Dahler-Larsen, P. (2012). The evaluation society. Stanford: Stanford University Press.
  5. Dahler-Larsen, P. (2014). Constitutive effects of performance indicators: Getting beyond unintended consequences. Public Management Review, 16(7), 969–986.
  6. Deem, R., Hillyard, S., & Reed, M. (2007). Knowledge, higher education, and the new managerialism: The changing management of UK universities. Oxford: Oxford University Press.
  7. Fourcade, M. (2010). The problem of embodiment in the sociology of knowledge: Afterword to the special issue on knowledge in practice. Qualitative Sociology, 33, 569–574.
  8. Franke, S., & Nitzler, R. (2008). Att kvalitetssäkra högre utbildning – en utvecklande resa från Umeå till Bologna [Quality assuring higher education – A developmental journey from Umeå to Bologna]. Lund: Studentlitteratur.
  9. Franke-Wikberg, S. (1990). Utvärdering i och av gymnasieskolan. En principskiss [Evaluation in and of upper secondary school. A principle sketch] (Arbetsrapporter 86). Umeå: Umeå universitet, Pedagogiska institutionen.
  10. Franke-Wikberg, S., & Lundgren, U. P. (1980). Att värdera utbildning. Del 1 [To evaluate education. Part 1]. Stockholm: Wahlström & Widstrand.
  11. Franke-Wikberg, S., & Lundgren, U. P. (Eds.). (1981). Att värdera utbildning. Del 2. En antologi om pedagogisk utvärdering [To evaluate education. Part 2. An anthology on educational evaluation]. Stockholm: Wahlström & Widstrand.
  12. Government Bill 1992/1993:1. Universitet och högskolor – frihet för kvalitet [Universities and university colleges – Freedom for quality].
  13. Government Bill 1999/2000:28. Studentinflytande och kvalitetsutveckling i högskolan [Student influence and quality development in higher education].
  14. Government Bill 2009/2010:139. Fokus på kunskap – kvalitet i den högre utbildningen [Focus on knowledge – Quality in higher education].
  15. Government Bill 2009/2010:149. En akademi i tiden – ökad frihet för universitet och högskolor [A contemporary academy – Increased freedom for universities and university colleges].
  16. Grek, S., & Lindgren, J. (Eds.). (2015). Governing by inspection. London: Routledge.
  17. Gröjer, A. (2004). Den utvärdera(n)de staten [The evaluating/evaluated state]. Doctoral dissertation. Stockholm: Stockholm University, Department of Political Science.
  18. Herbst, M. (2009). Financing public universities. The case of performance funding. Dordrecht: Springer.
  19. Hopmann, S. T. (2008). No child, no school, no state left behind: Schooling in the age of accountability. Journal of Curriculum Studies, 40(4), 417–456.
  20. Hopmann, S. T., Brinek, G., & Retzel, M. (2007). Pisa zufolge Pisa – Pisa according to Pisa. Wien: Lit-Verlag.
  21. House, E. R. (1978). Assumptions underlying evaluation models. Educational Researcher, 7(3), 4–12.
  22. Kettis, Å., & Lindberg-Sand, Å. (2013). Det svenska kvalitetssystemets kval – en dialog från olika utgångspunkter [The anguish of the Swedish quality system. A dialogue from different viewpoints]. Högre utbildning, 3(2), 139–149.
  23. Leeuw, F. L., & Furubo, J.-E. (2008). Evaluation systems. What are they and why study them? Evaluation, 14(2), 157–169.
  24. Lindgren, J., Hult, A., Segerholm, C., & Rönnberg, L. (2012). Mediating school inspection – Key dimensions and keywords in agency text production 2003–2010. Education Inquiry, 3(4), 569–590.
  25. Linn, R. (2000). Assessment and accountability. Educational Researcher, 29(2), 4–16.
  26. Magalhães, A., Veiga, A., & Amaral, A. (2018). The changing role of external stakeholders: From imaginary friends to effective actors or non-interfering friends. Studies in Higher Education, 43(4), 737–753.
  27. Mahoney, J., & Thelen, K. (2010). A theory of gradual institutional change. In J. Mahoney & K. Thelen (Eds.), Explaining institutional change. Ambiguity, agency and power (pp. 1–38). Cambridge: Cambridge University Press.
  28. Ministry of Education. (2009). Uppdrag till Högskoleverket att föreslå ett nytt system för kvalitetsutvärdering av högskoleutbildning. Regeringsbeslut, U2009/1444/UH. [Commission to the SNAHE to propose a new system for quality evaluation of higher education]. Stockholm: The Government, Ministry of Education.
  29. Neave, G. (1998). The evaluative state reconsidered. European Journal of Education, 33(3), 265–284.
  30. Ozga, J., Dahler-Larsen, P., Segerholm, C., & Simola, H. (Eds.). (2011). Fabricating quality in education: Data and governance in Europe. London: Routledge.
  31. Pollitt, C. (1995). Justification by works or by faith? Evaluating the new public management. Evaluation, 1(2), 133–154.
  32. Pollitt, C., & Bouckaert, G. (2017). Public management reform (4th ed.). Oxford: Oxford University Press.
  33. Power, M. (1996). Making things auditable. Accounting, Organizations and Society, 21, 289–315.
  34. Power, M. (1997). The audit society. Rituals of verification. Oxford: Oxford University Press.
  35. Rider, S., Hasselberg, Y., & Waluszewski, A. (Eds.). (2013). Transformations in research, higher education and the academic market. The breakdown of scientific thought. Dordrecht: Springer.
  36. Rosa, H. (2013). Social acceleration: A new theory of modernity. New York: Columbia University Press.
  37. Sahlin-Andersson, K. (1995). Utvärderingars styrsignaler [The signals of governance in evaluations]. In S. Rombach & K. Sahlin-Andersson (Eds.), Från sanningssökande till styrmedel. Moderna utvärderingar i offentlig sektor [From a search for truth to a governing instrument. Modern evaluations in the public sector] (pp. 71–92). Stockholm: Nerenius & Santérus Förlag.
  38. Segerholm, C. (2001). National evaluations as governing instruments: How do they govern? Evaluation, 7(4), 427–438.
  39. Segerholm, C. (2010). Utbildning, forskning, apor och bananer. Installationsföreläsning vid Mittuniversitetets årshögtid den 19 november 2010 [Education, research, monkeys and bananas. Inauguration lecture at Mid Sweden University's annual ceremony, 19 November 2010]. Härnösand: Mittuniversitetet, the author.
  40. Segerholm, C., & Åström, E. (2007). Governance through institutionalized evaluation. Recentralization and influences at local levels in higher education in Sweden. Evaluation, 13(1), 48–67.
  41. Segerholm, C., Rönnberg, L., Lindgren, J., Hult, A., & Olofsson, A. (2014). Changing evaluation frameworks – Changing expectations? The case of Swedish higher education. Paper presented at the European Conference for Educational Research, Porto, 2–5 September, 2014.
  42. SHEA. (2013). Reflektioner kring det nuvarande utvärderingssystemet. Erfarenheter 2011–september 2013 [Reflections on the present evaluation system. Experiences 2011–September 2013]. Stockholm: SHEA.
  43. SNAHE. (1998). Fortsatt granskning och bedömning av kvalitetsarbetet vid universitet och högskolor. Utgångspunkter samt angrepps- och tillvägagångssätt för Högskoleverkets bedömningsarbete [Continued evaluation and assessment of quality work at universities and university colleges. Starting points, approach and methods for the SNAHE's evaluation work] (Högskoleverkets rapportserie 1998:21). Stockholm: SNAHE.
  44. SNAHE. (2001). National review of subjects and programmes in Sweden (Högskoleverkets rapportserie 2001:11R). Stockholm: SNAHE.
  45. SNAHE. (2003). Nationella ämnes- och programutvärderingar. Anvisningar och underlag för självvärdering. Reviderad maj 2003 [National evaluations of academic subjects and programmes. Instructions and template for self-evaluation. Revised May 2003]. Stockholm: SNAHE.
  46. SNAHE. (2007). Nationellt utvärderingssystem för perioden 2007–2012 [National evaluation system for the 2007–2012 period] (Högskoleverkets rapportserie 2007:59R). Stockholm: SNAHE.
  47. SNAHE. (2010). Högskoleverkets system för kvalitetsutvärdering 2011–2014 [The SNAHE's system for quality evaluation 2011–2014] (Högskoleverkets rapportserie 2010:22R). Stockholm: SNAHE.
  48. SNAHE. (2012). Högskoleverkets system för kvalitetsutvärderingar 2011–2014. Examina på grundnivå och avancerad nivå [The SNAHE's system for quality evaluations 2011–2014. Degrees at bachelor and master levels] (Högskoleverkets rapportserie 2012:4R). Stockholm: SNAHE.
  49. Solbrekke, T. D., & Englund, T. (2011). Bringing professional responsibility back in. Studies in Higher Education, 36(7), 847–861.
  50. Sørenssen, M. P., & Mejlgaard, N. (2014). Autonomi och kvalitet – ett uppföljningsprojekt om implementering och effekter av två högskolereformer i Sverige. Delredovisning 2. 2013/14 RFR 22 [Autonomy and quality. A follow-up project on the implementation and effects of two reforms in higher education in Sweden. Report 2]. Stockholm: Riksdagstryckeriet.
  51. Thelen, K. (2000). Timing and temporality in the analysis of institutional evolution and change. Studies in American Political Development, 14, 101–108.
  52. Thelen, K. (2003). How institutions evolve: Insights from comparative historical analysis. In J. Mahoney & D. Rueschemeyer (Eds.), Comparative historical analysis in social sciences (pp. 208–241). Cambridge: Cambridge University Press.
  53. Wahlén, S. (2012). Från granskning och bedömning av kvalitetsarbete till utvärdering av resultat [From review and assessment of quality work to evaluation of results]. In SNAHE (Ed.), En högskolevärld i förändring. Högskoleverket 1995–2012 [A changing landscape of higher education. The SNAHE 1995–2012] (pp. 27–35). Stockholm: SNAHE.

Copyright information

© The Author(s) 2019

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Department of Education, Umeå University, Umeå, Sweden
  2. Department of Applied Educational Science, Umeå University, Umeå, Sweden