The National System for Quality Assurance of Higher Education in Sweden

Since the middle of the 1990s a national system for quality assurance of higher education has been in place in Sweden. Although both the content and the focus of the system have shifted over time, the method for quality assurance has remained remarkably unchanged. At present, the system comprises four different components: appraisal of applications for degree-awarding powers, institutional reviews of the higher education institutions’ (HEIs) quality assurance processes, programme evaluations and thematic evaluations.

The method used in all four components is based on self-evaluations written by the HEI under review. The self-evaluation is assessed by a panel of independent peers, in a so-called peer review. The peers conduct interviews at the evaluated HEI and write a report stating their findings. The decision on the quality at the HEI, based on the peer report, is taken by the accountable governmental authority, at present the Swedish Higher Education Authority (hereafter referred to as the Authority). This is a well-recognised and internationally accepted method for quality assurance of higher education, following the European Standards and Guidelines (ESG).

The national quality assurance system has been described as a “peer review method of evaluation,” peer review thus seen as a core element in the assessment processes. Hypothetically, the use of peers and peer review has also been important to the identity of the administrators working at the quality assurance Authority. In other words, peer review is at the heart of the Authority’s operations and therefore deserves further attention.

The History

To fully grasp the changes that have taken place within the national system for quality assurance in Sweden from the late 1990s up until today, it is important to look more closely at the historical developments over time. As early as 1992, “the Umeå model” for assessing quality was developed by researchers/evaluation experts at Umeå University. The model consisted of three main activities: self-evaluation, peer review and follow-up. The evaluation was carried out by groups of peers, and assessments were based on general aspects and criteria within a known framework of standards and guidelines. The aspects and criteria were aimed at guiding the institutions and the peers in the assessment processes. To a large extent, the peers were free to interpret these quality aspects and criteria as they saw fit (Franke & Nitzler, 2008).

The model, although modified over time, has been used by the Authority for quality assurance from 1993 onwards. Initially the Authority carried out institutional reviews assessing HEIs’ quality assurance processes and appraisals of applications for degree-awarding powers. However, the focus has shifted over time. In the period 2001–2006, programme evaluations were conducted on a large scale. Essentially all programmes and courses leading to a degree were evaluated up until 2006. The assignment from the government implied that the evaluations were to be made systematically and comprehensively over a long period of time. The framework for the reviews again consisted of general quality aspects, and it was stressed that these were not to be seen as “final or exhaustive.” They were designed to capture the aim of the reviews: to assess and present the quality of higher education and to stimulate development and renewal of the programmes. During this time sanctions were also introduced into the system, meaning that a negative outcome of a review could result in a revocation of the right to award a degree (Franke & Nitzler, 2008).

Over time the Authority grew, and with it its department of evaluation; even today, the number of staff is approximately 35. During this period, a strong consensus was established in the higher education sector that a full national system for quality assurance of higher education was necessary. The number of peers involved in the reviews grew steadily. The system was successful, but by the end of this period the debate about how it should be designed became more and more intense. At the heart of the debate—even though not always pronounced—was the method of peer review.

Critique

The method for assessing quality was called into question. Much of the criticism focused on the fact that the system was arbitrary, or rather that the results of the reviews and evaluations were arbitrary; that is, that assessments differed too much between the different evaluations and reviews, and that the result of a review seemed to depend too much on the individual views of the peers. There was said to be a lack of transparency in the system: it was not clear how and on what grounds peers chose what aspects, and according to which criteria, they were examining the different programmes and courses (Franke & Nitzler, 2008).

The critics also pointed out a lack of comparability; they thought that similar programmes were reviewed differently and according to different standards. In short, the results differed too much between similar programmes and universities. Consequently, the system was said to suffer from a lack of legal certainty. This became a pressing issue because a negative outcome of a review could lead to a revocation of the right to award a degree.

One crucial point of critique, finally, was the assessments’ focus on processes and conditions of programmes and courses, instead of on students’ results. The argument went that if the students’ results are of high quality, then the ways in which the institutions ensure quality in programmes and courses are of no importance. In addition, in the aftermath of an autonomy reform in the higher education sector initiated by the government at about that time, the argument that the state (in this case represented by the Authority) should keep away from reviewing higher education institutions’ design of programmes and courses grew strong (Segerholm et al., 2019).

2011: A New System Is Launched

After much debate, a new national system for quality assurance was launched in 2011. This new system entailed a substantial shift in focus, in both content and form. Now, the reviews were first and foremost to focus on results: the results of programmes, courses and students. As opposed to earlier systems, the students’ final theses were an important basis for the peers’ assessments. The shift can be seen in the guidelines from the Authority as well as in the reports from this period. To a large extent, it can be seen as a result of the radical change in how the mission given to the Authority by the government was phrased. Before then, the Authority’s mission had been formulated in broad terms, giving the Authority a clear mandate to organise its operations, including its reviews, independently of the government in the way it saw fit. The new assignment, however, was detailed to a previously unprecedented extent.

Transparency and predictability were now pinpointed as primary and essential conditions for the reviews. Decisions by the Authority based on the reviews had to be made on the basis of principles and criteria well known to the assessed institutions and programmes beforehand. Predictability was crucial. This can be seen as a response to the critique of the earlier system for being arbitrary and legally insecure. Also, in order to increase the equivalence of the reviews, the Authority developed a new web-based tool for peers and HEIs for submitting data and material for the assessment.

Aside from the Authority developing new tools for carrying out the evaluations, other changes took place that affected the quality assurance system, and with that, the way peer review was carried out. During the evaluation cycle of 2011–2014 a new element of monetary reward was introduced as a result of a governmental reform. This was a new feature in the landscape of national quality assurance of higher education in Sweden, although it followed international trends. Thus, in 2011 performance-based funding was introduced, meaning that a positive result in reviews performed by the Authority yielded more funding for the HEI in question. Connecting the results of the reviews to the redistribution of funding for higher education affected the conditions of peer review by strengthening the demands for legal security and transparency as overarching principles in the assessments.

The new system was heavily criticised by parts of the higher education sector, especially so in the beginning of the evaluation cycle. The critique predominantly focused on one thing: that the assessment model focused too strongly on students’ results (achieved learning outcomes), and that results tended to be measured by the “quality of students’ theses.”

However, the method of peer review was not in itself questioned, as far as we can find evidence of in the studied material. Despite the complicated and somewhat contradictory relationship between, on the one hand, ideals of peer review stressing constructive dialogue and exchange of ideas, and, on the other, demands for transparency and legal certainty, peers continued to be used in the programme evaluations, although the recruitment of peers was said to be in need of more transparency. As a consequence, the Authority produced routines regarding bias and how to handle biased peers within the framework of the evaluations. In addition, in order to ensure the comparability and legal security of the evaluations, the information and, above all, the introduction of the method and the assessment criteria to the peers were to be much more extensive.

In fact, we have found only one example of critique, or rather worry: that if the assessment criteria were too fixed—in the name of predictability—they would undermine the method of peer review. A few years before the new system was launched in 2011, in a report from 2009, peers were invited to reflect upon the method of peer review (HSV 2009: 8 R, p. 22). The text, written in Norwegian, gives the impression of being unedited by the Authority, thus enabling the peers to voice their opinions outside the set framework of the assessment criteria. In the report, a fear is voiced that the assessment criteria (which the panel were involved in creating) would make assessments “mechanical,” encouraging a “box-ticking mentality” among both HEIs and reviewers. According to the panel this would not enhance quality (HSV 2009: 8 R, pp. 23–24):

The panel members represent expertise within specific fields on which they base their assessments, and they neither can nor should be set within an assessment matrix as if the evaluation were absolute science. It is important to stress that the assessments are made by “peer review” by groups that are to some extent heterogeneous, and that the final comments, assessments and recommendations are the result of a democratic discussion within the panel. As a result, the final assessments are a synthesis of diverse impressions and discussions. In addition it should be noted that the quality work at an HEI is a complex matter, and that assessments thereof cannot be restrained by a simple template. (HSV 2009: 8 R, p. 24)

The quote pinpoints the difficulty of guarding the integrity of peer review without making concessions, while at the same time attempting to make assessments more alike by using templates and assessment matrixes. It is noteworthy that this text was written a couple of years before the launch of the new system of quality assurance that would standardise assessments further. However, this innate tension between peer review and governance, openness and restraint, seems to have ended up in the shadow of the discussion of results and performance-based funding.

Peer Review

As a starting point to understand how peer review processes are used under different circumstances and in different settings, we have identified four ideal types of peer review in the literature.

The core idea behind what is sometimes referred to as classical peer review is that only individual experts whose research field closely matches that of those they are assessing are able to comprehend the research output and therefore pass judgment on scientific quality. This type of peer review focuses on the performance of the individual, and might be distinguished from assessments focusing on group performance. For the latter type of peer review the term modified peer review is sometimes used (see, e.g., Langfeldt, 2002; Sandström & Harding, 2002).

A third model, like the second mainly used for assessing group performance, is the performance indicator model, which is based on bibliometric indicators and economic input-output models. One often-emphasised advantage of this model is that it can be conducted remotely and therefore at low cost. However, some researchers argue that the indicator model is unable to provide the judgement and broader insight generated by a more qualitative approach. Thus, a fourth model has been developed, sometimes called the informed peer review model. This is a mixture of modified peer review and the performance indicator model, within which panels of experts are asked to review groups using both quantitative indicators and more qualitative assessment material such as self-evaluations or interviews. The key benefit of this approach, often stressed, is that it combines “hard” data with “soft” opinions, resulting in a more comprehensive picture of the assessed unit (Hammarfelt et al., 2016).

As ideal types, these models rarely occur in pure form but rather in variations. We would argue that the peer review model used by the Authority is a mixture of all four models, but mainly resembles the modified peer review and informed peer review models.

As discussed in earlier research, the method of peer review carries both positive features and possible pitfalls. One feature of peer review, often emphasised as positive by the academic community, is that the individual assessor can keep several variables in mind at the same time and see complex relationships that cannot always be quantified. In addition, the peer can see potential in a way that quantitative data cannot capture. Moreover, the peer review process provides the opportunity for the flow of ideas and feedback between the reviewer and the reviewed. However, there are also potential weaknesses in peer assessments, such as the risk of protectionism, often stressed by critics in the academic community, along with the risk that peers might believe that what they like is also what is best (see, e.g., Carlsson et al., 2014).

Peer review is also closely linked to what Sahlin and Eriksson-Zetterquist (2016) refer to as collegiality, which is a form of decision-making and governance. According to Sahlin and Eriksson-Zetterquist, collegiality, when properly designed, puts knowledge and the search for knowledge at the centre. In addition, collegiality as a form of governance permits autonomy and creativity, and gives space to independent action and thinking. It builds on, and supports, the idea that organisations should be built bottom-up or, more precisely, starting from activities and competences, in interaction between free and reflective individuals and groups. Collegial processes and structures should, according to Sahlin and Eriksson-Zetterquist (2016), give space for discussions, criticism and the testing of arguments.

Within the system of national quality assurance of higher education in Sweden, peer review has been used as a way of assessing quality, and collegiality has played a prominent role in decision-making and governance. As shown by Sahlin and Eriksson-Zetterquist, evaluations of research and higher education are often performed by mixing collegial governance with bureaucratic and result-based assessments and decision-making (2016, p. 70). We are interested in exploring how these different forms of management are mixed within the system for quality assurance of higher education in Sweden, and how this has changed over time. Not least, we are interested in exploring the possible tensions this has created and how they have been discussed by the Authority and by the peers involved in the assessments.

Sources

To get a comprehensive picture of how peer review has been used within the national system of quality assurance of higher education in Sweden, we have analysed a large variety of documents published by the Swedish Authority of Higher Education and its predecessors (referred to as the Authority) during the period 1995–2017. These include the following categories:

  • Descriptions of quality assurance systems

  • Guidelines for peers

  • Guidelines for HEIs

  • Panel reports

  • Reports covering specific government assignments to the Authority regarding quality assurance of higher education

  • Other reports, for example, reports with methodological discussions and meta-analysis of evaluations and reviews performed under the supervision of the Authority

Going through the sources, we have looked for descriptions of the peer review process and instructions to peers on how to perform the evaluations. We have also looked for statements in which the peers themselves describe the peer review process of which they have been part. In addition, we have analysed the content of the reports regarding the assessments to get information on what level of freedom was given to the peers within the framework, as opposed to the peer assessments being fitted into pre-defined templates. Reports covering government assignments have given us a more comprehensive picture of how the method of peer review has been discussed and presented in different contexts. Other reports have been studied in order to see if the method of peer review has been analysed and problematised from a broader comparative perspective.

We have also been interested in the quality aspects and criteria used in the evaluations, how they have been formulated, and what level of freedom of individual interpretation and variation has been permitted. In addition, we have been interested in the relationship between the quality criteria and the assessments in the panel reports; that is, how the criteria have been used and to what extent they have functioned as a way of structuring the content of the assessments in the reports. Furthermore, we have looked for more detailed descriptions of how the peer review process has been carried out, both according to instructions from the Authority and in accounts from the peers describing the work they have done. We have used this information to draw conclusions about the extent of freedom in the peer review process, and whether, and if so how, this level of freedom has changed over time within the framework of national quality assurance of higher education in Sweden. In the analyses below, we have selected examples from two reviews, 1997–1998 and 2016–2017, that serve to illustrate what we have detected in the larger material.

The Role of the Peers and the Method of Peer Review

We start our analyses by looking more closely at the ways in which the role of the peers and the method of peer review have been presented in official descriptions, policies and guidelines regarding the Swedish national quality assurance system(s) for higher education during the period 1995–2017. Two examples have been chosen to illustrate what we have seen in the larger material.

Early Assessments 1997–1998

In the Authority’s guidelines for peers from 1997 the starting points for the reviews and assessments of HEIs’ quality assurance work are described. The peers are instructed on what should be assessed and how the assessment should be carried out. The guidelines emphasise that

the assessment is not about judging right or wrong. Rather, the [adequate] metaphor would be the public defence of a doctoral thesis and its exchange of opinions and viewpoints, but [it] clearly also [contains] elements of judgment and control of applied methods and their reliability. (HSV 1997: 33R, p. 10)

Thus, the assessment is compared to an academic seminar, or even the act of public defence of a doctoral thesis. As such, it contains both an exchange of opinions and “obviously” an act of control (HSV 1997: 33R, p. 10). The guidelines stress that the external review functions as a way for the peers to, in collaboration with the HEI, reach insights into, and draw conclusions regarding, the quality assurance work of the HEI. This is described in the following manner:

The main task of the assessment panel is to initiate discussions, create reflection and contribute with material for problem solving. Such an open approach to the assessment means that the consultative role of the assessment panel is accentuated and that the self-evaluation of the HEI is given a prominent role in the assessment. (HSV 1997: 33R, p. 19)

Hence, when the role of the assessment panel is described, the open attitude is stressed, and the role of the peers is mainly characterised as “consultative” by “initiating discussions,” creating reflection and contributing with support in order to solve problems.

This description and understanding of the role of the peer clearly resembles the classical peer review described above. Ideally, the basis of classical peer review is that decisions should be built on knowledge as well as on critical assessments of such knowledge. It should rest on an ongoing, critical conversation in which knowledge claims are simultaneously reviewed and developed. Ideally, the best argument wins, regardless of whether your own group gains or loses from it. Peer review in this sense is often compared to the seminar, where the colleague or peer is someone you listen to, receive criticism from and give criticism to. When seeking a solution to a common problem, one talks and listens to each other’s arguments; one is also prepared to change one’s opinion if the opposing party has stronger arguments. The individual participant in a peer review process represents their competence, not an interest or a group (Sahlin & Eriksson-Zetterquist, 2016, p. 34f). This concept of giving and taking, of receiving criticism and negotiating knowledge claims, resembles the way decision-making is described in the guidelines. In the report discussed above, to accentuate the consultative role not only of the peers but also of the Authority, it is stressed that the final decision on quality at the HEI is a result of negotiations between the Authority, the HEI and the external peers (HSV 1997: 33R, p. 19).

Later Assessments 2016–2017

In the guidelines for peers published by the Authority almost 20 years later, in 2016, the method of peer review is not discussed but rather taken for granted, as an underlying assumption. The same can be said about the role of the peers, which is not deliberated upon or explained. Instead, it is established that the review is made by an independent panel consisting of experts from the discipline being assessed (earlier referred to as peers) and representatives of working life and students. The main point made is that the panel should be unbiased and that all panel members should participate in the evaluation on equal terms (Vägledning för granskning av lärosätenas kvalitetssäkringsarbete, pilotstudie, UKÄ 2016). This use of the term “equal” does not refer to the content of the assessment, that it should be fair and well grounded. Instead, it refers to the way in which the assessment panel should work during the peer review process.

Regarding the focal point of the assignment for the peers it is stated that the main objective of the review is the assessment of

the result of the quality work, that is, that it assures the quality and develops the programme in a systematic and effective way. (Vägledning för granskning av lärosätenas kvalitetssäkringsarbete, pilotstudie, UKÄ 2016, p. 13)

The way of describing the assessment has changed—words like “result,” “assures the quality,” “systematic” and “effective” are crucial in this quote. Here is one example of the vocabulary used in the guidelines and in the assessments:

The outcome of the assessment panel’s judgment of the HEI’s fulfilment of the bases of assessment for the reviewed aspect areas and perspectives is stated in a report which functions as the basis for UKÄ’s decision. All aspect areas and perspectives have to be deemed satisfactory for the overall judgment to be positive. (Vägledning för granskning av lärosätenas kvalitetssäkringsarbete, pilotstudie, UKÄ 2016)

In this quote the role of the peers is pinpointed: it is to assess whether the HEI fulfils the “areas of focus” and meets the criteria for the “aspects” and “perspectives” decided by the Authority. In order for the HEI to receive the judgement “approved,” all the aspect areas and perspectives must be judged as meeting the criteria. Compared to the example from the end of the 1990s, the HEI is no longer part of the decision-making, which is no longer described as a negotiation between the Authority, the HEI and the peers. The seminar-like meetings, and the discussion between colleagues, have been replaced by stricter governance, and the ambition to measure the HEI’s fulfilment of pre-defined quality criteria is in focus.

Grasping the Concept of Result

As the examples above have shown, the role of peers and the descriptions of the peer review process have undergone radical changes over the last 20 years. Closely connected to this is the concept of result. We are interested in how it has been defined and used within the national framework of quality assurance of higher education, and whether it has undergone similar changes over time. Therefore, we continue our analysis by exploring how the concept of result has been described and realised by the Authority and by the peers. Again, we start with an analysis of the early assessments in the late 1990s and move forward to the later assessments within the current national system of quality assurance, letting examples illustrate our findings within the larger material.

The Early Assessments 1997–1998

To fully grasp how the peer review process has changed over time we have also looked at peer reports, focusing on the concept of result in terms of peer judgement. In a report from 1998, the peers who had been assessing the quality assurance work at a comprehensive university put it this way:

The additional material we were given access to over time in addition to the many discussions during the site visit gave us a new and more comprehensive picture of the rootedness of the quality assurance work and its strategies, as well as the various approaches to the distribution of roles at a university, depending on where in the organisation a person is located. (HSV 1998: 38 R, p. 34)

The text concerns the main strategies for quality assurance at a comprehensive university at the time. It seems as if the peers were working “organically” and had time to collect complementary material in order to “see the whole picture.” Even though the peers stated earlier in the report that they had followed the guidelines from the Authority in assessing the “quality work” at the university, this is not completely clear in the text. The peers do not explicitly refer to the aspects and guidelines in the report. It seems instead as if the author of the text was free to structure it in their own manner.

When reading the report, the impression given is that the panel to a large extent was able to make independent decisions on working methods, and that these decisions were made in dialogue with the university. This includes, for example, how and when assessment material should be handed in and how site visits should be organised. In addition, it is clear from the report that there was room for the panel chair to have an independent view of the purpose of the review, which might or might not correspond to the aims of the Authority. The peers were also at liberty to make independent decisions on what the review should focus on:

In our review we have chosen to focus our attention on some overarching characteristics at […] University. Some of them have an indirect connection to the quality work, others a more direct. (HSV 1998: 38 R, p. 13)

These characteristics included “the HEI’s self-image,” “the idea of the existence of a set of shared values,” “views on identity,” “boundaries,” “student participation” and the “somewhat mythical concept of the local university spirit,” which is explained as the “prevailing notion of the informal and close contacts between students, teachers, researchers, administration and management on campus” (HSV 1998: 38 R, p. 29ff).

As stated above, what ideally characterises classical peer review is the ongoing discussion between peers in which knowledge and quality are (re-)defined. New knowledge is created in a dialogue in which the strongest argument wins. The way the peers describe their assessment process, and their coming to conclusions, in this early assessment does bring to mind the ideal of classical peer review. An example is the following quote from the panel’s report:

Another conclusion is therefore that the panel’s first assumption that the quality work at the university was characterized by “top-down” principles must be abandoned. There is today a clear impact of a bottom-up [approach]. (HSV 1998: 38 R, p. 34)

New material and interaction with faculty resulted in a change of perception, and the report gives a clear account of how this change took place. The peers are present in the text as individuals, not instruments. They even describe an emotional reaction, “surprise,” when confronted with colleagues at the assessed HEI:

When we have discussed this with representatives of the HEI our observations have not always been met with an understanding attitude, something which has surprised us. (HSV 1998: 38 R, p. 34)

The discussion in the report covers a large spectrum of topics, some of which might seem far from the actual aspects the panel was set to assess. One example of this is the topic of co-creation of knowledge, loosely connected to discussions on student participation and student influence. The way in which these issues are covered is very different from today’s reports. In addition, the panel expands on broader issues such as universities’ role in society, and mass education and its ramifications (HSV 1998: 38 R, p. 36ff). This way of contextualising the results is prominent in the reports from the late 1990s (see, e.g., HSV 1996: 6 R; HSV 1997: 1 R; 1997: 38 R; 1998: 27 R). More academic texts are also mentioned in the list of references at the end of the report, making it clear that the panel drew its conclusions within a larger scientific setting, relating its findings to research on topics connected to the review. This broad leeway for peers to relate the outcomes and results of the reviews to a larger societal setting disappeared somewhere between the mid-1990s and today.

Later Assessments 2016–2017

In contrast to the panel report from the 1990s, in an example from an institutional review of the quality assurance procedures at the same university in 2016, the text in the report strictly follows the structure decided on beforehand by the Authority. The text is written in a fixed form, a template, and the peers account for their assessments in the separate aspect areas. The peers put it this way in the report:

The system for quality assurance of higher education at first and second cycle is seen as a system with elements at all levels of the operations [at the HEI]. The eleven items create a whole with a clear sequential structure. (UKÄ 2019, reg.no 411-00483-16, pp. 9–10)

Words like “system,” “element” and “structure” can be seen as typical of the language used in order to fulfil the task, that is, to review the different “aspect areas.” Of course, it is difficult to fully grasp what this really means taken out of context, but it is nonetheless a typical mode of expression in today’s reports (see, e.g., UKÄ 2019, reg.no 411-00488-17; UKÄ 2019, reg.no 411-00483-17; UKÄ 2019, reg.no 411-00486-17).

More general observations, and historical or societal contextualisation, which held a prominent role in the report from the late 1990s discussed above, are absent in today’s reports. No references are made to research related to topics assessed during the review. There is no detailed account of how the review has been carried out. The panel and its views are presented in a formal and bureaucratic way, strictly following the quality criteria set up for the assessment:

The assessment panel finds that the quality assurance system reflects and is constructed in a way that makes systematic and proactive quality work possible at xx university. (UKÄ 2019, reg.no 411-00483-16, p. 8)

Again the systematic way of working with quality is stressed, thus following the criteria set up by the Authority. The panel continues:

A quality indicator to assure high quality in doctoral theses and the public defence is that the examination is done in a legally secure way, according to rules and regulations. (UKÄ 2019, reg.no 411-00483-16, p. 9)

Quality, in this quote, is linked to following rules and regulations and working in a way that is “legally secure.” There is little room for discussions between the panel and the university, and no room for emotions such as surprise. The discussions that take place follow a structured form and are conducted as interviews, which function as a way to corroborate the statements in the self-evaluation (or not).

Thus, the panel report from 2016–2017 clearly shows that today's quality assessments are made in order to deliver sharp and clear judgements of the quality (or quality assurance work) in higher education. It is also clear that the national quality assurance system has been made more efficient in order for the Authority to be able to fulfil its assignment from the government. Whereas the 1990s panel report mentioned above reads more like a research report, with an almost tentative, reflective analysis, the most recent panel reports focus directly on quality assessments, giving the impression that they could almost be assessments of the quality of any product, not specifically higher education. How does this affect the peer review method?

Collegiality as a Form of Governance

Peer review, as it has been practised within the national system for quality assurance of higher education in Sweden, contains several elements. Most notably, and especially at the beginning of the period in the 1990s, it contains the element of discussion between peers, the trial of arguments and counterarguments resulting in a shared view of quality. This is perhaps most clearly expressed in the co-making of decisions about the outcome of institutional reviews in the 1990s, in which not only the peers and the Authority but also the reviewed HEI took part. However, this co-creation of knowledge claims about quality was gradually replaced by other ways of understanding the peer review process and its outcomes. These new ways entailed linking the assessments to pre-defined quality criteria and ideals of transparency and legal security. One way of understanding how and why these changes took place is to connect them to the new forms of public monitoring and control that were gradually introduced during this time. In the 1990s the term "new public management" was launched, referring to new types of governance and control. The term is related to what Michael Power calls the "audit society." Power shows how the amount of auditing activities exploded in the United Kingdom and in North America from the 1980s onwards (Power, 1997). The creation of a full national system of quality assurance of higher education in Sweden might in itself be seen as an expression of the audit society and the emergence of new public management. A tentative analysis is that the development of peer review within the Swedish system of quality assurance of higher education from the beginning of the 1990s until today can be seen as an example of how result-based management, which characterises new public management, was gradually strengthened within the governmental sector in Sweden during this period.

Looking at the national system for quality assurance of higher education, the institutional reviews in the 1990s were based on ideals of collegiality. As shown by Sahlin and Eriksson-Zetterquist, in this form of governance, knowledge is built through argumentation and the negotiation of truth claims is seen as a core ideal. As a consequence, this way of decision-making is often both complicated and time-consuming. In contrast, the system of quality assurance at work since 2011 shows different characteristics. Quality is still a core value, but from 2011 and onwards, the concept of quality is linked to concepts such as results, control and efficiency. The reviews, which previously took place within a loose and bendable framework, from 2011 and onwards take place within a set system. Any discussions, criticism and trial of arguments between individuals and groups are bound to the system and its pre-defined criteria for quality. Although decisions are based on peer review, the reports, including the findings, are calibrated by the Authority beforehand to assure equivalency. Moreover, reports from this time show that the Authority acknowledged that time was a problem.

Hypothetically, we suggest that this creates tensions within the framework of national quality assurance between conflicting ideals of openness and trial of arguments, on the one hand, and predictability and legal security, creating needs for calibration, on the other hand. Although the assessments in the end were made by the peers and not by the Authority, the strict framework of quality criteria can be seen as inhibiting the more open-ended, explorative side of the peer review process. In other words, contrary to ideals of collegial decision-making, the decisions within the national system for quality assurance of higher education became increasingly the result of a top-down process.

Although collegiality can be seen as an ideal closely corresponding to the ideals of peer review, as pointed out by Sahlin and Eriksson-Zetterquist (2016), in reality most organisations are governed by a mixed form of governance. Thus, collegiality is often blended with more bureaucratic forms of governance. This mix of forms can be a good thing, when the different forms complement each other. But the mix might just as well lead to continual compromises, troublesome and unclear structures for governance, and consequently "institutional unclearness" (Sahlin, 2014). As a result, one has to ask oneself: When does the interplay between the different forms of governance undermine individual forms of governance—in this case collegiality? When do compromises and the transformation of a form of governance go too far—when does it become "perverted" (Hernes, 1978), or lose its purpose and become empty words?

Peer Review in the National Quality Assurance System

As is well known from previous research (e.g. Sahlin & Eriksson-Zetterquist, 2016), different forms of governance often interact and exist in organisations side by side or, rather, intertwined—hopefully complementing and strengthening each other and promoting better governance overall. In the quality assurance processes at the Authority, the collegial form, with peer review at its core, has been an important identity factor for project managers—themselves educated at universities and quite often holding a PhD degree. It seems logical to use peer review in a system assessing and reviewing the quality of higher education. The Authority "borrows" an academic touch—or more than that, an academic method and orientation—for its bureaucratic governance. As discussed by Franke and Nitzler (2008, p. 114), one reason for the use of peer review within quality assurance of higher education is to gain acceptance for the reviews from those affected by them. In other words, an important objective for using the method of peer review might well be that the peers involved in the exercise lend the evaluation legitimacy.

All along, since 1993, collegiality has been mixed with bureaucratic decisions and, increasingly along the way, with (new public) management, that is, performance management or result-based management. In this case, performance-based funding might be the most obvious expression. It was abandoned in 2015, but looking at performance or results remains crucial in the quality assurance system today. How this mix and interaction has expressed itself in more detail still remains to be examined more closely. But it is clear that the changes that the national system of quality assurance in higher education underwent in 2011, which to a great extent are still in place, pose challenges for the peer review method. Moreover, these challenges have not really been addressed in a thorough way by the Authority.

Claims for transparency, predictability and equivalency are difficult to combine with ideals of collegiality, as they might undermine the authority of the peers. The claims for transparency, predictability and equivalence, and the focus on quality defined as "good student performances"—that is, the extent to which students achieved the intended learning outcomes within the national qualifications framework—coincided with the introduction of performance-based financing in 2011. Introducing economic benefits as a result of the assessments carried out by the Authority radically changed the conditions for quality assurance. Demands for transparency and legal certainty overruled ideals of creativity and independence in the search for new knowledge. Peers, instead of being consultative discussion partners, became judges, delivering verdicts and exercising control.

As shown above, we have found that the national system for quality assurance was surprisingly consistent, undergoing remarkably few changes from 1995 up until 2011. We argue that the new quality assurance system launched in 2011 amounted to a turning point, during which the method of peer review changed in accordance with result-based management and the emergence of the audit society (Power, 1997). However, although we have identified 2011 as a turning point, the changes that took place then were part of trends that started almost 20 years earlier, in 1993, when the first national system for quality assurance of higher education in Sweden was launched. Already in the early 1990s, as a result of the reforms in higher education that took place at that time, the state had identified a need for a stronger and more systematic way of exercising control. In 2011, this element of control, already introduced, was enhanced due to a change of government, opening up new possibilities of governance along the lines of new public management and introducing monetary rewards into the national quality assurance system, thereby strengthening demands for predictability, transparency and legal security and thus changing the preconditions for peer review.

The process of change in 1993 remains to be studied in more detail; in this chapter, we have only been able to highlight a few examples to make our point. The system launched in 2011 came to an end in 2017, when yet another system of quality assurance was put in place. Result-based management and new public management have been vividly criticised in recent years and seem to be gradually giving way to trust-based public management, which has already affected the peer review processes performed by the Authority. In what way, and with what result, is for the future to tell.