This book is the result of the comprehensive research project Governing by evaluation in higher education in Sweden, 2013–2018. As we reasoned in the chapter “Governing by Evaluation: Setting the Scene”, some of the findings were made possible by the particular design of the project and its sub-studies. We therefore describe the general theoretical approach, the methodology and design, and the different sub-studies that formed the basis for the material we generated. We also touch upon the problem of access that we encountered in our efforts to come as close as possible to the actual evaluation and quality assurance processes we were interested in studying, as a way to better understand contemporary education governing in higher education.

Project Approach

The project started out in line with the explanation-oriented (or theory-directed) evaluation approach (Franke-Wikberg and Lundgren 1980). This approach was further developed by Segerholm (2003) to enable critical studies of evaluations or of evaluation and quality assurance (EQA) systems, and that latter approach was used to structure the project and its sub-studies. With this approach, several factors are related to each other in order to better understand and explain the consequences of EQA systems: (a) the context, intentions, and aims of EQA systems; (b) how they are carried out, that is, the processes; and (c) their outcomes, that is, results as expressed in reports and decisions and as experienced by representatives of the evaluators and the evaluated. Issues of evaluation influence, both during EQA processes and through their outcomes, also need to be included in such critical studies; examples include the use of reports and decisions and the enactment of decisions by different actors (Segerholm 2003).

The approach also requires the development of a theoretical framework to help understand both processes and outcomes. This is typically done in an evolving process in relation to the empirical material, as was also the case in our project. In this book, such a stance is notable in our exploration of the relation between governing, evaluation, and knowledge, that is, the nexus, which is elaborated in the chapters “Governing by Evaluation: Setting the Scene” and “Evaluation Machinery, Qualocrats, and the Seemingly Inevitable Problem of Expansion”. The various theoretical resources used in the different chapters of the book are also evidence of our theoretically eclectic approach. Even so, the overall conceptual frame concerning the nexus is more or less visible in the chapters.

Methodology, Design, and Data

Methodologically, Stake’s case study approach (Stake 1995, 2006) was used since its logic fits well with the aims and theoretical approach of Franke-Wikberg and Lundgren (1980) and Segerholm (2003). It allowed us to view Sweden as a case, with emphasis on the latest national EQA system (the 2016 system). The case methodology also permitted us to carry out sub-studies and to use several different methods for data collection and production, analysis, and reporting. A strategy of progressive focusing was helpful (Parlett and Hamilton 1972, p. 18; Stake 1995, p. 9), ensuring adaptation of the line of inquiry to the shifting policy contexts, such as an unexpected period in which no political decision had yet been taken on a coming national EQA system, the rare situation we call a ‘reform interval’ (see the chapter “Enacting a National Reform Interval in Times of Uncertainty: Evaluation Gluttony Among the Willing”). Progressive focusing implies the flexibility to pursue issues that are not identified in advance. The design was intended to cover points (a)–(c) of the project approach described above, in particular concerning the 2016 national EQA system. The following sub-studies were carried out to achieve this:

  • Sub-study 1: Mapping the European policy context of evaluation in higher education and its intersection with the Swedish policy context

    Research questions: What is the European policy on evaluation and quality assurance in higher education? How does it enter Swedish policy-making spaces? To what extent and through what channels do European ideas on EQA in higher education shape or influence Swedish practices? The study included:

    (a) A review of a selection of policy literature by the European Association for Quality Assurance in Higher Education (ENQA), official texts, web-based information, brochures, and letters. Also included was an analysis of ENQA policy on EQA in higher education within Europe and between Europe and the Swedish national context, connecting it to literature on the policy activities through which policy is disseminated, brokered, translated, and interpreted to fit national and local contexts.

    (b) A study of policy channels and influences at the intersection of European and national policy brokering and mediation, based on interviews with ten national ‘policy brokers’ responsible for the policy contacts between Sweden and Europe (particularly ENQA).

  • Sub-study 2: Mapping the Swedish policy context, intentions, and aims of the 2016 national EQA system, as well as key actors’ notions of quality

    Research questions: What are the characteristics of the 2016 EQA system’s context, and to what extent does it diverge or converge with European ideas? What were the intentions and aims of the system? What different notions of quality in higher education are visible in the historical and present policy contexts? What forms of evidence do these notions of quality require? The study included:

    (a) A review of the policy context and the national EQA systems in higher education in Sweden from the significant reforms of 1993 up to 2018, exploring the main changes and the relation between European and Swedish ideas on the evaluation of higher education.

    (b) A review of policy literature on the particular processes leading to the highly criticised 2011–2014 national EQA system and to the present 2016 system, including the intentions, aims, and key actors, that is, the politics of EQA. Interviews were conducted with key policy actors (so-called policy brokers) on the development of the quality evaluation system, including one representative of the Association for Swedish Higher Education Institutions’ (ASHEI) task force for quality issues, five officers at the two national agencies responsible for EQA in higher education during the period under study, one representative from the Ministry of Education, and one student union representative, mapping the different standpoints and notions of quality in higher education.

    (c) An interview study with 33 of the 35 vice-chancellors at the higher education institutions (HEIs) listed by the national agency in 2014, who as central policy actors in ASHEI were asked about their notions of quality in higher education.

  • Sub-study 3: Mapping a reform interval, that is, higher education institutions’ and the national agency’s preparations for a new national EQA system

    As policy developments unfolded, the project turned out to offer an unexpected opportunity to follow the process of a new national EQA system in the making. We had initially applied for funds to study the 2011–2014 system, but it had been terminated by the time we received the research grant. At this point, we decided to redirect one sub-study to follow the development of the 2016 national EQA system from an HEI perspective, in parallel with the perspective of the national agency (the Swedish Higher Education Authority, SHEA).

    Research questions: How do HEIs prepare for a new national EQA system that is not yet decided? How does the responsible national agency prepare and navigate during such a reform interval? What is the governing potential in such a reform interval?

    (a) Four HEIs of various sizes, locations, histories, and previous evaluation judgements were studied through ten interviews with central quality assurance managers, faculty representatives, and teachers at the HEIs. The interviews were somewhat unevenly distributed across the HEIs, depending on their size. Documents from the HEIs, such as internal policy documents, vice-chancellors’ blogs, and earlier evaluation reports, were collected and analysed.

    (b) Interviews with four officers at the national agency, the SHEA, covering questions about how they worked with, and prepared others to work with, the 2016 national EQA system.

  • Sub-study 4: Mapping the quality evaluation regime and evaluator practice, i.e. the pilot of so-called institutional reviews of the HEIs’ internal quality assurance (IQA) systems

    Research questions: What are the characteristics of the designs of the national EQA system and the institutional review pilot, and of the evaluators (SHEA officers and external assessment panels), and what are the evaluators’ notions of quality in higher education? How is the pilot negotiated with the HEIs? How is the pilot carried out, what knowledge constitutes evidence, and what is the basis for judgements and decisions? How is the pilot used for policy learning?

    (a) A study of the design of the present 2016 national EQA system, based on official texts and internal materials from the SHEA and on interviews with eight SHEA staff members.

    (b) A study of the background, training, experience, and notions of quality in higher education of the staff responsible for the EQA system at the SHEA and of the external experts/academic professionals (peers). This included staff members’ networks, their claims to expertise (the basis for their judgements), the selection of the external assessor panels, and their modes of operation (how the evaluations are planned and carried out, what is examined, against what criteria, for how long, and with what evidence). Here, we had planned to collect information through interviews with assessors and observations of their work processes, such as site visits, meetings, and introductory meetings with HEI actors in the so-called institutional review pilot. However, observations were denied (see further below), so we relied on interviews conducted on two to three occasions throughout the institutional review pilot with the SHEA project leaders for two of the HEIs in the pilot (the Eagle and the Falcon), and on interviews conducted on three occasions with the chairs of the external assessor panels for the same two HEIs. Additional interviews were also conducted with one other member of the external assessor panel. A set of reference interviews was also conducted with the SHEA project leader, the chair of the assessor panel, and another panel member for a third HEI (the Hawke). An interview with a staff person at the SHEA responsible for the training of the assessors supplemented this study. In order to gain a holistic understanding of the institutional reviews of the Eagle and the Falcon, sub-study 4 was carried out in parallel with sub-study 5 and targeted the same HEIs.

  • Sub-study 5: Mapping higher education practice, that is, how HEIs deal with the institutional review pilot of their internal quality assurance systems

    Research questions: How does the 2016 national EQA system (the pilot) enter particular HEIs? How do key HEI actors experience and react to the pilot and the 2016 system? How are they involved in evaluation events? To what extent do these processes shape their work and their views on higher education and quality in higher education?

    (a) At the HEI level: We conducted case studies of the reception and handling of the pilot at the Eagle and the Falcon, that is, the ‘enactment’ of the pilot as part of the national EQA system, its policy and practice, and its effects and consequences at the HEI level. This was accomplished through interviews with key actors responsible for quality issues at the selected HEIs and with other actors taking part as representatives of different groups at the HEIs, such as vice-chancellors, programme directors, teachers, and students. At the Falcon we interviewed eight persons and at the Eagle ten persons, some of them on more than one occasion. Local policy documents concerning the delegation of power, management structures, plans and descriptions of internal quality assurance systems, and schedules for and the distribution of internal quality assurance-related HEI activities in relation to the pilot were collected and incorporated in the study. As in sub-study 4, we had planned to follow these processes through observations of the site visits that were part of the review/evaluation process, but access was denied by the SHEA (see further below).

    (b) At the teaching staff level: Four telephone interviews were conducted with randomly selected teachers at each HEI (the Eagle and the Falcon, eight in total), asking about their knowledge of the ongoing pilot. We inquired whether and how teaching staff at the assessed HEIs had noticed or experienced that their HEI was included in the institutional review pilot.

  • Sub-study 6: Review and synthesis

    Many of the results from our project are reported in this book. Several interviews formed the basis for more than one sub-study since the same informants were central to a number of the issues and processes we studied. In synthesising and writing the book, we strove to integrate different types of information into a chronology, wrestling with the governing-evaluation-knowledge nexus and doing justice to what we learned throughout the project. There are still avenues and materials that are not fully explored, but we believe that this project, with its comprehensive scope concerning one case (the Swedish) and its special design, sheds light on and may be used as a comparative basis for studies of other national, state, or regional EQA systems and their part in governing higher education.

Summary of interviews (see note below)

  Vice-chancellors: 33
  Policy brokers: 10
  SHEA staff: 5, plus 5 SHEA project leaders for the Falcon, Eagle, and Hawke reviews in the institutional review pilot
  HEIs:
    Hercules: 1
    Orion: 4
    Pegasus: 2
    Virgo: 3
    Falcon: 14
    Eagle: 14
  External assessors: 10, including assessors from the Falcon, Eagle, and Hawke reviews in the institutional review pilot

Note: The number of interviews listed here and the number of interviewees reported in the sub-studies are not the same. The reason is that some of the interviews were used in more than one sub-study (e.g. those with the SHEA project leaders and vice-chancellors) and some interviewees were interviewed more than once.

Most of our documentary material and interviews are in Swedish, meaning that all citations from texts and quotations from interviews are translated into English by us. We have noted in several chapters the importance of translation and are of course aware of the difficulties in capturing and transmitting meaning in these interpretation and translation processes.

The Problem of Access

As indicated in the description above, we had planned to observe assessment and judgement practice on the one hand, and enactment processes at the HEIs on the other, within the same review/evaluation processes. In our contact with the evaluation department at the SHEA, we requested access to observe the site visits that the external assessment panels and the SHEA project leaders carried out at the HEIs in the institutional review pilot. In our request, we underscored that we would not select review processes in which the HEIs where we ourselves worked were assessed. We also asked for access to the electronic systems for the external panels’ internal communications as well as to the HEIs’ uploaded material (self-evaluation reports, etc.). Both our requests, for access to the review processes and to the materials, were however denied. The reason given was:

When exercise of public authority is at hand, serious demands are made on administration in line with central requirements for the rule of law. It is important that the higher education institutions trust that the reviews are not influenced by irrelevant concerns. To allow researchers employed at some of the higher education institutions reviewed by the SHEA to follow the review process may risk that the administration of the review is questioned. Yet another circumstance to take into account is that the working material that is developed during the review process would be public documents when you get access to it, since these documents would be considered public when they come in to other authorities, Mid Sweden University and Umeå University. This would jeopardise or risk inhibiting the SHEA’s review work. (SHEA staff, email 18 March 2016)

In the same communication, we were offered the opportunity to interview whichever SHEA staff we found it necessary to get information from. We have indeed experienced good access to SHEA staff, and they have been willing to be interviewed and to share their experiences and perspectives. This has of course been very valuable to us. Still, we discussed this reply from the SHEA and contacted a university lawyer in order to find out the legal grounds for the SHEA’s position and answer. The information from the lawyer was that the SHEA could not prevent us from participating in the HEI site visits if the HEIs allowed us to be there. However, we decided not to proceed, as explicit consent from the SHEA had not been obtained. In December 2017, when the whole process of the institutional review pilot was finished and the decisions had been formally issued by the SHEA, we again approached the agency and asked for permission to access the electronic platform used by the external assessment panels, this time in retrospect after the formal decisions, but were again denied access.

From our previous experience with empirical research on school inspection, carried out from 2010 to 2013, we knew that observations of on-site process activities add important information. In this light, and having been permitted to access such activities in previous projects, we regret that we were not allowed to observe actions, interactions, gestures, facial expressions, and verbal communication this time. We wanted to accompany assessors on their HEI site visits and to observe internal SHEA meetings and the deliberations in the external assessors’ meetings. This would have added valuable dimensions to our data and analysis. As Travers put it, we wanted:

to observe what actually happens when inspectors deliberate about the performance of an institution, or professionals review issues about quality, or managers discuss concerns about the performance of professionals. It is by spending time in what the American sociologist Erving Goffman (1959) called these ‘backstage’ settings that one can gain most insight into what people understand by the term ‘quality’. (Travers 2007, p. 2)

We also think that it is important in a democratic society to be able to study the practices of national authorities, in order to gain insight into how policy and governing come about in education (and other policy areas). We still hope this will be possible in the future.