Introduction

In this chapter, we turn to the implementation of the intentions of the 2016 national evaluation and quality assurance system that was analysed in the previous chapter. More specifically, this chapter focuses on a particular part of the national evaluation and quality assurance (EQA) system, namely, the Swedish Higher Education Authority’s (SHEA) institutional review of higher education institutions’ (HEIs) internal quality assurance systems and processes (IQA). This form of external national evaluation was piloted on a small scale during 2017, with a handful of HEIs, to test and develop the institutional review before its full-scale implementation. We examined this pilot at two of these HEIs (the Falcon and the Eagle) as the institutional review unfolded.

This chapter aims to explore and discuss enactments in the 2016 EQA pilot process of so-called institutional reviews in which HEIs’ IQA systems were evaluated. We do this by analysing different actors’ work and experiences. Returning to our theoretical understanding of governing as a verb, and thereby “doings”, and our interest in exploring the relationships between governing, evaluation, and knowledge, these questions serve as guides:

  • What enactments did the pilot entail, and what actors were involved?

  • What kind of knowledge was mobilised and used in these enactments?

  • How can we understand the pilot as governing in relation to the higher education institutions taking part in the pilot and in relation to the higher education sector as a whole?

We conducted about 30 interviews with actors working at the SHEA, in external assessor panels, and at different levels at the assessed HEIs. We conducted initial and follow-up interviews with these actors to capture experiences and opinions as close to the unfolding of events in the pilot as possible. We asked questions about thoughts and experiences from the different stages in these processes, as well as more general questions about perceptions of quality and EQA, to contextualise and situate the responses. We also collected and analysed a range of documentary materials, such as schedules and plans for the pilot study, assessment panels’ reports, and the pilot decisions by the SHEA. Another crucial artefact in the pilot was the self-evaluations from the HEIs. A self-evaluation is an instrument that literally materialises core aspects of governing through knowledge. The act of writing involves a transformation of knowledge from the diffused and abstract forms that it may take in actions, practices, and people’s minds into (supposedly) more concrete textual forms. Like all the other methods in the pilot portfolio, self-evaluation is not a direct evaluation of first-order activities; it is a mode of “control of control” (Power 1994, p. 15) that renders quality visible to external observers through inscription. However, the production of self-evaluations does not only make quality observable, it may also consolidate processes of meaning making; self-evaluations may “construct and define quality itself” (Power 1994, p. 293).

In particular, we studied the review of two HEIs: the Falcon and the Eagle. These HEIs are neither full universities nor extremely small and/or specialised institutions, and this was our main reason for selecting them for a more in-depth study of the work, experiences, and governing in the SHEA pilot.

The rest of the chapter has the following structure: next, a description of the institutional review process offers a brief chronological overview of the pilot. The stages of this process, starting with preparatory work and ending in post-decision actions and reactions, are then dealt with in more detail and with a particular focus on “governing work” (cf. Clarke 2015). We do this by analysing different actors’ activities, including what they were doing and what they experienced, throughout the different stages. After this, the chapter situates the pilot in relation to our analytical frame by revisiting the three initial guiding questions.

The Sequencing of the Review

Looking at the SHEA’s work, extensive preparation work for the pilot was carried out by the agency – not least by designing the process for the pilot (see the chapter “Relaunching National Evaluation and Quality Assurance: Expectations and Preparations”), developing guidelines, templates, and materials for external assessor panels’ training, and developing electronic software for sharing and storing information. Meetings with HEIs and others to disseminate information about the new EQA system and the design of the activity studied here – that is, the institutional reviews of HEIs’ IQA systems – were also carried out. The HEIs that took part in the pilot volunteered to do so, which is important to recognise as one premise for the entire process. Another premise for the participating HEIs was set by the SHEA: HEIs that passed the pilot review and were approved did not need to be reassessed in the full-scale, mandatory cycle, while those that were not approved had to be reviewed again at a later date as part of the mandatory cycle.

The Sequencing of the Pilot

The pilot started with an initial information meeting organised by the SHEA that targeted representatives from the handful of HEIs included in the pilot and the chairpersons of the external assessor panels (one panel for each HEI). The SHEA staff – that is, the project leaders for the individual HEIs taking part in the pilot – were also at this meeting. The next step in the review process was the production of the HEIs’ self-evaluation reports to be submitted to the SHEA approximately 2.5 months after the initial start-up meeting. About a month after receiving the self-evaluation reports, the external assessment panels, assisted by the SHEA project leaders, conducted web-based interviews with the HEIs. About 1.5 months after the web-based interviews, site visits took place at the HEIs. Members of the external assessment panels were accompanied and assisted by the SHEA project leaders during this process. Over a period of 2 months, the panels worked on finalising their reports. The finalisation of the reports also included a meeting during which all assessors met with the SHEA project leaders to discuss the different assessments, with the aim of ensuring comparability between the panels in the review. Ten months after the initial information meeting, preliminary reports from the external assessor panels were sent to the HEIs for comments (called “sharing”), meaning that the HEIs were provided an opportunity to correct factual errors and comment on the reports. The reports were then finalised, and the SHEA published its decisions, which included the final reports by the external assessment panels.

Preparing to Assess and Preparing to Be Assessed

In the following sections, we examine the different stages in the process outlined above more closely, highlighting how central groups of actors in this pilot study (HEI staff, assessors, and SHEA staff) enacted the institutional review, beginning with their preparatory work.

HEIs

As noted above, participation in the pilot was voluntary. The decision to take part, however, was based on different justifications in the two HEIs. Falcon management, who had an IQA system in operation, said that “we were ready [mature] for audit” (Falcon Vice Chancellor). Key actors within the management had been involved in national policy discussions, knowledge exchanges, and preparations of the new national system, such as in a reference group within the SHEA and the quality group of the Association of Swedish Higher Education Institutions (ASHEI, in Swedish SUHF), which organises vice chancellors and top-level HEI management. The experience that managers at the Falcon had gained from participating in such higher education networks provided insights into the new national system in general and into the institutional pilot review in particular. As a result, the invitation to take part in the pilot was enthusiastically accepted at the Falcon based on the conviction that this pilot could stimulate the improvement of an already existing IQA.

The Eagle joined the pilot on rather different grounds. Participation was justified on the basis that it could offer external help in setting up and developing their IQA: “we may not be ready, but we will learn a lot on the way” (Eagle Vice Chancellor). Efforts at the Eagle to set up the new IQA ran in parallel with no fewer than three other external evaluations in 2017. This engagement in evaluative practices is emblematic of the zeitgeist and something that the forthcoming account of the pilot will provide many examples of.

After being accepted for inclusion, the two HEIs immediately initiated extensive efforts to organise the internal work in terms of roles and tasks. First, the review process had to be thoroughly scheduled and coordinated in alignment with the overall SHEA schedule. Each HEI carried out inventories to find out what the organisation knew about itself. For example, they had to identify who in the organisation carried specific knowledge about particular organisational areas and processes and what internal documents could make that knowledge visible. They had to identify key actors within the organisation who could provide such knowledge as a part of the self-evaluations and/or answer questions about the IQA in the upcoming interviews. Soon after, the HEIs also attended an information meeting with the chair of the assessment panel; the meeting was organised by the SHEA. Representatives at the Eagle described the contact with the chairperson as soothing and comforting, since the approach – that is, to assess how the HEI makes sense of the way in which its IQA covers its needs – was perceived as fair.

The work on the self-evaluations gave rise to particular challenges. Although the Falcon and the Eagle QA management both knew their weaknesses fairly well before the review, the interpretation of the SHEA’s intentions demanded collective efforts. The HEIs’ self-evaluations had to be organised according to a SHEA template. The basic challenge was to meet the SHEA’s expectations, which were perceived as rather vaguely formulated in the guidelines. The self-evaluations were crafted in a successive and collective process of interpretation and translation. The Falcon’s internal time schedule for the self-evaluation, visualised in a detailed spreadsheet, displays how the work with the self-evaluation was initiated in December 2016 and how it was followed by 27 activities, including identifying coordinators and reference groups, interpreting guidelines, holding numerous meetings, collecting data, holding discussions, undergoing multiple peer review processes, editorial work, proofreading, formal finalisation, submission, and a concluding self-evaluation of the work itself.

Writing a self-evaluation is not only a laborious matter of describing your own strengths and weaknesses; it is a puzzle to find out what the assessor may “want” and is thereby a source of queries among those involved: “What do you think they [the SHEA] want us to write here?” (Eagle Quality Management Staff 1). As noted by Falcon management, responding to items in a template to be submitted for external review is not an easy process of representation. On the contrary, it fuels insecurity. The instances in which the HEIs could nevertheless “misunderstand” a given explanation of a ground for judgement by the SHEA were numerous:

… the range of possible interpretations are endless (…): “well, it could be like this”, “well, it could just as well be like this”, “yes, but do they mean process or?”. So, we were occupied up until the last day with trying to define [it]. And in the end, we just had to make up our minds and say, “no, this is what they mean”. (Falcon Quality Management Staff 1)

The HEIs also discovered that they had to design images that would visually display their IQA systems. Producing these images was in itself a collaborative achievement. As it turned out, the self-evaluations contained a number of detailed, complex, and multicoloured organisational models that served to illustrate aims, flows, visions, schemes, activities, differentiation, hierarchies, processes, and cycles. Overall, the basic ideas communicated in the self-evaluations drew on ideas from Total Quality Management (TQM), characterised by an emphasis on management responsibility for quality improvement, systematic and continuous analysis, and the improvement of work processes conducted throughout the organisation by means of involvement and empowerment. Explicit reference to the so-called Deming Cycle (i.e. the wheel of continuous improvement stemming back to the evolution of engineering and industrial production in the 1930s) displays how such modernist schemes and modes of thinking are repeatedly repackaged and moulded into HEI governance regimes (cf. Stensaker 2007).

The self-evaluations also had to show that the HEIs worked with all four aspects (governance and organisation; environment, resources, and area; design, teaching/learning, and outcomes; and follow-up, actions, and feedback) and three perspectives (working life, student influence, and gender equity; see the chapter “Relaunching National Evaluation and Quality Assurance: Expectations and Preparations”). This was to be corroborated by documents describing how this was achieved. Hence, according to the HEI informants, a lot of time was spent revising documents and uploading them on the SHEA’s web-based administrative system (UKÄ Direkt) because it was “important to clarify internal processes so they could be evaluated” (Eagle Teaching Staff 1). To illustrate the scope of this work in preparing for the review, the Falcon submitted no less than 77 supplementary documents to “exemplify”, “demonstrate”, and “emphasise” different dimensions of their IQA (Falcon, self-evaluation document). Examples of such documents include the following:

  • Internal rules

  • Handbooks

  • Self-evaluations

  • Audits and reports

  • Development plans

  • Annual reports

  • Expert statements

  • Visions and annual reports

  • Matrixes and syllabi

  • Study guides

  • Examinations

  • Course evaluations

  • Seminar instructions

  • Schedules

  • Benchmarking reports

  • Guides

  • Routines

  • Information material

  • Application forms

This body of documentation, and the institutional logic that their accumulation and employment constitute within the pilot, corresponds to Michael Power’s (2013) conceptualisation of audit trails. These “involve the routine production of artefacts which document work routines, but which are also the micro-manifestation of larger performance regimes shaped by institutional demands for accountability” (Power 2013, p. 1). According to Power (2013), the mediating function of this form of documentation “means that audit trails are definable as: evidential pathways which connect traces of micro-routines to performance reporting regimes and institutional environments” (ibid.).

Thus, this early phase involved collective interpretation and translation, as well as tedious, strenuous, and time-consuming work. As noted by one key actor at the Falcon, pressure to produce the self-evaluation led to extraordinary working conditions:

We were up [late], working after midnight, the last several weeks. The last time I did such a thing was when I was a doctoral student, a very long time ago. I mean, a group of people drinking coffee after midnight…. (Falcon Quality Management Staff 2)

This picture of a tight gathering of HEI actors taking a short break from work in the middle of the cold, dark Swedish winter night will be kept in mind when the reactions to the formal SHEA decision are described later in this chapter. The midnight coffee break serves as an image of the substantial personal and collective investment that a review process of this kind can entail. Despite such demanding conditions, the actors did not emphasise the heavy workload or the time and resources required as particularly problematic – rather, it was the tight deadline for the self-evaluation set by the SHEA that produced frustration and discontent at both HEIs. As one of our HEI informants said, there was not enough time between the initial meeting on 1 December 2016 and the deadline for submitting the self-evaluation on 24 February 2017 to sufficiently anchor the self-evaluation or to work and reflect in decentralised groups within the organisation:

Within this short time span, we really had to compromise. We would have liked to do much more thorough, solid, and inclusive organisational processes. We did the best we could; we were out [in the organisation] and talked and collected documents and did such things (…) but it could have been done more extensively. (Falcon Vice Chancellor)

Two observations deserve to be highlighted in this context. The first is that the Falcon found the focus of the review model to be too rigid. It is based on summative ideas on evaluation rather than formative ones (Scriven 1967). As a result, there is not enough space within the boundaries of the model to qualitatively produce, develop, and use local organisational knowledge as a means for improvement. The second is that it is noteworthy – once again – how the enterprise of quality management and IQA work appears to trigger instincts and an eagerness to do more and more and to add and incorporate evaluative activities into an expanding IQA. The SHEA is not the one pushing HEIs into further expansion and immersion; it is particular categories of staff within HEI management, what Jacobsson and Nordström (2010, p. 178) would call “a community of the willing”.

The SHEA

Let us initially describe the actors at the SHEA in the context of this particular pilot. One SHEA member of staff was assigned to each institutional review and named project leader. In addition, other SHEA employees were involved in designing and preparing the new national framework in general and the institutional review in the pilot in particular. These actors were also activated during different stages of the review. The SHEA actors’ backgrounds show many similar features and experiences. They had often worked at the agency for many years and held an academic degree, most commonly a Ph.D. Many of them also had previous experience of academic work and leadership. Extensive experience of EQA was also common. The SHEA project leaders continuously and closely followed “their” respective reviews by informing, organising, and interacting with review participants. In this way, the project leaders, to some extent, came to represent the agency in the eyes of the HEIs and the assessors.

Within the SHEA, this first phase of the pilot consisted of two basic tasks: implementing the review model and organising the actual pilot process. A handful of HEIs initially volunteered to be reviewed in the pilot. The SHEA strove for HEI variation with regard to size, scope, and geographical location. Four HEIs were considered to be enough to achieve this. The SHEA project leaders organised external assessment panels using strategic invitations to higher education actors and based on the names nominated by the HEIs. In addition to a chairperson, the assessment panel also included a non-Swedish assessor, an actor from the HEI sector, a working life representative, and a student. According to the SHEA informants, knowledge and experience from earlier work at the agency, HEIs, or similar settings were considered to be particularly valuable for assessment panel members – similar to the experiences and knowledge the SHEA project leaders possessed. In addition, leadership skills and abilities to organise processes and instruct participants in the review were considered to be important.

Moreover, the SHEA project leaders administered the previously mentioned web-based platform where review material could be shared among parties, as well as a section on the web-based platform where the assessors in the external panels could communicate and share their work. The project leaders were also responsible for ensuring uniform and standardised processes across the different panels. This task also entailed making sure that the assessors understood the method that was to be employed in the review and that the review was consistent with principles of legal certainty. Thus, on the one hand, emphasis during the panels’ work was on a kind of sector-specific and contextual knowledge, such as from previous work in HEIs. On the other hand, such forms of embodied knowledge seemed to exist in parallel with ideals about objectivity based on formal principles of justice, where all cases must be treated equally (cf. Molander 2016, p. 32). We will return to this inconsistency when discussing the epistemic dimension of the pilot later in the chapter.

Extensive preparations were conducted to design templates that were meant to serve as guides in the review. In addition, the SHEA produced training and guidance material for assessors, which they used during a full-day training session organised for all assessors. The training of assessors aimed foremost at standardising grounds for judgements and assessments across the panels and members to make their future judgements trustworthy and legitimate. In our interviews, the SHEA staff repeatedly underscored the importance of dialogue (presumably a strategy to avoid the emotionally loaded policy processes preceding the existing national system described in earlier chapters in this volume). This dialogue orientation was manifested in the form of continuous meetings with assessors and with the HEI representatives and in the organised training of assessors.

Assessors

The SHEA requirements and strategic recruitment of external assessors resulted in panels with members who had the desired backgrounds. Most of them had substantial experience of evaluations in higher education, both from being evaluated in their department, discipline, educational programme, or institution and from serving as assessors in earlier agency evaluations. Some of them took part in the reference groups that the SHEA organised to develop the new 2016 EQA system. As a result, the assessors possessed knowledge about the system and its intentions, which they brought to the panel and the process. They also had extensive experience of academic work and management at different levels and were involved in QA at their department/institution. When assessors described what they needed to know to take on their task, they emphasised experience of top management work at HEIs, previous experience from SHEA evaluations, and knowledge about QA. As noted by the Falcon chairperson, their panel also benefited from the fact that the working life representative had specific and vast experience and knowledge about quality technology; it was a person “who knew the essence of the craft” (Falcon Assessment Panel Chair 1). The informants argued that the panels also benefitted from student and employer perspectives, which the representatives from these groups brought to the panels.

The chairpersons (not the full panel) took part in an initial meeting at the SHEA and with HEI actors in December 2016. The panels then received documents and guidelines from the SHEA, which they reviewed. A second meeting at the SHEA (with all panel members) aimed at presenting information to and training the assessors by outlining the preconditions of the pilot, the process, the key indicators, and the components. During these initial meetings, a set of conceptual and practical questions was raised by the assessors. As we will see later, these questions – which were advanced at the beginning of the process – were to become imperative during the whole pilot. The questions concerned, for instance, how to weigh judgements, the role of the three perspectives (working life, student influence, and gender equity), details in the SHEA guidelines, and central concepts, such as the meaning of the word “to assure” [att säkerställa, in Swedish]. Hence, the assessors had an opportunity to meet each other, as well as the SHEA and the HEI actors.

At this stage, the assessors did not have a clear picture of the HEIs that they were reviewing or of the roles and tasks that they were expected to perform. During the first meeting, the HEIs presented their organisations and IQA systems. Since the Falcon and the Eagle were perceived as deviating from “traditional modes” of organising HEIs, this posed challenges for the assessors. Initial queries were also made about who at the respective HEIs would be suitable to take part in the first web-based interviews. The Falcon chairperson said that this panel used the first meeting to understand the particular organisational features of the HEI that they were reviewing: “We had to translate in order to understand. ‘What does this role mean?’ ‘Is this the same as head of department?’” (Falcon Assessment Panel Chair 1). This is another example of the particular translation problem that arises when the review implies a norm against which the object of evaluation is measured. These translations affect not only the actors under review (as exemplified in the problems described above in terms of writing a self-evaluation on the basis of a template) but also the assessors, who had to assimilate new knowledge about previously unknown organisational structures to conduct the review.

Collecting and Providing Data

The next stage of the pilot involved additional activities in which the assessors accessed data – uploaded by the HEIs – from the web-based platform. For the assessors, deliberations and struggles about what to assess and with what criteria became increasingly pertinent. The panels used the HEIs’ self-evaluations as a basis for forming an initial impression and then organised their work and prepared for the upcoming web-based interviews. Each panel member produced an individual assessment with suggestions for questions the HEI representatives would be asked, and these individual panel member texts were then edited into one document by the SHEA project leader. Each assessment panel then held a meeting to decide which questions to ask the HEI representatives in the web-based interviews, in which interview each set of questions should be raised, and to whom they should be directed. This also meant that some questions were laid aside for the upcoming site visit. The assessors voiced certain preferences about informants for the interviews, requesting informants who represented different subject areas and units and who worked on quality issues. At the Falcon, where the management of external evaluation was more developed, a “liaison central” was organised to prepare, organise, and follow up on different activities during the pilot. Among other things, this included the web interviews and organising post-interview deliberations.

Web-Based Interviews

The web interviews were organised as 1-day sessions with scheduled focus group interviews with approximately 20 HEI actors in total. At the Falcon, the web-based interviews were not perceived as satisfactory. The format appears to have produced distance and certain difficulties. First, some of the students and teachers who were interviewed found it a bit daunting to have a group of top management professionals and professors asking “evasive semi-bureaucratic questions” (Falcon Student 1). The interviewees at the Falcon also felt they were being “put to the test” in a situation characterised by obvious power asymmetries and in which assessors were perceived to have certain normative preconceptions. These emotional reactions are partly linked to one of the interviews, where the Falcon actors claimed that they were told to prepare for certain topics and questions, but the assessors then asked questions about other aspects. These actors felt that these questions were difficult to answer, as they did not target the actors’ roles in the organisation. In this situation, the earlier mentioned “liaison central” served an important function: to comfort interviewees in what can be described as a kind of “emotional debriefing”. The actors at the Eagle were, however, not as critical. Here, the assessment panel’s questions were perceived as being connected to the previously submitted documents, which fostered a situation where the interviewees were constituted as being capable and well-informed.

We would like to draw attention to two issues. First, interviews taking place in a context of accountability are framed by stressors and anxieties. Regardless of whether the assessors managed to produce a safe situation, the actors representing their HEI entered the interview with the expectation of being able to answer questions that do justice to their specific organisation. Still, such systems are sophisticated, and whereas some knowledge is easily recapitulated, other knowledge often remains unformulated and is thus difficult to communicate. The preparations of the HEIs were characterised by the actors’ eagerness to be honest and knowledgeable about the organisation, as well as its strengths and weaknesses. In this context, the HEIs’ preparations before interviews served to inform these actors about their own work and organisation so that they could pass this information on to the assessors. The second aspect that we would like to point out is the perceived problems in terms of fuzziness about what is assessed and how. For example, the HEI actors found it difficult to determine from the panel’s questions where the cut score was. For the assessors, the self-evaluation was a central document that served as a point of departure for what to focus on and what to ask about in the web-based interviews. During these interviews, the assessment panel members took turns interviewing and taking notes, as well as deciding if the information provided was satisfactory. The Eagle assessment panel, specifically, found the respective purposes of the web interview and the site visit somewhat unclear and struggled to separate the issues for the web interview from those for the later on-site visit.

Site Visits

Generally, our informants describe on-site meetings as far more positive and productive than web interviews. These meetings were seen as a central activity for developing mutual understandings and exchanges for all involved parties. Thus, face-to-face meetings seem to bring dimensions of embodiment into the interview that encourage discussions that move the review process forward. The HEI actors described the encounters as nice conversations and the panel as interested and well prepared. However, on-site visits also implied additional work because they were not limited to the actual meetings between assessors and HEI actors; they required planning, organisation, and preparations, and they led to forms of post-production. Overall, the exercises and the work undertaken before and after the site visit point to the fact that, even though the pilot can be divided into different phases and activities for analytical purposes, all of these activities were closely interlinked as a continuous process.

The SHEA project leaders continued to have a less visible but important role. They organised the site visit schedules, supported the panel chairperson, and kept time and discussions on track. The site visits were organised as 2 full-day sessions consisting mainly of 45-min focus group interviews with approximately 40–50 HEI actors in total, including management, teachers, students, doctoral students, and other staff and working life representatives. Each HEI actor involved in the site visit received a memorandum from the SHEA project leader outlining the visit’s schedule. The memorandum also described the pilot’s rationale, the assessment panel’s role, and the time schedule. It declared, among other things, that each HEI actor was expected to bring a nameplate because there would be no time for presenting individuals or the organisations and units they represented. There were 15-min breaks scheduled between each session, allowing the assessors time to prepare and the HEI actors a chance to switch places.

The site visits were prepared on the basis of the self-evaluations and the SHEA guidelines, and they were specifically directed to the “audit trails” (SHEA 2016, p. 12) of areas identified by the assessors. To qualify their understanding, the assessors prepared the interviews and requested additional materials from HEIs. The QA management at the Eagle experienced a new workload peak as documents had to be uploaded to the platform all over again, this time divided differently from the uploads undertaken in connection with the self-evaluation submission. They also had to produce new documents that retrospectively described decisions already made. In all, “it was like putting together half a new self-evaluation” (Eagle Quality Management Staff 1).

In both HEIs, the site visit was prepared by specific pre-meetings for those selected for an interview with the assessors. Post-meetings were also held during site visits to transfer experiences from the initial interviewees to colleagues in line for upcoming interviews. As was the case when preparing for the web interviews, these measures also increased people’s awareness about their own IQA to facilitate “thick descriptions” during interviews. This proved to be a successful strategy because one purpose of the site visits appeared to be checking whether or not the HEI actors were grounded in the presentations given in the self-evaluations. In other words, site visits secured evidence that dialogue and enhancement had spread organisational knowledge within the organisation. One SHEA project leader explained:

One important matter is that they have an IQA [system] that is inscribed in some kind of policy document. Then, we come and visit them with our basis for judgements and aspects, and we look at their policy documents and their systems [IQA systems], and we do our interviews with a whole range of people from different programmes and departments, levels, management from the top to the bottom, and students and doctoral students. Then we (…) examine whether it corresponds to what they have written in their own policy documents. Here, it is especially important that they have a dialogue. Most of the people we talked to knew about the system; in this dialogue-based system where they constantly talk to each other about what they do, they have a process and discuss quality issues. They think about how they can improve. It was not perfect, and there was someone who did not know [about the IQA], but if you think about all the people we talked to during the site visit… But the assessment panel felt that “this seems to work, they appear to keep track, they know their system, they talk to each other, management is in control over departments and programmes”. Enough people were aware and knew the system to motivate a Pass grade. It could have been better, but it was sufficiently many. If I try to quantify (…) if we say that we spoke to 80 people, and 65 of them kept track of the system, this is just an approximation, since you are asking about a level, just to get a theoretical number. If it had been only 20 of the 80 people we talked to, well then, it might not have been possible to give them a Pass grade. (Falcon project leader SHEA)

Here we see that quality enhancement ideas are framed within a wider logic of accountability. One of the most important facets of quality enhancement, and what the institutional review seeks to measure, is that HEIs have an IQA system that is not just put on paper but is also actually a concern of the entire organisation. The exploration of the pilot’s next phase will show that formal assessments of such organisational processes are challenging.

Reaching, Communicating, and Receiving Judgements

Assessments involve complex processes of professional discretion and judgement. Instruments and frameworks in the form of rules and guidelines do not guarantee determinacy (Evans 2010). Thus, judgements in the pilot required individual and collective work and deliberation. In this process, the primary sources of judgement were, once again, the self-evaluations, their attachments and uploaded documents, and the SHEA guidelines. In addition, the site visits were also important sources of data to verify and clarify different areas.

The SHEA

The SHEA project leaders assembled each assessor’s individual judgements and texts into a single document to facilitate the panels’ communication. This was also done to prepare for a forthcoming meeting where review panels were invited by the SHEA to discuss assessments and judgements across the different panels. Furthermore, the SHEA project leaders supervised the panels in their judgement processes. For instance, project leaders asked assessors to specify what forms of evidence they could use to support their judgements. The project leaders emphasised that assessors’ overall judgements must be unanimous and comparable between assessment panels. They were also responsible for “calibrating” the individual decisions for each HEI in the pilot. Preliminary assessments were processed by the panels, officially endorsed by the agency, and later communicated to the HEIs. The HEIs, in turn, were allowed to comment on the content for accuracy. The report then went back to the panels, and they eventually submitted their final report to the SHEA. At this final stage of the review process, several internal QA activities took place within the agency itself and involved different actors, units, and levels of the agency. Finally, the director general sanctioned the decisions to be published.

Assessors

The two assessment panels faced different levels of intricacy. Early in the process, the Falcon panel started sensing that they were looking at a system that already met the criteria. As the IQA was perceived as functioning well overall, the actual work of making final judgements and writing the report was viewed as unproblematic. Still, some difficulties arose in deciding the “limits for pass or fail”. The Eagle panel had more difficulties. The Eagle IQA was quite new, and some parts were not operational and therefore difficult to assess. Within this panel, there were early discussions about the relative weighting between criteria and about difficulties with how to interpret and assess the three perspectives. For example, insecurity emerged about what it really means that the IQA system “should be proactive, it should be systematic, it should be integrated and so on” (Eagle Assessment Panel Chair) and how this criterion ought to be operationalised. The panel members also discovered that the criteria were not internally calibrated. The criteria for gender integration, for instance, were so ambitious that they were perceived as almost impossible to pass. The assessment work went on throughout the entire review process in the form of consecutive discussions within the group. During the process, the SHEA project leader “emphasises, or she is particularly driven by, for example, the student perspective and also pushed that quite hard” (Eagle Assessment Panel member). The Eagle panel member argued that this weakened the relative independence of the assessors’ work, thereby adding further entanglement. As the deadline approached, there were still diverging opinions in the external panel. At this stage, the SHEA project leader insisted on a unified and unanimous judgement from the panel. A consensus in the panel was considered necessary, and diverging opinions were to be transformed into a unified decision accepted by all panel members. After some deliberation, this was finally settled by formulating the overall judgement in a positive way while pointing to the areas that could not be judged as sufficient. The chair and project leader then worked with internal calibration and revision and carefully edited the final adjustments of critical formulations.

The method for summarising the judgements, as outlined in the SHEA guidelines, stated that all four aspects and subareas and all three perspectives must be approved for a final passing grade. If any of these failed to meet the required standards, the HEI had to be included in a new review in the full-scale implementation. This method of summarising judgements eventually made it clear that the Eagle did not pass, in contrast to what some panel members had suggested. Because the guidelines did not allow for weighting between the subsections, i.e. a weakness in one subsection could not be “compensated” by strong performances in another subsection, the IQA as a whole could not be approved and given a passing grade in this case.
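
To make the non-compensatory logic of this summarising method concrete, the sketch below expresses it as a simple decision rule in Python. It is our own minimal illustration rather than the SHEA’s actual procedure or tooling; the area names are taken from the aspects and perspectives listed earlier in this chapter, and the example data are hypothetical.

    # Minimal sketch of the non-compensatory summarising rule described above.
    # Every aspect and every perspective must individually be approved for the
    # overall judgement to be a pass; names and example data are hypothetical.

    ASPECTS = [
        "governance and organisation",
        "environment, resources, and area",
        "design, teaching/learning, and outcomes",
        "follow-up, actions, and feedback",
    ]
    PERSPECTIVES = ["working life", "student influence", "gender equality"]

    def overall_judgement(approvals):
        """Return 'Pass' only if every aspect and every perspective is approved.

        A weakness in one subsection cannot be offset by strength in another;
        a single unapproved area fails the IQA as a whole.
        """
        required = ASPECTS + PERSPECTIVES
        missing = [area for area in required if not approvals.get(area, False)]
        if not missing:
            return "Pass"
        return "Fail (not approved: " + ", ".join(missing) + ")"

    # Hypothetical example: strong on most areas, but one perspective falls
    # short, so the institution would be reviewed again in the regular cycle.
    example = {area: True for area in ASPECTS + PERSPECTIVES}
    example["student influence"] = False
    print(overall_judgement(example))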

The main goal in the assessment process, just like in the Falcon panel, was to check for a feedback loop in the QA work at all levels, i.e. to check that quality enhancement was sufficiently established at all levels and in all parts of the organisation in a spirit of engagement and responsibility. In one SHEA decision, for example, it is argued that:

The site visit and the web interviews unambiguously show that substantial engagement and broad responsibility are taken for QA in daily work among staff and among students. However, the assessors got the impression that participation could have been even more widespread, especially when it comes to the category staff and students. (…) During the interviews, it was shown that many teachers and students did not know about the novice student survey or the improvement measures it resulted in. This is an area that the HEI needs to work on.

Once again, this is a vision of a complicated system continuously engineered to secure improvement. Each specialised role or function within it has extensive knowledge about the overall system’s design, including its engineering as well as the effects of each component. Such ideas differ dramatically from the Hayekian ideals (Hayek 1945) presented in the chapter “Hayek and the Red Tape: The Politics of Evaluation and Quality Assurance Reform – From Shortcut Governing to Policy Rerouting” on how the governing of complex human systems ought to be decentralised and evolve on the basis of dispersed and tacit forms of knowledge.

HEIs

For the HEIs, the time between the site visits and the preliminary and final reports was perceived as long. When the preliminary judgements were communicated in the autumn of 2017, they evoked mixed emotions. The Falcon’s IQA was approved and received a passing grade. The management and the board were relieved by the outcome, which they argued gave the organisation the positive affirmation they had hoped for when they initially signed up for the pilot. At the same time, they were a bit confused and even upset because they perceived a lack of understanding of their IQA in the assessors’ interpretation of it. The key actors at the Falcon had discussed whether or not to address these perceived problems in their comments to the preliminary report. These careful considerations indicate that the process was seen as risky by actors who worked within the HEIs and wanted their organisations on the safe side of a passing grade.

Among the key actors at the Eagle, there were various expectations about the assessment’s outcome; some of the Eagle staff believed the HEI would pass, and some believed it would fail. As it turned out, the Eagle did not pass, resulting in some disappointment and irritation. For example, some critical remarks were made about the assessors’ sensitivity to what came across as a single critique from students. Moreover, the preliminary outline of the SHEA report was perceived as hard to understand, and it was not correct on all points. Key actors at the Eagle commented on those errors in their formal reply to the preliminary report. They had hoped for revisions in the final decision, but, instead, corrections were attached to the end of the report. In other words, the perception was that the HEI’s comments were not taken into account.

Post-decision Actions and Reactions

After the pilot decisions had been made public, the SHEA organised two meetings where the pilot was discussed and evaluated. A meeting with the HEIs that took part in the pilot was held first, and a month later, in January 2018, a larger so-called dialogue meeting took place with other HEIs that were to be reviewed in the regular upcoming cycle. The SHEA also conducted surveys with the HEIs involved in the pilot to map their experiences with the process. The SHEA worked intensely to refine the method on the basis of the pilot and the input from HEIs. In parallel with these institutional reviews, ongoing discussions were organised by the ASHEI. At four conferences between 2015 and 2017, representatives from HEIs shared their experiences by presenting and discussing their IQA systems. At an ASHEI conference in June 2017 (after the site visits), the HEIs that took part in the pilot presented their specific experiences and their suggestions to improve the new institutional review model. Key actors from the SHEA also attended these conferences, provided input, and gave updated information about their work. First, these activities display that the development of the institutional review was not limited to the pilot. Secondly, an analysis of the range of HEI and IQA presentations during the ASHEI conferences reveals a prevailing isomorphism in terms of IQA designs, as shown in the chapter “Enacting a National Reform Interval in Times of Uncertainty: Evaluation Gluttony Among the Willing”.

In January 2018, the institutional review to be implemented in the regular cycle was presented at a dialogue meeting. The grades had been changed to three levels instead of two: pass, pass with reservation, and fail. The web interviews were replaced by an additional site visit. There were also new guidelines with less detail, new templates for self-evaluations, and student appendices, which were published on the SHEA website. More specific requirements concerning assessors were added, such as a requirement of experience with gender equality work (SHEA 2018a, p. 25). The short time frames for the HEIs to deliver materials were questioned, but the SHEA explained that “three weeks resulted in so much materials, it was the wrong signal. It should be exclusive materials” (SHEA staff, observation notes). The new model was presented at another meeting in March 2018, with assessors and representatives from the four HEIs and student unions included in the first regular cycle. In an interview on the SHEA website, the SHEA coordinator notes that:

There were not that many questions during the meeting, which suggests that the HEIs have followed the process of developing the method and are well-informed about the content in the new guide that describes how the review operates. From the response at the meeting, we can see that our dialogue meetings have contributed to the HEIs being well-informed and have a grip of the method and the process. (SHEA 2018b)

Thus, the pilot developed and tested the institutional review, and it worked as a governing tool that informed and legitimised the new model within HEIs. As we will show in the following, however, governing signals from the pilot were not unequivocal.

HEIs

Contrary to previous national evaluations, this pilot did not receive much attention – neither media attention nor broad attention within each HEI. Moreover, the pilot seemed to largely confirm what the HEIs already knew. After receiving the decisions, the responsible staff at the Eagle worked with the identified weaknesses, most of which they were already aware of and prepared for, to improve their IQA before the upcoming reassessment in the ordinary cycle. This work was seen as an efficient and safe way to improve and learn, to seek inspiration, and to align their IQA with those of other HEIs. Extensive work was directed to the assessed perspectives where the Eagle had failed to achieve a “full score”, i.e. gender equality and student influence. Actors from the Eagle also sought to make sure that course evaluation follow-ups and feedback to students were improved, and they worked to clarify internal documents and processes for colleagues. Internal documents were also revised to more clearly accommodate a student perspective and to – as it was said – make texts more “alive”. Bureaucratic textual vividness was created by including reflections and knowledge about the work and the organisation in general. The Falcon continued with their well-established and now SHEA-approved IQA work. However, considering the resources, time, and work invested in preparing and carrying out the pilot’s activities, issues about the actual value added by the review were raised. The HEI side was concerned that the pilot may have contributed to a growing sense of insecurity. The decisions did not include any information about what was actually missing in the failed IQA systems, and thus it remained unclear what actually had to be accomplished to pass.

Perspectives on the Pilot: Work, Actors, and Knowledge

We would like to highlight some observations relating to the questions raised in the beginning of the chapter on enactments, actors, knowledge forms, and governing.

Expansion of QA Work

As one of our informants told us, an effect of the 2016 EQA system and the institutional reviews may be that HEIs “start doing much more than what would really be required, just in order to be on the perceived safe side” (Falcon Quality Management Staff 1). This general expansion of practices and activities linked to QA is a discernible trend in the empirical data presented in this book. QA expansion can be seen as an important driver within a willing community (Jacobsson and Nordström 2010). This tendency to “do more QA” may be further energised by the pilot, as the informant above implied. Moreover, the comprehensive work within the SHEA and the external assessment panels to establish a systematic assessment framework proved to be very demanding and continued to pose challenges for all parties. For example, authoritative decisions in terms of pass or fail grades triggered the HEI actors to analyse and compare decisions, but this was also accompanied by the insight that assessors’ work is – and cannot be anything other than – subjective. Thus, it varies between assessment panels and across time and space.

When mapping the entire pilot process, it becomes evident that this kind of exercise involves a huge amount of work, in various forms, that ultimately serves the purpose of governing. As noted by Clarke (2012), these efforts:

[l]ike all other forms of human labour, involve practices of transforming things. Whether the objective is to govern populations, projects, problems or processes, work (and people to do the work) is essential. Forms of work (and types of worker) are a condition of possibility: that populations might be regulated; that projects might materialize; that problems might be resolved or that processes might run smoothly. (Clarke 2012, p. 209)

The remark about work as an engine of transformation might seem obvious, particularly in relation to EQA practices, because a basic rationale underlying these exercises is orchestrating qualitative changes or improvements. Even so, we think an analysis of actual work is productive for our purposes. For instance, our attempts to make work visible have displayed examples of when evaluative practices do not “work”. As noted by Clarke (2012, p. 209), “these forms of work are also the condition of other possibilities, in which the anticipated or desired outcomes do not materialise”.

We would also like to highlight the amount of work and how the expectations around this work are framed and formed. On the one hand, this extensive labour is required by the SHEA and manifested in, for instance, the assessors’ assignments and the HEIs’ facilitation of assessors’ data collection (i.e. to produce and deliver documents, to actively take part in interviews, to respond to decisions, etc.). On the other hand, there is no external limit on how much work HEIs can actually put into the process, other than the restrictions of deadlines at certain stages in the process. In our data, we observe that HEIs tended to expand their work and that certain forms of work were intensified within and between organisations. The HEIs engaged in voluntary activities of knowledge exchange and in various forms of collaboration in national networks, such as the ASHEI. The pilot evidently required a lot of time and work in the studied HEIs, at least for top- and faculty-level management and for QA staff. When we asked about the amount of resources put into the pilot, however, the HEI actors hesitated. Despite the process’s careful planning and organisation in terms of activities and people, issues of time, money, and working hours were largely absent. In a sense, this “invisibilisation” of the resource aspect of QA work is to be expected, as the actual work is embedded within the rationale of QA and enhancement. Such work is supposed to be integrated into all other forms of work within the organisation, and separating QA work from other work and activities is neither possible nor desirable. As noted by Power (1994), the scale and the complexity of organisations were reasons for the institutionalisation of this supposedly cheaper mode of “evaluation of evaluation”. Power’s concern is that indirect forms of quality control have become rituals based on compliance without substance or relevance to organisations (Power 1994). Our findings also show that EQA is not entirely functional for the HEIs. Rather, EQA is to be understood as a state mode of governing HEIs.

The Work of Translations

Work is also carried out in the numerous translations of these processes. Our data show how HEIs struggle to make IQA systems fit the self-evaluation template, how assessors struggle to understand the assessment criteria and the HEIs’ self-evaluations, and how the SHEA staff struggle to ensure valid and comparable judgements. Assessors and the assessed have to produce a fit between rules and cases. Even common concepts provoked uncertainty and translation processes. The assessment panels consisted of hybrid and potentially contradictory formations that collaborated under specific circumstances. They were also traversed by a particular “logic” inherent in the ESGs and the SHEA guidelines that produced indeterminacy and required them to resolve tensions, paradoxes, and contradictions. In other words, they had to translate from one form to another. The expectation was that such rules, the organisation and forms of practices within the pilot, and the personal qualities of assessors would enable the politically desired equivalent and comparable review decisions (Standing Parliamentary Committee on Education 2016). From our interviews, we learn that this is a potentially impossible challenge.

The Work of Qualocrats

Our focus on work also encourages us to consider those who perform the work. As noted by Clarke (2012), “[t]hese are not just ‘hands’ or ‘bureaucratic functionaries’ but need to be understood as having social characters, positions and dispositions that are formed in social relationships and trajectories” (Clarke 2012, p. 213). Throughout our project, we observed that persons working with EQA and its management and development are mobile and well networked. They move between HEIs, government agencies and associations, and various kinds of organisations. As we discussed earlier in this chapter, they tend to share knowledge and previous experiences. In this book, we use “qualocrats” to denote this somewhat heterogeneous group of actors who have taken on the authority and mission to, on behalf of the higher education sector, move between various domains to translate and promote certain forms of knowledge in and of QA. They carry highly valued expertise related to enacting and managing EQA, including the institutional review we studied in this chapter. Similar to Enders and Naidoo’s identification of “audit-market intermediaries”, we also observed “the institutional work of the new professionals who make sense of, buffer and translate institutional pressures” and how “normative frameworks and expectations [are] de-coupled, hybridised and sedimented” (Enders and Naidoo 2018, p. 10). Even if their approach is institutionally oriented, the attempt to “bring actors back in” to the institutional analysis is promising. Our project findings support such an argument, and we suggest that the qualocrats identified in our study are central actors for understanding important higher education transformations and their possible implications.

The qualocrats also contribute to the formation of what has been labelled a “quality assurance community”:

A “quality assurance community” is thus both fabricated and projected (Grek et al. 2009), conceived as a homogenised collection of individual stakeholders “committed to continuous quality improvement” (ENQA 2010: 3). (Brady and Bates 2016, p. 70)

Within this community, higher education is understood in particular ways. An orientation towards goals and results is prominent, which also implies a rational, quality-oriented, and constantly improving organisation (Brady and Bates 2016). In their work, qualocrats use and promote certain forms of knowledge, which we find important to highlight. We observed a certain vocabulary and a particular expertise in QA. The knowledge base is largely derived from total quality management (TQM). Historically, these ideas stem from management theories such as Taylorism, scientific management, and the human relations school, and they were imported from industry and business (Newton 2000, 2010; Palmer and Saunders 1992).

Qualocrats are experts (Barrow 1999; Normand 2016), and their work can be understood as an activity of expertise:

Expertise is a specific activity of knowledge production participating in a process of negotiation and orientation of public policy. This knowledge is technical and comes from professionals working in administrations, international organisations, universities and other higher education institutions, agencies, think tanks or interest groups. (Normand 2016, p. 131)

Technical knowledge here must be understood broadly because expertise also involves know-how, such as the capacity to successfully engage in dialogue and in guidance (Normand 2016). While qualocrats draw on scientific legitimacy – where TQM offers general ideas – their expertise is often “based on experience and social fame acquired for years” (Normand 2016, p. 131).

Such an understanding of expertise resonates with our interest in exploring various forms or phases of knowledge (Freeman and Sturdy 2014). Looking at the pilot, it is clear that certain key actors are especially important, particularly the qualocrats, owing to their embodied knowledge and expertise. They also possess the skills and networks to rapidly update this knowledge and to incorporate new information they learn from meetings with each other. Inscribed knowledge (e.g. the ESG, assessment materials, and decisions) is also crucial in the pilot’s work. Qualocrats draw on their embodied knowledge to interpret and translate such written material to make it “actionable” (Grundmann 2017, p. 31). As noted by Normand (2016, p. 156), “the worlds of expertise [and qualocrats] are plural and they have their own logic of action which, contrary to what the technocrats think, cannot be prescribed in advance through quality frameworks and protocols”.

Importantly, however, inscribed knowledge “entails particular ways of seeing, thinking and knowing and serve to constrain and discipline interactions with the world and with one another” (Freeman and Sturdy 2014, p. 11). Our data illustrate how this plays out in the various stages of the pilot process. In accordance with our focus on governing as a verb, we take a particular interest in what actors do with their knowledge. As noted by Freeman and Sturdy (2014), enactment may give rise to new knowledge beyond what was previously embodied or inscribed. In this context, meetings are crucial for such enacted knowledge. In this pilot, meetings were indeed “the basic unit of the policy process” (Freeman and Sturdy 2014, p. 12). Qualocrats and their different networking and meeting activities resemble what Haas (1992) described as an epistemic community, i.e. “a network of professionals with recognised expertise and competence in a particular domain and an authoritative claim to policy-relevant knowledge within that domain or issue-area” (Haas 1992, p. 3, cited in Freeman 2008). As Freeman (2008) points out, such epistemic communities have an important role in processing uncertainty. In the pilot, there was a continuous need for the:

[i]nterpretation of inadequate or complicated and sometimes contradictory information, and on a corollary requirement to stabilise and sustain the flow of information and interpretation to policy makers. Understandings of complex systems, in turn, have come to cast the process of communication as generative, and systems as in some sense creating both themselves and their environments. (Freeman 2008, p. 3)

Meetings, says Freeman, are essentially concerned with “the definition of the problem they are designed to address” (Freeman 2008, p. 17). In other words, meetings are sites of enactment and learning, and they are performative in the sense that objects of mutual interest are constructed through discussions.

Governing by Piloting

This leads us to a final observation on the pilot’s governing dimension, which we have labelled “governing by piloting”. The pilot’s very form and deliberately temporary design involve work that links people, places, policies, practices, and power in particular ways (Clarke 2015) and in the form of a particular “governing project”. Framing it as a pilot opened up spaces for mutual adjustments, learning, and dialogue and reduced the “stakes” a priori. Nevertheless, certain anticipatory governing signals were sent both to the HEIs in the pilot and to those outside it. One such signal was that HEIs should “be on their toes”. It turned out that most HEIs involved in the pilot received a failing grade, which clearly sent the message that “this system can bite, too”. At the same time, there is a continued emphasis on soft norms in terms of dialogue, trust, mutuality, enhancement, and openness. This orientation and “affective atmosphere” were prominent both in the process leading up to the 2016 EQA system and in the pilot. In sum, we suggest that this orientation conveys somewhat contradictory signals to actors, signals that will be consolidated, processed, and “done” in further enactments within and beyond the regular, full-scale implementation.

Finally

This chapter analysed the pilot of the 2016 EQA institutional reviews. The chapter concentrated on the work of the different actors involved in these processes and structured the “story” around the main chronological stages of the pilot: preparations, data collection, decision-making, and finally post-decision work, which included some partial adjustments to the institutional reviews. We empirically emphasised the amount and the forms of work done by various actors in these enactment processes. In this context, we also noted the numerous translations that continuously take place, even though the planning of the process and the level of detail in the prepared guidelines are quite thorough. One important group of actors enacting and brokering EQA knowledge, and the work and activities that go along with it, is the so-called qualocrats. We identified them as particularly central and important actors, and we will return to this group, among other themes, in the concluding chapter.