1 Introduction

The use of data for governance purposes is widely recognised as a way for national authorities to coordinate activities across administrative levels and improve educational quality (e.g. Altrichter & Merki, 2010; Borer & Lawn, 2013; Souto-Otero & Beneito-Montagut, 2016). A central component of using data in governance is the role of the mid-central authority level as a driver and motivator of data use in schools. This understanding of the mid-central authority level reflects global education policy trends aimed at improving teaching and learning by assigning the responsibility for educational change and the improvement of student learning outcomes to municipal authorities (Farrell & Coburn, 2017). Studies have shown that expectations for local quality work have changed and that the relationship between local autonomy and national control has been reinvented through a stronger focus on accountability, pressuring local actors to change their behaviour (Bergh, 2015; Camphuijsen et al., 2020). Avidov-Ungar and Reingold (2018, p. 294) call for more studies on midrange educational leadership and the role of superintendents, in contrast to the abundance of studies on educational leadership at the school level. Researchers have directed limited attention to the importance of activities at this level in governance research in general and in systemic education reform research in particular, even though the mid-central authority level is recognised as a 'web of interrelated and interdependent roles, responsibilities and relationships' (Rorrer et al., 2008, p. 208; Prøitz et al., 2017). Furthermore, administrators' roles as data interpreters in decentralised education settings, alongside school leaders and teachers, allow discretion in the development of varied governing styles; this, in turn, affects how data are used in municipalities and schools (Prøitz et al., 2019).

The current article explores the interpretation and use of numbers and data to define performance goals and school development standards at the mid-central authority level. The study is conducted in a Norwegian setting and is guided by two research questions:

  • RQ1. How are representations of performance in education created through administrators' use of numbers and data?

  • RQ2. How do such representations influence interpretations and shape decision-making processes at the mid-central authority level in educational matters?

In the current study, we investigate how mainly numerical (but also textual and symbolic) performance data are considered, interpreted and processed at the mid-central authority level as part of the social meaning-making and decision-making processes of administrators in education. The study is informed by social representation theory (Sammut et al., 2015); this means that we understand representations as 'the collective elaboration of a social object by the community for the purpose of behaving and communicating' (Moscovici, in Wagner et al., 1999, p. 96). Moreover, a representation is to be understood as a 'system of values, ideas and practices with a twofold function: first to establish an order which will enable individuals to orient themselves in their material and social world and to master it; and secondly to enable communication by providing a code for social exchange and a code for naming and classifying unambiguously the various aspects of their worlds…' (Moscovici, in Wagner et al., 1999, p. 96). The present article contributes important insights into the representations of performance goals in education and the role of municipal administrators as developers, interpreters and users of such representations in their work with policy goals at a crucial yet understudied level of the education system. The multiple roles of administrators in representing performance can also be seen as reflecting how performance goals can both constitute an underlying logic that builds towards policy goals and be policy goals in and of themselves. The article's focus on administrators helps explain the varied local approaches to policy goals and data use in schools and by the mid-central authorities themselves. Hence, the study allows for a more nuanced understanding of the institutional, social and meaning-making processes related to the use of data for setting performance goals, in contrast to the dominant perspective in the data use literature, which often addresses implementation, effectiveness and how data use practices can ideally be designed and performed (e.g. Kelly & Downey, 2012; Wayman et al., 2012). Further, the study extends the literature by considering how data use, performance goals and strategies for quality assurance might be intertwined (Schwandt, 2012), contributing to the assessment of performance goals (Dahler-Larsen, 2011). Finally, data use research has been conducted predominantly within an Anglo-American context, though the investigative modes within the field of data use in education vary depending on geographic location and the educational system (Prøitz et al., 2017). By zooming in on the Norwegian setting, the current study contributes insights into how performance data are used in a Nordic setting.

2 The context of the study

As in many other countries, the Norwegian mid-central education authorities are increasingly responsible for improving student learning outcomes, along with initiating and following up on school development processes. Since the early 2000s, Norway has increasingly focused on numbers and data as indicators of student performance. National tests were administered for the first time in 2004, and the National Quality Assessment System (NQAS) was introduced in 2005. The subsequent comprehensive Knowledge Promotion Reform of 2006 introduced learning outcomes and a stronger focus on assessment. Much like the policy documents that introduced it, the reform reinforced deregulation, emphasising the responsibilities of local education authorities and school leadership, as well as the importance of holding key actors in the system accountable (Møller & Skedsmo, 2013). Since this reform, all municipalities have been required to ensure a local quality assessment system that documents a varied set of older and newer performance data, such as final grades, national exam data, national test data and student survey data. Although the NQAS provides local authorities, school leaders and teachers with information about students' achievements regarding their competencies, the mid-central authorities differ in terms of how and to what extent they integrate the results and their uses in local quality assessment systems and other local governing tools (Prøitz et al., 2019; Skedsmo & Møller, 2016; Aasen et al., 2012). In other words, although data use practices have been studied to some extent at the school level, we still know little about these practices at the mid-central authority level of education governance.

Norway is divided into 11 administrative regions called counties (fylker). These form the first-level subdivisions of Norway and are further divided into municipalities (kommuner). At the time of data collection, Norway had more than 400 municipalities. Although Norway's municipalities have a long tradition of local autonomy, they must provide education in accordance with the Norwegian Education Act and its related regulations (i.e. the national curriculum, national regulations for assessment and reporting results for monitoring purposes). This reflects the national governance of the publicly funded Norwegian education system, which aims to ensure the continuation of the country's long-standing values and traditions of inclusion and students' rights to equal education. However, recent studies have shown how difficult it is to uphold these values nationwide, and local variations in terms of structural and organisational elements—as well as school results—persist, even with these recent measures (Steffensen et al., 2017; Aasen et al., 2012). Thus, the current study investigates local variations through the cases of three municipal authorities; they are similar in that they are subject to the same national regulations, such as the national curriculum and guidelines and the Norwegian Education Act and its supplementary regulations. However, they differ in terms of their responsibility for different numbers of schools and students, geographic location, area classification, structure and organisation and results.

Next, we situate our study within the broader context of data use practices as a means to govern education and pinpoint some key studies on data use at the municipal level (at the district level in the USA). In the subsequent section, we introduce our conceptual and analytical framework, including the case study approach we employed and the material on which we based our study. Thereafter, we provide a contextual account of the case (Norway) and analyse its three embedded cases. We conclude by discussing our cases in relation to the theoretical concepts and research questions.

3 Previous research on data use and representations of performance

Over the past decade, the education research field has increasingly expanded in the area of data use practices within schools and school districts, particularly in Anglo-Saxon research (e.g. Coburn & Turner, 2011; Fenwick & Edwards, 2016; Huber & Skedsmo, 2016; Jennings, 2012; Kelly & Downey, 2012; Little, 2012; Mausethagen et al., 2018, 2019; Racherbäumer et al., 2013; Schildkamp et al., 2014; Spillane, 2012; Sun et al., 2016). A general finding is that data use depends on factors related to organisational routines, such as access to data, time, financial resources, leadership and the norms of interaction (Farrell & Coburn, 2017; Selwyn, 2016). Furthermore, local authorities and districts can consider data useful, even though challenges regarding data use often emerge among school leaders and teachers (Amrein-Beardsley et al., 2016; Datnow et al., 2012; Park, 2012; Paufler & Clark, 2019; Aasen et al., 2012). For example, DuFour and Marzano (2011) show that data use processes can be effective in enhancing student learning outcomes, and studies have confirmed the significance of local authorities' focus on school leaders' use of school results (see, e.g. Miller, 2010) and of whether policy prioritises, supports or requires the use of data. One enduring challenge for local education authorities is to make the available data easy to use and of value to teachers and school leaders in their daily work. Here, there is evidence that local education authorities often find that the data generated by national authorities are not ideal for teachers' and school leaders' decision making (see, e.g. Kerr et al., 2006; Wayman et al., 2012).

Moreover, studies on the work of mid-central and district-level authorities regarding data use typically highlight the variations in different schools' approaches towards data use (Jimerson, 2016). Examples include whether a data use policy exists and, if so, whether data use is a prioritised, supported or coerced activity by local school authorities (Kerr et al., 2006; Miller, 2010; Quintelier et al., 2020). Researchers also focus on how different local and organisational cultures frame, foster or hinder data use in schools; for example, studies have explored how a high degree of coerciveness linked to control weakens the development of productive organisational data use cultures (Herzog-Punzenberger et al., 2020; Wayman et al., 2012; Young, 2006; Young & Kim, 2010). Studies on micro-processes have exemplified how data can constitute a platform for the very different governing styles employed by education leaders in local school administration in dialogue with their school leaders (Prøitz et al., 2019). Numerous studies raise critical issues, for example, by demonstrating how data use practices—in combination with leaders who emphasise accountability and student learning outcomes—narrow the scope of educational goals (e.g. Hallett, 2010; Valli & Buese, 2007). Characteristic of the overall literature on data use is its emphasis on the organisation and structuring of data use practices and how to best develop and secure such practices (Prøitz et al., 2017; Guarino et al., 2019). However, because most research has been conducted at the school level, knowledge is lacking regarding the processes that influence the various representations of performance through numbers and data, as well as how these can influence interpretations and shape decision-making processes at the mid-central authority level (Prøitz et al., 2019; for an exception, see Carlbaum, 2016).

4 Conceptual and analytical framework

Some key analytical concepts related to the ubiquitous existence of numbers and data form a central point of departure for the current study (Lundahl & Waldow, 2009; Porter, 2012a, 2012b). The growing focus on numbers and data has been criticised for encouraging the use of indicators as a 'new calculative rationality (Bauman, 1992) of modern governance' (Lawn, 2011, p. 278). For example, data use has been described as the 'quick language' of standardised testing that reduces complexities and makes educational matters accessible to a wider audience while introducing a language that appears both modern and rational, providing operational and functional features for changing education systems at the administrative level (Lundahl & Waldow, 2009). In other words, various kinds of data represent and constitute the 'social facts' that the participants are expected to accept as the terms of the debate (Bowker & Star, 2009; Mehan, 1997, 2000). Any particular representation of performance—its form, the categories invoked and its selection—is partial, embodying assumptions about the social world on the part of those doing the representing (Bowker & Star, 2009; Sauder & Espeland, 2009). What data are presented and how they are arranged and displayed (e.g. aggregated vs. disaggregated, student vs. school level) make some aspects of the social world available to be studied while obscuring others (Little, 2003; Spillane et al., 2011). Furthermore, the categories (e.g. subgroups, proficiency levels) influence the meanings people assign to them, the inferences they make and how they organise their responses (Colyvas, 2012; Little, 2012; Sauder & Espeland, 2009; Selwyn, 2016).

Following Porter (2012a, 2012b), we understand the role of numbers as tools of decentralisation that highlight the indirect forms of power resulting from governing by numbers. Porter describes the problem of those working in the midst of decentralised organisations: they have the advantage of having the best local knowledge, and this also applies to the use of numbers. He explains this with the example of how central/national administrators define broad quantitative goals and provide local administrators with incentives to find more efficient ways to reach those goals locally, showing how this logic may lead to temptations to optimise the numbers in ways that evade the actual goal of the work (i.e. improving student learning). The ambiguity of various measures of goal fulfilment can thus be exploited to demonstrate results without spending any resources on what is actually measured. Moreover, thin prescriptions (i.e. judging a person or institution by, ideally, a single number) may be accompanied by an ethic of impersonal regulation, in which statistics are used to prevent situational or case-based interpretation and reasoning from replacing evidence (Porter, 2012b). These changes to education policy in general and to quality assurance evaluation (QAE) in particular can also be problematised in terms of their 'constitutive effects' (Dahler-Larsen, 2011), a process by which 'QAE redefines the meaning of education and the practices of education by means of installing new discursive and cultural markers defining standards, targets and criteria' (Dahler-Larsen, 2011, p. 153). Thus, constitutive effects capture how assessments can define or redefine what Dahler-Larsen (2007) calls the socially constructed reality in which the assessment is made. Hence, the goals, means and strategies for quality assurance in education should not be considered only as connected in a technical or external way (Schwandt, 2012), but also as closely and constitutively intertwined. In other words, quality assurance (QA) strategies are not neutral regarding their objectives; rather, they contribute to the nature of the goals they assess. In sum, in the present study, we employ the concepts of quick language, thin prescription and constitutive effects to analyse and discuss modern education governance practices related to data use at the mid-central authority level, using Norway as an example.

5 Methods and empirical material

The current study’s design is informed by Stake’s (1995, 2006) approach to case studies. Our case research can be characterised as instrumentalFootnote 6 because we have aimed to provide insights into a particular issue within a specific setting. The particular issue is how representations of performance are created using numbers and data and how these influence interpretations while shaping decision-making processes at the mid-central authority level. The specific setting is Norway, which is an example of a form of contemporary education governance in which the mid-central authority level plays a vital role, one that researchers often overlook. Three cases embedded in the larger case of Norway functioned as the sites at which we explored and analysed the interpretation and use of numbers and data to define the performance goals and school development standards at the mid-central authority level.

The three embedded cases consist of three Norwegian mid-central authorities in three municipalities. The municipalities were strategically selected to reflect different contextual factors, such as geographical location (rural, small urban and urban areas) and how developed the municipalities’ quality assessment systems were (from emergent to well established and highly sophisticated). The selected municipal authorities have been anonymised and are referred to as A, B and C. See Table 1 for an overview of the three cases.

Table 1 Overview of the embedded cases

The data for each embedded case were collected from 2015 to 2018 as part of a larger research project on data use in municipalities and schools. For the current article, we draw on the policy documents produced by three municipal administrations and data from interviews with three municipal administrators, one from each mid-central authority. Municipal annual reports are the main sources of the textual material. Developing the three embedded cases entailed analysing policy documents and interviewing the participants. To maintain confidentiality, we use the informants’ positions and case letters when providing quotations from the interviewees.

The text analyses were conducted by first reading the documents to identify relevant content and then engaging in an in-depth reading, interpretation and analysis of the texts (Bowen, 2009). The process aimed to identify representations of education related to numbers and data in the municipal policy documents. To preserve the municipalities' confidentiality, we do not cite these documents directly.

We interviewed local administrators in 2015 and 2016. During this process, we employed a semistructured interview guide (see Appendix 1) that was thematically organised around questions concerning the administrators' descriptions of data use characteristics in the municipal district administration; the questions focused on school development and local policy, data use related to policy representatives and the administrators' views on and practices related to data use. In particular, the present study focuses on the questions related to the representation, in local policy documents, of various score charts, tables with numerical performance goals and student and school performance data. The interviews were recorded and transcribed verbatim.

We recruited the interviewed administrators through direct contact by e-mail and phone before holding the meetings. The informants had school leadership backgrounds and were responsible for 11, 15 and 21 primary and secondary schools, respectively. One administrator reported directly to the chief municipal district executive, the head of the municipality, whereas the others reported to their chief municipal district education officers, who, in the larger municipalities, were the department heads responsible for education in the municipality. These administrative differences reflect the municipalities' sizes and available resources, staff and support systems. Despite this variation, their work was largely similar in its responsibilities, all of which were connected to municipal district policy, regulations, budgets and the overall administrative system. The interview analysis provides insights into the administrators' interpretations, approaches and choices related to the municipal representation of performance when using numbers and data.

The analysis employed the following steps, anchored in the research questions and analytical framework: First, key local policy documents were analysed, with a focus on identifying the representations of numerical data and, to a certain degree, textual and symbolic data, performance goals and policy goals. Second, transcripts of the interviews with the district administrators were analysed to identify their thoughts on and practices regarding numerical data, their interpretations of the data and the relationship between the performance data and policy goals.

6 Representations of data in three cases

In the following, the findings from the analysis of the three cases are presented. The findings are illustrated by three tables that provide extracts from the studied documents and give examples of data representation for each of the cases. Tables 2, 3 and 4, respectively, are translations of tables presented in the annual reports. However, content that could compromise anonymity has been omitted. Because the current study focuses on lower secondary schools, minor changes have been made to remove the data on upper secondary education. The presentation below also shows, through the interviews with the municipal administrators, how numbers have been calculated and represented differently in the three cases.

6.1 Case A: Aiming for the national average

The studied documents state that the overall municipal goals of leadership development are ‘to improve dialogue, ownership and competence and achieve a stronger focus on following up on results, particularly in the subject area of reading’. The municipal policy documents define the goals as follows: ‘To provide all students, based on their abilities and capacities, the opportunity to maximise their learning outcomes in both subject learning and in social learning’. The three stated priority areas in education are reading, leadership and motivation. Furthermore, it is emphasised that ‘Children and young people are competent, and they shall be met by competent, engaged and assertive adults in school’. The document further defines how the adults working in schools should behave and communicate respectfully with students.

In the annual municipal report, the schools' status is described more thoroughly in terms of economic data, demographic statistical information, student achievement data and survey data on the learning environment. In the report, goals are presented in a table that provides an overview of numerical data in two columns labelled 'Status' and 'Goals' (see the extracts from the report in Table 2). Between these, a third column of red, green and yellow smiley faces signifies the municipal district's performance on the relevant measure. Sad faces are red, neutral faces are yellow, and happy faces are green. In general, the results with sad faces also include more elaborate points that describe in detail what the results mean and how these challenges should be met. For example, the national test results for reading have a sad face, which is explained by the fact that many students earned low reading proficiency scores on that year's national test. In particular, the text emphasises that many boys are weak readers, but some girls also score below the national average. The measures taken include further professional development for teachers and school leaders to promote increased competence in using the results for adaptive teaching. The overall measures for future work are summarised in a few points emphasising ownership and leadership through dialogue between politicians, school administrators and the teachers' union to secure improved learning outcomes. Another measure is to continue working with the reading results as part of a long-term commitment to get every student to read at a proficiency level that enables them to make use of reading in all subjects.

Table 2 An example of data representation in Case A’s annual report

This municipality's numbers and data constitute a report on the development of indicators between 2014 and 2015. The national average results serve as a standard. Table 2 also displays how the results presented in the second and third columns are linked to defined policy and performance goals for 2015 and, in the first column, for the 3-year period from 2015 to 2018. Characteristic of this table is how the 2014 and 2015 results are presented as settled facts and the situation is defined as a status. By comparing the municipality's student achievement scores to the national average for the selected indicators, such as the difference between the students' results on the national reading test and the average results of the national exams in Norwegian, mathematics and English, the tables represent the performance results using various smiley faces. Thus, the municipality has constructed a representation of itself and its students' performance based on the extent to which the municipality's student performance scores differ from the national average, which then serves as a local standard.
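
The report does not spell out the exact rule behind the smiley column, but the kind of logic it implies can be illustrated with a minimal sketch. The thresholds, scores and function name below are our assumptions for illustration only, not a reconstruction of the municipality's actual procedure:

```python
# Hypothetical sketch of the kind of rule Case A's report implies: a
# municipal result is compared with the national average, and the
# deviation is mapped to a red/yellow/green smiley. The tolerance band
# is an assumption; the report does not state one.

def smiley(municipal: float, national: float, tolerance: float = 1.0) -> str:
    """Map the deviation from the national average to a traffic-light status."""
    deviation = municipal - national
    if deviation >= 0:
        return "green (happy)"
    if deviation >= -tolerance:
        return "yellow (neutral)"
    return "red (sad)"

# Illustrative scale scores only (not taken from the report).
print(smiley(municipal=47.5, national=50.0))  # -> "red (sad)"
print(smiley(municipal=49.6, national=50.0))  # -> "yellow (neutral)"
```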

In the interviews, when we asked the administrator from Case A about the table, she replied that the defined goals ‘are awful’. She also said that the goal table, which was introduced by a previous administrator, had been like this for many years and that they continue to use these tables because politicians still want them: ‘The politicians think this is nice, you know. They want it, but we…it is a question of how much effort you want to put into it’. She elaborated further by explaining that she and her staff want more process-oriented goals ‘but have not been able to convince people (in and around the municipality, including the politicians) to adopt these’. However, they also have not devoted much energy to convincing politicians to make changes: ‘That is not where I have made the strongest push’.

When asked if she believes the politicians in her municipal district are concerned about student performance data, she said yes, but she also commented that not all politicians are alike. Indeed, she recently experienced what she characterised as a huge change after the last local election. Before the election, the previous politicians were very concerned about results, particularly ‘all things that did not work or were not good. Then, the goals and the tables were used heavily, even when they had good results’. She described a situation in which she and her colleagues were proud of the results and looked forward to presenting them to the municipal district council. They felt that they had finally obtained results showing they were moving in the right direction, but this received almost no attention from the politicians, who interpreted the results as an isolated incident: ‘It was like…it was not okay to be proud and happy. We had a bad relationship with the politicians closest to our sector’.

She described how this changed when new politicians arrived after the election. Some members remained the same, but the council was mostly made up of new people. She explained the earlier climate by noting that the members of the council's various political parties could never be on the same side and always had to disagree. This changed after the election, and they now cooperate more often on educational matters. In addition, two of the politicians have committed more strongly to the education sector. When asked about the extent to which politicians use student performance results to govern the municipality's schools, she stated that she and the administrative staff present the results in meetings with the politicians. The politicians listen when she informs the council of the results and what the administration would like to do; the politicians are usually satisfied with this.

6.2 Case B: Aiming for its own ambitious standards

The studied documents state Case B’s strong focus on goals, such as increasing all students’ learning in select subjects, in basic skills and in social development, as well as focusing on developing and continuing good and safe learning environments. The chosen strategies emphasise student learning outcomes in subjects and basic skills, the use of standardised materials and tools and qualified leaders and teachers. The importance of using systematic documentation to analyse the data competently and of using results to provide high-quality follow-ups on an individual student’s learning is particularly underscored. The annual municipal report describes what are considered to be ‘ambitious goals for student learning and development’, along with the schools’ economic status, demographic statistical information, student achievement data and survey data on learning environments. The measures presented in the annual report cover various data based on the national student survey, the Norwegian national tests and the National Norwegian language and mathematics exams.

In this annual report, the percentages of students at the two lowest levels (levels 1 and 2) and the two highest levels (levels 4 and 5) on the national test scales are used to describe performance development, to define goals for 2015 and to identify deviations between the defined goals and the achieved results. The standard or goal for 2015 is based on the municipal district's own student performance from the two previous years. Case B defines the standard for all indicators as higher than in previous years. The reasoning behind this is not described, but it might be to set a higher goal than what was previously reached, hence aiming for further development. In Table 3, none of the 2015 results reached the defined goals, but some came closer than others.
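
As a minimal sketch of this goal-setting pattern, consider one indicator, the share of students at the two highest levels. The margin, helper names and numbers are assumptions made for illustration; the report does not describe the actual calculation (and for the low-level share, a 'higher' standard would correspondingly mean a lower percentage):

```python
# Illustrative sketch, assuming Case B tracks the shares of students at
# the lowest (1-2) and highest (4-5) national test levels and sets next
# year's goal above the best of the two previous years' results.

def level_shares(levels: list[int]) -> tuple[float, float]:
    """Return (share at levels 1-2, share at levels 4-5) as percentages."""
    n = len(levels)
    low = sum(1 for x in levels if x <= 2) / n * 100
    high = sum(1 for x in levels if x >= 4) / n * 100
    return low, high

def next_goal(prev_two_years: list[float], margin: float = 2.0) -> float:
    """Set the new target for the high-level share above recent results."""
    return max(prev_two_years) + margin

levels_2015 = [1, 2, 3, 3, 4, 5, 2, 4, 3, 5]  # invented student results
low, high = level_shares(levels_2015)
print(f"levels 1-2: {low:.0f}%, levels 4-5: {high:.0f}%")  # 30%, 40%
print(next_goal([28.0, 31.0]))  # -> 33.0, a goal above both previous years
```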

Table 3 An example of data representation in Case B’s annual report

An interview with the municipality's administrator explains how the score chart's data feed into the yearly dialogue between local politicians and administrators regarding fiscal priorities and the annual budget. The administrator described how they need goals and results to direct governance and to ground the design of organisations and activities, and that student performance is a defining element that provides direction. Regarding student performance, the administrator referred to the municipality's overall goal of keeping a constant focus on and following up on the performance of every student: 'I am very happy with this system for goal and result management; we cannot lead organisations without principles, goals and governing by results'.

The municipal council adopts the fiscal year priorities proposed by the administration: 'They very seldom make any changes to it', and by the end of the year, the same data in the same tables are investigated: 'In the report, I write an introduction about results that are fine and those that are not so fine…and then we can luckily see a rise in this area while we are perhaps at the same place in another area. Unfortunately, there are a few things that are going down'.

When asked whether politicians are, or might become, interested in student learning outcomes given the administration's strong focus on results, she replied that the politicians are definitely interested. The council changes after each local election, so new policies might appear. The previous politicians were very interested in the results, and the new politicians also seemed to emphasise results.

When asked how the tables' numbers are produced, she explained that there is an interpretation process involving various people: 'The data presented in the chart have been derived from a not very complex analysis, but they are set on the basis of our collaborative interpretations of the data and on what we consider realistic developments'. She also emphasised another side of the interpretation work: they want to set varied expectations for different schools by considering each school's particular features: 'For example, when a school has five new teachers, we adjust the goals, and we heighten expectations for the results of the next year'. She also pointed to a time dilemma in setting these expectations each year. She would prefer a 3-year perspective, but she also expects improvements each year because 'the students are in the schools now, and we cannot wait'.

6.3 Case C: Aiming higher than the national standard

The studied document of Case C states three primary goals: enhancing students' competence and knowledge in basic skills, enhancing student mastery and motivation and ensuring the use of systematic assessment based on national policy and on the status of education in the municipality, as measured by tests, surveys and evaluations. The strategies and criteria for following up on the primary goals function within a framework in which learning is a long-term project and respect for individual students and teachers is high.

Case C presents its overall goal for education as having 'a school oriented towards the future with a focus on the opportunities of individual students to develop their skills to master tomorrow's society and working life'. The overall goal precedes a description of values for students, teachers and parents, representing 'how we want our school in [a municipality] to be'. This goal also underscores that respect, inclusion and responsibility are binding values for all. Overall, school owners and the schools themselves should continue to develop a culture of assessment that has learning as its goal; this happens via increased competence and an understanding of assessment as a tool for learning. Among the criteria for this goal are that all students know what is expected of them, all students get feedback on the quality of their work, all teachers regularly and actively evaluate their own contributions to student learning, school leaders systematically use results (from diagnostic and national tests, from half-term and final grades and from the national student survey) as a basis for developing practice and parents are informed about students' subject and social learning in relation to the goals so that they can actively help their children. The three goals and criteria are part of the municipality's quality assessment system, and schools are expected to report on all criteria on a scale from 1 to 5 in their annual self-evaluation report.

The data in Case C's annual report combine the overall policy goals for schools with success factors and various indicators (Table 4). Data from the student survey and national tests are used, but not national exam data. Here, the percentage of students at performance levels 3, 4 and 5 on national tests is compared with students' performances nationally. The standard, and thus the level of ambition, is set somewhat higher than the national level, seemingly independently of the municipal district's own scores (which are much lower than the level of ambition). To a high degree, the defined standards are not merely set standards; they also communicate the ambitions to aim for.

Table 4 An example of data representation in Case C’s annual report

When asked how the table's numbers were set, the administrator explained that the municipality's councillor wanted a standard that could be presented in the annual report. He also described their collaboration in choosing the indicators for the table; an important criterion for selecting measures is that the data actually enable comparison. He also explained how they adjusted the data's presentation:

But there has been a twist in the target board to see how many students are at different levels…and to sum up how many we have—instead of just saying that we have an average of 48, 49 or up to a score that very few really understand. It’s a little bit easier to understand that we have quite a few students at the lowest level and that we may need to do something about. So, it’s a twist we’ve made to make it stand out. (Administrator, Case C)

When asked how he has chosen the ambition level, he described calculating the municipality's scores and how he 'can do the corresponding calculus for this nationally and say we should not be worse than that'. He explicitly stated that they do not want lower scores but that the performance goals are aspirational numbers. He also explained how they look at numbers for national test results and final exams. In particular, the municipality has challenges in numeracy and mathematics, with lower scores than other municipalities in the county and the nation as a whole. He also described how local politicians are generally interested in learning outcomes and in good or bad results, but this varies with the political party. Regarding the score board's impact on politicians' priorities and focus, he considered local and national developments, the politicians' own points of view and perspectives, who they talk to and what they hear as all being more influential. When asked whether the score board's data are important in the municipality's governing of school development, he considered the data important for discussing improvements and measures and for understanding long-term developments. He underscored that this is a discussion about fiscal resources, the competence of school and municipality staff members and what happens in schools and classrooms.
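
The 'twist' described here, replacing an average scale score that 'very few really understand' with counts of students at each proficiency level, can be illustrated with a small sketch. The scores, level cut-offs and helper names below are invented for illustration; the actual scaling of the Norwegian national tests is not reproduced:

```python
from collections import Counter

# Hypothetical sketch of Case C's presentational "twist": instead of
# reporting an opaque average scale score (e.g. 48 or 49), report how
# many students sit at each proficiency level. Scores and cut-offs are
# invented; real national test scaling is more involved.

def to_level(score: float) -> int:
    """Map a scale score to a proficiency level 1-5 (assumed cut-offs)."""
    cutoffs = [(40, 1), (45, 2), (53, 3), (60, 4)]
    for upper, level in cutoffs:
        if score < upper:
            return level
    return 5

scores = [38, 44, 47, 49, 51, 55, 58, 62, 43, 50]  # illustrative only

average = sum(scores) / len(scores)
distribution = Counter(to_level(s) for s in scores)

print(f"average scale score: {average:.1f}")  # 49.7 -- hard to interpret
print(f"students per level:  {dict(sorted(distribution.items()))}")
# -> {1: 1, 2: 2, 3: 4, 4: 2, 5: 1}: makes visible how many students
# sit at the lowest levels, which is what the administrator wants seen
```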

7 Discussion and conclusions

7.1 Variations in performance representation

The three cases display both similarities and differences in how they represent performance via numbers and data and in how they set standards and define goals based on these data. The cases also display various understandings of the functionality of data and score tables in local policies, ranging from using tables to inform policy to making them a central part of the annual budget process (from discussing and prioritising fiscal budgets to monitoring developments at the end of the year). Data and score tables also encourage reflection about schools' situations and help identify competency needs among teachers. The analysis illustrates how certain data and numbers are used at the municipal level to define and capture education's complexity via the quick language of data (Lundahl & Waldow, 2009). Goal tables are the reference points for setting budgets and prioritising resources towards reading and mathematics and for generally developing teacher competency. The same data and tables are then used to verify that the defined results have been reached.

Goals are partly based on previous numbers, partly in comparison with the national average and partly in comparison with the municipalities' own results from previous years. Thus, the performance indicators are calculated very differently, and for various reasons: to simplify, clarify or explicate the results. The variations stretch from using symbolic forms, such as smiley faces, to comparing the exact municipal and national percentages of students at the lowest and highest proficiency levels. The numbers also differ between expectancy goals defined via the results of previous years and goals following the principle of reaching towards ever higher performance levels. The three cases display how the same type of national data are represented via highly varied approaches. The performance representations vary in terms of how they are calculated, how they are clustered and how they are selected to display various concerns and to highlight certain issues. The various performance representations based on the exact same national data, such as the national test data or the national exam data, exemplify how different practices of data use can form the grounds for various 'constitutive effects' (Dahler-Larsen, 2011).

7.2 Thin prescriptions and paradoxical bureaucratic rituals of performance representation

Defining goals and standards by data and numbers can be understood as thin prescription (Porter, 2012b), whereby people or organisations are judged on the basis of one or a few numbers. To a certain extent, we see thin prescription in the current study. The mid-central authorities represent developments in achievements and goals by referring to one or a few numbers. In two cases (B and C), future numbers are based on the results of previous years combined with a goal to constantly reach for a higher level, which requires that new goals are set with a higher number. The realism of reaching these numbers is neither problematised nor questioned, nor seen in relation to the resource situation in the schools. Furthermore, the administrators' descriptions of the calculus behind the defined goals appear simplistic, somewhat disconnected from the realities of the schools and sometimes even arbitrary, such as choosing a number that is higher than that of the present or previous year, or settling on a number that is similar to the national average and higher than the mid-central authority's present goal, regardless of the schools' probability of reaching these goals. Given how commonly educational performances are compared with national averages, it is striking to see the national performance average (a statistical construct representing a result that does not exist in any real-life school) used so prominently as a standard to measure against without being supplemented by stronger reasoning or reflection. It can be asked whether this link to the national average in the smaller municipalities represents a safer way to do what is required by local policy and national authorities. Interestingly, and in support of this question, the larger municipality (B) does not compare municipal performances with the national average, only with its own averages.

In several ways, these thin prescriptions and their corresponding performance representations can, in Porter's (2012a) words, be characterised as 'funny numbers' because they are often not achievable; they are only goals defined through operations that provide a calculative narrative (Bauman, 1992), possibly to project an image of objectivity and scientific rationality. This also appears in the material in terms of how performance representations raise questions about politicians' interest, engagement and understanding regarding the data and numbers presented to them, aside from the politicians' interest in holding the mid-central authority administration accountable for negative results. In her critical analysis of performance representation for marketing on school and municipal websites in Sweden, Carlbaum states, 'Good school performance may be exploited in showcasing good results…but evaluations can also be used to convey that a particular situation is being dealt with appropriately' (2016, p. 328). By 'evaluations', Carlbaum is referring to monitoring, performance data, inspections and quality audits, all of which generate 'constitutive effects' because they 'define the social realities of which they are a part' (Dahler-Larsen, 2012a, p. 3; see also Carlbaum, 2016, p. 331). In addition to the representations based on thin prescriptions, which rest on one or two performance measures, we also see a range of numbers from several measures used in combination with broader goals; these are defined in rather elaborate text in the annual reports, forming a complex mosaic of various data and information selected, combined and clustered to make calculative narratives and arguments about a municipality's status and measures. However, both the thinner prescriptions and the more complex mosaic narratives brought forward by the bureaucratic rituals of performance representation paradoxically seem to fail in their efforts to become efficient, explicit, intuitive and objective. The use of varied data sets and numbers together develops new forms of educational complexity that are generally only comprehensible to those closely involved. As such, the goals, when defined by numbers, can again be considered 'funny' because the various bureaucratic calculative rituals actually create complexity and require insider knowledge to be understood. This ritualistic practice can also help explain why the goals that are set constantly strive towards unachievable levels, often without any additional financial or personnel support.

Although the governance-by-numbers approach in contemporary Norwegian education is seen by national and local politicians as admirably 'objective', the current article identifies several challenges linked to working in the midst of decentralised organisations. One challenge empirically demonstrated here is that the administrators' advantage of having the best local knowledge also applies to priorities in the use of numbers to represent school performance. We see that administrators at the mid-central authority level define broad quantitative goals and provide school administrators with incentives to reach these goals more efficiently at the local level. This may tempt people to optimise the numbers in ways that evade the work's actual purposes. As stated by Porter (2012a, p. 597), 'Thin prescription, in its highest forms, has two outstanding characteristics: it is typically presented as hard objective fact, the counter to special pleading; and yet these thin measures are readily and invisibly manipulated by interested actors'. A central paradox of thin prescriptions is that the involved participants are often bound up with professional and bureaucratic rituals (such as using numerical data), while outsiders dismiss these rituals as dull and technical.

7.3 Constitutive effects

The concept of constitutive effects was introduced above in relation to the past decade's changes in education policy and governance. Following Schwandt (2012) and Dahler-Larsen (2011), we have argued that the goals, means and strategies for QA in education should not only be considered as connected in a technical or external way, but also as closely and constitutively intertwined in a process by which QA 'redefines the meaning of education and the practices of education by means of installing new discursive and cultural markers defining standards, targets and criteria' (Dahler-Larsen, 2011, p. 153). The added value of the term constitutive effects lies in showing how the language and social interaction related to assessment criteria have broad human, social and political ramifications in a number of domains. In other words, the strategies used in and the criteria established for QA are not neutral in relation to the objectives but contribute to the nature of the goals they assess, not only today but, more importantly, in the future (Dahler-Larsen, 2011). Regarding the use of numerical data as exemplified by the three cases, the indicators and numbers that the administrators used to assess and represent education performance and goal fulfilment are closely tied to measurable knowledge and values. The dichotomy between measuring knowledge as an ongoing activity and measuring the result of the learning process may bring about several challenges. One central challenge emerges when measuring results outweighs assessing the process that leads to the desired position; doing so risks unfavourably distorting pedagogical practice. A second and closely related constitutive effect of the dominant performance representation (via numbers and data) shown in the current paper is that responsibility is channelled mainly through numbers and comparisons. The obligation to ensure a good education is enforced, in turn, by politicians who hold mid-central authorities accountable for poor or insufficient results. A third constitutive effect is that performance goals can be turned into moving targets. As described by the administrator interviewed in Case C, regular data use practices include comparisons with the previous year's performance results, and the performance goals' targets are constantly pushed forward. However, the targets are not only constantly moved ahead; they also help constitute a discursive space in which education policies and priorities can be enacted.

Constitutive effects can be intensified by circumstances linked to QA systems. Examples include the institutionalisation of indicators for measurement and the perception of the indicators' relevance and validity for what is measured (Dahler-Larsen, 2007). Regarding the present study's empirical material, and as described earlier, the administrator interviewed in Case B did not spend much time reflecting on the assessment areas and their chosen indicators. To reiterate, the administrator said, 'It is a system for goal and result management; we cannot lead organisations without principles, goals and governing by results'.

Yet constitutive effects can also be intensified by installing market mechanisms. 'Marketisation' is a complex phenomenon involving many facets. School competition, voucher systems, free school choice and the involvement of private companies in the production of education are some key expressions of marketisation in public education. Education researchers claim that the marketisation of public education has raised new expectations for schools to promote and make visible their educational services to prospective 'consumers', especially in terms of results (Carlbaum, 2016). Marketisation has also raised new expectations for teachers (Ball, 2003; Fredriksson, 2009). In his critical work on the discursive consequences of marketisation, Fredriksson (2009) argues that the reconstruction of public education has changed the meaning of being a teacher, creating new 'market-oriented' teacher subjects. Given the recurring discussion of free school choice in Norway and the recognition of new discourses about education performance, it appears necessary to continue studying and analysing how these discourses influence the role of municipal administrators as policy brokers and interpreters of policy goals at the mid-central authority level of education governance. Indeed, it seems necessary to analyse how these discourses influence and shape teacher behaviour as well.

7.4 Final remarks and pressing questions

The current study aimed to explore and analyse the interpretation and use of numbers and data to represent performance goals and school development standards at the mid-central authority level in Norway. The study was guided by the following two research questions: RQ1. How are representations of performance in education created through administrators' use of numbers and data? RQ2. How do such representations influence interpretations and shape decision-making processes at the mid-central authority level in educational matters? We have shown that the three mid-central authority cases have developed varied types of local representations of performance based on the same national datasets. We have also shown that the varied practices of bureaucratic rituals of thin prescriptions and complex calculative narratives, although anchored in local contexts, seem to drive an understanding of education as a race for higher numbers, regardless of the circumstances. The constitutive effects of these practices might be understood as numerical and data-driven processes that define and redefine the system the data govern, thereby keeping the machinery going while, at the same time, developing it further. Thus, policy goals become a moving target. The current study indicates a certain discrepancy between the performance goals represented in the documents and the daily life of schools, making the efforts of mid-central authorities more of a symbolic and ritualistic practice than something valuable for the professionals working in schools. These indications raise important questions about administrators' responsibility and the contribution of skilled professionals to these practices. Finally, the current study illuminates the importance of professional judgement in the administrative development, interpretation, definition and use of student performance data, lest performance representations become bureaucratic and ritualistic exercises without meaning for those outside the administrative units.