In the previous section, we discussed how the excessive and instrumental measurement of education gives rise to a ‘culture of performativity’ (Biesta 2009: 35), in which ‘targets and indicators of quality become mistaken for quality itself’. As a consequence, evaluation is largely oriented towards technical concerns, and away from values and pedagogically informed purposes. In this section, we propose that non-datafied understandings are needed alongside diverse kinds of evaluation data, in order to generate more nuanced conceptions of education, underpinned by a clear, explicit set of purposes. In doing so, we acknowledge the challenge raised by the mutual shaping of pedagogies and data-driven practices: just as data cannot neutrally capture education, neither can technological practices simply be dismissed as secondary to pedagogy. Our guiding principle is that data should enhance, rather than obscure, our understanding of the relational nature of agency and responsibility.
For instance, we caution that giving too much weight to outcome measures, such as grades, retention, employment, or salary, risks marginalizing valuable, yet less visible, forms of student and teacher practice. Overuse of outcome measures suggests that the benefits of education can be pre-determined, and creates an unrealistic distribution of responsibility, in which teachers, programmes or institutions are held accountable for the relational contributions of students. As we have argued, it can be difficult or impossible for students to predict in advance what they need to learn (see also Aitken et al. 2019). Further, by ignoring the educational process, outcome measures are confounded by the student’s effort, approach to learning and performance, and strategies for success, as well as by numerous other factors, such as socioeconomic and demographic background, health, and family support (Uttl et al. 2017).
Similarly, an obvious limitation of both teacher-centric and student-centric analytics is that neither sufficiently accounts for the other (Sergis and Sampson 2017). Evaluation should consider not just what teachers do but also the relationship between teachers and students (Braskamp 2000; Biesta 2012). While satisfaction surveys might pick up elements of the teacher-student relationship, they tend to place responsibility for that relationship on the shoulders of individual teachers. Not only do students contribute significantly to the educational process, but teaching is often a collective activity, dispersed across faculty, external specialists, and, indeed, each student’s peers. Making individual teachers overly accountable for this collective activity may encourage risk-averse behaviour and a reduction in collaboration (Oravec 2019).
Evaluation is highly complex because it requires not only judgement of teaching against parameters of purpose, pedagogy, approach, and rationale, but also judgement of those parameters themselves. This cannot be done purely through measurement and analysis; it calls for discussion and dialogue because there is no absolute, value-free position against which evaluation can be calibrated (Vo et al. 2018). As we have noted, evaluation is not only pedagogical, but also political and economic. Our position is that, irrespective of the educational approach, responsibility is distributed across teachers, students, and systems, and reductionist metrics cannot capture these fluid and dynamic relationships. For these reasons, we call for analytics that are part of a wider, ecological view of education, where relationships and holistic conceptions of practice are valued above individual variables (Goodyear and Carvalho 2019), and where it is acknowledged that metrics do not capture everything that is important.
As Goodyear and Carvalho (2019) argue, our ways of analysing and interpreting data should support ecological conceptions of education and inform complex judgements. They describe educational ecologies as relational, in which all elements (e.g. teachers, learners, technologies, policies, environments) are intertwined. From an ecological view, evaluation should not be reduced to the merits of individual elements, but should be based on a holistic analysis of the wider system. In a postdigital ecology, data is only one element, entangled in non-digital (physical, social, economic, political) activity. In the previous section, we argued that, in much contemporary evaluation, digital data generates an oversimplified representation, and therefore conception, of education, which, in turn, amplifies data’s position within educational ecologies. Our aim is to redress this balance, so that datafied and non-datafied understandings of educational quality shape each other in complementary ways that support the development of relational agency across students, teachers, and other stakeholders. To this end, we argue for both data and non-datafied information about relationships, practices (including where and how they diverge from policy), environments, and pedagogy. We call for analyses that are interpretative, holistic, complementary, ongoing, and formative.
Aitken’s (2020) postdigital exploration of online postgraduate learning, in this issue, provides one example of the kind of analysis that can contribute to a wider, ecological view of evaluation. Aitken interviewed both teachers and students of online, postgraduate healthcare programmes, using these conversations to generate ways of understanding the educational process and outcomes, and the factors that underpinned them. Her analysis does not give the full picture; no single method of evaluation can. However, it provides a valuable piece of the puzzle, which can be used to nuance our understanding of other pieces. Aitken acknowledges the role she plays as both researcher and Programme Director in shaping the analysis: she neither brackets herself off as external to a supposedly neutral process of evaluation, nor simply dictates the results. Her evaluation is a synthesis of her conversations with students and teachers, and her own judgement as an experienced and expert practitioner in the field. Further, this particular analysis is understood in conjunction with other ways of understanding online education, including metrics such as student satisfaction scores and grades. We argue that this balance of data, evidence, dialogue, and expertise is necessary for the cross-fertilization of datafied and non-datafied understandings, and appropriate for generating evaluations in which educational quality is distributed across teacher, student, institution, and context.
Within an ecological view, rather than considering the performance of individuals in isolation, evaluation can also look at the distribution of resources, policies, systems and environments, and how these support and constrain educational practice (Goodyear and Carvalho 2019). Many practices are not adequately captured in data that rely on teachers’ and students’ activities conforming to an anticipated model, yet they are worth paying attention to, because they can help evaluators understand how teachers and students negotiate the systems and settings of their education (Fawns and O’Shea 2019). While we reject assumptions that education ‘can be described in digital terms’ (Fawns 2019: 139), we believe that digital traces of activity can complement the tacit forms of data collection and analysis that teachers intuitively employ, by observing activity and engaging with students, to support valuable conversations about practice.
Dialogue between teachers and students can make sense of how the formal curriculum intertwines with informal, extra-curricular activity in which ‘sites of learning are constantly emergent’ (Gourlay 2015: 402). Engaging in dialogue around practices can encourage the development of ways of working and learning (Brown and Duguid 2002; Fawns and O’Shea 2019), which is particularly important where students learn across different settings (such as in professional, postgraduate programmes). There is an additional benefit when both teachers and students develop their own practices outside of formal structures and expectations. In contrast to the top-down, automated ‘personalisation’ of many learning analytics implementations (which, in practice, encourages users to conform to standardized expectations attached to the broad, blunt categories to which they are assigned on the basis of their usage data), these ways of working evolve through complex and idiosyncratic relations with situated resources and constraints. These practices really are customized and personalized, as students work out what works for them, and take control of their own direction of development. It is worth taking account of idiosyncratic practices within evaluation (e.g. via observation and dialogue), because they can shed light on student and teacher preferences, as well as the limitations of formal structures and systems. Exploring and discussing actual practices, including subversions and workarounds, can reveal aspects of performance that would otherwise be invisible to evaluation, as well as areas where policy is not, or cannot be, implemented (Brown and Duguid 2002).
If education is not bounded in terms of technologies and structures, neither is it bounded in time. The diverse effects of educational programmes are complex and may not become clear until well after graduation, if ever (Aitken et al. 2019). Therefore, the timing of evaluation matters, yet it is very difficult to know when it is most appropriate to evaluate. Since ecologies have no clear beginning or end, it makes sense to see evaluation as ‘not a single snapshot but rather a continuous view’ (Ory 2000: 16), in which formative evaluation is emphasized over summative. Efforts can then be oriented to supporting the development of constituent elements (teachers, students, administrators, environments and systems), their practices, and the capacity of each to contribute to the holistic manifestation of educational programmes.
Methods of data generation and analysis have the potential to contribute to understandings of important elements of the educational process, and of their relationships to each other. However, it is far from clear that ongoing, comprehensive measurement is good for students or teachers, or that it actually makes things more transparent. Care needs to be taken if datafied practices are to enhance teacher or student agency, without placing undue summative emphasis on behaviours that are part of an ongoing developmental process. For example, too much surveillance and summative judgement may reduce innovation and creativity in teaching design and practice (Gourlay and Stevenson 2017). This is a risk for newer teachers, in particular, who do not have an established reputation, may be less confident in their abilities, and may be on less secure contracts (Darwin 2017). Further, when performance is interpreted according to historical patterns, teachers are judged against historical biases, and the possibilities that might be seen through aspirational data are neglected (Biesta 2009). As Biesta argues, evaluation is not only about judging teaching that has already happened, but also about supporting the desirable development of teachers, and of the teaching that will happen in the future.
We believe that, wherever possible, the emphasis of evaluation should be on improving future quality, rather than providing accounts of past quality, and that the oversight function of institutions and administrators should be as much about support as it is about regulation and control. However, we recognize that judgements still need to be made about the quality of teaching. Prospective students need to be able to make informed decisions about institutions and programmes, and institutions and managers need to be able to identify when teachers require additional support or intervention. Our position is that these judgements should locate the teacher within a holistic, ecological view of education. For us, the quality of a teacher’s activity is determined both by how it fits with, and by how it shapes, the educational ecology. Further, we argue that teachers should be a meaningful part of (but not in control of) the distributed system that passes judgement. Firstly, teachers are well-positioned to understand the context, purpose and parameters of the design and orchestration of their teaching. Secondly, while teachers are, hopefully, knowledgeable about education, the practice of contributing to evaluation provides opportunities to develop ideas about quality and its improvement, and to contribute to the pedagogical understanding of other stakeholders. Finally, by supporting teachers to contribute to the evaluation of their teaching, institutions can foster trusting relationships with their academic staff. This, then, might empower teachers to be innovative and creative, and to defend and explain their choices. As Ory (2000: 17) argued, evaluation is not ‘a scientific endeavor, with absolute truth as its goal, but rather… a form of argument where the faculty use their data to make a case for their teaching.’ There is, of course, a significant implicit burden on teachers here, and so we are arguing both for taking some of the responsibility for teaching quality away from teachers (sharing it with students, institutions, and context) and, simultaneously, for giving teachers more responsibility for contributing to distributed understandings of educational quality. The kinds of expertise teachers need, and the implications of this, are discussed in the next section.