Astra looked out through the panel of windows in the exterior door of her university’s building. It was later than she had wanted to leave, and the sun had already set behind the row of evergreen trees that flanked the left side of her perch. She glanced up quickly and spotted the lights of a Companion. It was waiting for her.

She sighed and pushed open the door. Lingering too long, well, they would notice.

The notion of the ‘post’ has already been explored extensively in this journal, communicating the messy theoretical boundaries between rupture and continuity (Jandrić et al. 2018). Our contribution is the notion of ‘post-surveillance’ – building on a project we have been developing over the past year. Higher Education After Surveillance¹ aims to analyze current surveillance practices in the higher education sector (including broader educational technology, policy, and other spaces) and to understand what post-surveillance futures might be desirable and how to work toward them. It is a project – or perhaps a network of people and projects, it is still too soon to tell – that brings together colleagues in a number of countries (so far mostly the UK, US, and Canada) and roles (including instructional designers, digital leaders, academics, educational technologists, and doctoral students) to develop ideas, approaches, and resources on the theme of higher education and surveillance.

Astra had attempted suicide when she was a teenager. In the months that followed, a range of ‘care procedures’ had been enacted. Those procedures determined which universities she qualified for—ones with high ratings for ‘care practices.’ Esperanza University’s care practices included the deployment of drones called Companions that monitored high-risk students for any abnormal patterns.

If Astra could have afforded an elite school, she would have counsellors and mentors to support her. But here she has Mella, her mentor bot. Mella is a tiny machine in Astra’s dorm room that listens for sounds associated with anxiety. If Mella detects low levels of anxiety, it will soothe Astra with conversation and affirmation. If it detects high levels, it will issue a report to a mental health monitor, who will send a law enforcement officer to her dorm.

The term surveillance is currently used in a wide variety of contexts in higher education, and a broad definition serves us well: ‘the focused, systematic and routine attention to personal details for purposes of influence, management, protection or detection.’ Surveillance, according to Lyon, ‘is not random, occasional or spontaneous; it is deliberate and depends on certain protocols and techniques’ (Lyon 2007: 14). Higher education systems have always involved monitoring through data collection, assessment, and evaluation, shaping intellectual work and tracking the bodies and activities of students and teachers. Contemporary technologies and datafication are part of, but not the source of, surveillance practices in higher education. In using the phrase ‘after surveillance,’ we cannot draw comfort from the notion of an idyllic educational past before such practices existed. Rather, we are gesturing toward a future built on a deeper understanding of the role surveillance has played, and continues to play, in universities, and on tactics and strategies for interrupting and perhaps reducing or reconfiguring its impacts. This requires a willingness to speculate that some of the surveillance roles we have come to accept could be otherwise, along with an acknowledgment that we are implicated in what Lyon (2017) terms ‘surveillance culture’ in education. What can we do with that knowledge, and what culture shifts can we collectively provoke?

Walking down the darkening sidewalk to the bus station, Astra knows Mella will detect her heightened emotional state. She knows her frustrations were recorded by the Class Assistant as it whirled around the room scanning student faces to take attendance, to monitor participation, and to look for signs of dishonesty or cheating. She thinks about what she’ll say when the professor pulls up her Participation dashboard and notes her emotional irregularities in class today, combined with Mella’s reports on her in-dorm anxiety levels.

The need for provocation is clear. Surveillance has become increasingly pervasive and fine-grained as monitoring and data-gathering technologies grow in sophistication and as the quantification and measurement of everything from outcomes to student satisfaction to engagement are increasingly valued in higher education. Alongside this, surveillance practices are not evenly distributed: they affect some students and teachers more than others – a current example being the extensive monitoring of international students and staff in the UK (Unis Resist Border Controls 2019). Even the understanding of the extent to which surveillance can be harmful is unevenly distributed. Our colleague Chris Gilliard, writing recently about the differential impacts of surveillance on marginalized people, critiques an apparently ‘harmless’ college assignment (to ‘eavesdrop on and surveil unsuspecting folks in public to see what information they could gather about them, using only Google search on their phones’) (Gilliard 2019). He argues that ‘the idea that surveillance would be used as an assignment on those with no options for consent speaks to how broken our ideas about consent have become, trivializing what to many people is a life and death matter of their lived existence’ (Gilliard 2019). Technologically mediated practices in higher education are not often seen in terms of surveillance, but they should be – consider, for example, the widespread use of plagiarism detection software (Ross and Macleod 2018), where:

the academic essay (with its associated credits) is enacted as the site of economic exchange—academic writing for credit, credit for degree, degree for employment, and so forth. Within such a rationality, academic writing is an important commodity whose originality (or ownership) needs to be ensured—that is, against the unoriginal copy, presented fraudulently. (Introna 2016: 33)

More often, technology’s use in education is tied to what Audrey Watters (2019) calls the ‘ed-tech imaginary’ – stories that we tell ourselves about the role that educational technology plays in preparing students for the future. Breathless evocations of technology for the sake of innovation, revolution, or salvation trump concerns for student and staff data privacy. If people who care about higher education do not stop to question those stories and their assumptions, the risk of harms increases – harms that may undermine the very futures for which they are working.

The Higher Education After Surveillance project has raised a number of questions so far.² Others will evolve as the research and practice landscapes shift: for example, what are the possible future relationships between surveillance capitalism (Zuboff 2019), platform capitalism (Srnicek 2017), and higher education? What emerging privacy rights do we need, and how do we establish and protect those rights (see Williamson (2017) for a discussion of ‘neurocomputation’ in the classroom)?

Astra stops walking. She turns back toward the building. Maybe she could talk to her professor, ask for some help. As she turns, she hears the whirring get louder as her Companions draw nearer, homing in on her new behaviour. Nervous now, she quickens her pace back to the classroom building. As she rounds the corner toward the building’s steps, a voice calls out to her: ‘Astra. What are you doing?’

In our first Higher Education After Surveillance meeting, we used speculative scenarios to foreground visible and less-visible forces behind educational surveillance and to create conversations about possible futures in which those forces may be resisted or eliminated.³ In doing so, we were informed by our work on ‘not-yetness’ in digital education (Ross and Collier 2016); inspired by the work of colleagues such as sava saheli singh and her screening surveillance series⁴; and drew on the critical use of speculative design to step away from market- and production-driven ideas and approaches (Dunne and Raby 2013). Dunne and Raby (2013: 3) note that ‘futures are not a destination or something to be strived for but a medium to aid imaginative thought – to speculate with. Not just about the future but about today as well, as this is where they become critique, especially when they highlight limitations that can be removed and loosen, even just a bit, reality’s grip on our imagination.’

A uniformed human steps out of a car—she had not heard it arrive. She panics briefly—a flutter no doubt detected by the now-four hovering Companions—but musters a mumbled, ‘I want to talk to my professor.’ The officer moves toward her and extends his arm. ‘Not tonight, Astra,’ he says, ‘let’s get you in to see Dr. Lyfer instead.’ Her panic recedes, replaced with resignation. She would go see Dr. Lyfer and he would place her under closer watch.

Astra’s story, recounted above, gives some insight into the potential impacts of emergent social monitoring technology. In the UK, JISC, a non-profit membership organization providing network and technology services, has recently launched a project to explore the use of data analytics to ‘tackle the student mental health challenge’ (JISC 2019). By telling stories about the futures this might usher in, we can raise issues about the impacts of various kinds of ‘policing-style’ solutions, the consequences of unequal access to mental health services, the obligations of universities to different stakeholders, and more. We can ask: what is the role of trust in this scenario? Who is entrusted, why, and how? Who has agency, and how is that agency handled? What justifications are provided for forms of surveillance, and what alternatives exist?

Discussion of our speculative scenarios in the first Higher Education After Surveillance meeting led to an observation that we are not always able to respond quickly to emergent practices or to understand how they fit into a wider picture of surveillance. This prompted us to imagine a crowdsourced observatory, which could help us gain a better understanding of the landscape of surveillance and privacy across the higher education sector. We know about examples of good and bad (or perhaps ‘mixed’) practice in some institutions, companies, and organizations, but our group might, working together, find a way to gather more of this information in an organized and centralized place. Such information may be useful in helping individual students, teachers, or administrators make decisions about the technologies they use; it could even help institutions develop policies and practices that shift their orientation to student data privacy altogether. We also imagine uses that launch or support social and educational movements that challenge neoliberally inscribed purposes of education – working against uncritical understandings of the ‘value of’ or ‘return on investment in’ education.

We offer this commentary as a snapshot of our early work toward ‘Higher Education After Surveillance,’ and hope it will generate discussion, critique, and debate, as well as help us make connections with other colleagues and groups doing related work. There are a number of tensions for us to navigate, perhaps especially around whose perspectives and voices are amplified in this work, and around how to handle the backlash that any success in drawing more sustained attention to the nature of surveillance in universities would bring. The question of how to leverage the privilege some members have, while ensuring that those who are more precarious, more surveilled, or who benefit less from principles of academic freedom can take leading roles without disproportionate risk, is on our minds. Get in touch if you would like to talk more.