
The persistence of data-science practices that commonly result in injustices, especially for minoritized populations, is puzzling. A large, critical literature on algorithmic governance associated with advances in the digital sciences aptly identifies biases and limitations of big-data analyses and prescriptions, how these problems have conditioned life in the digital economy, and their destructive, uneven, and unjust effects, but we nonetheless lack an explanation for why and how this dire situation remains tolerated and continues relatively unabated. Alongside climate change, I regard deepening socio-economic polarization worldwide and the perpetuation of systemic racism as crucial existential problems that demand critical attention. Broadly, this paper contributes to explaining one dimension of our societal predicament, namely the persistence of the production and deepening of inequality and injustice through data-science practices despite abundant evidence of their destructive effects.

Based on a critical synthesis of literature from the interdisciplinary field of critical data studies, education studies, economic geography, and innovation studies, I develop several interrelated arguments. I locate the problem of toleration and normalization of new, digital forms of injustice in the production of knowledges that is crystallizing in educational institutions, which broadly shape thought processes. The unfolding of algorithmic governance in the new millennium has pervaded the education sector, specifically through the burgeoning “edtech” industry, an assemblage of apps, devices, software, hardware, and platforms designed to datafy student knowledges (Witzenberger & Gulson, 2021); that is, it quantifies knowledges for the purposes of analysis and manipulation for profit (Mayer-Schönberger & Cukier, 2013; van Dijck, 2014; Zuboff, 2019). This approach to knowledge production is accompanied by a particular pedagogy, which I argue inculcates values that are conducive to technocratic thinking, and frames knowledge generation in decontextualized, non-relational terms, thereby prefiguring social injustice in a world beset with intensifying societal tensions and polarization. Contextualization and relationality through the lens of social justice are crucial missing links that would permit actors to situate their actions (Haraway, 1988); they signify key mental capacities and related practices that enable subjects to connect abstract ideas with on-the-ground processes across time and space and to recognize the power relations that lace social relations (e.g., Ettlinger, 2003; Massey, 2005; Yeung, 2005). Further, contextualization and relational thinking through a social-justice lens position people to situate their thoughts and practices responsibly, with attention to the relation between one’s own practices and those of others, the contexts that one’s practices affect, and consequences. The edtech industry aligns with the logic of algorithmic governance under the regime of big data insofar as it eschews causality to prioritize correlations of decontextualized data; the pedagogy accompanying the edtech industry follows suit, and as I will show, conceptualizes knowledge generation in terms of what students can do, a matter of performance, without attention to whether they ought to do what they do, or to the situation of their actions relative to a chain of activity and associated effects. Despite the problems, a celebratory discourse casts the new education paradigm as a “disruptive innovation” (Christensen, 1997; Christensen, Horn, & Johnson, 2008) that delivers a new and improved learning experience. However, this laudatory discourse itself lacks contextualization, resulting in misconstrued claims about the datafication of knowledges that cast new technology (edtech) as the catalyst for a new, improved pedagogy. I show how the prevailing, problematic pedagogy is longstanding, predating digital technologies of the new millennium, although its target population has changed over time from minoritized groups in the United States in the twentieth century to the entire population in the new millennium in the United States and worldwide. Finally, I conceptualize education as an upstream institution, which enculturates subjects in a mode of knowing and thinking that affects downstream applications in daily life, and I examine the ways in which the decontextualized and non-relational character of the prevailing pedagogy governs unwittingly irresponsible practices.

I begin below with a brief background on problems with algorithmic governance generally, and subsequently I extend the issues to the education sector regarding the burgeoning edtech industry and the prevailing pedagogy. The main focus is on the United States, although as I explain in the conclusion, the issues are pertinent worldwide, recognizing that problems and processes materialize differently across space relative to variation in institutional configurations and social, cultural, political, economic, and ecological histories. The next section situates the celebratory casting of the current trajectory in education as a “disruptive innovation,” and explains how this discourse obfuscates realities. The following section pursues a brief genealogy of the so-called “new” pedagogy to demonstrate the fallacy of the technology-first approach of celebratory discourses of technocracy, as well as some not-so-apparent logics entangled in the current educational trajectory and the concerning paradoxes and twists that have delivered the new learning paradigm. The penultimate section engages downstream effects of the upstream inculcation of technocratic values. Concluding comments pertain to the datafication of knowledge production relative to broad societal problems.

Background: Algorithmic Governance and Its Discontents

Even in the infancy of the digital era, we are witnessing the normalization of undemocratic, often devastating effects of technological advance. The problems are rooted not in a particular project, but rather in their diffuseness throughout the fabric of society. Datafication entails the extraction of data from individuals’ digital footprints, without consent of, or payment to, digital subjects, thereby enacting routine erosion of basic privacy rights, continual surveillance, and exploitation of subjects by capitalizing on their personal data (Thatcher, O’Sullivan, & Mahmoudi, 2016; van Dijck, 2014; Zuboff, 2019). People interact with the internet in wide-ranging ways in daily life through, for example, internet searches; social media; smart devices ranging from phones and appliances to children’s toys and adults’ sex toys; the almost two million apps available worldwide that assist people with everything from transportation and shopping to meditation and menstruation tracking; platforms for work as well as consumption; and the internet of things (IoT), which embeds digital technology such as sensors or software throughout the environment to connect and exchange data for widespread activity, from energy usage to credit and financial information more generally. The pervasiveness of digital technology in the increasingly interrelated realms of social media, home, work, leisure, and intimacy reflects our immersion, willing or unwilling, conscious or unconscious, in digital systems in daily life.

Routine data extraction without consent is orchestrated by big-tech firms, whose motive is profit, which supersedes other possible motives such as fairness, equity, transparency, and basic privacy. Beyond objective problems such as invasion of privacy, surveillance, and exploitation, deleterious subjective effects include addictive habits as algorithms nudge users into continued use of digital-era accoutrements such as phones, social media, and apps, thereby ensuring usage and, therefore, profits (Chun, 2017; Cockayne, 2016; Ettlinger, 2019). Emblematic of the prioritization of profit is the mundane example in online shopping of the profusion of choices, which are designed not with the user in mind, but rather to increase usage time in the interest of profitability (Sullivan & Reiner, 2021, p. 418), an instance of what media scholar Simone Natale (2021) considers the deceitfulness of media in the digital era.

Governance in general has become reliant on algorithmic designs that embed biases relative to longstanding societal hierarchies resulting from classism, racism, misogyny, homophobia, xenophobia, ableism, and ageism. Beyond the problem that biases exist in the real world and therefore exist in designs (Christian, 2020; Crawford, 2021), the overwhelming constitution of the data sciences by privileged white men – the “diversity crisis” – feeds bias-driven problems (Crawford, 2016; Snow, 2018). Urban planning around the world, especially in association with “smart planning,” is designed, orchestrated, and implemented by tech firms, for profit, while government steps in as a partner to legitimize the inscription of smartness on the landscape, unevenly. Smart-city applications commonly are socio-spatially bifurcated, with systems intended to provide information and nurture entrepreneurialism in downtowns, whereas a system of punitive surveillance targets underserved communities of color (Brannon, 2017), governed by a “digitize and punish” mentality that unjustly targets marginalized communities (Jefferson, 2020). More generally, smart-city planning guided by the corporate sector tends to be piecemeal, focused on disparate for-profit projects related to compartmentalized problems such as parking and transportation, and IoTs in downtowns and select places of “opportunity,” as opposed to a coherent plan to work towards a more socially and environmentally sustainable future throughout an urban social and political economy (Cugurullo, 2019). Algorithms inform the public-private planning complex and agents of the real-estate industry where to invest, as well as where to disinvest, notably in the same communities targeted for punitive surveillance (Safransky, 2020), while evidence of racialized bias in mortgage-approval algorithms mounts (Martinez & Kirchner, 2021). The rise of big-data policing has generated a system designed to preempt crime by criminalizing marginalized individuals before crimes are committed, an insidious reversal of the “innocent until proven guilty” hallmark of democracy (Brayne, 2021; Ferguson, 2017). Everyday decisions, from judicial rulings to hiring, firing, credit approval, and scheduling, routinely discriminate based on race/ethnicity, gender, sexuality, and their intersections (Pasquale, 2015). Echoing the perpetuation of life under Jim Crow, mundane practices such as drinking from an automated water fountain or washing one’s hands in a lavatory with automated soap dispensers require being white because the sensors are not designed to recognize Black skin (e.g., Benjamin, 2019). Search engines embed racist and sexist values (Noble, 2018). Algorithmic governance overall unjustly targets marginalized populations relative to multiple axes of difference and their intersections, prompting new vocabulary such as the “digital poorhouse” (Eubanks, 2017) and “weapons of math destruction” (O’Neil, 2016).

Conceivably, one might argue that the well-worn path of neoliberalism as well as racism and many other “isms” are devoid of ethics, judiciousness, and sufficiently restrictive regulatory policy, and therefore the apparent absence of such values in the new millennium is nothing new. However, pernicious mentalities are not accomplished facts; they are ongoing processes. Although systemic injustice is longstanding worldwide, it takes on different forms and manifests in different practices across contexts. The pertinent question is not whether injustice lies in the domain of continuity or change, but rather how the processes by which injustices persist have changed, an approach that can inform ways to tackle problems, challenge mentalities, and pursue alternatives.

Concerned critics within the data-science community have called attention to a vacuum of ethical thinking (Floridi, 2015). On the other hand, critical media scholar Mark Andrejevic (2020) has argued that the fundamental problem pertains not to ethics but rather to a crisis in judgement that has resulted from the automation of judgement linked with the automation of media as well as of sociality and the dismantling of people’s shared sense of community. Critical media scholar Kate Crawford (2021) similarly has argued that the focus on ethics is problematic, although for different reasons and with different conclusions. She argued that a focus on power brokers of twenty-first century technologies, from big-tech firms to universities, can curtail algorithmic violence through the development of appropriate regulations (see also Pasquale, 2015). However, calling for ethical thinking, lamenting lack of judgement, and calling for policy to rein in major actors complicit in the sins of artificial intelligence (AI) applications all raise the question of how the logic that permits tolerance of unjust, data-driven, technocratic solutions has become ingrained in digital subjects’ minds. I concur that the automation of judgement poses profound problems, and I endorse attention to both ethics and regulatory policy, but I argue that constructing real change at some point must engage the systemization of a mode of knowing that renders unjust, data-driven, technocratic solutions persistently tolerable by society to the point of normalization, a matter of a societal-scale subjectivity. I ask how, in the process of the smartification of society, a mode of knowledge production developed that bypasses ethics, judiciousness, and sense of citizenship and community.

Education in the Digital Era

Although rarely called by name, a pedagogy known as “competency-based education and training” (CBET) prevails in the United States and around the world, while its “carrier” across all institutions currently is the edtech industry, the vehicle by which technology mediates CBET tenets. One value of online education promulgated by the edtech industry is that it can be customized and personalized relative to students’ needs, and this customized aspect of the current system has long been central to CBET pedagogy. Those who can complete assignments rapidly can do so, and those who need more time are accommodated. The discourse on the new education features the efficiency of the self-paced learning system, which de-standardizes the learning process insofar as it puts students in control of their learning. The approach shifts the role of instructor from “a sage on the stage” to “a guide on the side,” rendering instructors facilitators of the management of information (King, 1993). Online education in turn renders students entrepreneurs of their own education, responsible for their progress in a new round of neoliberal practices.

In addition to the personalization component, CBET departs from evaluating students on what they know, and instead prioritizes performance—what students can do. Students in a CBET system demonstrate mastery of predetermined competencies, expressed in terms of expected learning outcomes (ELOs), which are assessed quantitatively. One fundamental problem, however, is that teaching for the learning outcome, like “teaching for the test,” can leave considerable gaps in people’s thinking. Just as different processes can result in the same pattern, a “right” answer can derive from different logics, with potential problems downstream in application. Further, the focus on skills and what people can “do” relegates content-oriented, contextual knowledges to secondary status, relevant only if such knowledges are useful in the performance of a task (Hyland, 1997). For example, a task such as the construction of hot spots of crime in a city requires no contextual knowledges regarding uneven surveillance; yet uneven arrest patterns across a city demonstrate the constructed nature of hot spots, which in turn unjustly stigmatize places and the people who live there (Jefferson, 2017). Skills—“doing”—while valuable and necessary, represent partial knowledges that lack connection with conceptual frameworks guiding action. The construction of hot spots, for example, conceptualizes places as bounded, without connection or relevance to other places across a city and beyond. Focusing singularly on tasks and the skills required to perform them neglects contextual and conceptual knowledges that enable a student—or downstream, a worker—to raise questions and critically evaluate the tasks they execute that implicitly are part of larger societal projects that may deliver injustices.
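To make the decontextualization concrete, consider a minimal sketch in Python, with hypothetical arrest records and analyst-chosen thresholds, of how crime hot spots are commonly constructed: incidents are binned into grid cells and the densest cells are flagged. Nothing in the computation registers where police patrol or arrest most intensively, so the output inherits uneven surveillance while presenting itself as a neutral description of crime.

    from collections import Counter

    # Hypothetical arrest records as (x, y) locations; the data already
    # reflect where police patrol most intensively, but the computation
    # carries no record of that uneven surveillance.
    arrests = [(1, 1), (1, 1), (1, 2), (1, 1), (5, 5), (1, 2), (9, 3)]

    CELL_SIZE = 2  # grid resolution, chosen by the analyst
    THRESHOLD = 3  # arrests per cell required to flag a "hot spot"

    def to_cell(point, size=CELL_SIZE):
        """Assign a location to a grid cell by integer division."""
        x, y = point
        return (x // size, y // size)

    counts = Counter(to_cell(p) for p in arrests)
    hot_spots = [cell for cell, n in counts.items() if n >= THRESHOLD]

    # The flagged cells mark where arrests were recorded, not where crime
    # "is"; patrol patterns and reporting rates are simply absent.
    print(hot_spots)

The sketch also treats each cell as bounded and self-contained, precisely the non-relational conception of place criticized above.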

Despite these problems, CBET in the United States exists in various forms both in traditional postsecondary institutions with tenure as well as in the private sector. The landscape of education is changing rapidly, although unevenly. Change is slowest in traditional colleges and universities that reward students for their “seat time” with credit hours towards courses and degrees, rather than exclusively for mastery of ELOs; however, incipient changes in the current context are evident in a new fervor over certificates that can be independent of degrees.

CBET in traditional colleges and universities is occurring on a piecemeal, experimental basis, notably regarding the specification of ELOs and increased accountability. In these institutions, CBET has been adapted in academic departments to the needs and demands of disciplinary issues in the longstanding structure of courses, majors, and degrees. The ELOs and proficiencies provide a vehicle for examining the effectiveness of teaching, and potentially offer a blueprint for substitute teaching when researchers buy themselves out of courses, take a sabbatical, or spend time in the field or at another institution. Student work on university learning platforms enables the datafication of performance for assessment purposes, although at the time of the writing of this paper, this aspect of CBET tends to be optional in traditional colleges and universities, even if seductive because the automation of grading relieves instructors of evaluation.

In contrast, CBET in its purest form, which encompasses personalization, operates in non-traditional postsecondary institutions unconstrained by traditional curricular structure, rewarding students for their mastery of ELOs, holding them accountable quantitatively, and proceeding online through self-pacing. Emblematic of “pure” CBET in the new millennium, Western Governors, a thoroughly online, private university, began enrolling students nationwide in 1999 in self-paced programs designed for working adults.

New universities such as Western Governors entered the new millennium offering an educational alternative to traditional postsecondary education that solved both space and time problems for working adults in the context of precarious work. The shift from the salience of a primary to a secondary labor market associated with the decline of Fordism in the last quarter of the twentieth century produced what labor studies scholar Guy Standing (2011) called “the precariat,” an internally heterogeneous class of people across wide-ranging occupations experiencing high levels of under-employment, job insecurity, and wage insecurity. In the context of the digitalization of jobs in the new millennium, labor studies scholar Ursula Huws (2014) dubbed the burgeoning digital labor force “the cybertariat,” an extension of the internally heterogeneous precariat into the digital realm in which the insecure and unjust conditions of the precariat have deepened (see also Ettlinger, 2016). The market for education in the new millennium thereby has encompassed underemployed adults across racial/ethnic, gendered, sexual, and aged axes of difference. Minoritized populations continue to bear the harshest burdens and injustices of the new economy (Cottom, 2020), while the general circumstances of precarity also characterize those of the previously privileged. By 2013, one-third of undergraduate students in the United States were over the age of 25, many of whom were working women with diverse responsibilities (Burnette, 2016). Enrollment in traditional colleges and universities declined because working students lack the time and money to dedicate four or more years continuously to education. The consequent decline in tuition-based revenue occurred concurrently with diminishing public investment in postsecondary education. Traditional colleges and universities responded to the changing context by increasing tuition fees, which in the new millennium amounted to twice as much educational revenue as in the 1990s (Gallagher, 2014; Weissmann, 2014). Ironically, the short-term, bottom-line thinking behind the tuition increases exacerbates circumstances in the long run because the costs of tuition have become unmanageable in the context of precarious work. Increasing numbers of young adults now seek alternatives, and nearly all “non-traditional” students, 90%, now take courses online (Rabourn, Brcka-Lorenz, & Shoup, 2018). Fully online courses enable working students and those with domestic responsibilities to access a postsecondary education they can complete at their own pace and without the requirement to leave work to arrive at a fixed space on a university campus. Focusing on professional fields such as IT, health and nursing, business, and teaching, new institutions in the new millennium emerged to provide training and certification at a fraction of the cost of traditional colleges in response to the changing student “market.”

In the scramble to expand their market, traditional colleges and universities have developed new strategies. Many have incorporated distance learning into their curricula, which solves the space problem yet leaves the time issue unattended because distance learning still requires working students to reserve time in their day for online classes. Leading private universities in the United States such as Harvard, MIT, and Stanford pioneered the next curricular innovation: massive open online courses, or MOOCs, which, like Western Governors University, solve both space and time problems. MOOCs have been branded as “high end” due to the prestige of the private institutions through which they are developed and delivered, and the internationally renowned professors who prerecord lectures; evaluation is automated and students pursue courses online, anytime, at their own pace per the CBET personalization model. The “massive” in the MOOC model reflects the global crowd of students that these courses target in association with a modernization discourse regarding the diffusion of high-end education throughout the world, encompassing low-income countries. However, MOOCs have been unsuccessful both at retaining students in all countries and at attracting students from underdeveloped world regions. Only a third of MOOC students come from low-income countries. Just over 3% of enrolled students in MOOCs through MIT and Harvard from 2012 to 2018 completed their courses in 2017–2018, the end point of a downward trend from 6% in 2014–15 and 4% in 2016–17; and almost 90% of students who enrolled in a MOOC in 2015–16 did not enroll again (Reich & Ruipérez-Valiente, 2019). These serious problems prompt questions regarding the value of the new education paradigm.

New universities such as Western Governors as well as MOOCs in private, traditional universities now compete with edtech firms, encompassing startups, middle-market companies, and publicly traded companies that service elementary, secondary, and postsecondary institutions. Established edtech firms such as Coursera, Pearson, Udacity, and edX that have collaborated with traditional colleges and universities by supplying them with platforms and apps now also offer their own courses and certificates (Mirrlees & Alvi, 2020), and edtech now encompasses massive open online course corporations (MOOCCs) that work with professors at traditional universities (Mirrlees & Alvi, 2020). Further, the edtech sector has spawned a generation of “meta edtech” firms that monitor, evaluate, broker relations among stakeholders, and shape the direction of the industry (Williamson, 2021). “Meta edtech” also encompasses “evidence intermediaries,” which provide platforms that evaluate commercial edtech products and services for schools and parents. Another type of “evidence intermediary” is market intelligence firms such as HolonIQ, which offers global “educational intelligence” that assesses the market value of edtech companies as well as world regional markets and their potential for edtech investment (Williamson, 2021).

Strategies for the delivery of technologically mediated education vary from a blend of labor- and capital-intensive approaches to thoroughly capital-intensive ones. “Blended learning” is a combination of synchronous and asynchronous educational delivery, and private-sector edtech firms emphasize asynchronous education while offering a brief “bootcamp” approach to satisfy a synchronous learning component (Perdue, 2018). The brief time required for in-class, “bootcamp” learning caters to working adults with little time to leave work, while the asynchronous approach is amenable to a “plug and play,” standardized approach to courses taught across institutions to minimize set-up costs. More generally, non-traditional educational establishments initially met the high costs of incorporating educational technology in the learning enterprise by reducing labor costs, specifically by jettisoning the professoriate and implementing a Taylorist division of education into tasks for non-tenure-track, low-paid education professionals scattered across various functions such as instructional design, assessment, counselling, and subject-matter development (Berrett, 2016). Labor-market optimists might argue that such new developments represent a case of “creative destruction” because new types of jobs have been created to replace single positions. However, the low pay and the untenured, insecure nature of the new jobs reflect the casualization of academic labor, which inevitably will pervade traditional colleges and universities, even if at a much slower pace than in non-traditional institutions. By the second decade of the new millennium, the edtech industry had incorporated fully capital-intensive methods with automated teaching and evaluation, early stages of AI tutors, and blockchain technology to write and validate student transactions across institutions.

The imminence of AI tutors as a norm is concerning because AI currently lacks the capacity for explanation and contextualization; even description is difficult because decontextualized correlations often result in spurious conclusions, such as Black Americans misidentified as gorillas or an overturned school bus misidentified as a snowplow. Further, the binary foundation of algorithmic logic aligns with a “right”/“wrong” approach to evaluating student performance, a mode of evaluation outside the domain of argumentation as a mode of learning, knowing, and expression. The “right”/“wrong” binary lacks awareness and appreciation of multiple perspectives and forfeits scrutiny of assumptions that would cast doubt on the tidiness of unilateral thinking. Assumptions underlie all perspectives and guide a subject towards particular types of information, methods, conclusions, and recommendations. From this vantage point, “right” and “wrong” reflect the perspective adopted by those developing questions, answers, and curricula more generally, to the exclusion of other perspectives and without attention to alternative conceptualizations, their context, and significance. Herein lies a principal source of bias in the new pedagogy.
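The binary evaluation logic can be seen in miniature in a sketch of an automated grader, assuming a hypothetical answer key and response format: each response is reduced to a match against a single predetermined key, so an answer reasoned from an unanticipated but defensible perspective scores identically to a careless error.

    # Hypothetical answer key; only the curriculum designer's perspective
    # is encoded as "right".
    ANSWER_KEY = {"q1": "market failure", "q2": "efficiency"}

    def grade(responses: dict) -> float:
        """Score responses against the key: each answer is simply right
        or wrong, leaving no room for argumentation."""
        correct = sum(
            1 for q, expected in ANSWER_KEY.items()
            if responses.get(q, "").strip().lower() == expected
        )
        return correct / len(ANSWER_KEY)

    # A defensible alternative answer is indistinguishable from a mistake.
    print(grade({"q1": "regulatory capture", "q2": "efficiency"}))  # 0.5

No argument, assumption, or perspective is visible to the grader; only string equality is.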

Blockchain, as an emergent arm of edtech, may be increasingly salient in traditional colleges and universities to permit students to transfer credits between CBET and non-CBET programs (Burnette, 2016, p. 90). With an eye to the future, the edtech vision is to enable the burgeoning non-traditional student population to enroll in courses in institutions around the world, documenting and transferring course credentials or ELOs with ease through blockchain while “professors” take on the new role of advising students in customizing their inter-institutional, international curricula (Williams, 2019).
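As a rough illustration of the envisioned mechanism, and not of any particular vendor's system, credential records can be chained by hashing: each entry commits to the hash of the previous one, so a transcript assembled across institutions can be checked for tampering without a central registrar. A minimal sketch with hypothetical record fields:

    import hashlib
    import json

    def add_record(chain, record):
        """Append a credential record that commits to the previous
        entry's hash, forming a tamper-evident chain."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"record": record, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        chain.append({**body, "hash": digest})

    def valid(chain):
        """Recompute every hash; any altered record breaks the chain."""
        prev = "0" * 64
        for entry in chain:
            body = {"record": entry["record"], "prev": entry["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True

    transcript = []
    add_record(transcript, {"institution": "University A",
                            "elo": "data analysis", "score": 0.92})
    add_record(transcript, {"institution": "University B",
                            "elo": "statistics", "score": 0.88})
    print(valid(transcript))  # True

Note what the chain validates: the integrity of ELO records, not their meaning or the contexts in which they were earned.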

Beyond new universities committed to a tech-mediated CBET and an expanding privatized edtech sector, big-tech firms themselves are expanding into education. For example, students can now earn certificates from Google in just 3 to 6 months at the low cost of $49 a course; to affirm the credibility of the program, Google has indicated that the certificates substitute for regular college/university degrees for eligibility for jobs at its own company (Trapulionis, 2020). Big tech also has become an important component of edtech philanthropy. These firms’ considerable support of the automated, personalization model of education is self-serving insofar as they are invested in the profitability of innovations and, crucially, in the data collected from students, the “oil” of datafied education in the new millennium. Edtech and big-tech companies adopting edtech practices are fast becoming the new agents of knowledge production.

Currently, all educational institutions, traditional and non-traditional alike, are developing learning analytics, whereby student information from platforms as well as applications is mined and datafied. The purpose is to profile students so that “problem students” can be identified early to permit “intervention,” a structural mimicking of predictive profiling of minoritized populations at a societal scale, specifically in the education sector of the surveillance economy (Zuboff, 2019), without regard for the systemic biases that contribute to profiling (Benjamin, 2019; Eubanks, 2017; Jefferson, 2020; Noble, 2018). Learning analytics in cash-strapped traditional colleges and universities unloads the costs of development and new releases of software to vendors (Burnette, 2016, p. 90) while ostensibly helping to stem attrition, and does so by eroding students’ privacy without their consent.
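A minimal sketch of the profiling logic, with hypothetical features and training data, shows how the bias enters: a model fit to historical records flags “at-risk” students for intervention, so whatever institutional practice shaped the historical labels is reproduced as a seemingly neutral score.

    # Hypothetical historical records: (logins_per_week, avg_quiz_score)
    # paired with labels assigned by past institutional practice; the
    # labels, not the students, carry whatever bias shaped earlier
    # "interventions".
    history = [
        ((12.0, 0.85), 0), ((10.0, 0.78), 0), ((3.0, 0.55), 1),
        ((2.0, 0.60), 1), ((11.0, 0.90), 0), ((4.0, 0.50), 1),
    ]

    def centroid(rows):
        """Mean feature vector of a group of students."""
        return tuple(sum(col) / len(rows) for col in zip(*rows))

    flagged = centroid([x for x, label in history if label == 1])
    unflagged = centroid([x for x, label in history if label == 0])

    def at_risk(student):
        """Flag a student who sits closer to the centroid of previously
        flagged students than to the centroid of the rest."""
        d_flagged = sum((a - b) ** 2 for a, b in zip(student, flagged))
        d_unflagged = sum((a - b) ** 2 for a, b in zip(student, unflagged))
        return d_flagged < d_unflagged

    # The score looks neutral but merely extrapolates past labels.
    print(at_risk((5.0, 0.65)))  # True: flagged for "intervention"

Nothing in the pipeline asks why logins or scores differ across students, nor whether earlier interventions themselves produced the patterns.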

More generally, learning analytics is emblematic of the use of big data in the education sector. As in big tech’s governance of populations generally, analytical use of AI in the education sector depends on big data pooled from populations rather than samples, and proceeds based on correlations among data that have been decontextualized (Bolin & Schwartz, 2015). Rather than focusing on causes of problems, learning analytics is based on correlations of patterns in the past to preempt problematic practices in the future through intervention in the present (Witzenberger & Gulson, 2021). The value of students in this system is that they are the source of data; per critical philosopher Gilles Deleuze (1992), they are “dividuals”—sets of data points subject to manipulation by machine learning—as opposed to individuals with agency whose actions are situated and require contextualization. Although learning analytics is considered valuable for its discovery of patterns (Beer, 2019), clustering techniques in learning analytics assign “dividuals” to groups not on the basis of discovery, but rather based on mathematical construction using pre-determined parameters and criteria (Perrotta & Williamson, 2018). At an edtech trade show targeted to educational institutions, for example, critical education scholars Kevin Witzenberger and Kalervo Gulson (2021) observed market-ready innovations that use patterns of student mouse movements and response times to questions as the basis for the modelling of learning pathways. This “innovation” evaluates and purportedly preempts problems based on patterns outside the scope of assigned tasks, without students’ awareness that mouse movements or response times will affect their learning pathway. Learning analytics is extending into the realm of emotions with the use of psychometrics, sentiment analysis, natural language processing, face cams, and other modes of biometric dataveillance (Lupton & Williamson, 2017). Far from an ivory tower, the education sector is firmly embedded within the broader digital economy.
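The point that groups are constructed rather than discovered is visible in the mechanics of a standard clustering routine: the analyst fixes the number of groups, the features, and the distance metric before any data are seen, and the algorithm then necessarily assigns every “dividual” to one of the predetermined groups. A minimal k-means sketch under those assumptions, with hypothetical mouse-movement and response-time features:

    import random

    # Hypothetical per-student features: (mouse_distance, response_time).
    students = [(120, 4.1), (130, 3.9), (480, 9.8), (500, 10.2),
                (125, 4.0), (490, 9.5), (300, 7.0)]

    K = 2  # the number of "learner types", decided before any data are seen

    def kmeans(points, k, iters=20, seed=0):
        """Plain k-means: alternate between assigning points to the
        nearest center and recomputing centers as group means."""
        random.seed(seed)
        centers = random.sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                nearest = min(
                    range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])),
                )
                groups[nearest].append(p)
            centers = [
                tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                for i, g in enumerate(groups)
            ]
        return groups

    # Every student lands in a cluster by construction; the parameters
    # (k, features, metric), not discovery, determine the groups.
    for i, group in enumerate(kmeans(students, K)):
        print(f"cluster {i}: {group}")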

History of the Pedagogical Present: Contextual Dynamics in the Twentieth Century and Contradictions of CBET Wellsprings

Even insightful critical scholarship on digital-era education has focused on the technologies that mediate education (e.g., Mirrlees & Alvi, 2020; Williamson, 2017), and studies that focus on the accompanying pedagogy presume that it is new and was developed to implement the emergent edtech industry. Indeed, business and innovation scholar Clayton Christensen and his colleagues (2008) presciently recognized the big-business aspect of the new edtech industry just before the end of the first decade of the new millennium. They argued that the edtech industry represents a case of “disruptive innovation,” and that the computer-driven technological infrastructure for education would prompt a change in pedagogy that would change education as we know it, decidedly for the better. However, the so-called “new” pedagogy has a history that would have predicted considerable dissatisfaction; the pedagogy, and its ills, preceded the technology.

Competency-based education (CBE) emerged in the United States in the late 1950s emphasizing ELOs and quantification; the inclusion of “training” (CBET) reflects the vocational orientation that became salient in the 1960s, when the personalization tenet was introduced, and has remained central through the present. The impetus for the development of a new approach to education was a sense of the United States falling behind when the former Soviet Union launched Sputnik I in 1957, causing concern regarding the competitiveness of the skill base of the US citizenry (Elam, 1971; Hodge, 2007; Tuxworth, 1989). Enacted the following year, the National Defense Education Act brought education into the purview of federal policy and provided funding for education, notably in STEM fields and languages. However, demands changed in the next decade, the civil rights era.

The frame of the new approach to education changed in the 1960s to assist marginalized populations, especially Black Americans, who had “slipped through the cracks” of US postwar prosperity. The government extended funding beyond STEM to all fields and focused on teacher training and vocational programs outside traditional educational institutions to provide “disadvantaged” populations—a euphemism for “underserved”—with skills for jobs. Whereas the agenda behind skills-based education directly following Sputnik emphasized STEM to achieve competitive advantage internationally in what became the space race, the unfolding of CBET in the next decade reoriented the skills imperative to a pipeline to jobs for “non-traditional” students in a racialized society.

The liberal agenda of the 1960s therefore was to institute a skills-based vocational approach to education to support diversity and ensure equity and inclusion in the US opportunity structure (James, 2019). The emphasis on skills required a pedagogy focused on student performance, a problem directly amenable to the establishment of ELOs, with inspiration in educational theory from Benjamin Bloom’s (1956) taxonomy of educational objectives, published just one year prior to Sputnik. The rollout of the new pedagogy entailed specification of multiple proficiencies associated with each ELO to permit quantitative evaluation and ensure objectivity in the new science of education, thereby establishing confidence in the order of the system (Kerka, 1998). Competence in proficiencies would demonstrate mastery of ELOs and preparedness for jobs. A little more than ten years after the publication of his taxonomy of educational objectives, Bloom (1968) incorporated the principle of student-centered learning via self-pacing in his framework, accommodating the agenda of diversity of the civil rights era and crystallizing the imbrication of personalization with ELOs and quantitative assessment. While contextual dynamics prompted a change from targeting the general population for skill development for purposes of international competition to targeting unemployed minoritized populations for skill development for jobs, academic influences contributed to the pedagogic principles that were to guide the liberal process.

Eclectic and selective intellectual wellsprings reflect inconsistencies that arguably produce problems while also helping to explain the multiple versions of CBET (Kerka, 1998) that developed within and across different types of educational institutions in the twenty-first century (Klein-Collins, 2012), as discussed in the previous section. A pivotal intellectual wellspring for CBET was the scholarship of experimental and behavioral psychologist Burrhus Frederic Skinner (1968), who pioneered the quantitative, “scientific” examination of animal behavior, which he maintained is similar to that of human beings and therefore useful in the management of people’s behavior. He was interested in shaping animals’ behavior by narrowing and reinforcing a prescribed set of desired behaviors, analogous to the pre-determination of learning outcomes set by teachers for learners in CBET. Also pertinent to CBET’s exclusive focus on performance, Skinner’s (1968) approach casts anything that cannot be observed directly as irrelevant, a basic tenet of positivist science.

The scientific mode of analysis in the social sciences, education, and various fields across academe developed in an emergent socio-technical milieu buttressed by the introduction of computers and their widespread use in academe and think tanks, encompassing wide-ranging developments from Ludwig von Bertalanffy’s (1968) systems theory to a quantitative revolution in methods across many academic fields. As the education sector became responsibilized for its accountability (Houston, 1974), the systematization of data permitted quantitative assessment of students’ performances on proficiencies and mastery of ELOs, as well as the quantitative assessment of whole curricula. Quantification legitimated the pedagogy through the presumed neutrality and objectivity of a “scientific” approach to assessment. During the ’60s and ’70s and throughout most of the twentieth century, CBET was implemented in non-tenure-track educational institutions associated with what became known as the Performance-Based Teacher Education Movement (PBTM), amenable to quantitative assessment (Hodge, 2007; Gallagher, 2014). Yet dropout rates from CBET programs were high (Grant, 1979; Jackson, 1994), anticipating the current situation of MOOCs. Despite this fundamental problem, the movement eventually spread internationally by the 1990s to Canada, the UK, continental western Europe, Australia, and Africa, and extended topically to professional fields such as medicine, health, and IT (Lassnigg, 2017). The fervor regarding quantification via the pedagogical innovations of ELOs and personalization apparently outweighed signs that the personalization of CBET was insufficient to deal with the problems of diversity to which the pedagogy purportedly responded.

The intellectual activity in the ’60s connected with another, familiar wellspring: Taylorism, which has been a pervasive influence on societal trends from the early twentieth century through the present. Named after Frederick Taylor (1911), who published The Principles of Scientific Management, Taylorism implicitly framed CBET in two ways. First, Taylorism embraces efficiency by way of developing a detailed division of labor so that each individual becomes proficient in specific jobs. Analogously, CBET embraces a detailed division (“taxonomy,” per Bloom) of ELOs and associated proficiencies that are amenable to “scientific” analysis, which is useful as a quantitative vehicle for accountability. Second, Taylorism casts rank-and-file workers as doers, not thinkers, a category reserved only for managers who conceptualize the activities in which workers perform their duties. Analogously, learners in a CBET system are conceptualized as doers while instructors are the thinkers who design and prescribe pre-determined behavioral outcomes, the ELOs.

Although familiar Taylorist principles seem consistent with CBET principles developed in the context of the quantitative revolution as well as behaviorism and liberal approaches to diversity, the mix of ideas associated with CBET lacks coherence. For example, the granularity of Taylorist divisions of labor and their manifestation in CBET in terms of ELOs and proficiencies are inconsistent with the holism of systems theory. One conceivably might argue that the two frameworks nicely complement each other, but the underlying principles nonetheless differ. Whereas from a systems perspective, a change in one component of a system affects all others, proficiencies and ELOs do not necessarily interrelate unless a specific proficiency directly speaks to such interrelation. The skills-based knowledges for which CBET aims lack a relational understanding of problems and construct compartmentalized logics that can miss problems formed at their nexus.

Another contradiction lies in the evolving discourse of personalization, which champions student-centered learning. Students indeed have control over the speed with which they complete tasks, but they have no voice regarding the domain of tasks to complete, or at the least, an avenue of negotiation. The practices by which the personalization tenet of CBET materializes contradict humanist values of scholars such as John Dewey (1971), from whose work CBET also purportedly draws, in part. Dewey was interested in activity-based learning, suggestive of CBET’s emphasis on skills-based education, but for Dewey this interest connected with knowledge-based education. Ironically, CBET scholars tended to focus on the former and circumvented the latter (see Wexler, 2019), reinforcing the notion of the Taylorist division between doers and thinkers and rendering the lack of student control over knowledges problematic. Similarly, CBET scholars emphasized linguist Noam Chomsky’s (1965) distinction between doing and knowing while bypassing Chomsky’s thoughts about the importance of knowledges, a centerpiece of his critique of Skinner’s devaluation of innate knowledges (Hodge, Mavin, & Kearns, 2020). Following the behaviorism of Skinner, CBET presumes that knowledges follow from skills. Yet evidence exists that affirms the opposite, namely that knowledges prefigure skill acquisition. For example, a study comparing the performance of two groups of children – one of which had developed contextual knowledges regarding a topic on which they were tested and the other of which had not – showed that the group with contextual knowledges tested better than the other group (Wexler, 2019, p. 30). Another study showed that children at resource-poor schools lack the texts available in affluent school districts that feature material on standardized exams (Broussard, 2018, p. 53). Context matters regarding both the knowledges that enable relational, critical analysis and the accounting of uneven performance.

The growth of CBET throughout the second half of the twentieth century and its diffusion around the world is ironic considering the problems. In addition to issues regarding circumvention of contextual knowledges and the high dropout rates from CBET programs, proponents of the pedagogy were unable to provide evidence that it results in better performance than other pedagogies (Gallagher, 2014; Hodge & Harris, 2012; Kerka, 1998; Tuxworth, 1989). Moreover, despite the vocational orientation to provide an education-to-jobs pipeline, the CBET community stopped short of any communication with employers (Burnette, 2016, p. 90; Henrich, 2016). CBET was out of touch with new developments downstream in the workplaces for which it purportedly was preparing students. In contrast to the narrow focus on specific tasks in a Taylorist-inspired rigid division of labor connecting with CBET, post-Fordist production processes by the 1980s in the United States, especially in the automobile industry, mimicked Japanese competitive strategies regarding quality control, which required holistic, contextual knowledges through job rotation. Accordingly, the Japanese had to train US workers in their branch plants in the United States and located facilities in “greenfield” sites—rural areas without a history of manufacturing—to avoid the need for workers to unlearn Taylorist practices (Ettlinger & Patton, 1996). The capacity of CBET students to tackle new, multidimensional problems in workplaces remained “a next step” (Hyland, 1997), and continues to be elusive in new and different ways in the digital era.

Although the theory of disruptive innovation predicted that pedagogy follows from new technology, and thereby missed the historicization of new trends, its departure from an emphasis on breakthrough innovations is apt: disruption instead involves tweaking existing products or services and rendering them accessible to those formerly overlooked as a market, often due to lack of affordability. This expansive notion of disruption relatively accurately, even if partially, describes market changes specifically regarding pedagogy. Digital technology enabled the scaling of a pedagogy that emerged in the twentieth century for a small market, which represented, however, a downsizing of the original, society-wide target population. It was the confluence of existing pedagogy and new technologies to scale up its delivery, not a causal or chronological relation between the two, that constitutes the current disruption. Causal factors are contextual, not a matter of technology proactively being pushed on a market to engage profound societal problems. A fundamental problem with the theory of disruptive innovation applied to knowledge production is that at its core, it is technocratic in its presumption that technology can engender a mode of knowing capable of serious engagement with societal needs.

History shows us that the present is produced over time, discontinuously. The discontinuous and contingent nature of CBET’s evolution is reflected in changes in its target populations and in the disparate intellectual wellsprings that spawned various renditions of the pedagogy in different types of institutions. The “production of the present” is evident in the profusion of problems associated with CBET principles as well as in the inconsistency among principles and the lack of follow-through to connect education with jobs—all of which were evident in the twentieth century and unsurprisingly remain so. It would have helped if proponents of the so-called new pedagogy in the new millennium had contextualized the principles they promulgate so as to learn from history. Importantly, beyond problems that result in student attrition and lack of connection between educational institutions and employers, the inattention to relational and contextual thinking in CBET raises important questions about ethics and responsibilities, as elaborated below.

Downstream Consequences of Tech-Mediated CBET

While edtech renders students valuable as “dividuals” to a variety of actors, notably firms, the accompanying pedagogy renders students valuable downstream as workers, also notably to firms. The corporate, neoliberal sense of value envelops and pervades all aspects of education in the twenty-first century. Related critical discussions of neoliberal education have focused on its privatization; the promotion of diversity in universities for the sake of competitive advantage; the training of students for lifelong learning so they can adapt to changing workplaces; and the cultivation of overwork (Cockayne, 2020; Mitchell, 2018). The CBET pedagogy, and more informally, the skills orientation in technical and professional fields, have ushered in novel ways to inculcate neoliberal and technocratic values that play out downstream in workplaces and everyday life. The personalization component of CBET responsibilizes students for their progress while an ELO repertoire of skills licenses students for jobs, without, however, the contextual and conceptual knowledges that would permit critical questioning. Even if traditional universities and colleges only recently have begun to adopt the ELO system, many disciplines, notably technically oriented STEM fields and business and other professional fields—the fields in which CBET developed in the twentieth century in non-tenure educational institutions—have long approached education principally from a skills vantage point. Formalization of ELOs reinforces existing tendencies that materialize in new curricula, with consequences downstream.

Although jobs in the data sciences require considerable critical thinking regarding, for example, statistics and engineering, they have no requirements for knowledges of the places or people that applications affect. Contextual issues and related knowledges are outside the data-science domain, explaining why AI researcher Hannah Kerner (2020) has argued that data scientists are “out of touch,” in part due to the prioritization of novel methods and relative disrespect for research on applications to pressing real-world problems. Kerner pointed out that AI researchers compete based on contrived benchmarks that embed biases or pursue modelling with inappropriate categories that lack connection with complex dynamics in the real world. Media scholar Sophie Bishop’s (2020) ethnography of algorithmic experts associated with YouTube industries showed that these practitioners routinely ignored issues such as socio-economic inequalities inherent in social media platforms. Human-computer interaction scholar Kenneth Holstein and colleagues (2019) found in an interview-based study of data-science practitioners that “fairness,” apparently a proxy for “ethics” in data-science workplaces, is something practitioners do on their own time. A report drawing from data-science practitioners worldwide showed that only 15% of respondents indicated their organizations dealt with fairness issues (Anaconda, 2020, p. 32).

The lack of concern for effects of applications of AI research derives from the reward system. The private sector, notably big tech, dominates as the major employer of AI researchers and funds most AI research (Knight, 2020). The main priority, therefore, is profit. As one data scientist commented, “I like to view myself as a problem solver, where data is my language, data science is my toolkit, and business results are my guiding force” (Peters, 2018). Similarly, as sociologist and critical media scholar David Beer (2019) showed in his interview-based study of the data analytics industry, data analysts strive for “… the pursuit of efficiency and the location of value” (p. 129). Consistent with the profit motive, a survey- and interview-based study of firms engaged in data analytics and AI across wide-ranging industries found that a salient motive for engaging “ethics” is self-promotion by establishing trustworthiness in the reputation economy to further business interests (Hirsch et al., 2020). The study found that ethics often are interpreted as a privacy issue, which certainly requires attention but hardly encompasses the wide-ranging effects of applications. None of the motives uncovered by researchers prioritizes effects of decision-making on people and places outside a firm. The crystallization of the skills-focused CBET pedagogy upstream reinforces rather than alters the technocratic and neoliberal values that infuse data-science workplaces, a perilous prospect in the context of deepening socio-economic polarization and conflict worldwide.

Problems in the domain of the data sciences “leak” to other domains. Sociologists Will Orr and Jenny Davis (2020) found that agents of the data sciences unload ethical issues onto corporate users. As one of their AI-practitioner interviewees remarked,

We were a technology provider, so we didn’t make those decisions… . It is the same as someone who builds guns for a living. You provide the gun to the guy who shoots it and kills someone in the army, but you just did your job and you made the tool. (cited in Orr & Davis, 2020, p. 12)

Lack of training in contextual and related knowledges among corporate users in turn clarifies why critical questioning among users of data-science products is rare. Further, Orr and Davis found that each of their 21 interviewees had limited awareness of the broader system in which they worked. Beyond the fundamental tie to profitability, a serious impediment to productive and ethical engagement with applications and their effects is the Taylorist division of labor in work, reflecting a mode of working and learning that is inculcated upstream and grounded downstream. The division of labor within firms, and more generally the ecosystem of firms, renders everyone disengaged from the linkages among tasks fulfilled by different people and groups, despite the technocratic discourse of seamless flows. Orr and Davis’ (2020) study revealed a pattern of “ethical dispersion” in which “… powerful bodies set the parameters, practitioners translate these parameters into tangible hardware and software, and then relinquish control to users and machines, which together foster myriad and unknowable outcomes” (p. 7). Beer (2019, p. 129) similarly found that “the data gaze” is a conceptualization of the world from the vantage point of isolated constituent parts from which the whole is retrofitted.

The brave new world of education portends a world ironically insensitive to issues of difference—the initial prompt for CBET developments in the 1960s—and unable to engage digital subjects upstream and downstream in problems of social and data injustice that affect us all. The direct effects on marginalized populations are clear, while the insulation of white privilege has obscured problems that are erupting in protests worldwide. A crucial lesson of the Covid-19 pandemic is a nasty paradox: the apparently “rich” United States has plenty of vaccines while so many other countries suffer; yet people travel internationally and carry the virus with them, and the deep but unattended inequalities within the United States have contributed to significant numbers of people refusing vaccines, with consequences, even if uneven, for everyone. Myopia towards longstanding societal wounds can be a matter of life and death, yet the science of the digital era has yet to even attempt to grapple with this pressing reality. As computer scientist Barbara Grosz commented in an interview in regard to the ethical problems facing the data sciences, “… it’s not a question of just what system we can build, but what system we should build. As technologists, we have a choice about that, even in a capitalist system that will buy anything that saves money” (cited in Ford, 2018, p. 349).

Conclusion

The datafication of knowledge in the twenty-first-century version of CBET, currently unfolding through the edtech industry, inculcates technocratic thinking that prepares students upstream in the neoliberal academy for work downstream that lacks critical, contextual thinking, and accordingly, produces working subjects unlikely to question the parameters of work assignments. The relation between upstream learning and downstream practices is, however, one of conditioning but not determinism because there always is the possibility that digital subjects will reflect critically on what they know, how they know it, the ways in which their knowledges have been constructed and governed, and how they might think and conceivably act differently (Foucault, 2000). Yet such deep and possibly difficult thinking can be a tall order when so many digital subjects are pressed for time, often in the context of multiple jobs, or otherwise concerned with the requirements of maintaining a job. Resistance to norms always exists, yet often in the shadows of a dominant regime.

Although education conditions knowledges, it is not unicausal, and alternative scenarios warrant recognition. Traditional, tenure-track postsecondary colleges and universities in the late twentieth century, for example, did not implement CBET, suggesting other problems such as the construction of postsecondary education by and for the relatively privileged—another factor at work in producing limited frames of reference with negative effects downstream as societal inequalities deepened following civil rights legislation. Lack of diversity coupled with CBET pedagogy in the new millennium helps explain how well-meaning and intelligent actors can lack critical awareness of the contexts their actions affect and the relation between individual tasks and broad societal problems.

If education is to guide us to a better world, then the “new” pedagogy is cause for serious concern when the world is at a tipping point of tensions wrought of profound inequalities. Admittedly, conditions vary across space. For example, countries with a robust welfare state, where education through the postsecondary level is free and subsidized by government, lack the pressures indicated in this paper for the continual boosting of revenue in educational institutions that fuels strategies prioritizing profitability. Yet the “welfare state” is an idealized model, and already, notably in western Europe, many nation-states increasingly lack the capacity to provide basic needs for all subjects, especially in the context of mushrooming streams of international migration among economic, political, and environmental refugees. Processes of disintegration of the welfare state are uneven across space relative to context-specific conditions, but they appear inexorable in light of deepening socio-economic polarization worldwide.

Some of the problems of the CBET pedagogy, notably ineffective engagement with issues of difference, are unsurprising, precisely considering the failure of CBET in the previous century in the United States to engage these issues. Upstream efforts to correct algorithmic violence to places and people often register in the insertion of a course in ethics in data-science curricula, commonly conceptualized in terms of philosophy. Yet ethics-as-philosophy does little to inform data scientists-in-training about contextual issues, the focus of critical social science. Ethics matter, but without contextual knowledges, they remain an abstraction. Interdisciplinary curricula are pivotal to responsible downstream practices, with the qualification that they encompass more than skill sets delivered through ELOs, specifically, critical contextual, content-oriented knowledges to enable connection between intellectual constructs and lived experience. Indeed, one corner of education theory, apparently jettisoned in the pursuit of prescribed outcomes, is the theory of “situated learning” (Lave & Wenger, 1991), which interestingly became adopted in a corner of innovation theory centered on “communities of practice” (Wenger, 1998; Wenger, McDermott, & Snyder, 2002), and broadly has parallels in feminist theory regarding “situated knowledges” (Haraway, 1988). As feminist and critical data studies scholars Catherine D’Ignazio and Lauren Klein (2020) have argued, feminist principles that value situated knowledges as well as difference, multiple perspectives, and intersectionality are germane to a constructive data science.

Crucially, a critical, interdisciplinary understanding of data studies requires attention well beyond data-science disciplines. All students across all fields, including the humanities, social sciences, arts, business, law, and health should be exposed to problematic and often devastating uneven realities of algorithmic life within the education sector and more broadly. Beyond revealing the fruits as well as problems of societal projects, education should teach us all about our real or potential implicit complicity in the perpetuation of inequalities by virtue of lack of critique, silence, and unwitting collaboration on everyday violences. A proactive sense of citizenship committed to social, environmental, as well as data justice requires urgent attention in all domains of life, including the upstream production of knowledges and their downstream applications.