Google is a multinational technology company best known for their search engine, but they have long been “an advertising company first and foremost,” as almost all their profits come from auctioning contextual advertisements (Vaidhyanathan, 2011, p. 16). In short, Google collects and extracts users’ personal data—often without their knowledge—and then uses that data to show them targeted ads in search results and across the Internet. Zuboff (2019) called this new economic system surveillance capitalism and critiqued how Google “unilaterally claims human experience as free raw material for translation into behavioral data” (p. 8). Google is facing both legal and public backlash for anti-competitive behavior (McCabe et al., 2020; Nadler, 2020; New Mexico v. Google, 2020; Watters, 2019), privacy violations of employees and their own Terms of Service (ToS) (Schiffer, 2020), corporate and search-engine gender and racial bias (Noble, 2018; Plenke, 2015; Sweeney, 2013; Wakabayashi et al., 2018; Whittaker, 2020), and algorithms that radicalize users and promote conspiracy theories (Tufekci, 2019). Yet, Google’s massive advertising profits have allowed them to sprawl into many other areas where they develop software and hardware products. Many of these products have been purchased by, adopted by, or used in schools.

Despite Google’s many troubles, there is scant criticism of Google in educational technology literature, and the company has faced little scrutiny in schools, where students use Chromebooks, browsers, apps, search engines, and more. Just as with users outside schools, we are concerned with what Google is taking from students (e.g., personal data), how it is targeting them (e.g., advertising, product familiarity), and where it is directing them (e.g., search and recommendation algorithms). Many of Google’s dilemmas are illustrative of the problems of the technology sector more broadly, but we focus on Google in this paper due to their outsized influence. In just the last year, the COVID-19 pandemic has resulted in increased use of Google in schools (De Vynck & Bergen, 2020), the company has rebranded G Suite for Education as Google Workspace for Education and added new data costs (Sinha, 2021), and the company acknowledged that Google Classroom serves as a learning management system (LMS) for many classes (Lazare, 2021).

In this paper, we use a technoethical framework (Krutka et al., 2019) to consider, should we use Google in schools? There are many good reasons why teachers, students, and the public use Google products. However, we offer a critical perspective that serves as a counter to the general optimism we have observed around the use of Google products in schools. We hope to raise questions that educators, students, and community members might consider when making technology decisions inside and outside of schools.

The Rise of Google

Google.com is the most visited website in the world (Alexa, 2020). The company and its search engine were founded at Stanford in 1998 by Larry Page and Sergey Brin, went public in 2004, and reorganized under the conglomerate Alphabet Inc. in 2015. Google was deemed the most valuable brand in the world in 2017 (Cox, 2017) and topped $100 billion in annual sales in 2018 (Fiegerman, 2018). Because Google primarily profits from advertising, they have been able to offer other services for “free” (even if those services often collect user data) or at low prices (e.g., Chromebooks). With Google Workspace for Education, the company offers both free (i.e., Google Workspace for Education Fundamentals) and paid (i.e., Google Workspace for Education Plus) versions. Educators have applauded Google for how Chromebooks work with existing networks, how their prices compare to competitors’ laptops, and how Google apps integrate and interact with other software and hardware (Varier et al., 2017; Vu et al., 2019).

While Google has sold hardware products like Chromebooks to schools through traditional educational procurement processes that require contracts and agreements, they entered schools by a more “disruptive” means: appealing directly to classroom teachers to use Google Classroom (Singer, 2017a). Bypassing formal school bureaucracies resulted in quicker adoption at the expense of ensuring the protection of students. The company offers a vast array of hardware and software products that are designed for schools (e.g., Google Classroom), updated for them (e.g., Google Meet), reframed and rebranded for schools (e.g., Google Docs, Google Slides, Google Forms, Chromebooks), or appropriated by educators and students for use in schools (e.g., Google Search, Google Earth, Google Maps, Google Translate, YouTube).

In 2017, more than half of U.S. K–12 students used Google products in schools (e.g., Chromebooks, the then-named G Suite for Education) (Singer, 2017a). With the shift to emergency remote teaching during the 2020 COVID-19 pandemic, Google accounted for “60% of the market” for computers purchased for education (De Vynck & Bergen, 2020), and the company reported that “more than 170 million students and educators worldwide rely on our suite of tools” (Sinha, 2021). However, the growth of Google in schools has been accompanied by the problems we detailed in the introduction of this paper.

A Technoethical Approach

In 1975, Mario Bunge contended that “the technologist is responsible not only to his employer and his profession but also to all those likely to be affected by his work. And his primary concern should be the public good” (p. 78). He thus argued for a technoethics constructed to encourage “right and efficient conduct” (p. 79). Amrute (2019) more recently observed that Bunge’s top-down technoethics from technologists was likely to fail to center the bodies, concerns, and preferences of those most vulnerable to technological harm. She argued that “moving from rules to attunements allows us to reframe ethics as being about maintaining relationships and broaches the question of what kinds of beings, both human and non-human, are presupposed in any ethical arrangement” (p. 57). Without always using the term technoethics, many scholars and organizations (e.g., Algorithmic Justice League, Electronic Frontier Foundation) have called for technologists and decision makers to consider the social and moral consequences of technologies that are presented as neutral but can further racism (Benjamin, 2019; Noble, 2018), classism (Eubanks, 2017), and authoritarianism (Tufekci, 2017). In short, technoethical questions cannot just be for technologists anymore.

Building on technoethical scholarship, Krutka, Heath, and Staudt Willet (2019) theorized technoethical questions educators might pursue alongside students before using technology in their classes. They encouraged educators to ask, is this technology ethical? They also offered sub-questions on ethical, legal, democratic, economic, technological, and pedagogical considerations:

  • Was this technology designed ethically and is it used ethically?

  • Are laws that apply to our use of this technology just?

  • Does this technology afford or constrain democracy and justice for all people and groups?

  • Are the ways the developers profit from this technology ethical?

  • What are unintended and unobvious problems to which this technology might contribute?

  • In what ways does this technology afford and constrain learning opportunities about technologies?

They argued that students should inquire into these questions, weigh evidence, and determine whether the technology meets their ethical standards for use. In this paper, we draw on this technoethical model to consider, is Google ethical?

Unfortunately, we have found few technoethical critiques of Google in our scholarly searches, experience in schools and universities, and attendance at educational technology conferences. For example, scholarship that discussed “Google” in the TechTrends journal over the last 5 years (2015–2020) focused on how educators can use Google tools (Brown & Green, 2016; Francom et al., 2020; Rabu & Badlishah, 2020), analyzed usability (e.g., Francom et al., 2020), or hailed “Google work culture and applications as a model for effective online collaboration” that educators should teach students (Moore, 2016, p. 233). We largely turned to academic scholarship, legal documents, and technology journalism outside of the educational technology field in our effort to bring more balance to educators’ considerations in adopting and using Google products in schools. We begin by turning to the ethical design and use of Google services.

Was this Technology Designed Ethically and Is it Used Ethically?

While ethical design and use can encompass many components, we focused on the transparency and truthfulness of Google by turning to their own words and standards. Lindh and Nolin (2016) conducted a rhetorical analysis of Google’s policy documents relating to Google Apps for Education (GAFE; now Google Workspace for Education) and concluded that the company’s primary aim was to “disguise the business model and to persuade the reader to understand Google as a free public service, divorced from marketplace contexts and concerns” (p. 650). The company often emphasized the free services they offer on the front end but obfuscated their back-end business model of extracting users’ personal data to profit from targeted advertising. For example, Google uses rhetorical legalese to simultaneously claim that users own their data and that the company can collect user information. The authors thus concluded that Google used the former claim for marketing and the latter so the company could legally claim users’ data.

Google’s lack of transparency with schools became evident when districts demanded assurances from the company that they would abide by federal law and local policies protecting students. When Chicago Public Schools required Google to abide by the Family Educational Rights and Privacy Act (FERPA), the company pressed to use hyperlinked policies on their website that could be changed at any time (Singer, 2017a). Similarly, the state of Oregon had to negotiate with Google for 16 months to ensure they would abide by existing laws intended to protect students’ educational data (Singer, 2017a). A district technology administrator in Fairfax County Public Schools in Virginia stated that Google “ignored the Google settings he had selected that were supposed to give his district control over which new Google services to switch on in its schools” (Singer, 2017a, n.p.). By the time the district realized Google had disregarded their local choices, it was too late to shut down the services as students and teachers were already using them. Even after a 2014 federal lawsuit was filed against Google for data-mining students’ emails for advertising purposes, the Electronic Frontier Foundation had to file a complaint the next year with the Federal Trade Commission (FTC) because Google’s “changes” were mainly “semantics” (Lindh & Nolin, 2016, p. 655). While Google has largely acquiesced to demands to abide by federal laws, the company has shown a consistent pattern of obfuscating, or even ignoring, their own policies concerning students’ privacy.

Lawmakers and educators have long protected the privacy of students and minors through laws such as FERPA and the Children’s Online Privacy Protection Act (COPPA) (see Table 1). The FTC and New York Attorney General alleged that YouTube illegally collected personal information from children without parental consent in violation of COPPA (Federal Trade Commission, 2019). Google and YouTube had used their status as a popular online destination for children to attract advertisers such as toy companies; the case resulted in a $170 million settlement and an overhaul of YouTube’s advertising and data collection on content for kids (Bensinger & Romm, 2020). Google has repeatedly circumvented laws intended to safeguard privacy.

Table 1 State and federal laws related to protecting students’ online privacy

Not only has Google designed their apps and ToS to lack transparency, but their default settings also shift the responsibility for changing them onto individual students or districts. In their recent lawsuit, the state of New Mexico explained:

When students log into a school issued or personal Chromebook, Google will turn on the Chrome Sync function by default. This, in turn, automatically starts uploading students’ Chrome usage data—such as bookmarks, web searches, passwords, and online browsing habits—to Google’s servers. Google acknowledges in a help page that it can read this data but states: “With a passphrase, you can use Google’s cloud to store and sync our Chrome data without letting Google read it.” Unfortunately, the option to set such a passphrase to stop Google from reading private student data is off by default, and buried in settings that parents likely never see. (New Mexico v. Google, 2020, p. 12)

Technology companies that seek to gather personal data often shift responsibility onto individual users by requiring that they opt out of harmful default settings (Benjamin, 2019). Google has widely expanded the geographic data it creates (e.g., GPS), extracts (e.g., geo-tags), and commodifies (e.g., Street View and Earth images) with little concern for personal or legal consent (Alvarez León, 2016). In 2020, the state of Arizona filed a lawsuit accusing Google of deceptively and illegally collecting users’ geolocation data even when they had opted out of providing location data access (Arizona v. Google, 2020). It is critical that companies like Google prioritize transparency and students’ privacy because it is challenging for students and their families to opt out of Google Workspace for Education once a district or school adopts it. As Lindh and Nolin (2016) asked, “Why should the public school system force pupils to participate in the commodification of their digital labour and algorithmic identities?” (p. 660). Users—including students, parents/guardians, and educators—should understand what they gain and lose when using any product or service.

Are Laws that Apply to our Use of this Technology Just?

We have addressed how Google often avoids laws intended to protect students online, but we will now turn to the justness of those laws (see Table 1) by considering the effects on the most vulnerable users, particularly minors. Livingstone and Third (2017) cautioned against adult normative discourses and proposed “rights-based approaches” that protect and empower children online while serving “as an antidote to the limitations of the risk and safety paradigm” (p. 665). Unfortunately, most laws designed to protect the privacy of minors adhere to parental consent frameworks that place an unreasonable onus on parents, and educators via in loco parentis, to protect minors in a complex online world (Plunkett, 2019). Adults who oversee children are therefore expected to read lengthy ToS that are written by lawyers to obfuscate Google’s intentions (Lindh & Nolin, 2016; New Mexico v. Google, 2020).

Even if parents and educators interrogate the ToS, many of these apps do not abide by privacy laws or even their own terms. For example, Reyes et al. (2018) found that 19% of 5,885 tested children’s Android apps collected personally identifiable information in violation of the apps’ own ToS and laws like COPPA. Even large school districts—much less individual parents—have faced resistance from Google to follow laws like FERPA (Singer, 2017a). The state of New Mexico’s suit contended that Google Education “vociferously collect[s] children’s (and older students’ and educators’) personal and sensitive information for Google’s own commercial purpose without first informing parents, let alone obtaining ‘verifiable parental consent’ as required by COPPA” (New Mexico v. Google, 2020, pp. 11–12). Given the complexities of these laws and how they apply, parental consent approaches may not be sufficient to ensure minors are protected and empowered online.

Not only do vendor contracts, privacy policies, and ToS fail to protect students’ data, but even existing federal and state laws “have failed to keep up with technology and left large gaps in the protection of students’ information” (Common Sense Media, n.d., n.p.; see Table 1). COPPA is the major law intended to protect children online today. FERPA, originally passed in 1974, was written during a time of paper-based record keeping, and legal scholars claim its decades of amendments are still inadequate to protect student privacy (Daggett, 2008; Wasson, 2003; Zeide, 2016). For example, educational technology companies can be considered school officials and are granted access to personally identifiable information according to section 99.31 of FERPA.

State laws like California’s Student Online Personal Information Protection Act (SOPIPA) place the burden of compliance on companies, but such laws are neither widespread nor adequately enforced (Common Sense Media, n.d.). COPPA protects youth against unnecessary collection of data until parental consent is given, but the FTC “does not mandate the method a company must use to get parental consent” (Federal Trade Commission, 2017, n.p.). Through interviews, surveys, and a usability study, Bansal (2016) found that parents in her sample (N = 10) were not aware of COPPA, nor did they “know if websites/apps are following COPPA’s mandatory guidelines” (p. 1).

Parents who want their children to participate in school and social activities online may also feel undue pressure to provide consent. For example, the school at which our second author taught asked for parental/guardian consent to post media online with students’ faces. At the start of the 2019–2020 school year, all but one student in the class had a consent form on file. That student’s parents did not want their child to be included in photographs and videos because of privacy concerns—similar to concerns raised in the New Mexico lawsuit. The educators at our second author’s school abided by the parents’ request, but it created many interactions in which the student was excluded. The absence of consent became such a social burden that the student asked the parents to relinquish their privacy preference. We believe this example represents a form of coercion that undercuts privacy laws. What if a family sought to opt out of a district-adopted learning management system like Google Classroom? Do parents really have a choice? The academic and social pressure can upend the very purpose of consent-based frameworks (Plunkett, 2019). Young people are not the only group that encounters inequitable outcomes from Google products.

Does this Technology Afford or Constrain Democracy and Justice for all People and Groups?

Google has sought to “organize the world’s information and make it universally accessible and useful,” but this mission has resulted in problems for democracy and justice. Google essentially serves as the “most flexible yet powerful information filter we use regularly” and thus exercises “inordinate influence over our decisions and values” (Vaidhyanathan, 2011, p. 200). Google has also contributed to the diffusion of (mis)information and discriminatory (Benjamin, 2019; Noble, 2018) and anti-democratic (Tufekci, 2019) ideologies. Noble (2018) critiqued not only Google’s sexist and racist search suggestions and results but also the corporate cultures that maintain them. Google search results associated Black names with arrest records far more often than White names (Sweeney, 2013) and returned pornographic results for the search query “Black girls” (Noble, 2018).

Google has a spotty record when it comes to the equitable treatment of marginalized groups. Noble (2018) described how Google mishandled a search for “Jews” where the first result linked to an anti-Semitic website. The company has the capability to alter search results, as they do in Germany for anti-Semitic content, but they regularly claim neutrality in the United States. Repeatedly, Google products have “glitches” that are explained away as errors in the programming or byproducts of the company’s neutrality. However, Benjamin (2019) contended that glitches are “not an aberration but a form of evidence” of the human biases programmed into code (p. 80). Plenke (2015), for example, reported that Google Photos auto-labeled a photo of two African Americans as gorillas.

Furthermore, Google not only “reproduce[s] the biases that persist in the social world” (Benjamin, 2019, p. 93), but also nudges people toward them. Google often returned a White supremacist website when students searched for information on Martin Luther King (Rheingold, 2012). Noble (2018) described how, when Dylann Roof entered “Black on White crime” into Google’s search engine, the top results offered disinformation from White supremacist groups instead of more accurate information from an FBI crime database. Roof referenced these search results as solidifying his White supremacist radicalization, which resulted in the mass murder of Black churchgoers. Unlike public institutions like libraries that take an active role in filtering content and are available to redress grievances, Google relies on algorithms to index the web. Google is often the default search engine at schools. Schools could, for example, consider using privacy-oriented search engines such as DuckDuckGo and educating students on the problematic results search engines can return (Wineburg & McGrew, 2018).

Not only does Google’s search engine play an important role in informing citizens, but so does YouTube. However, Tufekci (2019) detailed how the algorithms of the video-sharing platform recommend more radical content as users continue to watch videos. She explained, “YouTube’s algorithms will push whatever they deem engaging, and it appears they have figured out that wild claims, as well as hate speech and outrage peddling, can be particularly so” (n.p.). Many White nationalists are radicalized on YouTube, where their messages flourish, cross-pollinate, and gain legitimacy (Lewis, 2018). For example, “YouTube’s search and recommendation system appears to have systematically diverted users to far-right and conspiracy channels in Brazil,” and “teachers describe classrooms made unruly by students who quote from YouTube conspiracy videos or who, encouraged by right-wing YouTube stars, secretly record their instructors” (Fisher & Taub, 2019, n.p.). If democracies require informed citizens committed to justice, then educators must confront the ways Google search and YouTube undermine those values. Unfortunately, democratic and just aims can also conflict with profit motives.

Are the Ways the Developers Profit from this Technology Ethical?

Google not only profits by surveilling users and extracting excess behavioral data, but the company has also popularized an economic system that has spread to many other companies, such as Facebook (Zuboff, 2019). While Google conveys that collecting data increases efficiency and improves their products, they use profiles of users to target advertising, nudge user behavior, and increase the click ratios that boost value to advertisers, practices that can threaten privacy, democracy, and choice. Google has developed a range of software and hardware to collect more data. Google’s advertising profits have allowed the company to dominate other markets, including education over the last decade, without turning profits on those ventures. Zuboff’s critiques of Google’s business practices are supported by lawsuits brought against Google by multiple states and the federal government (e.g., Arizona v. Google, 2020; New Mexico v. Google, 2020; USA v. Google, 2020).

Google has infiltrated schools by taking advantage of schools’ “dual ambition” to improve technology and cut budgets, offering free software and inexpensive hardware (Lindh & Nolin, 2016, p. 647). The company then profits from students’ personal data and from brand loyalty among teachers (e.g., Google Certified Educator) and students. Google first marketed Gmail to universities (Vaidhyanathan, 2011), but they often gained a foothold in schools by appealing directly to teachers (Singer, 2017a). Google appeals to educators with well-designed technology and to cash-strapped districts with affordable prices on the front end, but they then collect and extract information from students and educators on the back end (New Mexico v. Google, 2020). Lindh and Nolin (2016) summarized Google’s “front end/back end business model” (p. 648) as offering free services in which the customer’s data becomes the product. Google positions itself “as a free public service, divorced from marketplace contexts and concerns” (p. 650), a framing that allows the company to obscure its intent to surveil students and create algorithmic identities. Google profits from securing future customers, profiling students for marketing, and extracting data.

While it is unclear exactly what data Google extracts and how they use it, there are numerous ways they profit from the digital data, information, and labor of minors. Watters (2019) explained, “the kinds of data extraction and behavioral modification that Zuboff identifies as central to surveillance capitalism are part of Google and Facebook’s education efforts, even if laws like COPPA prevent these firms from monetizing the products directly through advertising” (n.p.). Until 2014, Google illegally “mined students’ email accounts, extracted personal information about them, and used that data ... for advertising” (New Mexico v. Google, 2020, n.p.). More recently, the state of New Mexico argued that Google collects or extracts data on “physical locations, websites visited, search terms, results clicked on, YouTube videos watched, contact lists, voice recordings, saved passwords, and other behavioral information” (New Mexico v. Google, 2020). The company monitors this data across students’ and educators’ school and personal devices and any network on which they use Google’s education apps. Lindh and Nolin (2016) argued that students using GAFE (now rebranded as Google Workspace for Education) enact “free digital labour” for Google by turning their everyday activities into a commodity (p. 647).

Moreover, Google’s surveillance capitalism business model has ushered in an environment in which numerous technology companies track students’ interests, behaviors, emotions, health (via fitness tools), non-cognitive experiences, academics, psychometrics, and other personal information (e.g., Lieberman, 2020). Russell et al. (2019) concluded that there is a “subterranean market” of student data for sale and wrote that faculty may not know they are contributing to commercially available student data (p. 25). When Instructure (the company behind Canvas) was purchased by the private-equity firm Thoma Bravo, students expressed concerns over the future privacy and use of their data in a public letter (Rozsa, 2020). Even if companies do not sell student data, there is still reason to be concerned about the vast amount of information they collect; in 2018, the FBI issued an alert warning that “malicious use of this sensitive data could result in social engineering, bullying, tracking, identity theft, or other means for targeting children” (Federal Bureau of Investigation, 2018, n.p.). There are also other problems that can be harder to identify than profit motives.

What Are Unintended and Unobvious Problems to which this Technology Might Contribute?

Google’s dominance throughout online spaces and school markets has resulted in an array of unintended and unobvious problems concerning knowledge, values, and possibilities. First, Google may actually make it too easy for people to access information online. When patrons search a library—or even an online library database—finding books and other sources requires some understanding of how information is organized, with professional librarians available as support. While this organization is not without its flaws (Noble, 2018), it does demand engagement with how knowledge is structured. Moreover, the market dominance of Google means that people may not even be familiar with differences across search engines; for many people, Google simply becomes knowledge. Numerous commentators and educators have contended that Google’s dominance and ease of search harm students’ information literacy skills and lower their substantive engagement with texts (Brabazon, 2007; Thornton, 2009). Wineburg and McGrew (2018) found that students consistently struggled to interpret search results, as they often trusted top results.

Second, Google’s incursion into education threatens and devalues the traditional roles of teachers and librarians as sources of wisdom and experts in pedagogy. Google’s director of education apps, Jonathan Rochelle, questioned whether his children should learn the quadratic equation, saying, “I don’t know why they can’t ask Google for the answer if the answer is right there” (Singer, 2017a, n.p.). Educational technology companies may conflate knowledge with wisdom and argue that their algorithms can personalize student learning (Singer, 2017b). Technology companies have already contended that technology better positions educators as “helpers,” which “disrupts and diminishes the role of the teachers as experts” (Singer, 2017b, n.p.). Learning can become redefined by efficiency as students rush through assignments without any sense of purpose aside from productivity.

Third, Google also emphasizes an education in which digital consumption, productivity, and surveillance may be prioritized over human dispositions such as curiosity, uncertainty, and privacy. Educators are increasingly encouraged to use online products, which Google could discontinue at any point (Ogden, n.d.), to replace analogue alternatives or face-to-face interactions. Software, therefore, “cannot be viewed as merely a technical tool but rather as something that codifies certain ways of thinking and acting” (Lindh & Nolin, 2016, p. 659). Schools may pressure educators to shift group work and social interactions online to Google productivity tools. Lindh and Nolin (2016) further contended:

Online behaviour, documented and quantified through GAFE may easily be given more weight than classroom activities, since it supplies more distinct and quantifiable indicators of performance. Furthermore, not only pupils, but also teachers can now be evaluated by school leadership in ways earlier not possible, as their activities can be monitored, quantified and compared. (p. 659)

Google datafies education in ways that can unnecessarily increase the quantification and surveillance of students’ performance.

Finally, Google’s information technology (IT) and education dominance stifle both public and private alternatives. Historically, public institutions have helped to organize and make information available, but Google has eroded their role in an increasingly online world. In 2004, Google Books began digitizing books; however, Vaidhyanathan (2011) pointed out that “the quality of Google’s document scans was too poor to serve the aims of preservation” (p. 153), which also weakened accessibility for screen readers. Scholars and publishers continue to debate Google’s role in digitizing the sum of human knowledge and culture (e.g., Band, 2009; Bohannon, 2011; Duguid, 2007; James, 2010; Jones, 2010; Kousha et al., 2011; Leetaru, 2008; Pechenick et al., 2015). The public should consider the downsides of Google displacing publicly funded institutions such as universities and libraries with cheaper approaches to knowledge organization. Not only does Google shape how knowledge is kept, but also how learning proceeds.

In What Ways Does this Technology Afford and Constrain Learning Opportunities?

While Google offers software and hardware that affords certain types of digital learning experiences, their technologies also constrain, and even redefine, learning opportunities. Google Workspace for Education software is elegantly minimalist, highly usable, and allows for both asynchronous and synchronous digital collaboration in Google Docs, Sheets, and Slides. Their popularity is not without reason. However, as Postman (1992) articulated, “technology giveth and technology taketh away” (p. 5).

Google’s constraints and drawbacks are rarely discussed in education (e.g., Google for Education, 2016). Educators do not simply use Google’s technology, but are also used by it. Google’s ethos bends education toward its aims—namely productivity, quantification, surveillance, and learning online. Singer (2017a) contended that this “...puts Google, and the tech economy, at the center of one of the great debates that has raged in American education for more than a century: whether the purpose of public schools is to turn out knowledgeable citizens or skilled workers” (n.p.).

Postman (1992) explained that in our current society, “we would expect that no limits have been placed on the use of statistics” (p. 129), and Google has pursued exactly this with search results: supposedly objective algorithms that determine the worth and relevance of information. Students trust Google results and rarely click beyond the first page (Haas & Unkel, 2017; Hargittai et al., 2010; Pan et al., 2007; Wineburg & McGrew, 2018). Google has effectively become the arbiter of important, relevant knowledge, and people can accept their judgment as correct without further investigation. Moreover, students may often turn to YouTube for educational purposes even though its algorithms can nudge people toward extremism, conspiracy, and bigotry (Tufekci, 2019).

This is all in line with Google’s mission to “organize the world’s information,” and they apply this ethos of quantification to students. Google Classroom makes it effortless to run reports and quantify every aspect of student activity—attendance, grades, words per essay, and usage metrics. Students can be reduced to numbers in a report from which their success or failure is determined. When students complete assignments in Google Docs, they lack both privacy and control: keystrokes are saved, stored, and in some cases scanned. While Google collects and extracts students’ behavioral surplus, Watters (2019) argued that Google’s nudging builds on B.F. Skinner’s behaviorism, which has long held sway in schooling. The COVID-19 pandemic laid bare how quickly digital tools can shift into an educational infrastructure that leaves students with little choice, exposes inequities in digital access (Sawchuk & Samuels, 2020), and results in what Papert (1988) would describe as technocentric solutions.

Implications

We offer implications of our technoethical audit (Krutka et al., 2019) of Google for students, educators, citizens, and scholars. First, educators should address technoethical issues as part of the formal curriculum. Educators often focus on teaching with technologies, but not about or among them (Adams et al., 2021). Students need opportunities to think critically about not just Google services, but the many other technologies they use in their daily lives. Educators can adapt questions from Krutka et al. (2019) and assign students to complete technoethical audits of the technology they use in their classroom. For example, students could watch and discuss the 2013 documentary Terms and Conditions May Apply (Hoback, 2013), review the ToS of a Google service, or encourage their school to install browser privacy add-ons that rate transparency (e.g., DuckDuckGo Privacy Essentials and Terms of Service; Didn’t Read).

Moreover, conducting technoethical audits might result in students changing personal habits or taking informed action. Students might conclude that they, and society, would be better served by relying less on Google because the company’s dominance in education and society is unhealthy for both the private and public sectors (Nadler, 2020; New Mexico v. Google, 2020; Vaidhyanathan, 2011). Students might also work to create plans for peers who seek to opt out of district-wide platforms or work with district technology specialists to evaluate technologies and policies concerning adoption and access. As one Texas school official acknowledged, “We should be asking a lot more questions about who is behind the curtain” (Singer, 2017b, n.p.). Schools and districts could hold forums with parents, educators, staff, and students to educate stakeholders about, and interrogate, the trade-offs of using tools from Google and other educational technology companies. In identifying criteria for technology adoption and use, school districts may even influence educational technology companies to meet those criteria.

Second, educators and students can work toward democratic and just change regarding the role of Google, and other technologies, in schools and society. Considering the mounting criticisms of Google and other technology companies, this might be an opportune time to address technoethical concerns. In the recent past, members of the U.S. Congress have often displayed either technological ignorance (e.g., Digital Trends, 2020; Romm, 2018) or a lack of resolve when facing CEOs such as Google’s Sundar Pichai (e.g., D’Onfro, 2020). However, this may be changing as Congress addresses anti-competitive behavior (Nadler, 2020).

Collective action and laws are needed so that every school district and parent is not forced to negotiate with Google on their own. Educators and young people could advocate to Google and Congress for technoethical values: laws that empower users, ToS that are transparent and readable, and designs that are anti-discriminatory, equitable, and accessible. Computer scientist Joy Buolamwini created the Algorithmic Justice League to challenge bias in software, and educators might learn from her activism how to effect broader change (Benjamin, 2019). Plunkett (2019) contended that the public should aim to create digital playgrounds where young people can play, grow, and be protected as they mature into adulthood.

Finally, educators and scholars should consider, and even center, technoethical issues. In schools, at educational technology conferences, and in educational technology journals, we rarely see criticism of Google or of other educational technologies. While some studies focus on technoethics (Krutka et al., 2019) or discriminatory design (Benjamin, 2019), more should give attention to downsides and harms to ensure people are protected in the short and long term. Many scholars have long been critically considering the role of technology in society (e.g., Benjamin, 2019; Tufekci, 2017; Vaidhyanathan, 2011; Watters, 2019), and the educational technology field might follow their lead.

Conclusion

We completed this technoethical audit of Google so educators and students might consider, should we use Google in schools? Schools should not be places where educational technology titans exploit students, test new products, or reimagine education through their own techno-corporate ideals of personalization, efficiency, and profits. Educators are tasked with not only protecting students, but also educating and learning alongside them to answer difficult questions. While we focused on the harms and downsides of Google in this paper because of their outsized influence and recent investments, educational technology companies—from Apple to Microsoft to Facebook to Instructure to ClassDojo—deserve scrutiny. Educators can teach the requisite skills for evaluating technology by adapting the technoethical audit we have used to fit their context; they can encourage critical dispositions toward how they use, and are used by, apps and programs; and they can continue to investigate ways to protect their data, privacy, and dignity online. We believe educators should work alongside students to find an answer that ensures that we, in the words of Google’s old motto, don’t be evil.