1 Introduction

Data-based educational technology systems increasingly include learning analytics (LA) tools, which present students with personalized curricular materials and educational paths. LA proponents argue that these tools will lead to better outcomes for students, instructors, and institutions. However, the design of LA systems raises questions about whether such systems are ethically sound. LA technologies, to varying degrees, make decisions for students, lead them toward particular decisions, or deny them the choice to decide altogether. In doing so, LA acts as a paternalistic technology. Herein, I address the paternalistic design of LA.

This paper proceeds as follows. I begin with a brief overview of LA. I then discuss paternalism as a moral concept before addressing paternalism as it relates to technology. Next, I present prima facie cases against LA as a paternalistic technology. Finally, I consider how LA’s paternalism runs counter to student academic freedom protections.

2 Learning Analytics

2.1 Learning Analytics Defined

LA is commonly defined as the “measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.” [1] In higher education, proponents of the technological practice employ data-driven systems to, inter alia, study student behaviors, inform students of their educational progress, and improve instructional methods. The practice aggregates a wide variety of data to support its analytic needs, mostly from on-campus sources like learning management systems and institutional student information systems. And while students disclose some of the information supporting LA directly to their institution, much of the valuable “digital breadcrumb” data is created as students interact with numerous types of information systems and sensors–from, among other things, wireless access points, RFID readers, e-mail servers, library databases, and student ID scanners.

2.2 Ethical Issues

Due to the variety, volume, and sensitivity of data institutions can aggregate, a lively conversation about ethics has emerged. But, to date, little work has explicitly addressed LA’s paternalistic design. Rubel and Jones [2] recognize that LA brings to the fore important student autonomy issues. Similarly, Prinsloo and Slade [3] argue that the student vulnerabilities LA presents could be decreased by improving student autonomy. And participants in Sclater’s [4] study recognized that a code of practice for LA could counterbalance issues of institutional paternalism. While this literature and other works (see, for example, [5]) allude to paternalism as a form of negative liberty, they do not make explicit what, exactly, about LA is paternalistic; moreover, the current literature does not address how paternalism interferes with specific student rights or interests. Moving forward, I consider the philosophical foundations of paternalism and facets of technological paternalism in order to address this gap in the literature.

3 Paternalism

3.1 Soft/Hard and Weak/Strong Paternalism

Paternalism invokes liberty, for the purposeful limitation of one’s liberty by another is, roughly, paternalistic behavior. Consider a close working relationship between two men, Jed and Charlie. When Jed restricts Charlie’s ability to make a decision for himself, Jed reduces Charlie’s capacity to make choices according to his own interests and values, and as a means to living a good life (however Charlie defines such a thing). This is a liberty-reducing action on Jed’s part. However, what makes it paternalistic is Jed’s intention: he feels that his decision for Charlie will ultimately make Charlie better off. This is a fine start, but let us continue by considering Dworkin’s [6, 7] formal views on paternalism.

According to Dworkin [7], “[p]aternalism is the interference of a state or an individual with another person, against their will, and defended or motivated by a claim that the person interfered with will be better off or protected from harm.” Paternalism is codified in rules, policies and laws; it also manifests in social norms and technological design, as I will discuss later on. In each of these things, someone or something intercedes for an individual under the assumption that the individual’s life will improve. Also important to note is that the target of paternalistic actions does not wish for such action to take place. Dworkin [6] provides the following as a formal definition:

  • P acts paternalistically towards Q if and only if:

    (a) P acts with the intent of averting some harm or promoting some benefit for Q;

    (b) P acts contrary to the current preferences, desires or dispositions of Q;

    (c) P’s act is a limitation on Q’s autonomy.

(a), (b), and (c) occur in varying degrees depending on the intent of the paternalist. A soft paternalist justifies intrusion in an individual’s life when the intrusion is simply to make sure the individual is informed about her decision–not to make the decision for her. Here, (a) is active, while (b) and (c) are not. On the other hand, a hard paternalist makes the decision for her, regardless of whether or not she is informed, which invokes (a), (b), and (c).

Paternalistic action is justified using a weak or strong explanation. A weak paternalist feels her interventions are justified when her actions allow the individual to accomplish her intended ends even though the individual’s autonomy is compromised. Svendsen [8] clarifies this, writing, “[t]hat is to say that the weak paternalist focuses exclusively on the means an agent uses to meet their goals, though it is completely left to the agent to determine what those goals might be.”

In contrast, a strong paternalist intervenes in a person’s life because her goals/ends are wrong, irrational, or improperly prioritized, which justifies paternalistic behavior to stop the means used to achieve the incorrect ends. However, as Dworkin [7] argues, strong paternalists are only justified in their action when their interventions are fact-based, which is to say that they are intervening in an individual’s life because the facts informing her actions are wrong; strong paternalism is not justified when paternalist interventions are enacted to change the values individuals use to inform their actions.

3.2 Libertarian Paternalism

A recent addition to the paternalism scholarship has come from Cass Sunstein and Richard Thaler’s theory of libertarian paternalism. [9,10,11] According to the authors, their form of paternalism is a “relatively weak and nonintrusive type of paternalism because choices are not blocked or fenced off.” [9] The essence of the theory is that the libertarian paternalist is justified in nudging, or suggesting particular choice options, to individuals when those choices maximize welfare and do not limit choosing other, non-nudged choice options.

Libertarian paternalism is built on a foundation of modern behavioral economics. Historically, humans were viewed as rational choice makers, but newer theories suggest that human choice making is not all that rational: too many competing interests and confusing in situ conditions interfere with rational choosing. The libertarian paternalist’s interventions are therefore justified because they present the individual with “good default rules [that] can neutralize and overcome” irrational choice making to yield “better overall choices” [8].

What makes “libertarian paternalism” seem like an oxymoron, as Sunstein and Thaler [9] point out, is the obvious fact that paternalism is not compatible with libertarian philosophy. To restrict freedom of choice–as an act of autonomy and a means to better welfare–is simply not libertarian. But, Sunstein and Thaler argue that the weak paternalism they suggest defends the theory against libertarian criticisms. Individuals still have free range to make their own choices regardless of the limited, but welfare-promoting “choice architecture.”

Choice architecture refers to the options embedded in certain environments created by choice architects. Thaler, Sunstein, and their colleague John Balz [12] use the example of a school cafeteria to explain the concept. The cafeteria’s choice architect, the cafeteria director, can use information about food choice to decide how to present options to students, such as:

  1. Arrange the food to make the students best off, all things considered.

  2. Choose the food order at random.

  3. Try to arrange the food to get the kids to pick the same foods they would choose on their own.

  4. Maximize the sales of the items from the suppliers that are willing to offer the largest bribes.

  5. Maximize profits, period.

Each of the options provides different outcomes for different parties, thus serving different interests. The choice architect has the “responsibility for organizing the context in which people make choices.” [12] And these architects are not always neutral. Sometimes they have their own welfare in mind (e.g., they will set up conditions for choice making that financially benefit themselves); sometimes they are benevolent. Either way, it is important to note that choice architects have influence in (and power over) the lives of individuals for whom they are designing choice sets.

3.3 Moral Proxies and Choice Architects

Questions of technological paternalism stem from the moral influence embedded in technological systems, artifacts, and tools. As Millar [13] remarks, science and technology studies (STS) scholars have long considered questions of technological determinism, momentum, and neutrality in order to delineate who or what has agency and moral responsibility: the human or the technology. Overall, STS scholarship argues that technology itself holds very little responsibility for embedded moral positions, but designers do. “Designers,” writes Millar [13], “can intentionally embed moral norms into artifacts to achieve certain ends”. The technology, then, takes on a proxy role: it simply represents the designer’s moral position.

Designers epitomize the role of the choice architect and hold the power to create moral proxies. The communications they design into interfaces and the algorithms they embed in the architecture to inform and predict user behavior can mold or shift choice making in a paternalistic way. Their designs present individuals with particular courses of action and just-in-time information that can, among other things, limit or promote autonomy based on how the technology is constructed. Consider the common recommender system employed by Amazon.com. By suggesting “if you like this, buy this” options based on personalization algorithms, the designers have architected a choice set that did not exist before. And those choice options can be embedded with the designer’s moral compass. For instance, a designer may believe that individuals who search for materials on gay culture should also be presented with specific religious literature as a moral counterweight; consequently, the designer can code this perspective into the construction of the algorithm.

While personalized recommendations for books are by and large non-paternalistic, other types of recommendations or predictions are not. They may suggest, for instance, that users who interact with a system or behave in a particular way are more likely to be better off financially, professionally, or personally. Nudging users to act in a way that brings about these benefits highlights important questions. It is not always clear whether individuals are fully aware of why the technology is giving them recommendations and metrics, nor are they mindful of what is motivating and informing those nudges. The opaque nature of technologies, and the way they conceal moral positions, should be concerning to users. Yet, the ubiquity of and reliance on highly complex technology in everyday life often leads individuals to unquestioningly interact with systems and artifacts without a thought to paternalistic influences.

3.4 Four Facets of Technological Paternalism

With these things considered, Hoffmann [14] argues that technological paternalism is characterized by four facets. First, technological opacity makes it “incomprehensible” for individuals to make informed decisions about how to use technology and avoid paternalism. Individuals lack the intellectual capacity and skill set necessary to understand how technologies are interfering with their lives. Second, Big Data analytics embedded in technological design creates an air of objectivity or truth-by-data that subordinates (if not denies) personal subjectivity. People have confidence in data-driven systems, the predictive scores they present, and the recommendations they make because they seem to be neutral and based in fact. Third, aggregating massive datasets for analytic practices enables a wide array of actors to intervene in and craft strategies to direct the lives of individuals–often without their knowledge. And, finally, the ubiquity and interconnectedness of technologies and the datasets on which they rely allows for paternalistic intervention on a grand scale, allowing for targeted interventions for subsets of populations and individuals alike.

4 Learning Analytics and Prima Facie Cases of Paternalism

4.1 The Distribution of Benefits

In 2010, Colleen Carmean and Philip Mizzi [15] considered the role nudging techniques might play in LA technology. Motivated by the work done by Sunstein and Thaler, they argued for the creation of digital choice architecture that could “promote engagement, focus, and time-on-task behavior.” Since then, LA technologies have matured with nudging at the core of their features.

Nudging works in a variety of ways in LA. For instance, one aim of nudging is to promote proven study strategies, while another is to encourage students to enroll in courses in which they are predicted to be academically successful. [16, 17] Other nudging techniques are embedded in learning management systems, where LA tools present students with just-in-time information. [18] Consider the University of Washington Tacoma, where students self-report academic behaviors. The institution’s Persistence Plus LA system directed a student who reported math anxiety to stress management resources for math phobia, and at just the right moment: the student received the nudge to action before her next math quiz [19].

On the face of it, LA nudging purportedly yields genuine benefits. By providing assistance and direction when students are most likely to need it, nudging has the potential to increase performance, engagement, and persistence. [20, 21] And if students feel supported and successful, it is plausible that important metrics–like retention, time-to-degree, and graduation–will improve on campuses as students succeed year-to-year. Among LA advocates, these metrics are often the yardstick against which the efficacy of the technology is judged, for there is concern that higher education institutions are using too many resources and operating inefficiently without producing enough highly skilled and competitive students for the workforce. [22,23,24] However, the emergence of LA technology and the relative immaturity of related practices mean that the allocation of benefits between and among students and their institutions is still unclear.

Given the diversity of ends toward which proponents aim LA, there is an open question as to whether the benefits (however they materialize) are justifiable, or will be distributed equally or equitably. About this, Rubel and Jones [2] write:

Even if we suppose that the consequences of learning analytics are overall positive…there is a question as to who it benefits, and to what degree. It is certainly the case that learning analytics will be a benefit to institutions. And to the extent that institutions will use the information to further their mission of providing learning opportunities and helping ensure learning outcomes, some benefits would presumably accrue to students as well. But that does not mean that the distribution of benefits is good, or fair.

If we cannot assume that the benefits will redound to individual students, and if those benefits are more aligned with the interests of their institution, then there is a prima facie case against LA and its nudging techniques. When LA acts contrary to student preferences, the case against it grows stronger. It becomes stronger still when LA outright limits student autonomy.

The discussion that follows provides three examples of paternalism in LA technologies and practices with explanations as to why the paternalism is harmful.

4.2 Case One: eAdvising

For the first example, consider Austin Peay State University’s implementation of Degree Compass, a homegrown eAdvising system. The system uses predictive analytics to pair students with courses that “fit their talents and their program of study.” [25] Among other things, course recommendations are weighted based on if they are relevant to the student’s set degree path and whether or not the student will be academically successful in the course. Similarly, for students who have yet to choose a major, the system nudges them to consider majors in which they are likely to achieve academic success.
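The filtering logic such a system might apply can be sketched in a few lines. This is a hypothetical illustration, not Degree Compass’s actual model: the grade-prediction function, success threshold, and course names are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of eAdvising-style choice-set construction.
# The scoring model, threshold, and courses are illustrative assumptions.

def recommend_courses(courses, predicted_grade, min_grade=3.0):
    """Keep only courses whose predicted grade clears a success threshold,
    ranked by predicted success -- the 'path of least resistance'."""
    viable = [c for c in courses if predicted_grade(c) >= min_grade]
    return sorted(viable, key=predicted_grade, reverse=True)

# Toy grade predictions on a 4.0 scale for three hypothetical courses.
predictions = {"MATH 301": 2.1, "ENGL 210": 3.6, "HIST 150": 3.2}
choice_set = recommend_courses(predictions, predictions.get)
# Courses the student is predicted to struggle in never enter the choice set.
```

The point of the sketch is structural: whatever the real model looks like, a success-weighted recommender silently removes low-prediction options before the student ever sees them.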

In this scenario, it may seem to a student that the institution is looking out for her well-being. The institution wants her to be academically successful and recommends to her degree paths and courses. Moreover, the system provides structure and just-in-time information that enables decision making during high-stress times, such as enrollment and course selection.

The problem with eAdvising systems such as Degree Compass is that their choice sets compromise a student’s ability to make fully autonomous choices by presenting the path of least resistance. Put simply, they provide only those choices that are likely to lead to continued success. From an institutional perspective, there is little benefit in showing students courses or programs in which they are predicted to do poorly. Due to financial burdens, it is best for institutions to guarantee, as much as possible, student success. For instance, moving students from admission to graduation as expediently as possible minimizes the financial resources spent on each student while reaping the most financial benefit in terms of tuition and fees. It also improves the metrics accreditors and the public use to judge the institution. Providing students options that run counter to these aims would not work in the institution’s favor. I have seen no evidence to suggest that institutions would stop students from enrolling in particular courses and programs based on a predictive score alone; thus, this type of LA system is a form of weak paternalism.

4.3 Case Two: Digital Fences

For a second example, consider the emerging use of “digital fencing” technology for attendance taking and enforcement. Systems like Class120 draw digital fences using GPS technology around areas on campus, many of which are associated with a course and its assigned classroom. The Class120 app and geolocation tools in student smartphones work in tandem to report to professors, advisors, and–if permitted–even parents when a student is not within a class-associated fence on time. [26] Students themselves are nudged about the next time they need to be within a fence before automatic messages noting their tardiness or absence are sent out.
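The core mechanism described above reduces to a point-in-circle test on GPS coordinates. The sketch below is an assumption about how such a check could work, not Class120’s actual implementation; the coordinates, fence radius, and function names are invented for illustration.

```python
import math

# Illustrative geofence attendance check; coordinates, radius, and names
# are hypothetical, not taken from any real attendance system.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS points."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_fence(phone, fence_center, radius_m=50):
    """True if the phone's reported position lies within the class fence."""
    return haversine_m(*phone, *fence_center) <= radius_m

classroom = (47.2454, -122.4380)                         # hypothetical building
present = inside_fence((47.2455, -122.4381), classroom)  # a few meters away
absent = inside_fence((47.2554, -122.4380), classroom)   # roughly a kilometer away
```

A periodic check of this kind, run at class time, is all that is needed to trigger the tardy and absence messages the paragraph above describes.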

Many faculty members would agree that class attendance leads to academic success, and the literature is saturated with research that supports this correlation (for example, see [27,28,29]). So, it seems justifiable that institutions would create attendance policies enforceable by technologies to promote attendance and maximize related benefits. However, there is no guarantee that students will actually learn while attending class: the instruction could be poor, the course could be badly designed, or the students could be unengaged. Regardless, institutions are motivated to enforce attendance as a means to maximize “an efficient return on investment” in classroom infrastructure and instructional labor, as well as to address some stakeholder accountability arguments [30].

Class120 and related systems exemplify a form of strong paternalism. It may actually be in some students’ best interests to use their time to attend to other needs, which, all things considered, make them better off. For instance, a student may have the intellectual capabilities to earn a satisfactory grade in a course regardless of lecture absences. Instead of attending class, she schedules more work hours to fulfill her financial obligations. But the justification for digital fencing rests on the argument that students should prioritize attending course sessions over other interests and ends. What plausibly happens is that the paternalism motivates students to attend class not to advance their education, but instead to prevent social repercussions and academic penalties.

4.4 Case Three: Libraries and Learning Analytics

As a final example, consider the University of Wollongong’s foray into library resource use tracking for LA purposes. The library datasets keep track of the total number of student item loans and electronic usage information from proxy server logs, including access points, timestamps, duration data, and the specific electronic resources that students access. After analysis and reporting, the data informs teaching by disclosing to instructors whether or not their students are using library resources. [31] When resource usage was low–and academic risk levels were assumed to be high–instructors intervened by nudging students to use the library. As a result, instructors saw an immediate uptick in library usage by students.

Some librarians believe that participating in LA practices will prove their value to their institutions. Libraries are increasingly scrutinized for the costs of their physical and digital collections, not to mention the upkeep of their facilities. Stakeholders want data showing that such investments work towards and are aligned with institutional needs, such as improving learning outcomes. [32] And emerging research suggests that library usage does correlate with student success, engagement, and retention. [33, 34] While the intervention is well-intentioned–improving library usage while serving the secondary purpose of proving library value–there is a serious question about the peripheral harms that could accrue.

Any library action that borders on surveillance of intellectual behaviors brings up questions of intellectual freedom. In this example, the library observed intellectual behaviors as represented in the materials students reviewed (or did not review). The library’s intent was well-meaning, but the act of surveillance was autonomy reducing. Library surveillance may have changed students’ natural dispositions and preferences, in that they plausibly altered when they used the library and what materials they searched and engaged with in order to comport themselves in a way that would look positive in the analytics. Jantti [31] provides no explanation as to why students responded to the interventions to use library resources, nor does she explain if the interventions led to more selective and useful interactions with the library’s collection. On the face of it, the paternalistic interventions could have harmed the conditions necessary for intellectual freedom.

5 Values at Risk and Recommendations

5.1 Liberal Education

When paternalistic actions inhibit student autonomy and put at risk the free pursuit of intellectual ideas and paths, a wider concern emerges related to student academic freedom. Most theories of liberal education posit that higher education institutions have a responsibility to promote student autonomy and put in place conditions for personal and intellectual flourishing. [35] Quoting Seneca, Nussbaum argues that liberal education “‘liberates’ students’ minds from their bondage to mere habit and tradition, so that students can increasingly take responsibility for their own thought and speech.” [36] Other theories also argue that university students, most of whom have reached the age of majority and have the capacity for autonomy, should be protected against paternalistic influence. [37] It is for these reasons that higher education has put in place academic freedom protections specifically for students and uniquely separate from faculty academic freedom.

5.2 Student Academic Freedom

At their best, universities maximize lernfreiheit–or student academic freedom–by developing students who can be their own person, follow their own interests and desires, and act authentically without undue institutional burdens. Working towards these ends requires universities to scaffold the expression of academic freedom by creating empowering circumstances and experiences that run counter to entrenched thinking and conformative being. To this point, Derek Bok [38] argues that institutions should expose students to a plurality of views, enable them to write and speak with purpose, develop their critical thinking capacities to carefully consider the information that informs their reasoning and ethical sense making, as well as prepare them to engage with a diverse citizenry and work in a global society.

The prima facie cases of paternalism against LA indicate that, to varying degrees, the conditions necessary for and the support of student academic freedom are at risk. Nudging students toward academic paths of least resistance does not motivate them to consider yet-to-be-discovered personal interests, nor does it engage them in a wider array of ideas. Moreover, the predictive measures employed by LA technologies put at risk the very things liberal education works to promote: critical thinking and autonomy. These metrics paternalistically convey what a student is predicted to be good at or do well in without fully informing students of what variables the measures include, their statistical error, or why the measures should be taken seriously at all. As a result, there is a serious concern that LA predictions will suppress choice making and, consequently, academic risk taking.

5.3 Recommendations

The technology and statistics that form the foundation of LA and the practices used to deploy LA insights are all still nascent. It is exactly because of this that the potential harms addressed herein can be limited. There is still ample time to consider longstanding values, such as liberal education and student academic freedom, and to determine whether LA is designed in support of those values or runs counter to them. Institutions should consider the following recommendations in order to respect these important values.

Justify the distribution of benefits:

If LA’s paternalism is to continue, colleges and universities must carefully consider whether or not the benefits or protections from harm the paternalistic action yields are in the best interests of the student. By and large, LA’s paternalism seems aligned with institutional interests–not student interests. Arguably, institutions may claim that the benefits they receive from LA trickle down to students in one form or another. But this argument is indefensible given the ways in which the technology can direct student life and restrict choice making. Institutions must work to ensure that students will directly benefit from paternalistic technologies.

Optimize choice making:

Institutions must make LA technologies and the algorithms that inform their design transparent. For students to make informed choices when considering specific choice sets or nudges to action, they need to understand what data or information supports LA in the first place. They also need to understand the ends toward which their choices serve as means. Doing these two things will work towards optimizing student autonomy while still allowing for the benefits of LA’s paternalism.

Hold designers accountable for good moral proxies:

Student autonomy and academic freedom are moral proxies that designers can embed into LA technologies. But LA vendors may not motivate their designers to consider these moral issues, and designers may not be sensitized to the harms their paternalistic designs can create. Institutions have a duty to hold LA designers accountable for building good moral proxies into their designs, and should hold themselves accountable for carefully vetting systems to consider the harms that could accrue from poor designs.

6 Conclusion

I have argued that LA is a technology that employs paternalistic nudging techniques and predictive measures. These techniques can limit student autonomy, may run counter to student interests and preferences, and certainly do not always distribute benefits back to students–in fact, some harms may actually accrue. The argument showed that paternalism in LA technologies is an especially problematic concern for higher education institutions that espouse liberal education values and promote student academic freedom. The paper ended with three general recommendations that work to promote student autonomy and choice making as a way to protect against risks to student academic freedom. Future research should consider developing particular information policy recommendations, and the literature would benefit from empirical work that analyzes how students perceive and respond to paternalistic LA designs.