1 Introduction

Computational processes are becoming increasingly “close” to us: they interact with our bodies, movements and emotions, and occupy a central place in our daily lives. In line with this development, “the person” has been brought to the centre of research on the design of computer systems. In the contemporary context of “Big Data” systems [1] and an increasing “information overload” on the web [2], it has become progressively important to get the right information in the right way to the right person at the right moment in time [3]. In marketing studies, this has led to questions such as how “to tailor electronic commerce interactions between a business and each individual customer” [4]. In computer science, it has led to improvements of methods for “user profiling”, “behaviour preference” and “personalised search engines and recommendation systems” [5]. Basically, research in personalisation aims at adjusting digital content to the needs, preferences, desires and whims of the person interacting with it.

The process of personalisation invariably integrates underlying assumptions about what a person is and – more importantly – about what we can understand as the right information for the right person. It is an automated process of categorisation, and therefore of inclusion and exclusion, both of digital content and of “digital personae” or profiles, from the information we access. As a result, personalisation processes can influence what type of information we retrieve from our search engine, what kinds of products are recommended to us when browsing online, and what kind of feedback we receive about our daily activities. Personalisation processes are also used by intelligence agencies to identify suspicious individuals and by insurance companies to establish people’s credit ratings. Accordingly, these increasingly ubiquitous computational processes, which directly influence many aspects of our everyday lives, can have significant ethical implications. For instance, de Vries argues that ethical concerns arise from cases of implicit discrimination based on profiling [6]. Moreover, Bohn et al. argue that the practice of matching digital personae with profiles that present a public security concern has ethical implications because it can lead to mass-surveillance practices [7]. In a different vein, Schubert argues that the use of personalised technologies that nudge their users in certain directions can lead to a reduction of personal autonomy and agency [8].

Most of these ethical analyses of personalisation processes seem to focus directly on their effects, without explicating how we can actually understand the process of personalisation. In this paper, in contrast, we aim to contribute to the existing ethical reflections by engaging in a discussion of how personalisation processes mediate “the person”: showing that the person is often wrongfully understood as a collection of static attributes and offering an alternative understanding, in which the person and the technology are co-shaped, or rather configured, through interaction with personalisation processes. We argue - to use the terms of Verbeek - that in order to understand the ethical implications of personalisation processes, we need to understand what kind of “I-technology-world” relationship they constitute [9]; that is, we should make explicit how they interact with a human and how the human is transformed in this process. However, we also aim to overcome some problems that theories of technological “mediation” such as Verbeek’s confront us with in understanding personalisation processes. As an alternative approach, we use a framework of narrative technologies, which is based on Paul Ricoeur’s major work on narrativity: Time and Narrative [10]. With this framework, we can better understand how the human understanding of our technological world changes through interaction with textual technologies such as computer systems. We will first investigate the concept of personhood in the context of personalisation technologies. Second, we will present the framework of narrative technologies that is based on Ricoeur’s narrative theory. Third, we will apply this framework in order to identify and understand the ethical implications of personalisation technologies.

2 Personalisation Technologies as Adaptive Mirrors of Personhood

The process of personalisation is applied to a wide range of different technologies, but it generally revolves around notions of “adapting”, “fitting” or “tailoring” digital content to the human being(s) interacting with it. A central term in personalisation research is the “adaptability” of a system [11]. Adaptive systems include three basic models in their design: a “user model”, which is a structured model for the collection and categorisation of personal data belonging to a user; an “application model”, which is a description of relevant features of the application; and an “interaction model”, which is meant to structure the organisation of interactions between a user and a system [11]. Asif and Krogstie argue that multiple personalisation approaches exist, based for instance on “machine-learning algorithms, agent technology and ubiquitous and context-aware computing”. They identify a “basic level” of personalisation, at which a user selects a certain configuration of a computing device or interface, which subsequently remains the same [12]. This basic level corresponds with the conventional, instrumental view of a personalisation technology: that of the human user actively configuring its settings. Then there is a “second level”, at which the configuration of a system is based on a “profile” of the user, and a “third level”, at which both the profile of the user and his or her “context” (mostly comprised of meta-data such as location, time of the activity, type of activity) are used as the basis for the configuration of the system. Bouzeghoub and Kostadinov make a distinction between profiles and queries: a profile being a “user model” “defined by a set of attributes” and a query being an “on-demand user need” [13]. Roosendaal offers an additional distinction, namely one between “digital personae”, which are representations of known individuals in the real world, and “digital profiles”, which are sets of characteristics about persons that can be used as inputs for algorithmic decision making [14].
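To make these distinctions more tangible, the following minimal Python sketch contrasts the three levels of personalisation identified by Asif and Krogstie. It is an illustration under our own assumptions: the class names, fields and settings (UserModel, Context, manual_settings, preferred_theme) are hypothetical and do not stem from the cited frameworks.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class UserModel:
    """Structured collection and categorisation of a user's personal data."""
    user_id: str
    attributes: dict = field(default_factory=dict)  # e.g. {"age": 34, "preferred_theme": "dark"}


@dataclass
class Context:
    """Meta-data that third-level personalisation additionally draws on."""
    location: Optional[str] = None
    time_of_activity: Optional[str] = None
    type_of_activity: Optional[str] = None


def configure(level: int, user: UserModel, context: Optional[Context] = None) -> dict:
    """Return a system configuration according to the level of personalisation."""
    if level == 1:
        # Basic level: a configuration the user selected once; it stays the same.
        return dict(user.attributes.get("manual_settings", {}))
    # Second level: the configuration is derived from the user's profile.
    config = {"theme": user.attributes.get("preferred_theme", "default")}
    if level == 3 and context is not None:
        # Third level: the user's context co-determines the configuration.
        if context.location:
            config["local_content"] = f"content near {context.location}"
        if context.type_of_activity:
            config["suggestions"] = f"items related to {context.type_of_activity}"
    return config
```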

A clear example of personalisation based on a digital persona and profiles is an automated passport check at an airport [15]. When someone’s passport chip is scanned, the retrieved data is compared with a data entry containing the document number belonging to the respective person, accompanied by her picture, biometric information and information about country of origin, age, and so on. Based on an algorithmic assessment of this personal data, the person can either pass through or be held and interrogated by the border police. In this process, the digital persona can be compared with and transformed into a digital profile, for instance by linking it with a certain profile containing passport features that are deemed “suspicious”, or by using it to add to the digital profile of people originating from the same country. What all processes of personalisation, such as the automated passport check, have in common is that they use interactions with humans to gain knowledge about them and create a representation, and that they change the behaviour of a system in order to fit, adapt to or be tailored to this representation. The purpose of much personalisation research is to make sure that the behavioural changes of the system approximate the expectations, wishes and/or needs of the human interacting with it. Terms like tailoring suggest that the user – the human agent – is seen as a given, as a static point to which the personalisation processes need to adjust. Just as a tailor adjusts the sizes and shapes of a piece of clothing to a human body that remains the same (or rather, that is defined by fixed measurements), personalisation processes are supposed to be tailored to users who are presumed to remain the same. Accordingly, user needs or preferences are supposed to be fixed. For instance, it might be assumed that the user of a weather application has the fixed need of knowing what the weather will be the next day in her city, and on request a personalisation process will link the data of her location with weather forecasts to provide the desired information.
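A minimal sketch of such an automated check, under our own assumptions, might look as follows; the record fields, the set of “suspicious” countries and the decision strings are hypothetical illustrations, not a description of any actual border-control system.

```python
from dataclasses import dataclass


@dataclass
class PassportRecord:
    """A digital persona: data belonging to a known, identified individual."""
    document_number: str
    biometric_hash: str
    country_of_origin: str


# A digital profile: a set of characteristics deemed "suspicious" (invented here).
SUSPICIOUS_COUNTRIES = {"country_x", "country_y"}


def automated_passport_check(scanned: PassportRecord, stored: PassportRecord) -> str:
    """Decide whether a traveller passes or is held for interrogation."""
    # Step 1: compare the scanned chip data with the stored digital persona.
    if (scanned.document_number != stored.document_number
            or scanned.biometric_hash != stored.biometric_hash):
        return "hold: persona does not match stored record"
    # Step 2: match the persona against a profile of "suspicious" features,
    # thereby transforming persona data into profile-based decision input.
    if scanned.country_of_origin in SUSPICIOUS_COUNTRIES:
        return "hold: matches suspicious profile"
    return "pass"
```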

However, scholars in the philosophy of technology have claimed that such an instrumental approach does not adequately capture how humans and technologies interrelate. Ihde explains that “technologies transform our experience of the world and our perceptions and interpretations of our world, and we in turn become transformed in this process” [16]. Accordingly, we argue, personalisation processes could be seen as “mirroring” processes, in a sense similar to the one Hegel and Lacan gave to the term, rather than as mere adjustments of computational outcomes to static user needs [17]. When looking into the mirror, the subject’s perception of the self changes, because she suddenly becomes aware of herself as an object as well: she engages in a process of self-reflection. Dennett argues that the consequent idea of “self-consciousness” – or the ability of the self to reflect on the self – is one of the necessary conditions for personhood [18]. However, the “mirror” that we engage with in the case of personalisation processes differs from the static glass mirror, for it actively adjusts to the user interacting with it. Here, we juxtapose the conception of a static person “subjected to a set of rules in whose making he or she had no part” [19], which supports conventional views of personalisation, with a mediated conception of the person as changing during interaction with personalisation technologies. This confronts us with the question: if interaction between a human and a personalisation technology results in a constant “mediation” [9, 16] of the “I-technology-world” relationship, and consequently in a mediation of the personhood of the user of the technology, how can we make sense of this process? To answer this question, we turn to the work of Paul Ricoeur.

3 A Framework of Narrative Personalisation Technologies

Current theories of technological mediation have certain drawbacks in accounting for the kinds of mediations (the kinds of human-technology relations) that personalisation processes constitute. First of all, after the so-called “empirical turn” [20], theories of technological mediation have been drawn away from analysing the linguistic aspects of technologies, as these were present in classical analyses of technology (Heidegger; Ellul), because they became preoccupied with technologies’ material dimensions [21]. This disregards the highly linguistic, or textual, character of many of the mediations that personalisation technologies bring about. This problem relates not only to the linguistic nature of software in general but also, and perhaps more importantly, to the linguistic nature of the information with which the user engages (search results, nudges for information input, etc.). Secondly, these theories often display a highly individualised approach (focusing on the “I” in technological mediation) and therefore less adequately account for the mediation of what van den Eede designates as “being-with-each-other” relations [22] in the context of personalisation. As Kitwood argues, personhood needs to be understood as requiring a “living relationship with at least one other” person [19]. We cannot talk about an “I-technology-world” relation when the technological mediation of a “person” essentially revolves around a “we-technology-world” relation. For instance, consider a personalisation process that tells an intelligence agency something about a designated category of individuals. In this case, a mediation of an inter-subjective relation is at play - between an institution and a socially constructed group of people.

Coeckelbergh and the first author of this paper have started developing a “theory of narrative technologies” that aims to remedy the abovementioned inadequacies of current theories of technological mediation [23, 24]. This was done by taking the work of Ricoeur as a valuable starting point for thinking about technological mediation, as Kaplan already suggested [25]. The suggestion is that the model of the mediation of human experience by means of a text, as explicated by Ricoeur, can be used as a model for explicating the mediation of human experience by technologies. In his seminal work Time and Narrative, Ricoeur develops a theory that draws on the writings of Augustine and Aristotle to provide a structural account of how narrative understanding, through the engagement with texts, mediates human experience [10]. He aims to explain what happens when a human interacts with a text by conceptualising this interaction as a process consisting of three distinct moments: the prefigured time at the initiation of the interaction, the configured time during the interaction, and the refigured time after the interaction has taken place, at which the world of the text merges with the world of the reader. Ricoeur argues that texts mediate our narrative understanding, which is always a public understanding [26] and therefore deals with inter-subjective relations. We argue that the process of configuration, as explicated by Ricoeur, offers an adequate model for understanding the mediation of information and communication technologies (ICTs), especially in the case of personalisation technologies.

Even though a theory of narrative technologies might be less suitable for understanding the individual, material co-shaping of humans and technologies, such as is at stake with wearing glasses or a prosthesis, we argue that it can much more adequately account for text-like technological mediations that revolve around inter-subjective relations, which are at stake in this paper. Based on Ricoeur’s theory, we established two dimensions of technological mediation that characterise the process of configuration: the dimensions of activity and of abstraction [23]. Activity refers to the extent to which technologies actively configure human narrative understanding. This can be explained by drawing an analogy between the “reading” of personalisation processes by a user and the “reading” of data by a computer. Notably, we do not want to argue that the interpretation of information by a human is in any way the same process as symbolic manipulation by a computational system (see e.g. Searle [27]). Rather, we want to draw attention to the simultaneity of the activities we usually designate separately as reading and writing. Whenever a computer reads out certain data, data are simultaneously written, which implies that the processes of reading and writing interrelate. Similarly, whenever a person “reads” the output of a personalisation process, her narrative understanding is “re-written”, which means that her experience of the world changes. At the same time, interactions with personalisation processes “re-write” those processes, which leads us to the claim that the technological mediation of personalisation processes can be characterised as a process of active configuration. Arguably then, the three levels of personalisation identified by Asif and Krogstie signify an increase with regard to this dimension [12]. This means that the more personalisation technologies are able to interact with the context of a user (location, time, personal network), the more they actively engage in a process of changing her narrative understanding, and consequently in configuring her experience of the world. To illustrate this difference: a modern technology such as a microwave might be personalised, in the sense that a user can configure its settings according to her personal preferences, but such a basic, non-contextual level of narrative configuration is still fairly passive. In contrast, the earlier mentioned automated passport check can be regarded as an active personalisation technology, for its configuration changes according to its interaction with a human while, simultaneously, the human understanding of the technology is configured (as a result of displayed information and nudges to engage in certain actions).
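The simultaneity of reading and writing can be illustrated with a minimal Python sketch; the PersonalisedFeed class and its topic_weights profile are hypothetical constructions of ours, meant only to show how each act of “reading” re-writes the profile that configures the next output.

```python
class PersonalisedFeed:
    """Minimal sketch of active configuration: every 'read' is also a 'write'."""

    def __init__(self) -> None:
        self.topic_weights: dict = {}  # the system's evolving profile of the user

    def read(self, chosen_topic: str) -> str:
        # The user "reads" an item on a topic; simultaneously, the system
        # "writes" to her profile: reading and writing are one interaction.
        self.topic_weights[chosen_topic] = self.topic_weights.get(chosen_topic, 0) + 1
        # The next output is configured by the profile that reading produced, so
        # the user's narrative understanding and the system re-write each other.
        favourite = max(self.topic_weights, key=self.topic_weights.get)
        return f"recommended next: more items about {favourite}"
```

Calling read("politics") twice and read("sports") once leaves the feed recommending politics: the output of each interaction is conditioned by the traces the previous interactions left behind, and in turn conditions what the user will read next.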

The second dimension, abstraction, can be understood in line with Heidegger’s notion of modern technologies as “Gestell” (enframing), transforming aspects of the human life world into standing reserve (“Bestand”) [28]. The collection and processing of personal data transforms aspects of our personhood (e.g. age, gender, occupation) into standing reserve, the raw material of the personalisation process. Abstraction, in Ricoeur’s work, is the result of narrative structures in a text that enable it to configure so-called “second- and third-order entities”, or higher-order entities [10], which are entities that abstract from the world of human action. For instance, although somebody engaging with her phone to buy and sell derivatives on the stock market is a first-order entity directly engaged in the world of action, the “derivatives” and “stock market” she engages with are abstracted from the worlds of action they mediate. That is, a derivative trade configures a “distance” between its interaction with a user (the person initiating the trade) and the effects it has on the world of action, in which, for instance, people are forced to sell their houses because the derivatives drop in value [29]. The closer mediated interactions with ICTs stay to the world of action, as for instance in a game in which the characters and the plot are configured in a meaningful narrative whole for the player who “acts” in it, the less abstraction a narrative configuration brings about. In that sense, the construction of a “digital persona” as described by Roosendaal leaves the user relatively close to the world of action, whereas the “profiling” of a user [14], in which types of persons defined by measurable variables invoke certain responses of a system, abstracts from the world of action. For instance, the automated adjustment of credit ratings based on certain profiles (containing, for example, gender, ethnicity and occupation) is an example of personalised abstraction, for it detaches the generated process from the world of action of a particular person.
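As a schematic illustration of such personalised abstraction, consider the following hypothetical rating function; the variables and weights are invented for the example and do not describe any real scoring model.

```python
def automated_credit_rating(profile: dict) -> float:
    """Return a rating derived purely from abstract profile variables.

    The function never observes the person's actual conduct (the world of
    action); it responds only to a higher-order representation of her.
    """
    score = 0.5
    if profile.get("occupation") == "salaried":
        score += 0.2
    if profile.get("age", 0) < 25:
        score -= 0.1
    # Variables with no direct relation to the person's own actions, such as
    # gender or postcode, may nonetheless enter the model.
    if profile.get("postcode") in {"0001", "0002"}:
        score -= 0.05
    return max(0.0, min(1.0, score))
```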

4 The Narrative Ethics of Personalisation Technologies

By using the framework of narrative technologies, we have established two claims: (i) that personalisation processes actively configure our narrative understanding - progressively so the more they interact with the context of a user - and (ii) that they - depending on the design of the technology - are capable of abstracting from the world of action. As such, we contend, personalisation technologies are highly similar to the paradigm of the text as discussed by Ricoeur [10]: they are thoroughly textual technologies. This means that, just as a reader’s experience of the world might change by engaging with a piece of literature or by watching a stage play, her experience of the world might change by interacting with digital personalisation processes. Such a process of personalisation is perhaps best described by Needham, who argues that personalisation in the context of the design of public services “can best be characterised as a ‘story-line’” [30]. In other words, personalisation processes configure the narrative structures, or “stories” in an abstract sense, that underlie the self-reflection that Dennett described as one of the necessary conditions for personhood.

This view firmly opposes the idea that personalisation processes simply get tailored to static user needs. Rather, they in turn configure these “needs”: the needs change according to the changes in the narrative understanding of a person. The idea of the person that underlies this argument ties in with the ideas of the “narrative self” as put forward by scholars such as MacIntyre and Taylor, who argue for the “narrative character of human life” [31] and that we “grasp our lives in a narrative” [32]. According to this idea of personhood, a person’s character is shaped according to the narrative structures (the “stories”) that configure her narrative understanding through interaction with her life world, which in our time is highly technological. According to Kamtekar, we can understand the notion of character as “a more-or-less consistent, more-or-less integrated, set of motivations, including the person’s desires, beliefs about the world, and ultimate goals and values” [33]. When engaging in an ethical analysis of such a notion of the configuration of a person’s character, we are not merely focussing on evaluating the direct consequences of actions or the design of rule-based patterns of behaviour that can be either right or wrong. The focus on the configuration of a person’s character instead of on consequences (which would lead us to consequentialism) or rule-based systems (which would lead us to deontological ethics) leads us to consider virtue ethics - which takes character as a central notion - as an adequate basis for evaluating personalisation processes in the framework of narrative technologies. Indeed, as van Hooft shows [34], Ricoeur’s hermeneutics is strongly related to the tradition of virtue ethics and adds to it in important ways. In virtue ethics, the notion of a “virtuous” character, or virtuous person, depends on the consistency of motivations and on the extent to which these are non-conflictual. Thus, in order to evaluate the configuration of the person through personalisation processes, we should inquire how the narrative structures configured by these processes influence a person’s motivations: the extent to which those are configured in a consistent and non-conflictual manner. Notably, we are therefore not only interested in the ethical impacts of a personalisation process on its user, but also in those on other people affected by it, such as care-givers in the case of assistive technologies or insurance agents in the case of personalised credit ratings.

First, we consider the ethical implications of active configuration. This characteristic of personalisation processes implies that they are very powerful tools for either reinforcing the world-view of the user or for refiguring it. A strong example of this is what Introna and Nissenbaum designate as “the political effects” of search engines, as they exclude certain political sources and include others based on a user’s profile [35]. A Democratic voter in the United States might therefore be confronted with search results that exclusively link to media favouring the views of the Democratic Party, based on her “personal needs” and reinforcing her narrative understanding. However, this process also applies to the more ordinary, everyday activities of users of personalisation technologies. For instance, wearable personalisation technologies such as assistive technology devices constantly monitor the location and bodily state of a person who is in need of care and interact with this person or with her caregivers [36]. Similar wearable devices can also be used to interpret a user’s behaviour and bodily processes according to profiles of “productive” and “non-productive” workers, and can nudge a user to engage in daily exercise in order to be more productive at work. Such technologies are less explicitly political, but can have an even more pervasive influence on the “character” of the human interacting with them, for certain values like a “work ethic” and a preferred “life style” can be embedded in the personalisation processes. We argue that these effects show that we should consider personalisation processes as comparable to the conventional human processes in which media are produced: as automated journalists and writers who constantly confront us with information that configures our narrative understanding. Therefore, the ethics of personalisation processes should mirror an ethics of the public sphere: improving the means for a user to engage in a deliberative process with the technology and ensuring a pluralist character of the information that can be accessed through the technology. For instance, this could imply that the underlying values of nudging technologies such as wearable devices on the work floor should be made explicit and subjected to a democratic process of deliberation between the workers. As such, the design of the nudges configured by devices could change according to the agreed-upon purpose that the workers assign to the technology.
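The reinforcing dynamic of the search-engine example above can be sketched, under our own simplifying assumptions, as a profile-based ranking function; the inferred_political_leaning field and the leaning labels on results are hypothetical.

```python
def rank_search_results(results: list, profile: dict) -> list:
    """Rank results so that sources congruent with the user's profile rise."""
    preferred = profile.get("inferred_political_leaning")

    def score(result: dict) -> float:
        base = result.get("relevance", 0.0)
        # Congruent sources are boosted, which reinforces the user's existing
        # narrative understanding rather than refiguring it.
        if preferred is not None and result.get("leaning") == preferred:
            base += 1.0
        return base

    return sorted(results, key=score, reverse=True)
```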

Second, we consider the dimension of abstraction from the world of action. This dimension implies that a user’s narrative understanding engages with “second- and third-order” - or higher-order - entities, as conceptualised by Ricoeur [10], that abstract from the world of action they mediate. This process reflects what Coeckelbergh designates as the “distancing” effect of technologies: the capacity of technologies to constitute (moral) distances between the humans who engage with them and the reality they mediate [29]. We argue that abstraction by means of personalisation technologies carries with it the risk that a person’s motivations are made inconsistent and brought into conflict with one another. Kitwood discusses this risk in the context of care relations, arguing that the normalisation that is implied in many personalisation processes (for instance, “profiling” a patient to fit care services to the needs of her patient type) goes fundamentally against the idea of being a person [19]. Personalisation, he argues, ought to account for the uniqueness of the person. As Ricoeur [10] also argues: narrative structures help us to understand the particular, the situated, rather than the general, the universal. The risk of inconsistent or conflicting motivations especially persists when personalisation processes are used to confer indirect, technologically mediated judgement on a person based on information generated by profiling. For instance, an insurance agent might reject a person’s request for an insurance contract based on a digital profile that is itself a higher-order entity. Inconsistency can arise because, even though the insurance agent might be motivated to provide people with the best insurance contracts, his judgement can be misguided, for it is not based on interaction in the world of action but on an abstract representation that includes, for instance, variables like a person’s music taste or ethnicity. This process simultaneously mediates the world of the user who tries to obtain her insurance contract, for decisions based on the digital profile with which she is associated can influence her sense of financial freedom, status or self-respect. As such, even abstract personalisation processes can function as the earlier mentioned adaptive “mirror” that changes the self-perception of the persons interacting with them.

5 Concluding Remarks

In this paper, we have discussed the narrative ethics of personalisation processes that increasingly influence our daily lives. Adding to existing ethical analyses, we based our investigation on an understanding of the technological mediation of “the person”. To do so, we departed from established theories of technological mediation and utilised a framework of narrative technologies that is inspired by Ricoeur’s work. We first argued that personalisation processes are powerful tools for reinforcing or configuring the narrative understanding of the people interacting with them. This makes them into technological agents that can influence people’s political or even everyday world-views. Secondly, we argued that personalisation processes configure narrative structures that abstract from the world of action. This carries with it the risk that they configure inconsistent and conflicting motivations, with consequences for the virtuous character of the persons interacting with them.

Our analysis contributes in different ways to existing debates on the ethics of personalisation technologies. First, we provide a philosophical understanding of personalisation that goes against the instrumental idea of “tailoring” digital content to the static needs of a user and instead shows how the “person” also changes in the process of interaction. Second, we link the structured understanding of technological mediation that we gained from Ricoeur’s narrative theory to ethical theories of a person’s character, which enables us to provide a normative account of “personalisation”. Third, our analysis goes beyond explicating ethical implications for only the users of personalisation processes. For instance, not only the person whose interaction with personalisation processes results in a credit rating is affected, but also the person consequently utilising this rating.

An initial step in dealing with the ethical implications of our analysis would be to include considerations of narrativity in the design of personalisation processes. This approach is currently gaining momentum, being referred to as the “narrative approach to personalisation” [37]. Especially in game-oriented designs of digital education environments, designers focus on the “personalisation and adaptation of Story-based Digital Educational Games” [38]. For instance, by using technologies based on these design principles, a user can “co-author” the narrative structures she engages with. Such design practices could address the issues of abstraction, for users would be drawn nearer to the world of action. Eventually, a broader ethical programme would be needed to address the effects of personalisation technologies, which will become increasingly pervasive in our lives. One of the key issues will be to discuss the impact of these technologies on the mediation of our public sphere: how we want them to configure our public deliberations in the political and cultural realms.