1 Introduction

Studies suggest that social media data may be used to make accurate predictions about health-related conditions of users, including their risk of suffering depression (Reece et al., 2017; Reece & Danforth, 2017). In a recent article, I argued that individuals should have a sui generis right not to be subjected to AI profiling based on publicly available data without their explicit informed consent (Ploug, 2023). The fundamental idea of that article is that accurate AI predictions of health-related conditions based on publicly available data can drive stigmatization and attempts at socially controlling other people. The suggested right would thus empower individuals to protect themselves against stigmatization and unwanted attempts of social control driven by AI profiling based on publicly available data. For the sake of readability, I shall in the following occasionally refer to this right in abbreviated form as a right not to be AI profiled.

In his comment, Holm agrees that there is a need for regulating AI profiling but has reservations about the suggested right (Holm, 2023). Holm seems to raise two substantial issues.

First, Holm claims that there are scenarios in which individuals have a reason to prefer attempts of social control exercised on the basis of accurate AI predictions rather than less accurate human predictions. A right not to be subjected to AI profiling, Holm thinks, would thus potentially run counter to individual interests. In this reply, I argue that the kind of scenario Holm imagines is wholly unlikely to arise and that more probable scenarios of the kind Holm has in mind are accommodated by the possibility of providing informed consent to AI profiling.

Second, Holm contends that a right allowing individuals to provide informed consent to AI profiling burdens those individuals with an unfair responsibility. As with ‘cookie consent,’ Holm claims, such a responsibility is practically impossible to undertake. In response, I shall argue 1) that Holm’s diagnosis of the ‘cookie consent’ problem fails to take account of the empirical research literature, 2) that Holm wrongly assumes that the right not to be AI profiled without consent commits its proponent to a ‘cookie consent’ model of informed consent, and 3) that Holm’s move from the practical difficulties of providing informed consent to the unfairness of the right to informed consent has rather controversial implications.

2 The Desirable AI Profiling Scenario

In my definition, social control encompasses two kinds of actions. It may take the form of social pressure, defined as communicative actions in which an individual succeeds in associating certain choices with costs to a degree that makes it less likely that another individual will make these choices. It may also take the form of choice-set interventions, in which an individual directly shapes the future opportunities of another individual.

So defined, social control is a common phenomenon. It covers a plethora of daily interactions between individuals. Although all attempts of social control may be unwanted by an individual, it seems obvious that a general right against all forms of social control cannot be upheld. As mentioned above, I have argued in favor of a specific right not to be involuntarily subjected to attempts of social control driven by AI profiling based on social media data.

Holm thinks that a right not to be subjected to AI profiling based on social media data provides poor protection against attempts of social control. As he contends, humans may make the same, albeit less accurate, predictions from the same data and then attempt the same kind of social control. This would, in some cases, even be a reason for preferring the accuracy of AI predictions. To illustrate this, he asks us to imagine a case where an employer offers an applicant, Jane, a job on the condition that she is willing to participate in weekly therapy to prevent her from developing depression. In so doing, the employer exercises social pressure on the applicant to participate in therapy, as a choice not to participate is now associated with the risk of not getting the job. In this case, Jane allegedly has a reason to prefer that the employer pressure her on the basis of accurate AI predictions rather than inaccurate human predictions, as she would then at least know that the therapy is more likely to benefit her.

Holm’s scenario is a non-starter. It is highly unlikely that the employer will access Jane’s social media data and try to predict her risk of depression and other health-related conditions. As pointed out in my article, AI profiling is characterized by being human-unpredictable. It will often reveal personal data that would otherwise be hidden from or hardly accessible to humans, as in the case of predicting depression from social media data.

Let us, for the sake of argument, imagine that the employer accesses Jane’s social media data before the interview, e.g., some of her photos on Instagram, and that he is struck by the dominant bluish colours in her photos. Although the employer is aware of the questionable reliability of such methods, he takes the colours to be a possible sign of depression. Would this belief lead the employer to exercise social pressure on Jane to participate in weekly therapy to the same degree as an accurate AI prediction of Jane suffering from depression would? The answer to this question is ultimately empirical. The underlying psychology of social control must be determined empirically. However, in my article, I hypothesize that the accuracy of AI predictions may drive attempts of social control that are stronger than attempts of social control based on inaccurate predictions. Ex hypothesi, the social control exercised on a specific individual known to have depression would be stronger than social control exercised on the basis of unreliable evidence. Truth matters for attempts of social control, and for stigmatization. If so, Holm’s imagined scenario becomes even more unlikely.

Rejecting Holm’s scenario as unlikely does not, however, entail the impossibility of situations in which an individual may have reason to prefer the accuracy of AI profiling. Adjusting Holm’s scenario slightly will suffice. Imagine an employer suggesting to his employee, Jane, that she must participate in the annual, weeklong, preventive therapy class unless AI profiling shows that she is not at risk of depression. Jane is only interested in taking the class if she knows she will benefit from it, and thus the employer’s proposal amounts to social pressure to submit to AI profiling. At first sight, it seems that in such situations a right not to be AI profiled would run counter to Jane’s interests. However, the right not to be AI profiled is a right that may be exercised by providing informed consent. Jane may in this situation provide informed consent to the AI profiling and thus pursue her interests while at the same time enjoying and exercising her right not to be AI profiled. The suggested conflict presupposes an outright prohibition against AI profiling rather than a consent requirement.

Even if Holm’s scenario is taken at face value, it is far from clear that it has any implications for the question of a right not to be AI profiled based on publicly available data. Legal measures aimed at protecting individuals may occasionally run counter to individual interests. While speed limits may generally protect individual interests, there are undoubtedly exceptional cases in which it is in a person’s interest to exceed the speed limit. More is needed for such exceptional scenarios to have any real implications for regulatory measures.

3 The Alleged Problem of Informed Consent

In the latter part of his comment, Holm argues that it is unfair to impose on individuals a responsibility for handling informed consent requests in relation to AI profiling, because “as with cookies consent, there is good reason to expect that it will in effect be impossible for people to undertake this responsibility.” According to Holm, the problem with ‘cookie consent’ is that individuals are “asked to take responsibility for access to their data without the competences and time to do so in the way required by informed consent.”

Holm’s diagnosis of the problem of ‘cookie consent’ is highly dubious. It is not at all clear that individuals lack the competences to understand consent requests for cookie use, and there certainly are no time limits on ‘cookie consent’ requests. The research literature on consent behavior, including the literature on ‘routinization’ and ‘consent fatigue,’ can provide some of the much-needed evidence here. Empirical studies of online behavior find that the main drivers of routinized consent behavior, i.e., consent provided as an unreflective, habitual act, are the length of the information materials, the frequency of having to read such information, and the perception that the use of certain online services is ‘low risk’ (Ploug & Holm, 2012, 2015b). While routinization clearly threatens the validity of informed consent, the research literature also suggests that consent behavior may be more or less routinized and that routinization may be mitigated through careful consideration of how consent requests are designed. Rather than it being ‘practically impossible’ for people to understand consent information and their choices in an online context, the reality is that individuals to varying degrees understand and reflect on the information provided as part of informed consent procedures.

Holm also wrongly assumes that a right not to be AI profiled without informed consent commits its proponent to a ‘cookie consent’ model. From the right not to be AI profiled without informed consent, nothing follows as to the specific character of the consent requests, i.e., their frequency, the level of accompanying information, and whether access to services and products can be made conditional on consent.

I have in previous writings defended the idea of a meta consent model (Ploug & Holm, 2015a, 2016). Meta consent denotes the idea that individuals should have the opportunity to design future consent requests for different categories of data. In effect, individuals should have the opportunity to determine whether they want to be asked every time their data is used, only sometimes, or never. That is, they should have the opportunity to decide, for different categories of data, whether future consent requests should take the form of specific consent or broad consent, or whether they want to provide a one-off blanket consent or blanket refusal. An individual’s consent preferences could, for instance, be recorded and stored by a not-for-profit data broker that would then make those consent preferences available to companies and institutions. Online platforms enabling companies and institutions to make consent requests to individuals are already being offered in several countries. If, as I have argued, AI profiling based on publicly available data is exceptional, it could be considered an independent category of data use in a meta consent model, and thus a category of data for which a particular kind of consent request should be made.
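To make the structure of the meta consent idea more concrete, the following minimal sketch in Python shows how per-category consent preferences might be represented and queried. The category names, consent modes, and broker interface are purely illustrative assumptions on my part; they describe no existing system and are not a specification of the model, only a sketch of its logic.

from dataclasses import dataclass, field
from enum import Enum


class ConsentMode(Enum):
    # Hypothetical consent modes an individual may attach to a category of data use.
    SPECIFIC = "ask for each use"          # specific consent: a request for every use
    BROAD = "ask per purpose"              # broad consent: a request per purpose or project
    BLANKET_CONSENT = "never ask, allow"   # one-off blanket consent
    BLANKET_REFUSAL = "never ask, refuse"  # one-off blanket refusal


@dataclass
class MetaConsentProfile:
    # An individual's consent preferences, e.g. as held by a not-for-profit data broker.
    preferences: dict = field(default_factory=dict)  # category name -> ConsentMode

    def set_preference(self, category: str, mode: ConsentMode) -> None:
        self.preferences[category] = mode

    def request_needed(self, category: str) -> bool:
        # True if a company or institution must send a fresh consent request before
        # using data in this category (simplified: broad consent is treated as still
        # requiring a request for each new purpose).
        mode = self.preferences.get(category, ConsentMode.SPECIFIC)  # default: always ask
        return mode in (ConsentMode.SPECIFIC, ConsentMode.BROAD)


# AI profiling of publicly available data treated as an independent category of data use.
profile = MetaConsentProfile()
profile.set_preference("ai_profiling_public_data", ConsentMode.SPECIFIC)
profile.set_preference("website_analytics_cookies", ConsentMode.BLANKET_CONSENT)

print(profile.request_needed("ai_profiling_public_data"))   # True: a request must be made
print(profile.request_needed("website_analytics_cookies"))  # False: no request needed

On this sketch, treating AI profiling of publicly available data as its own category simply means that an individual’s stored preference for that category determines whether a particular kind of consent request must be made before such profiling takes place.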

For the sake of argument, let us again take Holm’s position at face value. His argument has a fairly simple structure: if it is practically impossible for individuals to understand the choice they are making, then it is unfair to burden them with that choice. I cannot help but wonder in what other situations Holm would consider individual understanding of information and choices to be ‘practically impossible’ to achieve. It seems that his argument could be extended not only to a number of other contexts of informed consent but also to individuals’ voting behavior and beyond. Letting practical difficulties be a reason for stripping people of basic rights seems a troubling road to go down. The approach favored here is to let practical difficulties be a reason for taking care in designing choice situations, not for taking away or withholding individual rights to self-protection, privacy, and autonomy.

4 Concluding Remarks

Holm points out that if the right not to be AI profiled based on publicly available data is ultimately grounded in the social control and stigmatization that may ensue from highly accurate AI profiling, then this right could apply to “any kind of means making highly accurate predictions.” In my article, I point to a number of features that make individuals more exposed to attempts of social control and stigmatization. They include increased accuracy, but also the versatility of AI modeling, the human unpredictability of AI profiling, and the transferability of AI models. It is the sum of all these features that grounds a right not to be AI profiled based on publicly available data. However, I can readily agree with Holm that insofar as other technologies exhibit these features, individuals could and should also have a right not to be subjected to profiling by such technologies.