1 Introduction

In his article, Ploug (2023) takes up the pertinent question of how to regulate the use of AI profiling. He argues for a sui generis right not to be subjected to AI profiling based on publicly available data. The sort of right that Ploug has in mind is a legal right that ensures that other parties, i.e., individuals, companies, and public institutions, are not allowed to generate predictions about you based on processing publicly available data, unless you provide them with informed consent. However, the right is pro tanto. In exceptional circumstances, it might be outweighed by other considerations such that it will be permissible, all things considered, to infringe it.

A central claim of Ploug’s article is that “a number of features of AI profiling make individuals more exposed to attempts of social control and stigmatization and that—in doing so—AI profiling presents a particularly invasive kind of data processing” (Ploug 2023, 7). This is what justifies “the exceptionalism” of AI profiling. Compared to other kinds of data processing that may produce predictions about individuals, there are features of AI modeling that justify a right against the use of such models for profiling individuals.

I agree with Ploug that there is a need for regulating AI profiling. In this comment, I will not present an alternative to Ploug’s proposal but simply raise some issues that I think arise in relation to his article. First, I raise a minor terminological concern. Second, I describe how Ploug’s proposed right is a right against a certain means of making predictions and present some further questions that I think should be considered when the right is understood this way. Third, I raise a concern about the burdens that Ploug’s suggested right seems to impose on the individuals it is supposed to protect.

2 A Terminological Concern

To begin with, I want to flag a terminological point that I think is important to clarify in order not to confuse what is at issue when we are discussing AI profiling. AI model output of the sort that Ploug has in mind is typically referred to as a prediction. Such predictions in effect express the conditional probability that a subject A has target feature F given that A has input features I.
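To make this gloss explicit (the notation here is mine, not Ploug’s), such an output can be written as a conditional probability:

$$\Pr(A \text{ has } F \mid A \text{ has input features } I)$$

that is, an estimate of how likely the target feature is given the observed input features.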

In his article, Ploug uses “data” to denote both the output of an AI, i.e., the prediction, and the input features. However, while Ploug is transparent about this use, I think calling the AI output “data” may easily give the false impression that an AI output records a known fact about the person being profiled. It is important to make clear that this is not the case.

3 AI Models as a Means

We may distinguish between how predictions about people are made and what the predictions are used for. I might use my own data processing skills to predict that a friend of mine or the Prime Minister suffers from depression on the basis of publicly available information about them. Or I might apply an AI model to make such predictions. Ploug’s right is against the use of an AI model as a means for making such predictions, not against the use of my human data processing skills.

To illustrate, if an employer wants to use information from applicants’ social media profiles to predict how likely they are to have some feature of interest, say, suffering from depression, the applicants must be asked to give their informed consent. On the other hand, if it is the interviewer who processes the same information in order to make a prediction about an applicant, this does not require consent, even though the prediction is put to the same use.

We can now imagine an employer offering applicant Jane a job conditional on her participation in weekly therapy to prevent her from coming to suffer depression. This would seem to be a form of social pressure in the sense of communicating to the applicant that choosing not to attend the suggested therapy is associated with costs that make it unlikely that she will reject the therapy. Ploug’s right is supposed to enable people like Jane to protect themselves against this kind of pressure.

The reason for a right against the use of the AI model, and not against the use of human skill, is that the AI model may be highly accurate. However, I cannot help wondering whether equipping Jane with such a right is the best way to avoid the sort of scenario just described. For one thing, the employer might simply rely on the less accurate human prediction to exert the same kind of social control. So the result might well be that Jane is subjected to the same unwanted behavior from the employer. If this is the case, then there is a sense in which Jane should prefer the more accurate AI prediction: it would make it more likely that the therapy is of value to her (even if it is a product of unwanted social control).

Second, Ploug simply refers to “AI models.” However, what constitutes an AI model in the relevant sense needs to be clarified. There are many types of models that are currently being described as “AI.” Insofar as the right is against the use of a certain kind of means called an “AI model,” it becomes crucial to provide an informative characterization of the kinds of models that are covered by the right.

Third, if it is the high accuracy of AI models that justifies the need for a right against AI profiling, then the question arises whether the right also applies to models that count as AI under some description but are not highly accurate. And even if a practical definition of the relevant kinds of AI models is provided, the real problem with such models seems to be that they are highly accurate. It is their high accuracy that Ploug takes to make their predictions lead to an increased risk of control and stigmatization (on certain assumptions about human belief formation). Thus, it seems that the justification for a right against using AI models as a means for making predictions will also justify a right against the use of any means that makes highly accurate predictions. Whether such a system is an AI model under some description or some other tool for making predictions seems irrelevant.

In response to this, Ploug might point out that there are features of AI models that “in and of themselves make profiled individuals more exposed to social control and stigmatization”—for example, their versatility. Still, as I understand Ploug, he also presents two arguments for the exceptionalism of AI profiling which appeal only to high accuracy and assumptions about human belief formation. Thus, he writes:

AI profiling may bring to the fore highly accurate predictions of future behaviour and dispositions that may lead to beliefs triggering various attempts of social control that would not otherwise have been attempted.

On the face of it, this seems to be an argument for a right against such profiling based only on the accuracy of AI models, not on their versatility, human unpredictability, and transferability.

An alternative to introducing a right against using AI models as a means to make predictions about people would be to regulate what uses different parties are allowed to make of predictions based on publicly available data. In the case of Jane, it would be more in line with the concern about social control and stigmatization to have a general regulation in place specifying the permissible uses of publicly available data for making predictions about people’s mental health, regardless of whether such predictions are produced using an AI model.

Finally, it would be interesting to see how Ploug’s argument relates to privacy concerns. On some accounts, privacy is a matter of being able to control personal information. On other accounts, privacy is respected to the extent that other parties do not in fact access such information (Munch & Mainz 2023). Ploug’s rights approach seems to suggest a control account of privacy, but it would be valuable to elaborate on the connection between Ploug’s right against AI profiling and theories of privacy.

4 Rights and Burdens

I will end by presenting a concern about the tasks and responsibilities that the right defended by Ploug may impose on right-bearers. To illustrate what I have in mind, consider the requirement that people give informed consent to the use of cookies when they visit a website. It is practically impossible for individuals to take on the responsibility of managing the use of cookies on their devices when they use the internet. Thus, most of the time people nominally give informed consent without understanding what they are consenting to. They are asked to take responsibility for access to their data without having the competence and time to do so in the way genuine informed consent requires.

A similar concern, I think, arises with respect to the proposed right not to be subjected to AI profiling. In a world where it is practically impossible for individuals to control what information about them is publicly available in digital form, and where the use of predictive AI models is already ubiquitous, it seems unfair to impose on them the responsibility of administering informed consent to requests to use AI profiling. As with cookie consent, there is good reason to expect that it will in effect be impossible for people to undertake this responsibility. Thus, while I agree that AI profiling should be regulated, I am not convinced that individuals should be burdened with the task of administering when such profiling is permissible. Rather, it would seem more feasible to regulate what sorts of AI-generated predictions one is allowed to make about people and how they can be used. This should go some way toward minimizing the risk of social control and stigmatization that is the concern driving Ploug’s argument for a right against AI profiling.