
Algorithmic Transparency and Manipulation

  • Research Article
  • Published in: Philosophy & Technology

Abstract

A series of recent papers raises worries about the manipulative potential of algorithmic transparency (to wit, making visible the factors that influence an algorithm’s output). But while the concern is apt and relevant, it is based on a fraught understanding of manipulation. Therefore, this paper draws attention to the ‘indifference view’ of manipulation, which explains better than the ‘vulnerability view’ why algorithmic transparency has manipulative potential. The paper also raises pertinent research questions for future studies of manipulation in the context of algorithmic transparency.


Data Availability

All data is available in the MS.

Notes

  1. I focus on the argument in Wang (2022) because it is the most detailed and extensive regarding the conceptual claim that algorithmic transparency can constitute manipulation. Wang (2023) and Franke (2022) seem to agree about the conceptual claim and offer different perspectives on the ethical question of whether and why such manipulation is morally problematic. I will discuss their contributions insofar as they bear on Wang’s (2022) claims about the ethics of manipulation.

  2. There are several critical questions about the norm-objectification premise that I set aside for the purposes of this paper. First, it is unclear whether Wang wants to show, or has shown, that the system is indeed intended for that purpose, or whether norm-objectification is really a side effect. Moreover, there are several open empirical questions about Wang’s norm-objectification premise. I set these aside to focus on the manipulation aspect of his argument.

  3. Wang suggests that algorithmic transparency “opens the black box” so that people know what the rules are and can actively try to conform to them, cf. Wang (2022, p. 13). Consumers can indirectly derive, and are directly told about, an “ideal model” of someone that the algorithm would rate highly. In various ways, people may be influenced to conform to the model. Given the rewards and punishments associated with creditworthiness, “consumers as rational individuals will try to better their position” by behaving in ways “to their advantage,” cf. Wang (2022, p. 13). For example, upon learning that “payment history” is considered in FICO’s algorithm, individuals would tend to make prompt repayments to improve their credit scores.

  4. Wang does not fully adopt Susser et al.’s (2019) account. As I discuss in more detail in Section 3, Susser et al. (2019) defend covert influence as a necessary criterion for manipulation, whereas Wang often highlights that – to the contrary – manipulation can take place non-covertly (e.g. Wang, 2022, p. 69). Thanks to an anonymous referee for prompting me to clarify this point.

  5. Unlike Wang, Susser et al. (2018, p. 40) distinguish between general (shared by “all human beings in virtue of their embodied condition”) and “situated, socially constructed, or contingent vulnerabilities.” They further distinguish the latter into structural vulnerabilities, which derive from membership in groups with differential levels of advantage (e.g. being poor, or of a certain gender), and individual vulnerabilities, which are irrespective of group membership and derive, e.g., from one’s personal history or habits. Susser et al. (2018, p. 41) write that contingent vulnerabilities are not “monolithic” and that various overlaps and combinations of vulnerabilities can pertain to any one person. This makes it understandable why they characterise online manipulation, a type of influence that can be highly personalised and targeted, in light of vulnerabilities which, on their view, are also highly personalised and non-monolithic.

  6. See also Wang (2023, p. 2).

  7. Wang (2023), responding to criticism by Franke (2022) of the norm-objectification premise, notes that manipulation may also occur by other means. For example, companies may also manipulate people’s behaviour “directly” by changing people’s choice architecture, rather than through the process of norm-objectification, see Wang (2023, p. 2). On one interpretation, this is clearly true: there are many other ways in which people can be manipulated, apart from some process of norm-objectification, e.g. by altering people’s options. But that interpretation is not relevant for the claim about transparency as manipulation. The relevant, but doubtful, interpretation is that transparency itself has some role to play in these other ways of manipulation. That interpretation is doubtful because it is unclear what these ‘other ways’ might be in which transparency can manipulate without exploiting norm-objectification. Thanks to an anonymous referee for stressing this point. The relevant interpretation supports the reconstruction of Wang’s argument offered above: norm-objectification is a specific process or way in which vulnerabilities can be exploited. In that sense, the exploration of the link between manipulation and transparency on the indifference view is a charitable contribution to Wang’s suggestion that there may be ‘other’ ways in which algorithmic transparency can be manipulative.

  8. E.g., in fn. 1 of his paper, Wang (2022) writes that he “follows the understanding of manipulation” given by Susser et al. (2018).

  9. In terms of Susser et al.’s (2018, p. 40) account, norm-objectification may at best be an “ontological” vulnerability, rather than a contingent vulnerability. Their account of manipulation, however, focuses on the latter as the relevant type of vulnerability in the context of manipulation.

  10. Franke (2022) contrasts Socrates’ dictum with Whitehead’s (1911) emphatic endorsement of the value of automating thought and behaviour in the sense of “extending the number of operations we can perform without thinking about them” (1911, pp. 45–46), cited in Franke (2022). Franke is right to challenge an uncritical adoption of the thought that conscious reflection and deliberation are, per se, valuable. It is beyond the scope of this article to enter into a debate about the respective merits of the positions of Socrates and Whitehead. Wang’s point about the ability to make up one’s mind, and the importance the indifference view attaches to revealing reasons (see Section 4), can be appreciated at least in the minimal sense that there are some contexts in which this is valuable (without claiming that it is valuable all the time). Thanks to an anonymous referee for prompting me to clarify this point.

  11. When transparency means that false or misleading information is communicated about the algorithm, transparency conceivably causes exploitation. Perhaps there will be manipulation as a result. But such a case is obviously irrelevant for Wang’s argument to the effect that informationally adequate, genuine transparency can lead to manipulation. This situation must be set aside.

  12. Indeed, Wang explicitly contrasts his account with an earlier discussion by Kossow et al. (2021), who suggest that when the dominant structure is dogmatic, promoting transparency is not only useless but can re-strengthen the existing power asymmetry.

  13. The alternative, wide notion of harm would count as harmful anything that does not contribute to an improvement of the status quo. Though I cannot argue for it here, that seems to me an implausible notion of harm. In any case, the present argument stands independently of that dispute insofar as there is no empirical evidence that non-transparency would lead to benefits, i.e. improvements over the status quo (thus, even if not procuring these benefits counts as harm on a wide notion of harm, it is simply empirically unclear whether the benefits would materialise). Thanks to an anonymous referee for prompting me to clarify this point.

  14. Ideas pertinent to the indifference view have also been defended by Gorin (2014b), Mills (1995), and Baron (2014). Klenk (2021a) uses the term ‘carelessness,’ whereas Klenk (2022) introduces the more appropriate term ‘indifference’ to avoid the misleading impression that manipulation is, overall, lazy or not planned out. Indeed, manipulative influence is often carefully crafted in its aim to be effective, and careless or indifferent only to the aim of revealing reasons to others.

  15. Franke (2022, p. 4), discussing Wang’s example of the FICO algorithm, helpfully points out that abiding by (objectified) norms can be in the interest of the ‘victims’ of manipulation. Hence, any account of manipulation used to show how norm-objectification can be manipulative should be compatible with manipulation that benefits the victim. As suggested, the indifference account is compatible with paternalistic manipulation; see Klenk (2021a) for discussion. Accounting for paternalistic manipulation is possible on other theories of manipulation, too; see, e.g., Noggle (2020). Thanks to an anonymous referee for prompting me to clarify this point.

  16. Noggle (2020). See Noggle (2018) and Klenk and Jongepier (2022) for critical discussion and overviews.

  17. To further illustrate the point, consider a world of omniscient, hyper-rational beings that are not vulnerable at all. Whether or not someone strives to reveal reasons to them does not matter, because they are perfect trackers of reasons. Manipulation on a narrow reading of the indifference view would appear much less of a problem insofar as it will have no discernible consequences for the targets. This suggests that facts about the potential targets of manipulation – such as their vulnerability – are relevant in at least two ways: first, for our assessment of the importance of manipulation in general and, second, for the moral assessment of a specific instance of manipulation. One can consistently adopt the narrow reading of the indifference view for purposes of defining or conceptualising manipulation and acknowledge the significance of consequences for evaluating manipulation. It is a further question whether the narrow reading aligns with intuitions about manipulation. Since it mirrors how, for example, we talk about deception (a deceiver can accidentally make people believe the truth), I take it that the narrow reading enjoys sufficient support; see also Klenk and Jongepier (2022). I thank an anonymous referee for pressing me to clarify this point and for providing a version of the helpful example discussed in this footnote.

  18. Since this is but a sketch of the indifference view (and necessarily so, in view of the aim of the article), relevant questions remain concerning, for example, the precise nature of the ideal to reveal reasons to the interlocutor, and an adequate justification of that ideal (see Noggle (1996) and Hanna (2015) for pertinent discussion about the objectivity of the ideal in question). For the purposes of this article, however, the view is adequately described to explore the implications for the manipulative potential of algorithmic transparency.

  19. Thanks to an anonymous referee for suggesting the last point.

  20. An important set of questions concerns the motives that determine whether or not an attempt at algorithmic transparency was manipulative. First, whose motives count? The ‘providers’ of algorithmic transparency, like FICO, are often corporations or other institutions, and there is a large debate about whether or not to think of them as group agents or as mere collectives of individuals (List & Pettit, 2011). So far, accounts of manipulation rely on a notion of intention that is at least contentious to ascribe to such groups or artificial entities. Since the ultimate criterion for manipulation on the indifference view is an explanation of an influence, it is at least possible to give such an explanation independently of intention and instead in terms of function or purpose, which may more easily be ascribed to groups and artificial agents, cf. Klenk (2022). A related question is how to determine which among the many motives that reside within an individual agent (or are ‘distributed’ across collectives of individuals) count toward the assessment of manipulation. For example, a manager may, next to the aim to reveal reasons to their employee, be interested in fulfilling their duty, finishing work that day, and so on. More pertinently, Barclay and Abramson (2021) demonstrate that there are many roles and motives that may legitimately be associated with a given algorithmic system. A tentative suggestion on behalf of the indifference view is that the motive to reveal reasons need not be the only or primary motive (which seems overly demanding) but must at least be a causal source of the chosen means of influence, i.e. the influence would be chosen across a range of counterfactual contexts (Lagnado et al., 2013). This would account for the intuition that manipulative influences are such that the manipulator all too easily forgoes the aim to reveal reasons (which may be present) in favour of the aim to be effective.
Tentative as this suggestion is, it has some bearing on the practical question of how to regulate manipulative algorithmic transparency. For instance, regulation should aim to encourage robust motives to reveal reasons. Their presence could be gauged by examining which of the available means of influence – some more, some less reason-revealing – were, in fact, chosen by the influencer. Ultimately, however, this does not fully answer the question of whose motives count, and the tentative suggestion would need to be developed further. I thank an anonymous referee for pressing this point.

  21. More broadly, and beyond the credit system that Wang discusses, the practice of consciousness raising, cf. Keane (2016), can be interpreted as a way of coming to question fixed social structures; insofar as these structures are to an extent malleable and constructed, it would be a mistake to consider them fixed. The indifference view may – even on a narrow reading, and as a purely contingent, empirical matter – explain how the very process of consciousness raising fails to get off the ground as a result of manipulative transparency, insofar as influence that is indifferent to revealing reasons may (contingently) end up not being reason-revealing influence. It is important to emphasise, again, that this is an empirical question; I am not aware that it has been explored in specific detail yet. There is, however, relevant anecdotal evidence from education or training which, in many areas, starts out geared toward effective influence (simply getting the student to perform a task) and then moves more and more toward understanding (getting the student to understand why and how the task is performed).

  22. Though only facts about the manipulator matter for the definition of manipulation (see Section 4.4), some of those facts will be facts about what manipulators believe or assume about their targets, insofar as what it means to reveal reasons to someone is at least partly determined by that person’s psychology. As discussed above, it is still facts about the manipulator (their beliefs, etc.) that matter for determining whether something is manipulation. But insofar as we strive for non-manipulation in our interactions, or aim to design for non-manipulative transparency, we need to form a conception of what it means to reveal reasons to users. Hence, non-manipulators need to form a perception of people’s vulnerabilities in order to determine what it means to reveal reasons to them. I thank an anonymous referee for prompting me to clarify this point.

  23. Thanks to an anonymous referee for pointing me in the direction of research that already addresses these questions from a design perspective.


Acknowledgements

I thank Emily Sullivan, the team at the Delft Digital Ethics Centre, especially Stefan Buijsman and Juan Duran, and two very constructive, meticulous, and helpful anonymous referees for valuable feedback on an earlier version of this paper.

Funding

The author’s work on this paper has been part of the project Ethics of Socially Disruptive Technologies that has received funding from the Dutch Organisation of Scientific Research.

Author information

Authors and Affiliations

Authors

Contributions

N/A (single author).

Corresponding author

Correspondence to Michael Klenk.

Ethics declarations

Ethics Approval and Consent to Participate

N/A.

Consent for Publication

Consent for publication is given.

Competing Interests

No competing interests.


About this article


Cite this article

Klenk, M. Algorithmic Transparency and Manipulation. Philos. Technol. 36, 79 (2023). https://doi.org/10.1007/s13347-023-00678-9
