1 Introduction

I thank Ulrik Franke (2022) for his thoughtful comments on my paper. His piece agrees that algorithmic transparency can be used to manipulate people’s behavior. But Franke poses a further interesting question: how much should we care about this manipulative potential of algorithmic transparency? He suggests that people may have good reasons not to worry much about this manipulation.

Franke starts by interpreting my paper as offering “a Foucauldian power account of algorithmic transparency,” which is typically a constructionist one (Franke, 2022, 1). He then argues that constructionist accounts often leave a gap between factual and evaluative claims. This gap allows people to accept that algorithmic transparency can work as manipulation while declining to evaluate that manipulation as good or bad. In this light, he seems to imply that algorithmic transparency as manipulation is not in itself wrong but is rather a matter of personal choice: different individuals may adopt “different evaluative attitudes towards” the manipulative risks of algorithmic transparency (Franke, 2022, 1).

In this short reply, while agreeing with some of Franke’s main points, I point out three crucial misconceptions in his arguments. Accordingly, I spell out some reasons why we should worry about algorithmic transparency as manipulation. I end by re-emphasizing that we should care about this manipulation because we as a society have a moral duty to do so.

2 Manipulation Is Not Necessarily Foucauldian

Franke claims I offer “a Foucauldian analysis of algorithmic transparency as part of a disciplinary power” (Franke, 2022, 2). This claim is partly true. In my original article, I indeed showed how algorithmic transparency works as a disciplinary technique, and the title of the paper speaks of “uncovering the disciplinary power of algorithmic transparency” (Wang, 2022b, 1). However, I focus on disciplinary power only because I use the credit scoring system, a particular disciplinary system, as a case study to illustrate how the operation of asymmetrical power in general can manipulate people’s behavior via algorithmic transparency.

This asymmetrical power is deeply rooted in the algorithmic society, where powerful entities manage to “turn individuals into ranked and rated objects” (Citron & Pasquale, 2014, 3). As Shoshana Zuboff worries, the power gap between users and surveillance capitalists is large:

[Surveillance capitalism] represents an unprecedented concentration of knowledge and the power that accrues to such knowledge. They know everything about us, but we know little about them. They predict our futures, but for the sake of others’ gain (Zuboff & Laidler, 2019).

Under this asymmetrical power structure, as discussed in my original paper, there is often room for commercial entities to manipulate consumers’ behavior. But this manipulation is not necessarily a Foucauldian one. Companies can directly manipulate people’s behavior by changing the choice architecture, which does not necessarily rely on norms or penalties (Susser et al., 2019; Wang, 2022a; Yeung, 2017). Moreover, in the context of algorithmic transparency, commercial entities can use strategic transparency about their algorithms “as a psychological tool to soothe” the public and regulators (Weller, 2017, 57). For example, some big tech firms, like Google and Facebook, have built their own “transparent” AI projects to make their complex algorithms more explicable (Tsamados et al., 2022, 219). This so-called algorithmic transparency, however, does not fundamentally mitigate the problem of AI manipulation (consider, e.g., the Cambridge Analytica scandal; Hu, 2020, 1). Such algorithmic transparency can be criticized as a kind of “ethics washing” intended to escape more extensive regulation (Yeung et al., 2019; Wagner, 2018; Bietti, 2021).

3 The Power Account Is Not a Constructionist One

According to Franke’s interpretation, the power account I propose is “a constructionist account of algorithmic transparency,” which fits a general constructionist pattern (Franke, 2022, 2, emphasis in original). On that pattern, algorithmic transparency is seen as an objective and natural thing that “is taken for granted and appears inevitable,” when in fact it is constructed by power and interests (Franke, 2022, 2).

While this account captures some critical features of my understanding of algorithmic transparency, there is a subtle but key difference: the constructionist claim assumes that objectivism and constructivism are generally inconsistent with each other. In other words, algorithmic transparency can be understood either as an objective thing or as a social fact shaped by power relations, but not as both. This assumed inconsistency is not the point of my paper. As highlighted in my original article, the power account of algorithmic transparency should not replace the informational one; rather, the two complement and enrich each other, enabling a comprehensive form of algorithmic transparency that neither can achieve on its own (Wang, 2022b, 6):

Notedly, such a power analysis of algorithmic transparency does not mean that it is superior to the informational account or it can fully replace the latter. Rather, these two accounts are complementary, and both can be useful in illustrating different issues. The upshot is that when analyzing algorithmic transparency, we should take both accounts into consideration. We should not only disclose the information about how algorithms work, but also be alert to the hidden power structures and the way in which the disclosure happens can have profound and far-reaching effects that are often overlooked.

4 The Evaluation of Manipulation Is a Political Issue

Franke argues that people may have good reasons not to care that much about algorithmic transparency as manipulation. After all, we can imagine how people would be cognitively and psychologically overloaded if they had to reflect on every belief and action in their daily lives.

Nevertheless, my argument is that evaluating algorithmic transparency as manipulation is a political matter that extends beyond the individual level. To be sure, individuals are not required to reflect on every detail of their lives. However, people should at least retain the capacity to reflect when they want to. Some individuals may not care whether they pay by credit card or cash, or whether they gain or lose some economic benefit, but they may, for example, worry about how algorithmic systems can manipulate their political views. Different people may care about manipulation to different degrees, but the reflexive capacity that is crucial for a robust democratic society should be preserved in societies where algorithms shape so much of our behavior. This reflexive capacity is not simply a matter of individual choice but a significant value for democracy. Many critical studies have shown how artificial intelligence (AI) not only restrains people’s willingness to engage in deliberation but also undermines critical thinking (Zuboff, 2019; Susser et al., 2019; Wang, 2022a). Therefore, we as a society have a duty to build algorithmic systems that ensure the healthy development of our deliberative capacities.

A further and related point is that there is a moral obligation to improve an inherently immoral and unjust system, even if people who live within it may not care, or some may even feel “happy.” For example, slaves in nineteenth-century America were sometimes portrayed as joyful, singing, and well treated (Kolchin, 1993). Even if some slaves did feel happy, the inherent immorality of the system means that it had to be abolished at the political level. This analysis of the “happy slave” helps clarify the political meaning of manipulation. Manipulation is “morally objectionable because it exploits individuals’ vulnerabilities, and directs their behavior in ways that are likely to be to the benefit of the manipulator” (Wang, 2022b, 18). In this sense, manipulation is inherently immoral. Individuals may not care about the risks of manipulation, but we as a society have a moral obligation to ensure that such systems are managed in a responsible and non-manipulative fashion.

5 Conclusion: Design for the Value of Transparency

My main proposal is to design algorithmic systems by incorporating the value of transparency. That means not only that we need to make algorithms as transparent as possible by disclosing relevant information, but also that we should be more sensitive to issues of power. This consideration of power will be a significant challenge for design, but we have a second-order duty to take it on. According to Ruth Barcan Marcus, “One ought to act in such a way that, if one ought to do X and one ought to do Y, then one can do both X and Y” (Marcus, 1980, 135). This regulative principle suggests that if we ought to make algorithms more transparent and we ought to make them more sensitive to power relations, then we have a moral duty to realize both simultaneously. This second-order duty “entails a collective responsibility to create the circumstances in which we as society can live by our moral obligations and our moral values” (Van den Hoven et al., 2012, 149).