1 Introduction

Social media, e-commerce, gaming apps, and e-health programs: data-driven applications are at the core of everyday life and Big Tech dominates the market. If we want society to benefit from these innovations, it is of utmost importance that tech companies are trustworthy.

However, their reputation is at an all-time low, as one incident after another (Cambridge Analytica, apps leaking data, discriminatory facial recognition applications) exposes their failure to take the interests of users to heart. Both regulators and companies acknowledge that this must change. However, what is required from tech companies to be recognized as trustworthy remains unclear. This poses a threat not only to people’s personal well-being, which is intrinsically intertwined with these technologies, but also to society, as our institutions are increasingly being shaped and governed in public–private partnerships with these tech companies.

This raises the question: What should tech companies do to deserve our trust? What would make them trustworthy?

Currently, it is unclear what, morally, to expect from these companies. This lack of clarity is risky, as it opens the door to tech companies claiming to take the interests of end-users to heart (talking the talk) while dodging responsibility (failing to walk the walk). The philosophical debate on trustworthiness provides a rich resource for thinking about these questions but predominantly focuses on second-person (I–You) relations. Applying trustworthiness to tech companies therefore raises specific challenges.

Therefore, this article aims to explore a philosophical account of trustworthiness tailored to tech companies to understand what makes them trustworthy. It will do so by bringing together debates on trustworthiness, philosophy of technology, data ethics, and regulatory strategies.

Section 1.1 will focus on the philosophical discussion on trustworthiness and will highlight three pillars of trustworthiness: giving assurances (1), being competent (2), and being committed (3). Subsequently, Sects. 2, 3, and 4 will each address one of these pillars, starting with a short conceptual analysis followed by an interpretation tailored to the context of tech companies. While each pillar is addressed separately, it is important to keep in mind that they are interdependent and that only by combining the three pillars is the trustworthiness account complete. Section 5 contains a table summarizing the trustworthiness account (Table 1) and provides a final reflection.

Table 1 Trustworthiness account for tech companies

1.1 Philosophical discussion on trustworthiness

While trust has received some attention in philosophical research, trustworthiness as a primary concept remains underdeveloped [25, p. 61]. Most of the work on trustworthiness is indirect, addressing the topic in the course of investigating trust (with a few exceptions: [21, 25, 44]). Trust, as well as trustworthiness, is first and foremost seen as an intersubjective phenomenon, taking shape in human relations. As social beings, our lives are intrinsically intertwined with those of others [50]. To live a meaningful life, and to reach the goals we set, we depend on the efforts of others to help us forward: practically, emotionally, and morally. Trust is therefore inherently connected to dependency; when we trust, we have positive expectations of the actions of others. We count on others not to take advantage of our vulnerability, but to actively take our interests to heart, furthering our chances of living a good and flourishing life (for an excellent and extensive analysis of philosophy and trust, see [49]).

When we have these positive expectations, we implicitly acknowledge that the people we depend on have a mind of their own. They can anticipate that we count on them and be responsive (or not) to this dependency. This dependency on fickle others makes trust a risky business [32]. Others do not need to take our interests to heart, nor can we force them to do so (in the theoretical situation of having complete control over the actions of others, trust would become redundant, as complete control eradicates dependency). Trust is, therefore, more about accepting vulnerability than about eliminating it. Simultaneously, trust is one of the most powerful strategies human beings have developed to live their inherently social lives, as it allows them to accept the uncertainty that comes with social interaction and a future that cannot be fully controlled.

Trustworthiness sheds light on the other side of the equation: ideally, we do not want to trust others at random, but only those who actually are worthy of our trust. Jones [25, p. 65] differentiates active dependency (trust) from active responsiveness (trustworthiness); similarly, Potter [44] distinguishes between responsible risk-taking (trust) and responsibility-taking (trustworthiness). Trustworthiness brings into focus the moral, cognitive, and practical skills that a trustee (the actor in whom trust is put) should possess. While there seems to be a consensus that “the trustworthy agent is one that can be counted on” [25, p. 34; 52], how this ‘being counted on’ takes shape remains up for debate. The standard view in the philosophical debate on trustworthiness [21, 25] is that trustworthy agents must at least: give assurances indicating their trustworthiness, be competent in some domain, and commit to putting their competences to work in the service of those who count on them.

In the next section, I will further address each of these three aspects of trustworthiness. However, because this literature takes a distinctly second-person approach, it cannot straightforwardly be applied to tech companies. Companies are socio-technical systems: hybrids of technological and human actors embedded in a specific organizational structure [5, 55]. This mismatch makes it difficult to explicate what, morally, to expect from tech companies and, practically, how policy and regulation should be developed to foster trustworthiness. The remainder of the paper is dedicated to ‘translating’ these insights from the literature, which focuses on second-person relations, to the context of tech companies.

2 Actively giving assurances

If we want to build genuine trust, we need to find a way to identify those actors who are trustworthy and put our trust in them. To tackle this challenge, the focus is often on the trustor and how they can make a good decision in a context of incomplete information. To lower the risk of having to deal with betrayal, incompetence, and prevailing self-interest on the side of the trustee, several trust cues have been identified that can help inform that choice.

For instance, Hardin [19] stresses that to build a genuine trust relation, the trustor has to consider to what extent the trustee is aware of the fact that their own interest encapsulates the interest of the trustor. In other words, the trustor needs to make a rational assessment of the motivations of a trustee to come to a sound decision as to whether to trust that actor. In addition, an important incentive for the trustee to act in a trustworthy manner could be their interest in building and maintaining a long-lasting relationship (also see [1]) or in retaining a good reputation. Although trust is always blind to a certain extent, intermediaries may lower uncertainty (and the trustor’s blindness) by establishing a kind of trust chain, where C, as a trusted third party, mediates the interaction between A and B [10].

Notwithstanding the obvious relevance of considering how trustors should go about placing their trust wisely, this focus on reasonable risk-taking also puts the burden primarily on the trustor to decipher who is worthy of their trust. To ease this burden, Jones argues that what we are looking for in trustees is “rich trustworthiness”. People who are richly trustworthy actively communicate what their competences are, and in which domains of life, and reliably display their willingness to take the fact that someone is counting on them as a compelling reason for acting as expected [25, p. 74].

On the interpersonal level, people give assurances or communicate their trustworthiness by making promises and being explicit about what their competences are and where they end. Even more compelling than simply communicating one’s competences and good will is, prior to the act of trust, doing something similar to what the trustor is looking for in order to show competence. Walking the walk, instead of talking the talk, enables people to better relate to and anticipate how a trustee will act when they eventually put their trust in them.

To be able to actively communicate trustworthiness, one needs the ability to assess one’s own competence [25, p. 76]. Under which circumstances can one live up to one’s promises? Building the ability to assess and monitor one’s own competences in the context of tech companies, this article will argue, requires trustees to nurture certain techno-moral skills (discussed in Sect. 3) and to invest in institutional arrangements that strengthen their competences (discussed in Sect. 4).

2.1 Richly trustworthy tech companies: communicating trustworthiness through design

While it is more or less clear what it means to actively communicate one’s trustworthiness in the context of second-person relations, it remains unclear how to understand this in the context of tech companies. After all, it seems too demanding to expect users to actively look for publicly made promises by company representatives in order to assess a tech company’s trustworthiness. People do not directly interact with these representatives; they interact with the companies’ applications. In other words, it should be the applications that actively communicate their trustworthiness.

However, guided by values such as user-friendliness and efficiency [40], data-driven applications and services are currently designed primarily to evoke trust [38], regardless of whether that trust is justified [30, 39]. A data-driven application can be perfectly designed and deliver on all its promises, yet under the hood, out of sight of users, leak data, manipulate, and discriminate against people. Users trust the application because it does a seemingly excellent job, while what is at stake for them from an ethical perspective (their freedom, privacy, dignity) is violated beyond their direct awareness. Designing for trustworthiness asks for a different approach than designing for trust.

Staying close to current data ethics design strategies, explainability as a key design value in AI applications might be a good starting point for signaling trustworthiness.Footnote 1 Explainability is a specific form of transparency in that it aims to make complex information and processes understandable to people [59]. Explainability serves both an operational and a moral need. The operational need concerns improving the robustness of AI systems. The moral need is to ensure that the impairment of fundamental rights, values, and other societal interests can be addressed [4]. Both are relevant for trustworthiness. The former is important as it focuses on the technical competence of the AI application (what is it technically capable of doing?), whereas the latter is key to showing the application’s responsiveness to the vulnerability and interests of the trustor.

However, the predominant operationalization of these two forms of explainability is currently not tailored to communicating trustworthiness. Technical explainability is inward-looking, focused on the tech experts who work on optimizing and further developing AI applications. And although the moral explanation is generally geared to a wider audience, including end-users, its focus is mainly on clarifying ex-post the rationale behind decisions made by AI applications. The underlying assumption is that if we know how a certain decision has been made, it becomes easier to trust that decision.

To signal trustworthiness, however, explainability on both the technical and the moral level should take an ex-ante approach and be tailored to the context of users, preferably in an interactive, conversational form [35]. This entails pointing out, prior to the decision, what the competences of an AI or data-driven application are: for what goal was it initially built? On what dataset was it tested, and how closely does this dataset relate to the context in which the application is now used? For what purposes should the application not be used? Do the interests of the end-user align with those of the tech company? If not, which value conflicts occur and how are they being addressed? Do the interests and values of end-users take priority over those of the tech company operating the application?
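By way of illustration only, and not as part of the argument itself: the ex-ante questions listed above could be operationalized as a machine-readable competence disclosure that an application surfaces before the user relies on its output, in the spirit of model cards. The following Python sketch is hypothetical; the class name, field names, helper, and example values are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CompetenceDisclosure:
    """Hypothetical ex-ante disclosure shown before a user relies on the system.

    The fields mirror the questions raised above; their names are illustrative.
    """
    intended_goal: str                 # for what goal was the application initially built?
    evaluation_dataset: str            # on what dataset was it tested?
    dataset_context_match: str         # how related is that dataset to the current context of use?
    out_of_scope_uses: List[str] = field(default_factory=list)      # purposes it should not be used for
    known_value_conflicts: List[str] = field(default_factory=list)  # conflicts between user and company interests
    user_interests_take_priority: bool = False

    def summary_for_user(self) -> str:
        """Render a short, user-facing summary of competences and their limits."""
        return "\n".join([
            f"Built for: {self.intended_goal}",
            f"Tested on: {self.evaluation_dataset} ({self.dataset_context_match})",
            "Not suitable for: " + (", ".join(self.out_of_scope_uses) or "none listed"),
            "Known value conflicts: " + (", ".join(self.known_value_conflicts) or "none listed"),
            "User interests take priority: " + ("yes" if self.user_interests_take_priority else "no"),
        ])

# Invented example: a triage-support tool disclosing its competences ex-ante.
disclosure = CompetenceDisclosure(
    intended_goal="supporting nurses in prioritizing incoming calls",
    evaluation_dataset="historical call records from two urban hospitals",
    dataset_context_match="may not transfer to rural or pediatric settings",
    out_of_scope_uses=["final medical diagnosis", "insurance decisions"],
    known_value_conflicts=["call data is also used for product analytics"],
    user_interests_take_priority=False,
)
print(disclosure.summary_for_user())
```

On this sketch, the disclosure would be rendered in the interface prior to the decision, rather than offered as an ex-post explanation of a decision already made.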

When we combine explainability as a manner of signaling trustworthiness with the condition that this signaling should take place in the direct interaction between end-user and application, it becomes clear that this calls for a very different human–computer interaction (HCI) and user experience (UX) design mindset. Signaling trustworthiness might require cutting back on traditional design values such as functionality, user-friendliness, and efficiency in order to foster other values, such as integrity, benevolence, and transparency about the limits of both the application and the tech company’s competences [31]. It might even require purposely introducing friction into the human–technology interaction to ensure that signals about the (limits of the) competences of the AI application do not go unnoticed. For example, a smart doll that learns from its interaction with children could improve its performance to the point where it switches from generic responses to much more tailored responses, taking into account the specific context of the child it interacts with. The possibility of abuse and manipulation has rightly raised concerns around this kind of smart toy [13]. One design intervention to improve trustworthiness could be to add friction in the form of a warning signal that goes off when the doll reaches a new level of ‘smartness’, clearly informing parents and the child of its new competences [28, p. 155]. From a pure design perspective, this could be a counterintuitive intervention, as it interrupts the smooth interaction between the doll and the user. From a trustworthiness perspective, however, proactively communicating what the smart doll’s new competences are, and being explicit about where these competences end, is crucial for placing genuine trust in the smart doll.
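As a purely illustrative sketch of the friction intervention just described (the class, method names, and messages are all hypothetical), a capability gate could keep the doll’s newly unlocked personalized behavior suspended until a parent explicitly acknowledges the updated competences:

```python
class SmartDollResponder:
    """Minimal sketch of a 'friction' gate: new capability levels stay locked
    behind an explicit acknowledgement, deliberately interrupting the smooth interaction."""

    def __init__(self):
        self.capability_level = 1    # 1 = generic responses, 2 = personalized responses
        self.acknowledged_level = 1  # highest level a guardian has acknowledged

    def upgrade_capability(self, new_level: int, disclosure: str) -> str:
        """Called when the doll 'learns' enough to respond in a more tailored way."""
        self.capability_level = new_level
        # Friction: instead of silently switching behavior, emit a clear warning.
        return (f"NOTICE: this doll can now {disclosure}. "
                f"Personalized responses stay off until you confirm.")

    def acknowledge(self, level: int) -> None:
        """Guardian explicitly accepts the new competences."""
        self.acknowledged_level = max(self.acknowledged_level, level)

    def respond(self, child_utterance: str) -> str:
        if self.capability_level > self.acknowledged_level:
            # New competences not yet acknowledged: fall back to generic behavior.
            return "Generic response: That sounds fun! Tell me more."
        if self.capability_level >= 2:
            return f"Personalized response tailored to: '{child_utterance}'"
        return "Generic response: That sounds fun! Tell me more."

doll = SmartDollResponder()
print(doll.respond("I had a bad day at school"))
print(doll.upgrade_capability(2, "remember and refer back to what your child tells it"))
print(doll.respond("I had a bad day at school"))   # still generic: friction in effect
doll.acknowledge(2)
print(doll.respond("I had a bad day at school"))   # personalized only after explicit confirmation
```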

3 Being competent

In second-person contexts, competences are often tied to specific roles in a relation or domain, e.g., the doctor is competent in the medical domain, the lawyer in the legal domain, and not vice versa. Tech companies describe their purpose as principally providing society with data-driven tools and services. Their competence lies in the technical domain: tech-savvy employees [7, 15] engaging in technical problem-solving to make the ‘best’ products [34]. On such an account, for tech companies to be trustworthy, they would need to put their technical competence to use for those who count on them.

Technical competence is undoubtedly important; nothing is as frustrating as a malfunctioning device or service. However, given the central role these applications and services play in society and the increasing number of incidents coming to light (from privacy intrusions to manipulation and discriminatory data-driven applications), the question arises whether this predominant focus on technical skills is still sufficient to develop data-driven applications that warrant societal and consumer trust. Shouldn’t these tech companies take on a bigger responsibility for their societal impact?

Recently, discussions in the USA have challenged the image of tech companies as neutral platforms, claiming that these companies should instead be acknowledged as information fiduciaries. Just as lawyers and doctors are fiduciaries, so the argument goes, data-driven companies should also be bestowed with fiduciary duties of trustworthiness and loyalty. Our relationship with data-processing companies like Facebook, Google, and Uber is similarly characterized by vulnerability and dependency, and because these companies present themselves as experts and hold themselves out as trustworthy, they should be seen as information fiduciaries [2, 3, 58, pp. 85–86].

It falls outside the scope of this article to critically assess the proposal to acknowledge tech companies as information fiduciaries, but it is worth noting that it has generated some criticism [29]. For this article, it is sufficient to identify the underlying tenet of the discussion: being a responsible tech company requires more than being merely technically competent. In the EU, too, this viewpoint has been wholeheartedly embraced, culminating in the publication of the Trustworthy AI Guidelines of the High-Level Expert Group on Artificial Intelligence (2019). These guidelines stipulate that AI applications should not only be technically robust and compliant with applicable legal frameworks, but should also be ethical in order to be trustworthy [46].

All in all, the dominant view is that tech companies should not only be technically competent but also need to develop competence regarding the ethical and societal impact of their products, taking responsibility for their innovations. In order to become more knowledgeable [17, 52] about how people’s well-being depends on their applications, they will need to foster techno-moral competences in their employees [53, 54].

3.1 Techno-moral competences and practical wisdom

Techno-moral competences or virtues should be understood as character traits that one has to develop and put into practice to become a good person and act ethically. A virtuous person is someone who consistently displays praiseworthy actions. For instance, a person who is fair aims at being fair in all situations. This, however, does not mean that she is a sort of robot, executing the same ‘fair thing’ in every situation. On the contrary, virtuous persons are able to take the particularities of a certain situation into account and tailor their actions to the demands of the specific context in which they act. In this view, techno-moral competences work in tandem with practical wisdom (phronesis): the ability to establish what is morally required, even in a new or unusual situation where general rules cannot easily be applied.

Instead of fleshing out what I foresee as key techno-moral competences for tech employees, such as moral imagination [8], solidarity, humility [54], and speaking truth to power [56], I will now focus on the necessary conditions and actions for developing practical wisdom in tech employees.

Quintessential for achieving practical wisdom is being able to develop a rich and nuanced perception of the world, or a “just mode of looking and a good quality of consciousness”, as Murdoch [37] puts it. Based on this rich perception of the world, it is subsequently necessary to acquire the ability to develop appropriate actions geared to the situation at hand.

As to the first, far too often, when we assess a certain situation, we are preoccupied with our own position and needs. Even when we genuinely have the intention to help someone, there is an inclination to frame the problem in such a way that we are the ones best positioned to solve it. Clouded by our own disposition, we might end up not helping others at all, but merely boosting our “fat, relentless ego” [37, p. 52].

A particular challenge for tech employees, specifically those developing the data-driven application (ML experts, computer and data scientists), is that they perceive the world predominantly through datasets and algorithms. People of flesh and blood, patients, families, communities, and neighborhoods are all captured in proxies, labels, and profiles. As is the case with all technological mediations, some aspects of reality are highlighted by these datasets and algorithms, while other parts of reality move to the background [24]. For instance, more data may be available on communities from a lower socio-economic background than on groups with a high income. As a result, when a fraud detection system is put in place, it targets the former rather than the latter. The former becomes visible; the latter drops out of sight.
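To make this mechanism concrete, a toy simulation (the group labels, record counts, and rate are invented; this is not an empirical claim) shows how unequal data coverage alone can concentrate fraud flags on the group that is more densely recorded, even when the underlying rate of irregularities is identical in both groups:

```python
import random

random.seed(0)

TRUE_IRREGULARITY_RATE = 0.05   # identical for both groups, by construction

# Assumed, invented coverage: far more records exist on the low-income group.
records_per_group = {"low_income_group": 10_000, "high_income_group": 1_000}

flags = {}
for group, n_records in records_per_group.items():
    # The detector can only inspect cases that appear in the data it holds.
    irregular_cases = sum(1 for _ in range(n_records)
                          if random.random() < TRUE_IRREGULARITY_RATE)
    flags[group] = irregular_cases

print(flags)
# Typical output (seed-dependent): roughly 500 flags for the low-income group versus
# roughly 50 for the high-income group, despite identical underlying rates. The densely
# recorded group becomes 'visible' to the system; the sparsely recorded one drops out of sight.
```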

The particular difficulty tech professionals therefore face is remaining conscious of the difference between the dataset that represents reality and the lived experience of those who make up that reality. For tech professionals to take responsibility for the ethical and societal impact of their products, they will have to train themselves to look beyond their datasets and algorithms. What would such training look like?

When apprehending something new, timing is everything. While learning a new skill, such as computer science, machine learning, or data science, soon-to-be tech professionals are in an excellent position to also nurture a just perception of the world. Inherent in being a student is experiencing uncertainty. Realizing that one does not know everything keeps egos from getting in the way of putting things in the right perspective.

Moreover, a genuine learning attitude is an open attitude, making one more responsive to the testimonies of others. In-company training (or other academic or professional training programs, for that matter) provides a fruitful setting for nurturing moral attention: “the ability to recognize the ethical relevance of a situation by imagining the way one’s own actions will shape other people’s actions and thoughts” [45, p. 1827]. It would, therefore, make sense to make practicing a ‘just mode of looking’ part of technical training. This is in line with Borenstein and Howard [6, p. 63], who emphasize that a key piece of the puzzle in AI ethics education is “establishing an authentic professional mindset” that is “related to cultivating moral sensitivity” in order to fully grasp how technical decision-making is not neutral but inherently “intertwined with ethical considerations”.

To further shape the content of such training, Green’s analysis of data science as political action provides some very fruitful pointers. He advocates that data scientists (though his analysis is just as relevant for other tech experts) need to develop a professional practice that is grounded in a political vision of social justice. He distinguishes four stages these professionals need to go through: from becoming aware of (1) and reflecting on (2) real-life moral problems, to directing their applications towards furthering social justice goals (3) and developing practices that facilitate the inclusion of a plurality of values (4) [16]. By including the socio-technical aspects of AI and data science practices (e.g., power imbalance, social injustice, diversity), such an approach would go further than the current focus in data and AI ethics teaching, which is predominantly oriented towards ethical challenges related to technical aspects (e.g., model misuse, data accuracy, and validity) (also see [47]). In addition, to broaden the perspective of tech experts, it might also be helpful to invite professionals with different expertise (e.g., a privacy lawyer, an artist, or a psychologist) into the classroom to share their perspective on a certain ethical challenge [51]. It might well be that what at first sight looked like a problem to be solved by a data-driven solution in fact needs another approach.

Cultivating a just mode of looking is not an individual endeavor but should be approached as a collective one. Even with the best intentions and the most open mindset, it is not always possible to fully understand and anticipate what the societal impact of one’s work will be. It is, therefore, of utmost importance to build teams that bring together different perspectives and real-life experiences. The lack of diversity in tech teams and a dominant focus on Western values have proven a major obstacle to developing AI applications that do not discriminate against or exclude certain groups [22]. It has been stressed that the underrepresentation of certain groups in tech teams results in not addressing the challenges these groups actually face and, moreover, even leads to their further marginalization [14].

The lack of diversity in tech teams may also hinder the second aspect of practical wisdom: the ability to develop the proper actions given a certain situation. The traditional strategy to deal with a lack of diversity in the design process is ‘user participation’. From focus groups to user personas, all these strategies aim at including diversity and representation of communities in the design process. More ambitious and radical, however, is the approach of design justice practitioners, who “flip the ‘problem’ of how to ensure community participation in a design process on its head to ask instead how design can best be used as a tool to amplify, support, and extend existing community-based processes” [11]. On this account, tech experts bring their technical competences to the community and put their expertise to use for community-defined projects. In an ideal situation, this leads to community-led practices instead of designer-led practices. In such instances, being competent and having practical wisdom interlock as two key facets of what it means to be trustworthy.

4 Being committed

In the previous section, we explicitly connected competences to the employees of tech companies and not to the tech companies as such. This makes sense, given that what a tech company is capable of ultimately depends on the people who work for it. However, it also makes the responsibility to develop and nurture techno-moral competences a highly individual matter, which it should not be (Rieder, Simon, and Wong [46]; on a relational interpretation of virtue ethics, see [9]). Research indicates that AI practitioners do see themselves as partially ethically responsible for the societal impact of their applications. However, they report that their agency is very much constrained by the powerful companies they work for as well as by governmental forces [42]. In the end, the tech company is the actor that sets the goals, decides on the strategy, and picks the means to get there; the regulatory framework, as the outcome of democratic processes, sets the action space in which the tech company operates. All in all, this makes it crucial that a tech company is motivated to wholeheartedly put its techno-moral competences to use for those who count on it. This raises the question: what does it mean for a tech company to be truly committed?

In the philosophical debate, the standard view typically distinguishes between external and internal motivational structures for trustworthiness. People who are driven by the “desire for the good opinion of others” ([43, p. 2], in line with Hardin 2002) or who want to avoid punishment (e.g., fines) are externally motivated. When someone’s commitment is based solely on ‘being counted on’ [25, 33], one is internally motivated.

Jones [25] rightfully argues that external motivation is shaky ground on which to base trustworthiness. After all, if these external incentives are absent, the motivation to act in a trustworthy manner fades too. Rather, the mere fact that someone is relying on us should be a compelling reason to be trustworthy.

For companies, however, this might not hold. The lack of intersubjectivity makes it hard to base the motivation to be trustworthy merely on the fact that others are counting on you. Generally, there is simply too great a distance between the customer and the company to make that kind of responsiveness stick. This is particularly true for tech companies operating on a global scale. While their data-driven service might be personalized to cater to the needs and wishes of their customers, this does not imply that what is truly at stake for these end-users is on the company’s radar, let alone that it motivates the company’s actions [26]. After all, companies are first and foremost responsible to their shareholders. As long as the interests of customers and shareholders align, being trustworthy is easy. It is when a conflict of interest arises that being trustworthy becomes a true challenge [46].

However, external motivational structures, such as complying with legal requirements to avoid penalties or striving for a good reputation to secure a solid customer base, align well with business considerations (making a profit). While such external motivations might not prove sufficient when certain constraints (e.g., legal enforcement) are lacking or are challenged by rapid technological developments [41], they nevertheless contribute greatly to creating an environment in which tech companies are properly motivated to act in a trustworthy manner. Reducing the field of competing considerations can tilt those that are almost trustworthy towards being genuinely trustworthy [25, p. 73].

In the tech sector, this external motivational structure is predominantly shaped by legal frameworks. The General Data Protection Regulation (GDPR), the Digital Markets Act, the Digital Services Act, and the proposed AI Act (to name just a few EU regulations), each with its own specific focus, develop an action space with checks and balances that promotes justice and legal certainty [23, chapter 11]. It is within this action space, adhering to the rules and boundaries set by these frameworks, that tech companies can develop their business [27]. By decreasing conflicts of interest and pushing tech companies to be more accountable for their activities and their societal impact, these frameworks contribute heavily to the motivation of tech companies to be trustworthy.

Does that mean there is no room for some sort of internal motivation in the context of tech companies? Is it sufficient for a tech company to comply with applicable law in order to be properly motivated? While legal compliance might bring us a long way, it does not necessarily cover the whole of what it means to be committed.

First, legal compliance, with its focus on, for instance, maintaining records of data processing activities and conducting impact assessments (GDPR), may evoke a checkbox mentality in tech companies: as long as the paperwork is in order, they are off the hook. Second, it might also lead to a “mechanical proceduralism” [36] whereby tech companies ask for consent to process personal data without actually installing effective data protection measures. Finally, given the highly volatile character of technological innovation, legal requirements might not always be sufficient to ensure a technological application worthy of trust.

For a tech company, being committed to acting in a trustworthy manner starts, but does not end, with legal compliance. Being truly committed demands that tech companies not only do what is required by law, but also go about it in a way that truly empowers those who count on them. To give an example: publishing the terms and conditions governing the contractual relationship between a provider of a service and their users is obligatory. One can do this in a legalistic manner, ticking all the required boxes, without considering whether people are really able to grasp the content. One can also go the extra mile and invest in accessible and tailored terms and conditions that are actually readable and integrated in an attractive way into the service or application’s interface, thereby signaling trustworthiness. Another example: a privacy policy is obligatory. While most companies take privacy principles into account, leading to a complete and sound privacy policy (from a legal perspective), this does not mean that those policies are also fair (from an ethics perspective). Users might feel that some aspects (e.g., the way data is stored or shared) are not in line with their interests [12, p. 457]. Being truly trustworthy requires tech companies to address those interests and not hide behind the law.

That sometimes more needs to be done than focusing on compliance is probably best captured by the commercial data ethics initiatives that have taken flight over the last couple of years [20]. Data ethics has been regarded as a promising avenue to becoming more trustworthy. From ethical design principles to codes of conduct, tech companies develop all kinds of instruments to help them be(come) responsive actors. From a trustworthiness perspective, these self-imposed regulatory strategies can be regarded as an internal motivation of tech companies to show commitment to those who count on them. Nevertheless, some caution is in order.

As all of these data ethics initiatives are voluntary, they need to be firmly embedded in the organizational structure and business models of the company to avoid the risk of ethics washing [57]. It is key that they are accompanied by organizational enforcement mechanisms in the form of structural accountability, such as reporting requirements and auditing of that reporting [27]. Research indicates that more work needs to be done here. For instance, of more than 160 AI ethics guidelines that were collected, only ten had proper enforcement mechanisms in place [18]. And while there are certainly tech companies that engage with data ethics to build their social responsibility or use it as a tool for change, it is not far-fetched to assume that many first and foremost see it as a marketing tool or an instrument to create a competitive advantage [20, 48]. The latter turns data ethics into an external motivation rather than an internal one, running the risk that when the perceived advantages disappear, the commitment erodes too. A fruitful avenue to make data ethics initiatives more robust is to develop mixed models of enforcement. For instance, in 2022 the Dutch House of Representatives endorsed the mandatory use of the Human Rights and Algorithms Impact Assessment, developed by a Dutch academic consortium.Footnote 2 Also, ethics certification programs, industry-specific or otherwise, might be an interesting instrument to ensure that data ethics initiatives cannot easily be put aside.

All in all, it can be concluded that legal frameworks are crucial to ensure that tech companies are properly motivated to act in a trustworthy manner, but they are not always sufficient. There is a role here for soft law strategies such as data ethics initiatives, as long as these operate within the action space provided by law and come with clear enforcement mechanisms to ensure that they are not merely paying lip service to the idea of trustworthiness.

5 Conclusion

This article unpacks what it could mean for tech companies to be trustworthy. In the philosophical discussion, the standard view is that trustworthy agents must at least: give assurances indicating their trustworthiness (1), be competent in some domain (2), and commit to putting their competences to work in the service of others who count on them (3). Applying this to the context of tech companies, it was argued that communicating trustworthiness should take place in the interaction with the device or service. This requires a different approach to interface design, as signaling trustworthiness involves being explicit not only about the affordances of a product or service but also about the limits of one’s competences (pillar 1). Second, a shift was advocated from a predominant focus on technical competences to techno-moral competences and practical wisdom in tech employees (pillar 2). Third, the necessity of legal frameworks to ensure that tech companies are properly motivated, and the role of soft law strategies such as data ethics in building internal company commitments, was explored (pillar 3).

For clarity, these three pillars of trustworthiness (assurances, competence, and commitment) were addressed separately. However, it cannot be emphasized enough that these aspects are intrinsically connected and interdependent. In order to design products and services that signal trustworthiness (pillar 1), one needs professionals who possess and nurture techno-moral skills and practical wisdom (pillar 2). In order to create an environment in which such practices can thrive (pillar 2), companies not only need to comply with applicable laws, but should also be committed to investing in self-regulatory strategies, such as current data ethics initiatives (pillar 3).

As these three pillars of trustworthiness are deeply connected, this trustworthiness account is not a pick-and-choose model. Tech companies cannot be genuinely trustworthy by merely giving assurances (pillar 1) without being properly motivated (pillar 3), or by hiring and investing in professionals who develop techno-moral skills (pillar 2) without being truly committed to actually changing the company’s course (pillar 3). Moreover, fully adhering to this trustworthiness account requires continuous attention. An organizational structure and culture are always in flux. It is therefore necessary to constantly monitor whether the three pillars of trustworthiness are still the supporting foundation of the tech company. All in all, it can be concluded that the presented trustworthiness account is rather demanding. Although it has not (yet) been empirically tested, it is not far-fetched to assume that, currently, only a few tech companies walk the walk and fully meet the standards set by this account.