Philosophers of IDE have taken issue with Western-centric assumptions in technology ethics in two main ways, one subtractive and one additive. While these have been articulated in different ways, for clarity, they can be described as follows: (1) the subtractive approach proposes that familiarity with non-Western values provides a mandate to contest cherished values in the Western canon, whereas (2) the additive approach contends that IDE provides the opportunity to expand the pantheon of Western values, allowing us to design technologies for important new values that may previously have been neglected (Vallor, 2017; Verbeek, 2011). Although both subtractive and additive approaches are useful in thinking about DWB, both have problems, corresponding to two distinct challenges.
Charles Ess (2020) provides the following articulation of these challenges: IDE must “defend some set of (quasi-) universal ethical norms, principles, frameworks, etc.”, while also sustaining “local, culturally variable identities, traditions, practices, norms, and so on” (2020: 551). Ignoring cultural differences leaves us with a homogenised approach to these technologies, whereas giving too much weight to cultural differences can result in Balkanisation. IDE must, therefore, navigate between both extremes. It must recognise the importance of ethical norms that are found across cultural contexts (fairness, for example), while also accounting for how these values are often expressed in culturally idiosyncratic ways. In the case of fairness, the idea that individuals should be treated according to a schematised set of rights and responsibilities (with the right to appeal and redress when these rights and responsibilities are not respected) is widespread, but how we should understand rights and responsibilities differs clearly across cultures. For Ess, the twin dangers of homogenisation and Balkanisation even apply at the level of the topics that scholars of IDE have taken up (see Footnote 3). Versions of these problems can be found in IDE approaches to artificial intelligence (Cave & Dihal, 2020), robotics, the Internet of Things, and self-driving cars. Providing examples of these approaches and their associated problems points towards potential solutions.
Identifying Core Values: the Danger of Homogenisation
On the one hand, an IDE approach to DWB could include a core set of values that are important irrespective of cultural differences. Prima facie, Google’s four “Digital Well-Being Values” seem to illustrate this. According to Sundar Pichai’s I/O keynote in 2018, Google’s DWB values are (1) “Providing Awareness”, (2) “Enabling Control”, (3) “Delivering Benefits”, and (4) “Ensuring User Trust” (I/O 2018). Thinking of DWB in such terms aims to avoid the charge of Balkanisation. However, in avoiding Balkanisation, this approach faces two problems related to homogenisation: conceptual vagueness and empirical adequacy. Identifying core sets of values can come at the cost of vagueness. In other words, Google’s DWB values are not demanding enough because they are overly general. This becomes a problem when values are translated into specific tools, such as Google’s suite of DWB tools. If one starts with a vague conception of what values one is trying to design for, then many different design solutions can be offered to solve the very same problem.
Google launched a suite of fully functional DWB tools at the same time Pichai announced Google’s DWB values, attempting to embed its four “digital well-being values” into specific DWB tools. These included AppTimer (limits on app use), Shh (silencing of a mobile device if orientated downwards), WindDown (greyscale in the hours before sleep), and Dashboard (an overview of app use across devices). However, the design of many of these tools assumes that users engage with their digital devices in culturally similar ways, in particular that they hold similar attitudes towards sharing. Ishtiaque et al. (2017) found that “people in marginalised communities frequently share a single device among multiple individuals” (2017: 1). As the authors of this study note, while “people share devices out of economic need”, they also do so because “sharing is a social and cultural practice” (2017: 1). In this case, although Google claims that its DWB tools aim to promote its DWB values, in embedding its values in its tools, it makes assumptions about how its products will be used that have clear cultural overtones. In addition to cultural factors influencing the implementation of values, the centrality of certain values needs to be questioned: the problem of the empirical adequacy of identifying core values.
It would be necessary to ensure that supposedly core values (1) genuinely capture universal human concerns and (2) do not overlook other concerns that are as important as, if not more important than, those articulated in core sets. In the case of Google’s DWB values, one might find it difficult to object to the claim that DWB tools should “provide awareness” or “enable control”. These seem to be valuable attributes regardless of the culture of which they are part. However, assuming that such values are universal simply because they seem uncontroversial would be a grave mistake, given the outsized influence of Western companies and cultures on the development of technologies. Recent empirical work on moral and cultural psychology shows why.
Despite comprising the majority of social scientific samples, as mentioned in the introduction, those from WEIRD cultures are outliers on various socio-psychological constructs, including self-concepts, perceptions, and ethical judgments (Henrich, 2020; Henrich et al., 2010). Ethical judgments by individuals from WEIRD cultures give overwhelming importance to issues of fairness and freedom, whereas individuals from non-WEIRD cultures typically have broader moral concerns, including loyalty and adherence to authority (Graham et al., 2011; Haidt, 2012; Shweder et al., 1997). Historically speaking, authority and loyalty have been global ethical concerns. The burgeoning field of experimental philosophy has yielded similarly counterintuitive (from a WEIRD perspective) results, suggesting that the intuitions of philosophers against which various theories are tested might not be representative. In fact, these studies routinely show that ethical intuitions are deeply affected by culture (Machery, 2017).
Although professed cultural values should not be allowed to ride roughshod over human rights in the digital sphere—a point to which we return below—accounts of digital well-being must not overlook cultural differences concerning ethical judgments and self-concepts, which affect conceptions and experiences of well-being.
Respecting Cultural Differences: the Danger of Balkanisation
An IDE approach to DWB must also show how providers of SMTs can respond positively to cultural differences, avoiding the problems associated with homogenisation (see Footnote 4). Prima facie, Meta (formerly Facebook) seems to take this approach: discussing the development of technologies to proactively remove harmful content from Facebook, Mark Zuckerberg refers to the importance of cultural differences: “over time, these controls may also enable us to have more flexible standards in categories like nudity, where cultural norms are very different around the world and personal preferences vary” (Zuckerberg, 2021). Such technologies would respect cultural differences, allowing people from different cultures and countries to choose the levels of nudity they find appropriate based on personal preferences, thereby avoiding homogenisation. However, in avoiding homogenisation, this approach faces two problems related to Balkanisation: implicit homogenisation and nefarious intentionality.
The latter occurs when cultural differences are used to justify various kinds of moral misdemeanours by disguising morally nefarious practices as cultural ones. As Pak-Hang Wong notes, Balkanisation is a serious danger, as it can result in IDE disguising deliberate wrongdoing:
Cultural differences may enable malignant actors to disregard the demand of important ethical values or even to justify the violation of them through deference to the local culture, either by affirming the local culture lacks specific ethical values, e.g., privacy or by asserting the local culture upholds conflicting values, e.g., state intervention is good (Wong, 2020: 705).
Wong provides two examples of this: first, the Indian government’s 2010 edict that “India is not a particularly private nation. Personal information is often shared freely and without thinking twice. Public life is organised without much thought to safeguarding personal data” (Wong, 2020: 706; citing Marda & Acharya, 2014). Robin Li (李彦宏), the co-founder of Chinese tech giant Baidu, adopted a similar tactic when he declared that concerns with personal privacy were alien to Chinese consumers, although Chinese netizens swiftly and severely rebuked him (Sun, 2018). Second, Wong notes, regarding state intervention in China, “there are different cultural expectations of the government in China than in other countries. China’s governance tradition of promoting good moral behaviour goes back thousands of years” (Wong, 2020: 706; citing Bing Song, 2018). Similar cultural expectations have been invoked to justify bid rigging in Japan, where private firms collude with local governments to ensure higher returns on construction projects. Collusion between local firms and governments supposedly reflects Japanese cultural values related to cooperation and community, despite being illegal, unfair, and opposed by citizens (Luegenbiehl & Clancy, 2017). In addition to justifying nefarious intentions, the cultural differences approach can easily fall into implicit homogenisation.
Even attempts to account for and be responsive to different cultural orientations can prove challenging. There is always the danger that such attempts rest on cultural assumptions about the lives of users that, in turn, affect the DWB of those users (see Dennis, 2021 for a comprehensive overview). This is especially true of approaches taken by NGOs, such as Tristan Harris’s Center for Humane Technology, and by the largest technology companies, such as Facebook, which are based in the USA and founded or run by white, cisgender males from overwhelmingly middle- and upper-class backgrounds. As a result, the customs, values, and assumptions of a small percentage of the (WEIRD) global population have come to dominate considerations within digital ethics (see Footnote 5). Facebook’s attempts to address cultural differences, mentioned above, are a particularly striking example of this tendency.
The importance that Mark Zuckerberg and Facebook place on allowing and empowering people to make decisions for themselves makes sense from a Western cultural perspective, in which respecting and promoting personal autonomy and control is a prime ethical obligation. Although an emphasis on autonomy and individuality is characteristic of Western ethical orientations, these orientations are not only historically recent but also somewhat unique to those traditions (Haidt, 2012; Henrich, 2020). More fundamentally, any IDE approach to DWB based primarily on values is problematic, both normatively and empirically.
Values are typically conceived as long-standing beliefs or ideas about which kinds of states are worth pursuing, and which guide behaviours (Kulich & Zhang, 2012). Value differences are often thought to underlie cultural differences (and explain them), where behavioural variations are based on the kinds of states individuals and groups prefer. Normatively, the fact that some states are preferred says nothing about which kinds of states should be preferred, or about how values should be prioritised (Rachels, 2011). Empirically, values differentially predict behaviours within different cultures (they predict behaviours better in WEIRD than in non-WEIRD cultures) and poorly predict membership in cultural groups (there are significant differences between the values of individuals from mainland China, Hong Kong, Taiwan, and Singapore) (Knafo et al., 2009; Smith, 2010). Furthermore, technology shapes the social and cultural environments in which it is developed, just as these environments shape the development of technology. Technology is thus part of a circle of influence: it influences culture, while its design is itself subject to strong cultural pressures. As a result of this cultural fluctuation, preferences associated with technology are likely to be transient rather than long-standing, and therefore fall outside the purview of what could be characterised as value frameworks. Rather than approaches that consist of either identifying core values or respecting cultural differences alone, an adequate IDE account of DWB would have to incorporate the benefits and address the drawbacks of both approaches.
An Empirical, Normative, and Case-study Approach
Like general approaches to technology ethics, identifying core values and respecting cultural differences are “top-down”, “applied” approaches to ethics, beginning with principles associated with value frameworks, which are then brought to bear on situations and dilemmas concerning technology (Davis, 1995; Hess & Fore, 2018; Luegenbiehl & Clancy, 2017; Martin & Schinzinger, 2009). In the Western philosophical tradition, these principles have included deontology, consequentialism, and virtue ethics, as well as professional codes and guidelines. The problem with this top-down, applied approach, and therefore with the approaches of core values and cultural differences, is that it is psychologically irrealist.
Neither ethical judgments nor behaviours are exclusively or primarily the result of ethical reasoning or the application of principles (Greene, 2014; Haidt, 2012; Roeser, 2018). These approaches misunderstand how and why people think about issues of right and wrong and how they behave, producing the problems outlined above, from implicit homogenisation and nefarious intentionality to empirical adequacy and conceptual vagueness, and therefore fail to adequately consider the interplay between moral psychological facts and normative ethical concerns. To address these problems, any IDE approach to DWB would have to be empirically informed and attentive to the interplay between empirical facts and normative concerns. Such an approach to global technology ethics has been proposed by Heinz Luegenbiehl and Rockwell Clancy.
These scholars have suggested a “bottom-up”, iterative method for identifying, resolving, and avoiding ethical problems related to global technologies: a case-study procedure (Clancy, 2021; Luegenbiehl & Clancy, 2017). This method starts with case studies that capture situations or dilemmas involving technology and, on this basis, particularises relevant frameworks or principles to the technologies and situations under consideration, thereby refining general ethical principles in the process; consider, for example, what “privacy” would mean on a personal computer in the USA versus a smartphone in China. This method can address the problems associated with both identifying values and respecting differences.
First, since it begins with case studies rather than principles, it starts with a common basis for understanding or consideration between peoples from different cultures and countries, addressing problems of cultural differences. For example, although disagreements might well exist concerning what went wrong in the case of the Challenger space shuttle disaster or who is to blame, no one would disagree that something wrong occurred. Similarly, IDE considerations of DWB might begin with cases touching on digital technologies and understandings of well-being common across cultures, based on empirical work in cultural and moral psychology. Second, this method begins with intuitions rather than reasons for ethical judgments. Reasons are central to this process but are proposed later, in dialogue with others, to justify ethical judgments, realistically reflecting the ways that people think and talk about ethics (Greene, 2014; Haidt, 2001, 2012). Third, the principles used in this procedure reflect not only the judgments of professionals working with technologies but also intuitions identified in empirically informed, pluralist theories of ethical reasoning, such as moral foundations theory and morality as cooperation (Curry et al., 2019; Graham et al., 2011). In this way, the method addresses problems associated with both homogenisation and Balkanisation: because it is empirically informed and culturally representative, it is capable of accurately identifying points of cultural convergence and divergence. Finally, this approach is pragmatic in nature. Rather than simply identifying ethical issues that arise with the development and implementation of technology, it aims, using the same ethically pluralistic framework, at making recommendations about how technology should be developed and implemented to ensure this is done ethically.
In sum, this process would be both subtractive and additive, capable of identifying cultural similarities and respecting relevant differences, since it (1) begins with a broad basis on which various stakeholders are able to agree, namely, common situations, dilemmas, and shared intuitions and principles, and (2) moves on to the refinement and differentiation of shared principles and intuitions relative to specific situations and dilemmas. To demonstrate how empirical findings would be relevant to normative concerns, and to further motivate the importance of an IDE account of DWB, research on the effects of culture on happiness is instructive, revealing cultural similarities and differences concerning the nature, experience, and causes of happiness.