Recent advances in the capabilities of digital information technologies, particularly in artificial intelligence (AI), have invigorated the debate on the ethical issues surrounding their use (Tsamados et al. 2020). However, this debate has often been dominated by ‘Western’ ethical perspectives, values and interests, to the exclusion of broader ethical and socio-cultural perspectives. This imbalance carries the risk that digital technologies will produce ethical harms and lack social acceptance when the ethical norms and values designed into them collide with those of the communities in which they are delivered and deployed. These risks have become more pressing as the development and deployment of digital technologies become increasingly global.

Intercultural Digital Ethics (IDE) is a sub-field of information ethics and digital ethics research that seeks to redress this imbalance by examining the ethical issues arising from digital technologies from different cultural and social perspectives. It includes the foundational works of Hongladarom (1999), Hongladarom and Ess (2007), Capurro (2005, 2008) and Ess (2006), amongst other noted scholars. Intercultural issues raised by ICTs have also been addressed at several conferences on computer and information ethics since the mid-1990s, including the biennial ‘Cultural Attitudes towards Technology and Communication’ conference (Capurro 2008) and the ‘Information Ethics: Agents, Artefacts and New Cultural Perspectives’ conference convened at the University of Oxford in 2005 (Floridi and Savulescu 2006).

Building on these foundations, this special issue of Philosophy and Technology takes a further step towards broadening the approach of digital ethics by bringing together a range of cultural, social and structural perspectives on the ethical issues relating to digital information technology. Moreover, it refreshes and reignites the field of IDE for the age of AI and ubiquitous computing. It grew out of the Symposium on Intercultural Digital Ethics, organized by the Digital Ethics Lab at the Oxford Internet Institute and held at Exeter College, University of Oxford, in December 2019.

The symposium and this special issue sought contributions on a range of questions relevant to the theme of IDE (Aggarwal and Floridi 2019). Amongst these: why is a pluralistic ethical approach important in understanding the impact of digital technologies? How do digital technologies impact different cultural and social groups differently? How do these communities view issues in digital ethics such as privacy, consent, security and identity differently? Can we design governance frameworks for digital technologies that are tailored to the ethical values of different cultures, whilst also harmonizing these frameworks at the international level? Do digital information technologies represent a new form of colonialism and exploitation?

The papers contained within this special issue are organized into three sets. The first set of papers addresses the challenge of developing a global, pluralistic IDE that reflects heterogeneous cultural values whilst supporting a global framework for the ethical governance of digital technologies. In his commentary, ‘Cultural Differences as Excuses? Human Rights and Cultural Values in Global Ethics and Governance of AI’, Pak Hang Wong argues that the human rights approach offers a useful global framework for AI governance, but that it also needs to place greater emphasis on different cultural values and the role of culture. Indeed, he argues that ‘the consideration of cultural values is essential to the human rights approach for both philosophical and instrumental reasons’.

In ‘Interpretative Pros Hen Pluralism: From Computer-mediated Colonization to a Pluralistic Intercultural Digital Ethics’, Charles Ess explicates interpretive pros hen (focal or ‘towards one’) ethical pluralism (EP(ph)) as a response to the central challenge of developing a global IDE. Building on earlier work (Ess 2006), he argues that EP(ph), with its emphasis on preserving irreducible cultural differences and fostering engagement across those differences, ‘stands as an important component for a contemporary IDE that seeks an ethical cosmopolitanism in place of computer-mediated colonization’.

In their contribution ‘Overcoming Cultural Barriers to Cross-Cultural Cooperation in AI Ethics and Governance’, Seán ÓhÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng and Zhe Liu examine the barriers to cooperation on AI ethics and governance between Europe and North America on the one hand, and East Asia on the other, regions that are leading the development of AI and AI ethics. The authors offer several practical recommendations for overcoming misunderstandings between these cultures and regions in order to enhance cross-cultural cooperation.

The second set of papers draws insights for IDE from specific cultures. In their contribution ‘I am datafied because we are datafied: an Ubuntu perspective on (relational) privacy’, Urbano Reviglio and Rogers Alunge argue that Ubuntu, a communitarian moral philosophy native to Sub-Saharan Africa, can contribute to the development of a more relational conceptualization of privacy, one that is better suited to addressing the ethical challenges of digital technologies than the individualistic conceptualization of privacy characteristic of ‘Western’ philosophical traditions (Capurro 2005; Taylor et al. 2016).

In ‘Harmonizing Artificial Intelligence for Social Good’, Nicolas Berberich, Toyoaki Nishida and Shoki Suzuki similarly endorse a more relational approach to digital ethics. Drawing on Wong (2012), they argue that the concept of harmony (和), which is central to Chinese and Japanese culture, should inform an IDE, specifically as it relates to AI ethics and the development of AI for social good (Floridi et al. 2020). In turn, Mohammad Yaqub Chaudhary’s paper ‘Initial Considerations for Islamic Digital Ethics’ seeks to ‘open the way to philosophical engagement with issues of digital ethics from an Islamic perspective’. He highlights areas where Islamic perspectives both converge with, and diverge from, existing scholarship on digital ethics.

The final two papers augment IDE through the lenses of race and coloniality, particularly as they relate to the development and governance of AI. In ‘Decolonial AI: Decolonial theory as socio-technical foresight in artificial intelligence research’, Shakir Mohamed, Marie-Therese Png and William Isaac explore the critical role of decolonial and post-colonial theories in understanding and shaping ongoing advances in AI. Building on prior work on decoloniality and information technology by Ali (2016), Irani et al. (2010) and Couldry and Mejias (2019), amongst others, they argue that AI communities should embed ‘a decolonial critical approach within their technical practice’, thereby ‘centring vulnerable peoples who continue to bear the brunt of negative impacts of innovation and scientific progress’.

Finally, in ‘The Whiteness of AI’, Stephen Cave and Kanta Dihal problematize the prevalent Whiteness of AI, grounding their account in the philosophy of race and critical race theory. They warn that this racialization of AI stands to exacerbate the very biases it reflects, contributing to a ‘vicious cycle of social injustice’ and distorting our perception of the risks and benefits of these machines. They second the call for decolonizing AI: ‘breaking down the systems of oppression that arose with colonialism and have led to present injustices that AI threatens to perpetuate and exacerbate’.