1 Introduction

AI provides many transformational benefits to organisations across all industries and sectors (Alshahrani et al., 2021; Dennehy et al., 2022; Dwivedi et al., 2021; Elbanna et al., 2020; Vassilakopoulou et al., 2022). Recent studies have reported that AI can lead to new forms of business value (Enholm et al., 2022; Mikalef and Gupta, 2021), dynamic business-to-business relationships (Dwivedi and Wang, 2022; Keegan et al., 2022), enriched customer experiences (Jain et al., 2022; Griva et al., 2021; Kautish and Khare, 2022), enhanced human capabilities (Dwivedi et al., 2021), resilient supply chains (Zamani et al., 2022) and improved safety in the workplace (Gangadhari et al., 2022).

At the same time, there is a growing awareness of the risks and ethical issues surrounding AI (e.g., Bryson, 2018; Jobin et al., 2019) and of the need to move from ethical principles to implementable practices (Shneiderman, 2021; Mäntymäki et al., 2022a; Seppälä et al., 2021), for example through the responsible design (Dennehy et al., 2021) and governance (Mäntymäki et al., 2022b) of AI systems. While organisations are increasingly investing in ethical and Responsible AI (RAI) (Zimmer et al., 2022), recent reports suggest that this comes at a cost and may lead to burnout in responsible-AI teams (Heikkilä, 2022). Thus, it is critical to consider how we educate about RAI (Grøder et al., 2022) and to rethink our traditional learning designs (Pappas & Giannakos, 2021), as this can influence end-users’ perceptions of AI applications (Schmager et al., 2023) as well as how future employees approach the design and implementation of AI applications (Rakova et al., 2021; Vassilakopoulou et al., 2022).

The use of algorithmic decision-making and decision-support processes, particularly AI, is becoming increasingly pervasive in the public sector, including in high-risk application areas such as healthcare, traffic, and finance (European Commission, 2020). Against this backdrop, there is growing concern over the ethical use and safety of AI, fuelled by reports of ungoverned military applications (Butcher and Beridze, 2019; Dignum, 2020), privacy violations attributed to facial recognition technologies used by the police (Rezende, 2022), unwanted biases exhibited by AI applications used by courts (Imai et al., 2020), and racial biases in clinical algorithms (Vyas et al., 2020). The opacity and lack of explainability frequently attributed to AI systems make evaluating the trustworthiness of algorithmic decisions challenging even for technical experts, let alone the public. Together with the algorithm-propelled proliferation of misinformation, hate speech and polarising content on social media platforms, this creates a high risk of eroding trust in the algorithmic systems used by the public sector (Janssen et al., 2020). Ensuring that people can trust algorithmic processes is essential not only for reaping the potential benefits of AI (Dignum, 2020) but also for fostering trust and resilience at a societal level.

AI researchers and practitioners have expressed fears that AI systems are being developed that are non-inclusive and reinforce inequalities. There are known cases in which AI systems do not make ethical or accurate choices (Babic et al., 2021), and biased or inaccurate data are used to train AI algorithms, which increases the risk of inequality and injustice (Agrawal et al., 2020). For example, Amazon trained its AI recruiting tool on historical data dominated by masculine language, and the tool consequently inherited a bias against curricula vitae submitted by women. This ‘bias in, bias out’ dynamic in AI models endangers inclusion. Nikon is another example that illustrates this danger: the company trained an AI model to identify people blinking on data that excluded Asian people, and the model consequently misidentified them as blinking. Such examples of discrimination in AI applications underline the need for critical thinking to question AI outputs, since it seems impossible to fully regulate AI systems, which are, in essence, human opinions embedded in algorithms.

Researchers and practitioners argue that these fears can be addressed and AI can be made more inclusive by designing ‘human-AI hybrids’ (Rai et al., 2019). In this context, researchers highlight the need to create Ambient Intelligence (AmI) systems to amplify AI-human collaboration (Gams et al., 2019). In such environments the AI system interacts with humans, receives information, and learns from them and from the environment (Ramos et al., 2008). From a different perspective, converting AI ‘black boxes’ into ‘glass boxes’ (Rai, 2020) and creating AI applications with explainable AI (XAI) features can also facilitate inclusiveness, as this transparency can make it easier to reduce biases.

AI, like all technology, can be used in diverse ways, and users may appropriate the technology in ways that designers did not intend (Zamani and Pouloudi, 2020; Zamani et al., 2020). Designers therefore need to consider both the intended and unintended consequences (Ransbotham et al., 2016; Majchrzak et al., 2016) and focus on responsibility and ethical aspects to support this process. The Information Systems (IS) discipline has a sustained record of raising and addressing ethical concerns about IS, and technologies in general (e.g., Mason, 1986; Banerjee et al., 1998; Smith & Hasnas, 1999; Davison, 2000; Mingers & Walsham, 2010; Niederman, 2021). This special issue follows this cumulative tradition of academic discourse and knowledge by seeing vistas beyond technology (Stoodley et al., 2010), specifically AI.

2 The Special Issue

In this special issue, we were particularly interested in theory-building studies and empirically grounded theorising related to AI as a technology for an ethical and inclusive society. Following a rigorous review process consisting of a minimum of two and a maximum of four rounds of review, nine articles were selected for inclusion in this special issue. Each of the selected articles brings a distinct perspective to the emerging IS discourse on AI governance, ethics, and society. Collectively, the articles advance understanding of the socio-technical aspects of AI and its implications for society. The remainder of this editorial briefly describes the contribution that each article makes to advancing knowledge on AI for an ethical and inclusive society.

Niederman & Baker (2023) provide a reflective perspective on how ethical issues related to AI differ from those of other technologies. Specifically, they distinguish AI ethics issues from concerns raised by all IS applications by presenting three distinct categories through which AI ethics issues can be viewed. One can view AI as an IS application like any other; they examine this category primarily through Mason’s (1986) PAPA framework of privacy, accuracy, property, and accessibility, as a way to position AI ethics within the IS domain. One can also view AI as adding a generative capacity to produce outputs that cannot be pre-determined from inputs and code; they examine this by adding ‘inference’ to the informational pyramid and exploring its implications. Finally, AI can be viewed as a basis for re-examining questions about the nature of mental phenomena such as reasoning and imagination. At this time, AI-based systems seem far from replicating or replacing human capabilities. However, if and when such abilities emerge as computing machinery continues to grow in capacity and capability, it will be helpful to have anticipated the ethical issues that arise and to have developed plans for avoiding, detecting, and resolving them to the extent possible.

Dattathrani & De (2023) make a strong argument that, with a new generation of technologies such as AI, the notion of agency needs to differentiate the actions of AI from those of traditional information systems and of humans. Indeed, human and material agency have been investigated in the IS literature to understand how technology and humans influence each other. Some framings of agency, however, treat humans and technology symmetrically, some privilege the agency of humans over technology, and others do not attribute agency to either humans or non-humans. The authors introduce dimensions of agency to differentiate agencies without privileging any actor. They illustrate the application of these dimensions by using them as a lens to study the case of a technician using an AI solution to screen patients for early-stage breast cancer. Through the dimensions of agency, they show that the influence of AI over human practice, such as screening for early-stage breast cancer, is greater than the influence of traditional technology. Their study contributes to the theory of agency and concludes with a discussion of potential practical applications of the framework.

Harfouche et al. (2023) highlight that, despite the hype surrounding AI, there is a paucity of research on the potential role of AI in enriching and augmenting organisational knowledge. The authors develop a recursive theory of knowledge augmentation in organisations (the KAM model) based on a synthesis of the extant literature and a four-year revised canonical action research project. The project aimed to design and implement a human-centric AI (called Project) to address the lack of integration of tacit and explicit knowledge in a scientific research centre (SRC). To explore the patterns of knowledge augmentation in organisations, the study extends Nonaka’s knowledge management model, which comprises socialisation, externalisation, combination, and internalisation, by incorporating a human-in-the-loop Informed Artificial Intelligence (IAI) approach. The proposed design makes it possible to integrate experts’ intuition and domain knowledge into AI in an explainable way. The findings show that organisational knowledge can be augmented through a recursive process enabled by the design and implementation of human-in-the-loop IAI. The study has important implications for both research and practice.

Koniakou (2023) engages with the discourse on AI governance from three angles grounded in international human rights law, namely Law and Technology, Science and Technology Studies (STS), and theories of technology. By focusing on the shift from ethics to governance, the study offers a bird’s-eye view of developments in AI governance, comparing ethical principles with binding rules for the governance of AI and critically reviewing the latest regulatory developments. Further, by focusing on the role of human rights, it takes the argument that human rights offer a more robust and effective framework a step further, arguing that human rights obligations must be extended to apply directly to private actors in the context of AI governance. The study offers insights for AI governance by borrowing from the history of Internet governance and the broader field of technology governance.

Minkkinen et al. (2023) address a gap in knowledge related to governing AI, which requires cooperation among actors, although the form this collaboration should take remains unclear. Technological frames provide a theoretical perspective for understanding how actors interpret technology and act upon its development, use, and governance. However, there is limited knowledge about how actors shape technological frames. The authors examine the shaping of the technological frame of the European ecosystem for responsible AI (RAI). Through an analysis of EU documents, they identify four expectations that constitute the EU’s technological frame for the RAI ecosystem. Moreover, through interviews with RAI actors, they reveal five types of expectation work responding to this frame: reproducing, translating, and extending (congruent expectation work), and scrutinising and rooting (incongruent expectation work). The authors conceptualise expectation work as actors’ purposive actions in creating and negotiating expectations. Their study contributes to the literature on technological frames, technology-centred ecosystems, and RAI, while also elucidating the dimensions and co-shaping of technological frames.

Papagiannidis et al. (2023) highlight that, despite adopting AI, companies still face challenges and cannot quickly realise performance gains. In addition, firms need to introduce robust AI systems and minimise AI risks, which places a strong emphasis on establishing appropriate AI governance practices. Building on a comparative case analysis of three companies from the energy sector, the authors examine how AI governance is implemented to facilitate the development of robust AI applications that do not introduce negative effects. The study illustrates which practices are put in place to produce knowledge that assists decision-making while overcoming challenges, with recommended actions leading to desired outcomes. The study contributes by exploring the main dimensions of AI governance in organisations and uncovering the practices that underpin them.

Polyviou & Zamani (2023) acknowledge that AI promises to redefine and disrupt several sectors. At the same time, AI poses challenges for policymakers and decision-makers, particularly regarding the formulation of strategies and regulations that address their stakeholders’ needs and perceptions. The paper explores stakeholder perceptions as expressed through participation in the formulation of Europe’s AI strategy and sheds light on the challenges of AI in Europe and the expectations for the future. The findings reveal six dimensions of an AI strategy: ecosystems, education, liability, data availability, sufficiency and protection, governance, and autonomy. The authors draw on these dimensions to construct a desires-realities framework for AI strategy in Europe and provide a research agenda for addressing existing realities. Their study advances understanding of stakeholder desires regarding AI and holds important implications for research, practice, and policymaking.

Another interesting, yet theoretically underdeveloped, application of AI is the use of AI-powered chatbots in education and the experiences of the students who use them. Chen et al. (2023) make the case that chatbots are increasingly used in scenarios such as customer service, work productivity, and healthcare, and might be one way of helping instructors better meet student needs. However, few empirical IS studies have investigated the efficacy of pedagogical chatbots in higher education, and fewer still discuss their potential challenges and drawbacks. The authors address this gap in the IS literature by exploring the opportunities, challenges, efficacy, and ethical concerns of using chatbots as pedagogical tools in business education. In this two-study project, they conducted chatbot-guided interviews with 215 undergraduate students to understand student attitudes regarding the potential benefits and challenges of using chatbots as intelligent student assistants. The findings reveal the potential for chatbots to help students learn basic content in a responsive, interactive, and confidential way. The findings also provided insights into student learning needs, which the authors then used to design and develop a new, experimental chatbot assistant for teaching basic AI concepts to 195 students. The results of this second study suggest that chatbots can be engaging and responsive conversational learning tools for teaching basic concepts and for providing educational resources. The authors discuss promising opportunities and ethical implications of using chatbots to support inclusive learning.

Despite the concerns raised by scholars and practitioners about AI, the pervasiveness of social recommender systems (SRSs) in e-commerce platforms highlights that consumers are increasingly willing to delegate their decisions to algorithms (Schneider & Leyer, 2019). SRSs are becoming embedded in e-commerce ecosystems due to their ability to reduce consumers’ decision time and effort by filtering out excess information and providing personalised recommendations (Tsai & Brusilovsky, 2021). As previous studies have largely focused on the technical aspects of recommendation systems, there is limited understanding of the nature of the social information that improves recommendation performance (Shokeen & Rana, 2020).

Bawack & Bonhoure (2023) investigate this phenomenon to identify the behavioural factors that influence consumers’ intention to purchase products or brands recommended by SRSs. The authors adopt a meta-analytic research approach, conducting an aggregative literature review that uses quantitative methods to test specific research hypotheses based on prior empirical findings. Through the analysis of 72 articles, the authors identify 52 independent variables, which they organise into 12 categories. Drawing on this analysis, they propose a theoretical model of the behavioural factors that affect consumers’ intentions to purchase products recommended by SRSs. The study has important implications for research, and the authors provide an agenda for future research that could advance theory-building efforts and theory-driven designs in SRS research and practice.

Each of the articles in this special issue, as well as other recent studies (e.g., Akter et al., 2021; Bankins et al., 2022; Gupta et al., 2022; Shneiderman, 2021), has advanced knowledge on the ethical issues and governance of AI. Despite these important contributions, much remains to be learned about how to use AI for social good (Ashok et al., 2022; Coombs et al., 2021; Dwivedi et al., 2021; Kumar et al., 2021; Fossa Wamba et al., 2021). To this end, we make a call for future research. First, there is a need for a concerted effort within and between academic disciplines (e.g., IS, arts, engineering), policymakers, governments, and wider society to discover innovative ways to use AI to achieve the sustainable development goals (SDGs). Second, while significant attention has been given to the application of AI in a variety of contexts, there is limited discourse about how to use AI for future-oriented inquiry, whereby IS researchers explore future scenarios through immersive virtual experiences to better understand how to design resilient IS (Brooks & Saveri, 2017; Chiasson et al., 2018). Third, future scholarship on AI governance could investigate the auditing of AI systems (Minkkinen et al., 2022b) as a mechanism to foster transparency, accountability, and trust.

We hope that this special issue provides scholars with a foundation on which integrity and rigour in scientific research promote high-quality IS, and on which ethical principles translate into professional and organisational practice (Calzarossa et al., 2010; Mäntymäki et al., 2022a).