1 Introduction

The rapid development of Artificial Intelligence (AI) technologies has the potential to reshape organisations across both the private and public sectors (Daugherty et al., 2019). In particular, there is emerging anecdotal evidence that AI is influencing patient journeys and medical practices, with the potential to revolutionise the healthcare landscape. While AI promises to unlock patient data and deliver more personalised, evidence-based medicine (He et al., 2019), it also raises significant concerns, fuelling patient distrust and ethical quandaries.

The collection and use of personal data by AI and analytical algorithms give rise to serious issues, including privacy invasion, fraud, lack of transparency, misuse of algorithms, information leakage, and identity theft (Sivarajah et al., 2017; Wearn et al., 2019). Indeed, a survey indicated that 63% of UK adults are uncomfortable with allowing AI to replace doctors and nurses for some tasks, such as suggesting treatments, and that 49% are unwilling to share their personal health data to develop algorithms that might improve the quality of care (Fenech et al., 2018).

AI poses potential risks to care delivery by devaluing physicians’ skills, failing to meet transparency standards, underestimating algorithmic biases, and neglecting the fairness of clinical deployment (Vayena et al., 2018). Such ethical dilemmas and concerns, if not adequately addressed when implementing AI for digital health and medical analytics, can not only harm patients but also tarnish the reputation of healthcare organisations (Wang et al., 2018). In response to these ethical challenges, many countries have implemented data protection regulations, such as the UK’s Data Protection Act 2018, which is in line with the European Union’s General Data Protection Regulation (GDPR). These regulations aim to improve individuals’ confidence in sharing personal information with healthcare organisations, and they have prompted a scholarly and practical focus on the responsible use of AI.

Responsible AI refers to the integration of ethical and responsible AI use into strategic implementation and organisational planning processes (Wang et al., 2023). It aims to design and implement ethical, transparent, and accountable AI solutions that help organisations maintain trust and minimise privacy invasion. Responsible AI places humans (e.g., patients) at the centre and aligns with stakeholder expectations as well as applicable regulations and laws. The ultimate goal of responsible AI is to strike a balance between satisfying patient needs through responsible AI use and attaining long-term economic value for healthcare organisations. Despite its importance for organisational prosperity and the significant attention devoted to it, responsible AI use in healthcare is still in its nascent stages.

2 Original Research in this Special Issue

This special issue includes a total of nine research studies that address a broad range of topics within our theme of Responsible Artificial Intelligence for Digital Health and Medical Analytics.

Two of the studies in this special issue (Fosso Wamba and Queiroz, 2023; Trocin et al., 2023) provide holistic reviews of how AI is being used for digital health, an area in which such reviews remain scarce. Fosso Wamba and Queiroz (2023) present a bibliometric approach to explore the dynamics of the interplay between AI and digital health, considering the responsible AI and ethical aspects of scientific production over the years. The research identifies four distinct periods in the publication dynamics as well as the most popular AI approaches in the healthcare field. In terms of contributions, the work provides a framework integrating AI technologies, approaches, and applications while discussing several barriers to and benefits of AI-based health. In addition, five insightful propositions emerge from the main findings. The study’s originality stems from providing a framework with a set of propositions that considers responsible AI and ethical issues in digital health. Trocin et al. (2023) provide a comprehensive analysis of health AI using responsible AI concepts as a structuring lens. The study presents a systematic literature review that supports the data collection and sampling procedure, the corresponding analysis, and the extraction of research themes, providing an evidence-based foundation. The research contributes a systematic description and explanation of the intellectual structure of responsible AI in digital health and develops an agenda for future research.
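
For readers unfamiliar with the mechanics of such bibliometric work, the sketch below illustrates its simplest building block: counting publications per year to surface the kind of publication dynamics from which period boundaries, such as the four identified by Fosso Wamba and Queiroz (2023), can be derived. The year counts are fabricated placeholders, not the authors’ corpus.

```python
# Toy sketch of publication-dynamics analysis in a bibliometric study.
# The years below are invented placeholders standing in for records
# exported from a database such as Scopus or Web of Science.
from collections import Counter

years = [2009, 2012, 2015, 2016, 2016, 2018, 2020, 2020, 2020, 2021, 2021]

per_year = Counter(years)
for year in sorted(per_year):
    # A simple text histogram makes growth spurts (candidate period
    # boundaries) visible at a glance.
    print(year, "#" * per_year[year])
```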

The remaining seven studies in the special issue are based on empirical data. Al-Dhaen et al. (2023) examine healthcare professionals’ continuous intention to use the Internet of Medical Things (IoMT) in combination with responsible Artificial Intelligence (AI). The study is underpinned by the theory of Diffusion of Innovation (DOI) and presents a model developed to determine continuous intention to use IoMT, taking into account the risks and complexity involved in using AI. Data were gathered from 276 healthcare professionals through a survey questionnaire across hospitals in Bahrain. The findings show that, despite contradictions associated with AI, continuous intention-to-use behaviour can be predicted during the diffusion of IoMT.

Johnson et al. (2023) examine the use of responsible Artificial Intelligence in healthcare to predict and prevent insurance denials for economic and social wellbeing. The study follows the Design Science Research (DSR) paradigm and develops a Responsible Artificial Intelligence (RAI) solution that helps hospital administrators identify potentially denied claims. Guided by five principles, the framework utilises six AI algorithms, classified as white-box and glass-box, and employs cross-validation to tune hyperparameters and determine the best model. The results show that a white-box algorithm (AdaBoost) yields an AUC of 0.83, outperforming all other models. The research’s primary implications are to (1) help providers reduce operational costs and increase the efficiency of insurance claim processes, and (2) help patients focus on their recovery instead of dealing with claim appeals.
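
As an illustration of the model-selection step described above, the following minimal sketch tunes an AdaBoost classifier by cross-validated AUC using scikit-learn. It is not the authors’ pipeline: the synthetic data, features, and parameter grid are placeholders.

```python
# Illustrative sketch: cross-validated hyperparameter tuning of an
# AdaBoost classifier scored by AUC, in the spirit of Johnson et al.
# (2023). The synthetic data and parameter grid are placeholders, not
# the authors' claims dataset or settings.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Stand-in for engineered claim features and a denied/paid label.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# 5-fold cross-validation selects hyperparameters against AUC.
search = GridSearchCV(
    AdaBoostClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "learning_rate": [0.1, 1.0]},
    scoring="roc_auc", cv=5)
search.fit(X_train, y_train)

# Held-out AUC for the best model (cf. the paper's reported 0.83).
probs = search.predict_proba(X_test)[:, 1]
print(f"Test AUC: {roc_auc_score(y_test, probs):.2f}")
```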

Kumar et al. (2023) conduct a mixed-method study to identify the constituents of responsible AI in the healthcare sector and investigate its role in value formation and market performance. The study context is India, where AI technologies are in the developing phase. The results from 12 in-depth interviews provide a more nuanced understanding of how different facets of responsible AI guide healthcare firms towards evidence-based medicine and improved patient-centred care. PLS-SEM analysis of 290 survey responses validates the theoretical framework and establishes responsible AI as a third-order factor. Findings from 174 dyadic data points further confirm the mediating mechanism whereby patients’ cognitive engagement with responsible AI solutions and perceived value lead to market performance.
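
Kumar et al.’s PLS-SEM estimation is typically run in dedicated software (e.g., SmartPLS), but the mediation logic at its core can be illustrated simply. The sketch below bootstraps an indirect effect with ordinary least squares on simulated data; the variable names and effect sizes are our illustrative assumptions, not the authors’ estimates.

```python
# Illustrative sketch of the mediation logic tested by Kumar et al.
# (2023) with PLS-SEM. Here a bootstrapped indirect effect (a*b) with
# OLS stands in: responsible AI -> cognitive engagement (mediator) ->
# perceived value. All data are simulated; names are ours.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 290
responsible_ai = rng.normal(size=n)
engagement = 0.5 * responsible_ai + rng.normal(size=n)                # path a
value = 0.4 * engagement + 0.2 * responsible_ai + rng.normal(size=n)  # path b + direct

def indirect(idx):
    # Estimate a (X -> M) and b (M -> Y, controlling for X) on a resample.
    x, m, yv = responsible_ai[idx], engagement[idx], value[idx]
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(yv, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Bootstrapped indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```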

El-Haddadeh et al. (2023) examine the considerations of responsible Artificial Intelligence in the deployment of AI-based COVID-19 digital proximity tracking and tracing applications in two countries: the State of Qatar and the United Kingdom. Based on an analysis of alignment with the Good AI Society framework and sentiment analysis of official tweets, the diagnostic analysis yields contrasting findings for the two applications. While the EHTERAZ application (Arabic for precaution) in Qatar fell short in adhering to responsible AI requirements, it contributed significantly to controlling the pandemic. On the other hand, the UK’s NHS COVID-19 application exhibited limited success in fighting the virus despite largely abiding by these requirements. This underlines the need for a practical and contextual view in any comprehensive discourse on responsible AI in healthcare, thereby offering necessary guidance for striking a balance between responsible AI requirements and the pressures of fighting the pandemic.
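
The paper’s exact sentiment-analysis tooling is not specified here, so the sketch below uses NLTK’s off-the-shelf VADER analyser merely to show what lexicon-based scoring of official tweets looks like; the two example texts are invented placeholders, not real EHTERAZ or NHS tweets.

```python
# Generic sketch of lexicon-based sentiment scoring of official tweets,
# illustrating the kind of analysis El-Haddadeh et al. (2023) apply.
# Uses NLTK's VADER; the example texts are invented placeholders.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

tweets = [
    "Download the app today to help protect your community.",
    "Users report problems with the latest update of the app.",
]
for tweet in tweets:
    # 'compound' ranges from -1 (most negative) to +1 (most positive).
    score = sia.polarity_scores(tweet)["compound"]
    print(f"{score:+.2f}  {tweet}")
```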

Wang et al. (2023) investigate how signals of AI responsibility impact healthcare practitioners’ attitudes toward AI, satisfaction with AI, and AI usage intentions, including the underlying mechanisms. The study outlines autonomy, beneficence, explainability, justice, and non-maleficence as the five key signals of AI responsibility for healthcare practitioners. The findings reveal that these five signals significantly increase healthcare practitioners’ engagement, which subsequently leads to more favourable attitudes, greater satisfaction, and higher usage intentions with AI technology. Moreover, ‘techno-overload’, as a primary ‘techno-stressor’, moderates the mediating effect of engagement on the relationship between AI justice and behavioural and attitudinal outcomes. The study argues that when healthcare practitioners perceive AI technology as adding extra workload, such techno-overload undermines the importance of the justice signal and subsequently affects their attitudes, satisfaction, and usage intentions with AI technology.

Gupta et al. (2023) attempt to establish whether AI risks in digital healthcare are positively associated with responsible AI, and also examine the moderating effects of perceived trust and perceived privacy risk. The theoretical model is based on perceived risk theory, which is important in the context of this study because risks related to uneasiness and uncertainty can be expected in the development of responsible AI, given the volatile nature of intelligent applications.

Finally, Liu et al. (2023) examine the impact of responsible AI on businesses, drawing on insights from 25 in-depth interviews with healthcare professionals. The exploratory analysis reveals that abiding by responsible AI principles allows healthcare businesses to better capitalise on the improved effectiveness of their social media marketing initiatives with their users.

We present a summary of the nine papers in our special issue in Table 1. This table captures the diversity and depth of the papers, indicating the methodology, key contributions, and the dataset used. The methods employed range from systematic literature reviews and bibliometric analyses to empirical studies involving surveys and interviews. Each paper explores a unique aspect of the overarching theme, providing fresh insights into the interplay between artificial intelligence, digital health, and medical analytics. From uncovering the dynamics between AI and digital health to investigating the impact of AI on businesses, these studies offer a comprehensive view of the current landscape. A variety of datasets have been used, including surveys from healthcare professionals, data from AI-based COVID-19 tracking applications, and in-depth interviews. The findings from these studies extend our understanding of responsible AI in the healthcare sector, highlighting the potential benefits and challenges, ethical considerations, and the future directions of this rapidly evolving field. By collectively examining these papers, readers can gain a holistic understanding of the role of responsible AI in shaping digital health and medical analytics.

Table 1 A summary of contributing papers in our special issue

3 Pathways for Further Research

Current research on the use of AI in healthcare primarily focuses on the technological understanding of its implementation and the exploration of the economic value of AI applications. However, there is a notable lack of comprehensive studies on the practices, mechanisms, infrastructure, and ecosystem supporting responsible AI use in this context. This points to an urgent need to develop research on AI for healthcare from a social responsibility perspective. By doing so, we can transform ethical considerations from potential barriers into opportunities that enhance trust and engagement among patients.

Understanding the role of responsible AI use in creating value in healthcare not only contributes to an emerging field in Information Systems (IS) research but also provides practical recommendations for healthcare practitioners. The papers in this special issue have begun to address these areas, but there is much more to be explored. Potential pathways for further research include the following. First, while several papers have explored the application of responsible AI in various health scenarios, more in-depth research could be conducted to understand the specific contexts in which these AI applications excel or face challenges. Second, the papers by Fosso Wamba and Queiroz (2023) and Trocin et al. (2023) have identified different periods and themes in the evolution of AI in healthcare. Longitudinal studies could track this evolution, examine these changes over time, and predict future trends. Third, the role of regulations and policies deserves attention. Given the ethical considerations surrounding responsible AI, more research could be dedicated to studying the impact of different regulations and policies on the development and deployment of AI in healthcare. Fourth, as Al-Dhaen et al. (2023) have shown, user behaviour plays a crucial role in the adoption of AI technology. Further research could delve into the factors that influence user behaviour, especially in the face of potential risks and uncertainties. In addition, the research by Wang et al. (2023) has revealed that “techno-overload” can impact healthcare practitioners’ attitudes and usage intentions. More research could explore how such “techno-stressors” can be mitigated.

Finally, the paper by El-Haddadeh et al. (2023) compares AI-based COVID-19 tracking and tracing applications in two countries. Cross-cultural studies could be carried out to understand the cultural factors influencing the deployment and acceptance of AI in healthcare. We also recommend that future research study responsible AI in medical tourism, as cross-country healthcare systems may add another layer of complexity that needs to be unpacked (Olya & Nia, 2021). Based on Siala and Wang’s (2022) SHIFT framework (an acronym for Sustainability, Human centredness, Inclusiveness, Fairness, and Transparency), which suggests a pathway for shifting AI towards responsibility in healthcare, we recommend five themes for future research on the development and application of AI for digital health and medical analytics (see Table 2).

Table 2 Future research themes on the development and application of AI for digital health and medical analytics