Introduction

Recent news stories around Microsoft’s avatars, Elon Musk’s Neuralink brain implants, robotic bionic prosthetics and Synchron’s next-generation brain–computer interface trials in the US have helped bring artificial intelligence (AI) to the masses. Popular media covered the story of an autonomous laparoscopic robot that performed a complicated surgery on a pig, reconnecting two parts of the intestine while adjusting to soft-tissue movement in real time with AI, with significantly better results than a human surgeon [1]. The 2020 joint report from EIT Health and McKinsey & Company highlighted six areas where AI has a direct impact on the patient—self-care, prevention and wellness, triage and early diagnosis, diagnostics, clinical decision support and care delivery—in the context of chronic care management [2].

Despite this widespread sense that positive patient impact from AI will materialise, very little has been published regarding how patients perceive these presumed benefits, even in a hypothetical context [3]. In fact, to our knowledge there are no academic reports of patients’ perspectives on AI-enabled healthcare they have actually received. This imbalance between stakeholder groups needs to be addressed to optimise implementation success and to evaluate scale-up and sustainability. What is perceived as an acceptable treatment, service, practice or innovation by one stakeholder group, e.g. providers, might not be perceived as favourably by another, e.g. patients.

Co-authored by a patient engagement knowledge exchange expert and a research-active clinician, this commentary aims to briefly summarise patient perspectives on AI to date, examine clinicians’ emerging roles with AI and explore how all stakeholders can optimise the process of translating research/evidence into practice and development and implementation of clinical AI.

This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.

Patient Perspectives on AI

In this section we briefly cover what patients want from AI and some of its perceived challenges and limitations from a patient point of view.

Stakeholders acknowledge that AI will support the shift toward “4P” medicine: predictive, preventative, personalised and participatory [4]. Despite this, AI is still seen largely as an abstract concept by many patients [5]. Patient perspectives may not be a dominant feature of the academic literature, but reports from public and charity institutions offer some insights [3].

A recent UK nation-wide survey found differences in levels of public acceptability between AI applications, with critical care, life-limiting illnesses and emotive conditions such as paediatric rare disease or cancer attracting more public “buy-in” [5].

“More broadly it was clear that the term ‘AI’, and the devices and applications that this might include, are sometimes not widely understood. There are misunderstandings around the application of AI and concerns around assurances in efficacy and ethical use. It was noted that this lack of understanding could be seen to affect the public and patients” [5].

The need to make AI more explainable to people from different backgrounds was identified, as was the need for effective governance mechanisms that could address patient concerns about transparency, accountability, independent oversight and appropriate data protection.

In a European survey by the European Patients Forum (EPF), the most promising potential for AI was similarly seen in delivering “more personalised care to patients” and “supporting health system efficiency and improve organisation and delivery of care” [6]. However, concerns about having “less human interactions with healthcare professionals” were also voiced, as were concerns about erosion of patient choice and increases in healthcare costs.

In addition, the impact technical failures could have on a clinical system overly reliant on AI and the potential for personal harm from AI in relation to algorithmic bias were also sources of concern.

Patients’ acceptance of AI may therefore be conditional less on specific technicalities, such as complex concepts like reproducibility and “black box” methodologies, and more on mitigation of potential harms and additional assurances of autonomy in decision making, provided the risk itself is sufficiently explained in lay terms.

However, the authors contend that, given the huge range of diseases, tasks and contexts to which specific AI tools can be applied, these general insights are of limited value. Even within a single disease area, for instance, the wide range of AI applications raises different concerns. Patients’ data concerns centre on fairness, access to and the resilience of “data safe havens”, transparency of data use, accountability for data stewardship and potential exacerbation of health inequalities. Some of these can be mitigated by data quality standards, such as the CONSORT-AI and SPIRIT-AI reporting guidelines for clinical trials using AI [7, 8]. It is also encouraging to see regulators such as the FDA and MHRA taking patient inclusion to heart in their regulatory pre-submissions, with patient perspectives forming important criteria for assessing whether the outcomes measured align with user needs, user experience and user satisfaction.

An AI-enabled chatbot that lets patients independently book an orthopaedic clinic appointment raises concerns distinct from those around an AI-enabled prognostic tool that supports management planning by forecasting the risk of surgical complications, and reservations about data protection also differ between the two examples. The recent hype around ChatGPT, a product of the AI laboratory OpenAI, suggests that it is up to each stakeholder to take responsibility for their own learning by using and experimenting with the tool; the same applies to GPT-4, the newer multi-modal version of the language model [9]. However, the truth is that no digital transformation will work unless everyone is excited about the opportunity and can translate that excitement into direct learning made convenient for both patients and other stakeholders.

Implementation science across healthcare quality improvement studies has shown that acceptability matters at the early stages of implementation (affecting adoption), throughout implementation (affecting penetration) and at the late stages (affecting sustainability); it has therefore been recommended that implementation outcomes be evaluated at multiple stages. The needs of both patients and clinicians accordingly need to be incorporated into policy, research and regulatory considerations, adopting a dynamic and evolving approach to training and knowledge exchange.

What Does This Mean for Clinicians?

In this section we shall briefly cover some of the perceived challenges and limitations around AI from a clinician point of view.

From the perspective of patients and other healthcare stakeholders, clinicians play an important role in introducing unfamiliar approaches to healthcare delivery such as AI. This reflects their professional and legal accountability for the treatments they recommend and the local insights they hold into social aspects of healthcare provision. It also reflects the trusted relationship they have with their patients and their established role in explaining aspects of their healthcare [10]. If appropriately trained and resourced, clinicians can be pro-active in this role, not just improving an AI tool’s chances of adoption and acceptance by patients but improving the tool itself. However, most clinicians have only a limited sense of what clinical AI is and what it implies for practice, so supporting their engagement in AI development is challenging. Despite pioneering educational programmes targeting early adopters and policy acknowledging the importance of workforce development, AI-specific training has yet to be scaled across many healthcare professional groups [10]. While this training deficit is addressed, two implementation work-arounds appear to be emerging organically: (1) limiting the agency of the clinicians enacting AI-enabled care; and (2) choosing AI solutions that depend on a small number of clinicians with high AI literacy.

Both of these compromises are exemplified by a recent qualitative study of a real-world AI-enabled care pathway [11]. Here, a US vascular specialist in secondary care was instrumental in the local development and application of AI to prioritise patients with peripheral arterial disease for smoking cessation or medical interventions in primary care. The intervention aimed to lower the risk of end-stage vascular events. Alongside a multi-disciplinary team in secondary care, the depth of expertise this clinician held in AI and vascular health was key. It enabled them to effectively absorb the unfamiliarity that may otherwise have disincentivised the primary care providers who ultimately enacted the AI-enabled healthcare interventions. The AI-trained vascular specialist did this by sending conventional patient-specific clinical recommendations to primary care providers, which made no mention of AI.

This approach appeared to succeed clinically without widespread clinician education, but it holds the disadvantage of low transparency: primary care providers and patients were unlikely to become aware of the role AI played in their healthcare. Within this carefully selected triaging use case for preventative healthcare resources, the disadvantage appears less significant than it would elsewhere. However, at least a proportion of patients would expect to be informed of the presence of AI in the delivery of their care, and discovering it retrospectively may damage trust. Beyond this, limiting the input of the clinicians enacting AI-enabled healthcare into the design of the technology and care pathway itself can threaten the success of an AI tool. This is well illustrated by a Thai implementation study of an AI tool for retinal imaging analysis [12]. Here, it was entirely transparent to patients that specialist nurses’ decisions about whether to refer them to ophthalmologists for diabetic retinopathy were AI-enabled. Through their real-world experience with the AI, the nurses became aware of a higher rate of false positives than anticipated, which posed no clinical risk to patients but caused major inconvenience through travel to a distant hospital. These burdens could outweigh the benefits patients experienced from the AI, and prompted ad hoc alterations to the AI use case by nurses attempting to reduce the burden of disease management for patients. The social and technical factors specific to the clinical use case that led to this problem might have been anticipated more effectively had the nurses been involved earlier in the translation of the AI tool.

Clinician interventions such as these can be extremely valuable but can carry risk if they are not based on an adequate understanding of the AI tool in question or appropriately monitored. As much as possible, these clinician insights should be incorporated earlier in the development and integration of AI tools to allow for more intentional pursuit of patient values.

How can Stakeholders Work Better with Patients?

In this section we shall present an overview of some principles of patient engagement and public involvement in research, clinical trials and innovation procurement.

Patient stakeholders have different perspectives that can add value to both the identification and prioritization of healthcare research to which we want to apply AI [13]. Their input into research design also helps to ensure broader representativeness, diversity and inclusion in clinical trial participation, with lower dropout rates through greater trust built with local communities [14].

Different countries adopt and define patient engagement and public involvement in different ways, with different methodological frameworks in place, the scope of which is too large to cover in detail in this commentary. Emerging public involvement approaches include crowd-sourcing, distributed intelligence, citizen science and co-production, which can enhance the communication of research findings between stakeholders and promote health literacy among different audiences. Complementary to citizen science is the use of participatory community-building activities and arts-based creative engagement, which promote awareness of science while also embedding principles of equality, diversity and inclusion and responsible research and innovation (RRI), both identified as necessary by research funders.

Across these definitions of involvement, there is a particular need to involve communities that already experience health inequality, including those whose first language is not English or who have low education levels, low socioeconomic status or mental health issues, in health research and, in the case of AI, to boost public acceptability around ethics and transparency considerations.

It is therefore important to distinguish patient engagement from general public involvement activities; in the evolving and complex area of AI, both researchers and developers need to draw on a range of approaches to strike the right balance. One way this can be achieved is through enhanced governance mechanisms, with monitoring and reporting systems in place that enable evaluation of the effectiveness and value of patient engagement. Increasingly, we see recognition of trained patient experts with some familiarity with clinical practice, regulatory affairs and data science, via dedicated training programmes developed by the pharmaceutical industry and public funders, such as the European Patients’ Academy on Therapeutic Innovation (EUPATI) and EURORDIS (Rare Diseases Europe), which represents the voice of the rare disease community. These lead to different types of patient contribution, which add value in different ways. There are increasing calls for these insights to be recognised and compensated, for example via the US National Health Council’s Fair Market Value calculator [15]. Voices from the pharmaceutical industry have also called for global standardisation of remuneration for patient representatives that accounts for their differing needs and burdens [16]. Such approaches, with frameworks like Patient Focused Medicines Development (PFMD), can form part of the health value chain and further contribute to the “quadruple helix” of innovation, a term used primarily in business and the knowledge economy [17].

Embedding patient preferences can also enhance the relevance of patient engagement in research and trial settings. This can include incorporating daily diaries, notifications, the onboarding experience, validated instruments and wearables data, alongside clinical research outcomes and patient-reported outcomes, into the design of research. The IMI PREFER project was a public–private partnership between the Innovative Medicines Initiative (now the Innovative Health Initiative) and the European Commission looking at how and when best to include patient preferences in medical product decision-making, with a focus on the development of guidelines [18]. The PERMIT (PERsonalised MedicIne Trials) project, also funded by the European Commission, examines recommendations to ensure the robustness of personalised medicine trials [19]. Such trials require validation of stratification methods covering a broad range of issues around methodology, design, data management, analysis and interpretation in personalised medicine research programmes. Both projects illustrate the importance of patient preferences for actual trial retention rates and show how such preferences can be used alongside other types of data to create person-centred care with transparent evaluation methods.

PiPPi, an example for policy makers, was an EU project examining the procurement of innovation, and the innovation of procurement, with the European Hospital Alliance. It created different models of patient-focused procurement, providing an opportunity to share knowledge [20]. These models have shown the need to pay additional attention to diverse local contexts and adaptations, and to combine them with collaborative digital strategies that enable both in-built and explicit flexibility, achieving better alignment of stakeholder interests and more effective implementation and adoption.

Besides the application of patient engagement to clinical AI, AI technology can itself enhance patient engagement in many ways. For example, industry has used state-of-the-art entity-recognition models and extraction methods to derive clinical insights from social media data relating to post-COVID-19 condition, colloquially known as “long COVID”, before defining subsequent downstream tasks [21].
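To give a loose sense of what such entity extraction involves, the following is a toy, rule-based sketch in Python. The industrial systems cited above use trained clinical named-entity-recognition models rather than a keyword lexicon; the symptom list and example posts here are invented purely for illustration.

```python
import re
from collections import Counter

# Toy lexicon of symptom entities. Real pipelines learn entity
# boundaries from annotated data instead of using a fixed list.
SYMPTOMS = ["fatigue", "brain fog", "breathlessness", "chest pain", "palpitations"]

def extract_symptoms(posts):
    """Count case-insensitive symptom mentions across a collection of posts."""
    pattern = re.compile("|".join(re.escape(s) for s in SYMPTOMS), re.IGNORECASE)
    counts = Counter()
    for post in posts:
        for match in pattern.findall(post):
            counts[match.lower()] += 1
    return counts

posts = [
    "Six months on and the fatigue and brain fog haven't lifted.",
    "Anyone else get chest pain and palpitations after mild exertion?",
    "Fatigue is the worst part for me.",
]
print(extract_symptoms(posts).most_common())
```

Aggregated counts like these are the simplest possible “clinical insight”; production systems additionally handle misspellings, negation (“no chest pain”) and colloquial phrasings that a fixed lexicon cannot capture.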

How Can Product Development Including AI Be More Patient Driven?

In this section we present our thoughts gained from supporting start-ups and scale-ups with their funding via public research and development grants across the value chain to optimize health outcomes in relation to patient engagement considerations in AI product development.

For patients, the advantages of AI outweigh the disadvantages, provided that its impact leads to better, more personalised care. Developers who want to align the commercial sustainability of clinical AI development with the contribution of patient engagement need to combine a patient-centric approach with a product-centric one.

Many small and medium-sized enterprises (SMEs) already apply user acceptance testing, usability considerations, user-experience design and human factors in the design of their products, particularly medical devices [22, 23]. Start-ups already adopt a digital mindset (“Agile, Together, Open, Flat, User-centric” [24]), meaning that they work in an agile way, in open consultation with patients, which includes publishing code on GitHub, and adopt open, flat structures. With the recent launch of the UK’s AI and Digital Regulations Service for health and social care, SMEs have been encouraged to create value propositions that align with the UK NHS design principles and reflect the interests of all users [25].

Unfortunately, at the stages of AI product development where patient engagement can be most efficacious, there is often a lack of sufficiently detailed patient engagement plans, or of resources to support their execution, because there is little knowledge of what this means in terms of implementation effect. Several articles have suggested that patient engagement at the design stages could minimise resource expenditure further down the line, and in the EPF report a patient representative criticised developers’ belief that “involving patients means asking patients to comment on the user friendliness of digital interfaces, rather than discussing whether the tool actually meets their needs and is deployed ethically” [6]. The EPF report also highlighted the need to include patients in evaluating AI in clinical settings, including identifying criteria for granting patient access to new AI technologies and reflecting their interests in AI metrics.

As SMEs take their products to market and scale, new needs arise around the suitability of digital health measures for patient needs. It therefore becomes important to differentiate between usability testing and the development of the digital measure itself.

The US Digital Medicine Society (DiMe) has recently highlighted this through its work on a four-tiered framework that places “meaningful aspects of health” (MAH) at its core. The framework, part of DiMe’s playbook and adopted by the EMA, considers the need to: (1) determine the MAH; (2) define the digital measure (e.g. outcome/endpoint); (3) evaluate the risk/benefit to ensure safety and efficacy [e.g. complete validation (V3), utility and usability, security, data rights]; and (4) plan for the jobs to be done during deployment (e.g. purchasing, distribution, monitoring, data analysis). DiMe’s work shows how this approach can help patients frame their condition in ways that demonstrate that they (1) do not want to become worse, (2) want to improve or (3) want to prevent [26].

When it comes to AI-specific considerations, we have seen the need to go a step further than just delivering a great product/service and undertaking focus groups with a patient organisation.

Start-ups and developers may work with academic researchers to demonstrate that their technical solution complies with digital evidence standards for regulatory approval, but this will rarely include the differentiated patient insight articulated earlier, or adopt a more systematic, data-driven manner that allows patients to help design the research and data management approach. This is partly because the AI itself is complex, but user-centred design principles in clinical AI development need to go a step further than aligning patient experience, patient preference data and patient-reported outcomes for effective implementation evaluation, so that the innovation produced is both product- and patient-centric. The need to alter power dynamics, to remove compliance barriers that hinder smooth engagement between the pharmaceutical industry and patients, and to make it easier to work with patients in the middle stages of product development between phases has already been highlighted [27]. Similarly, there are lessons to be learnt from healthcare improvement methodologies including Lean, Six Sigma and PDSA [28,29,30], in addition to quality improvement work involving systems approaches from engineering, which can further support stakeholders in deriving realistic system models that take a system-wide view of patient flows, patient concerns and resource needs [31].

Conclusion

We need to move beyond the piloting stages of patient engagement on clinical AI into the specifics of real-world use cases and reach a point where patients do not feel intimidated by the complexity of AI itself. To do that we need to enable clinicians to understand, shape and demystify AI tools as they progress towards adoption into routine care. There is also a need to demonstrate impactful knowledge transfer from diverse patient groups to the stakeholders who currently drive the development and implementation of clinical AI, at different stages and with different methods. One way this can be achieved is through the creation of succinct, KPI-driven metrics that measure the engagement strategy itself and translate this knowledge into viable outputs. This will allow collaborations between these interconnected groups that more closely serve patient benefit, are framed around what actually matters to patients, and also enable competitive product development.

The impact of AI must be assessed from a multi-stakeholder perspective, but there is a need for a more innovative and commercially minded approach to integrating patient perspectives into actual AI product development and delivery, ensuring that the hype around the technology does not neglect its end users. While workflow and system integration considerations specific to an AI technology and its level of maturity affect clinicians’ usage of it, representation in both the pathways and the populations it serves is key. Engagement with clinical networks is critical for better service capacity, implementation capability and buy-in, but there is a need for continued dialogue with the technology itself as well as its ultimate end-user beneficiaries. The authors contend that we must build on best practice from other disciplines, such as design science, systems engineering and change management, to understand the value that patient engagement can bring to the design, application, delivery and management of digital technologies. This may also be the case in emerging models of AI involving digital and virtual twin development in healthcare.

If the medicine, science and code are cutting edge, shouldn’t the approach to patient perspective integration be too?