1 Introduction

The availability of huge amounts of data, together with extraordinary computational power, is promoting a wide use of artificial intelligence (AI) and at the same time is raising ethical and social concerns that must be addressed to maximize the benefits and prevent the risks. The importance of the social role played by AI is reflected in the definition released by the Organisation for Economic Co-operation and Development [1], which reads verbatim: “AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy”. The use of AI and, more generally, of digital technologies is driving a global socio-economic change that also affects medicine and healthcare, as shown by the growing interest and the number of scientific publications [2] from researchers and other players attracted by AI and digitalization. A simple Google Scholar search for review articles [3] retrieves more than 26,000 articles for the query “artificial intelligence healthcare” and more than 62,000 articles for the query “artificial intelligence medicine”. Health digitalization is a broad definition encompassing various technologies, ranging from smartphone apps used to detect skin cancer [4], or telemedicine, widely used during the Covid-19 pandemic as a risk-mitigation measure [5], to digital therapies [6] under development that act on behavior, such as the first game-based digital therapeutic to improve attention function in children with attention deficit hyperactivity disorder (ADHD) approved in the United States (US). Digitalization is also involved in the clinical development of medicines, where the use of machine learning (ML), a discipline encompassed by AI and based on mathematical modelling to analyze data, provides important insights in terms of prediction or classification by adapting the performance of the algorithm as the availability of data increases, with significant gains in resources. The importance of digital technologies and AI is also shown by the increasing number of ML-based tools approved by the Food and Drug Administration (FDA) [7] and by the fact that it is considered a strategic goal in the European Medicines Agency (EMA) Regulatory Science to 2025 strategic reflection paper [8].

Regulatory Agencies (RAs), together with political institutions and scientific organizations, are working on and discussing new paradigms that bring concerns along with them, but with the aim of fostering scientific research and accelerating patients' access to therapeutic opportunities while still ensuring strong regulatory requirements. For instance, the International Medical Device Regulators Forum (IMDRF) [9] has released a framework for risk categorization based on the importance of the information provided and on the seriousness of the clinical condition, while the FDA is making various substantial contributions, for instance with the promotion of good machine learning practice (GMLP) [10], covering data management and selection, training and tuning for building reliable software. The EMA and the Heads of Medicines Agencies (HMA) have published two reports [11, 12] focused on the regulatory validity of big data, defining various steps such as data standardization and evidence generation.
The need to test the ability of ML methods to identify data that may support the interpretation of healthcare data, together with real-world data (RWD), in a clinical trial (CT) setting is clearly identified by the EMA in the recent regulatory science research needs publication [13]. The applications of ML in CTs may vary widely, from patient recruitment to study design, to the definition of endpoints or the performance of a more accurate diagnosis; in any case, the assessment of these technologies impacts the activities of the RAs involved in the authorization of clinical studies [14]. From a regulatory point of view, the lack of dedicated guidelines and harmonized approaches creates uncertainty among applicants and RAs [15] and makes it difficult to frame these tools: depending on the stated intended use, a tool may be used within a trial, for instance in the selection of patients to be enrolled, simply to save resources in time-consuming processes, and so it may not meet the definition of a medical device. However, ML software that provides information used by physicians for decision-making with a therapeutic or diagnostic aim should in principle be considered a medical device and therefore, in the EU, is regulated under the Medical Device Regulation (EU) 2017/745 (MDR) [16] or the In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (IVDR) [17]. Although the regulatory assessment of a medical device may be performed by an office different from the clinical trials one, or by a different RA in some EU Member States (MSs), interaction between RAs, offices and assessors is mandatory. It is crucial to share data and information and to ensure compliance with fundamental principles such as the protection of the rights, safety, dignity and well-being of subjects, and the generation of reliable and robust data in accordance with the requirements set out in Regulation (EU) 536/2014 [18] in the EU or, for countries outside the European Economic Area (EEA), with the principles described by the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) [19], the Guideline for Good Clinical Practice [20] and the Declaration of Helsinki [21]. There is a clear need to standardize the regulatory approach to the assessment of ML tools in CTs, to support prompt regulatory acknowledgement and to speed up the incorporation of innovation into the CT assessment and authorization process. We propose a step forward in such a process by discussing the requirements for a trustworthy AI and their relationship with the key regulatory points and characteristics that need to be addressed for CTs that use ML-based technologies, focusing mainly on adaptive algorithms, considered potentially more critical in a CT setting; however, the same approach and principles may apply to the use of any ML/AI. We also analyze how issues identified during an assessment process could potentially impact both crucial regulatory requirements and the requirements for a trustworthy AI, highlighting critical areas of interaction.

2 Materials and methods

To identify the key points that need to be addressed in a CT setting involving the use of AI or ML systems, we took as a reference the first supporting guide available in the overall EU regulatory landscape that specifically focuses on the request for authorisation and assessment of clinical trials involving the use of AI/ML [22]. Although the guide may reflect the perspective of a single Competent Authority (CA), it is, to the best of our knowledge, the only document that lists and describes the regulatory information that should be submitted to the CA to request the authorization of a CT impacted by ML-based tools. The Ethics Guidelines for Trustworthy AI published by the European Commission define instead the key requirements for a trustworthy AI [23]. The standard assessment process of a CT was used to identify to what extent the key regulatory points are impacted, focusing on the primary areas of interaction with the requirements for a trustworthy AI as well as on potential challenges that may be further elaborated and extrapolated to implement dedicated policies able to support the regulatory assessment and decision-making process in a real CT setting.

2.1 ML predictive model

The output generated by an ML tool derives from a complex process that can be schematically described by the critical steps of the algorithm development process, such as training and validation. During training, the relationship between input and output parameters is encoded: starting from the input, the model generates an output by an inferential mechanism; this value is then compared with the true value, and the difference guides the update of the model. In this way, over time, the model learns to recognize the input, and the desired output can be obtained with acceptable precision. The trained model depends on the data and their quality, including their representativeness, which is crucial to allow any machine learning algorithm to learn. As shown in Fig. 1, three data sets are usually needed, the training set, the validation set and the testing set, which support the training, the fine-tuning and the testing of the model, respectively. Prognostic variables from the same specific patient population enrolled in the CT would constitute the original representative dataset that needs to be prepared and made fit for purpose to support the training and validation of the ML predictive model.
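
As a purely illustrative complement to this description, the following minimal sketch (in Python, using simulated data and the scikit-learn library; all variable names and settings are assumptions, not taken from any specific tool) shows the three-way data split and an iterative training loop in which the difference between prediction and true value guides the model update, with the validation set used for tuning and the test set held out for the final evaluation.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                      # simulated prognostic variables
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # simulated outcome

# Three data sets: training, validation (fine-tuning) and testing (cf. Fig. 1).
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = SGDClassifier(loss="log_loss", random_state=0)
for epoch in range(20):
    # The prediction is compared with the true value; the resulting error
    # (via the gradient of the loss) guides the update of the model parameters.
    model.partial_fit(X_train, y_train, classes=np.array([0, 1]))
    val_accuracy = accuracy_score(y_val, model.predict(X_val))   # used only for tuning

test_accuracy = accuracy_score(y_test, model.predict(X_test))    # held-out evaluation
print(f"validation accuracy {val_accuracy:.2f}, test accuracy {test_accuracy:.2f}")
```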

Fig. 1

The ML predictive model development process

2.2 Key points to address in a CT setting involving the use of AI or ML systems

Data (DTA)

a data management plan including the type, origin and method of acquisition of the data used, the reliability, security and standardization of the dataset(s), potential biases, and how potentially low-quality data are intended to be managed.

Algorithm (ALG)

the type of result expected from the use of the software, reported together with the version of the algorithm and a comparison with previous experience and available tools.

Output (OTP)

the definition of what the machine generates, together with its correlation to the scope, objectives and/or endpoints of the CT. If it is decision support software, an explanation of how the algorithm makes decisions is expected, unless a proper level of access is provided to the CA.

Health care and clinical setting (HCS)

availability of statements of the clinical and epidemiological characteristics of the pathological condition, taking into consideration potential subgroups of patients, together with a description of the standards of care routinely employed in clinical practice.

Intended use (INU)

the purpose and intended use of the tool according to the statement of the manufacturer on the label (CE mark), if available, or in the protocol, including the added value (benefits) for patients in the context of the specific CT. Should the tool be decision support software, this condition should be clearly considered in a specific risk assessment, and it should be demonstrated and confirmed that the tool is safe and appropriate for the intended use.

Stakeholders (STK)

who the users of the ML tool are in the CT setting (healthcare personnel, subjects in the CT, etc.) and compliance with the General Data Protection Regulation (GDPR).

Level of evidence (LOE)

the intrinsic strength of the results of clinical studies deriving from scientific research used to build the model, and of the CT study results.

Training and validation datasets (TVD)

details on the representativeness of the training and validation datasets, provided together with information on the suitability of the data, that is, their capacity to answer the clinical question, taking into consideration potential biases and data collection methods [24]; proof of the independence of the training and test sets.

Performance metrics (PFM)

data on the performance of the model, such as the area under the curve (AUC), and on its impact in the clinical setting, such as sensitivity, specificity and positive or negative predictive values, along with all statistical plans (a minimal computational sketch of such metrics is provided after this list).

External validation / reproducibility (EVR)

the possibility for the model's results to be generalized and reproduced, that is, the ability of an independent assessor to obtain the same results; any possibility or additional method available to access the datasets and the model (by the CA, the subjects in the CT, the public, etc.).

Technologies and infrastructures (TAI)

data storage and cybersecurity, usability, data governance, hardware/interface requirements, data transmission, network connection, etc.
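
As an illustrative aid to the PFM key point above, the following minimal sketch (Python with scikit-learn; the reference labels, model scores and acceptability thresholds are invented for the example only) shows how sensitivity, specificity, predictive values and the AUC could be computed from a binary classifier's outputs and compared against pre-specified thresholds.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # reference diagnosis
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1])   # model output scores
y_pred = (y_score >= 0.5).astype(int)                           # pre-specified decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                                    # true positive rate
specificity = tn / (tn + fp)                                    # true negative rate
ppv = tp / (tp + fp)                                            # positive predictive value
npv = tn / (tn + fn)                                            # negative predictive value
auc = roc_auc_score(y_true, y_score)                            # area under the ROC curve

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} AUC={auc:.2f}")

# Pre-set acceptability thresholds, to be stated in advance in the statistical plan.
assert sensitivity >= 0.70 and specificity >= 0.70 and auc >= 0.80
```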

Figure 2 illustrates the relationship among the core data, algorithm and output, and the key points impacted in a clinical trial setting.

Fig. 2

Data, algorithm, output and the relationship across key points impacted in a CT setting

2.3 Requirements for trustworthy AI in CTs

Human agency and oversight (HAO)

Respect for the autonomy and the decisional processes of human beings, which should be ensured by human oversight measures implemented through governance mechanisms.

Technical robustness and safety (TRS)

Development of systems with a preventative approach to risks, including threats of cyberattacks, minimizing unintentional and unexpected harm where possible.

Privacy and data governance (PDG)

Procedures to ensure the quality and integrity of the data used and to process data in a manner that protects privacy.

Transparency (TRN)

Description of traceability mechanisms and of the capacity of the system to be fully understood, in terms of functionality and operations, by a person without any expertise in AI.

Diversity, non-discrimination and fairness (DNF)

Development of systems free from discriminatory biases against particular groups of patients.

Environmental and societal well-being (ESW)

AI should be used to benefit all human beings, including future generations.

Accountability (ACC)

Development of systems able to ensure the responsibility of the various players throughout the AI system's life cycle.

2.4 Assessment

According to Regulation (EU) 536/2014 [18] in the EU, the Draft Assessment Report (DAR) of Part I, which includes the assessment of the scientific documentation of the CT application dossier, consists of seven parts: introduction, quality assessment, pre-clinical assessment, clinical assessment, statistical methodological assessment, regulatory assessment and conclusion [25].

CT applications are always evaluated by a multidisciplinary team of assessors, whose composition is reported in Table 1.

Table 1 Multidisciplinary team involved in the assessment of a CT

The regulatory assessor is responsible for ensuring the compliance of the CT application submitted by the sponsor with applicable laws and regulations; should legal issues be identified, legal advice is also included. The pre-clinical assessor assesses the pharmacological properties, such as pharmacodynamics and pharmacokinetics, comparative physiology and the toxicological profile of a drug at the development stage. The clinical assessor is a physician who mainly focuses on the CT protocol and related procedures, the clinical setting, the endpoints of the study, the population characteristics and the therapeutic area involved. The quality assessor may be a chemist, pharmacist or biologist, depending on the characteristics of the investigational medicinal product (IMP), and assesses the chemistry, manufacturing and control (CMC) information provided by the sponsor to support the quality profile of the tested drug. The statistical assessor focuses on the statistical analysis plan of the protocol and, when AI/ML is involved, data science competencies are required. If a final positive conclusion on all parts of the DAR is reached and an overall positive benefit–risk profile for the CT can be ensured, a final positive decision on the application can be taken. A multidisciplinary team of five assessors currently working at the clinical trials office (CTO) of a CA, who had already assessed CTs impacted by the use of AI/ML, was involved in the assessment exercise described in this study.

The assessment considered the key points to address in a CT involving the use of AI or ML systems and the requirements for a trustworthy AI. Each of the five assessors was asked to provide, capitalizing on their experience and expertise, their input in terms of potential issues that may arise during the assessment of a CT involving the use of AI/ML. They were also asked to focus only on those criticalities related to the key points and to the requirements for a trustworthy AI. The outcome of the assessment is a list of potential issues. Each potential issue identified was then linked both to the key point impacted and to the requirements for trustworthy AI in CTs, using the issue categorization form available in the Appendix (Table 4). The association with the impacted key points and the requirements for a trustworthy AI was carried out independently and autonomously by the assessors, according to their best knowledge and belief; although this may be considered a subjective evaluation, it was treated as final for the purpose of this exercise, as it is the output of the scientific evaluation of an expert in the field. Reference to the assessment list for trustworthy AI (ALTAI) methodology [26] contributed to the description of the impact and challenges posed by an ML tool applied to a CT.

3 Results

A list of 33 potential issues was identified and is reported in Table 2.

Table 2 List of potential issues identified

Potential issues were linked to the requirements for trustworthy AI and to the impacted key points. For a given potential issue identified during the assessment, one or multiple key points as well as one or multiple requirements for trustworthy AI were associated using the issue categorization form (Table 4). The absolute number of issues impacting each single area, after linking each issue both to the key point impacted and to the requirements for trustworthy AI in CTs, is reported in Table 3.
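
For illustration only, the following minimal Python sketch (the issue entries are invented placeholders, not the actual items of Table 2) shows how the completed issue categorization forms could be aggregated into counts per key point, per requirement and per key point/requirement combination, as summarized in Table 3 and Fig. 3.

```python
from collections import Counter
from itertools import product

# Each categorized issue lists the key points and requirements it impacts.
issues = [
    {"key_points": ["DTA", "TVD"], "requirements": ["TRS", "PDG"]},
    {"key_points": ["LOE"],        "requirements": ["TRS"]},
    {"key_points": ["ALG", "OTP"], "requirements": ["TRN", "ACC"]},
]

per_key_point = Counter()
per_requirement = Counter()
per_combination = Counter()
for issue in issues:
    per_key_point.update(issue["key_points"])
    per_requirement.update(issue["requirements"])
    # Every (key point, requirement) pair impacted by the issue is counted once.
    per_combination.update(product(issue["key_points"], issue["requirements"]))

print(per_combination.most_common(3))   # most impacted combination areas
```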

Table 3 Number of issues per key point and per requirements for trustworthy AI identified

The highest number of potential issues was identified in the fields of technical robustness and safety of the ML tool and in relation to the data. However, the level of evidence, data accountability and transparency are also heavily impacted. The results were further elaborated to map the interactions between the key points and the requirements for trustworthy AI, highlighting the most impacted issue combination areas, as reported in Fig. 3.

Fig. 3

Number of issues impacting combined key points and requirements for trustworthy AI, highlighting the greater issue combination areas

4 Discussion

The use of ML-based tools or AI is an opportunity as long as it is trustworthy. Fulfilling this requirement in a CT setting means being able to contribute to the safety and efficacy profile of the study, whose evaluation is the main task of RAs [27]. For this reason, the assessor at the CTO should know the tools employed in the clinical study, regardless of whether they meet the regulatory definition of a medical device. In the assessment process, the key information submitted to RAs should be evaluated taking into consideration the risks and benefits and how they may affect the main requirements for trustworthy AI. The findings resulting from the application of the assessment process, collected in the issue categorization form, have some limitations, such as the potential lack of strong evidence because of methodological bias. Our considerations are extrapolated from the assessors' experience of the assessment process, and we could not report detailed data from a specific study, provide their interpretation or explain the reasons behind a regulatory decision. It is also important to consider that the assessment of a CT is performed by a single team of professionals; although this is the regular process, it would also be useful to share and compare our insights with a larger group of assessors, facilitating the identification of inferential correlations. Sharing similar projects with other RAs would be a desirable way to reach harmonization in the assessment. Another limitation is that the issues reported in Table 2 cannot be considered an exhaustive list, since additional information may emerge during the assessment of a real protocol and since multiple types of CTs (complex trials, different therapeutic areas, different patient populations, etc.) and designs could be approached. Despite these limits, the results in Fig. 3 can provide significant suggestions on how to complement the assessment process currently followed for any CT application at the CTO and could support the output of the actual regulatory authorisation process by helping to focus on the most impacted issue combination areas, highlighting the spheres of potentially greatest risk. In addition, this valuable information could be used to further elaborate the intrinsic value of the results retrieved, as detailed in the following subsections, in light of an extrapolation exercise to cover other study designs and ML tools. This method could even be used to support the drafting of a dedicated guideline on the assessment of CTs impacted by ML or AI tools.

The areas most impacted in terms of interaction, combining the key regulatory points and the principles for a trustworthy AI after the identification of potential issues during the assessment, are those related to the technical robustness and safety of the ML tool when connected to the level of evidence generated and the data used. The transparency of the algorithm and of the data is, however, another area of great impact. Other combined areas highlight how the training and validation datasets, the algorithm and the intended use can directly impact the technical robustness and safety of the tool. Stakeholders and the level of evidence are connected to human agency and oversight, and data management impacts accountability. It is also notable that the performance metrics of the tool and the healthcare and clinical setting do not seem to be particularly critical, at least in terms of the number of potential issues. However, the independence from a clinical setting could be considered a point in favor of the potential extrapolability of the method used. Even if additional areas of interaction may have a minor impact, a qualitative analysis of each issue should always be completed, and all issues should be further explored during the assessment process.

4.1 Technical robustness and safety

The most impacted issue combination areas highlight the main role played by the data used to inform the adaptive algorithms. In the assessment process, the central role of data is directly related to technical robustness and safety. Accuracy is the capacity of the software to make a correct prediction; an inaccurate output poses unintended risks, such as the prescription of a suboptimal therapy with lower clinical efficacy and more undesirable effects. The level of evidence generated by the clinical studies used to support the development phase of the tool, which also impacts its technical robustness and safety, ultimately depends on the quality of the data used, which in turn affects the final CT study results. Risks and accuracy are estimated by performance metrics, used to quantify the predictive capability of the ML model. The choice of metrics should be strongly related to the endpoints and should be able to ensure safety, efficacy and, in turn, equity through an early and clear statement of pre-set thresholds to be met in order to satisfy the acceptability of the system and to prevent unacceptable risks, giving a substantial contribution to the achievement of technical robustness, one of the key requirements for a trustworthy AI. To mitigate these risks, it is desirable to schedule in advance a plan for monitoring the accuracy over time, with the aim of verifying that the outputs remain acceptable.

The level of accuracy expected when the output affects humans, as in the case of a clinical trial, also depends on the investigators' agreement on unambiguous definitions of clinical data, insofar as an unambiguous data item can be interpreted in only one way; this condition, however, may not always be fulfilled. This is evident and relevant for robust outcomes such as overall survival, but in some clinical settings such endpoints may not be available and should therefore be established taking into consideration potential harms [28]. Variability among investigators should be minimized, and the use of openly shared data, curated as much as possible by the most experienced physicians skilled in a specific disease, is desirable. In addition to the accuracy of the output, another potential bias could come from the clinical site selected for the trial: if the software is built capitalizing on data from a highly specialized hospital, the management of the output, that is, the recommended therapy of the hypothetical study, could be more difficult in clinical settings that are less experienced and have a lower level of specialization. Robustness is also given by the level of evidence [29] of the data used to develop the software and the predictive model: preferably, standardized and secure data from methodologically valid clinical studies should be used, providing results whose relative strength is ranked as high and thus able to prevent the risk of limited generalizability. Regarding safety, the main issue is the risk of cyber-attack, which potentially affects any AI machine; the system should be resilient to threats, and a certification showing compliance with specific security standards should be provided, with a clear statement of the timeframe over which security is expected to be ensured.
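
A possible, purely illustrative shape for the accuracy-monitoring plan mentioned above is sketched below (Python; the threshold value, function name and escalation action are assumptions and would have to be pre-specified in the actual monitoring plan).

```python
from sklearn.metrics import accuracy_score

ACCEPTABILITY_THRESHOLD = 0.80   # pre-set threshold, fixed before the trial starts


def periodic_accuracy_check(model, X_recent, y_recent):
    """Re-evaluate the deployed model on recently collected, adjudicated data
    and flag the tool for review if performance drops below the threshold."""
    accuracy = accuracy_score(y_recent, model.predict(X_recent))
    if accuracy < ACCEPTABILITY_THRESHOLD:
        # e.g. suspend use of the tool and notify the sponsor / the CA
        raise RuntimeError(f"Accuracy {accuracy:.2f} below pre-set threshold")
    return accuracy
```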

4.2 Diversity, non-discrimination and fairness

Another potential bias could derive from the representativeness of the datasets, which may contain disparities; when such datasets are used to train the algorithm, they can lead to an over- or underestimation of the results [30] and thus to biased programs with reduced predictive accuracy that generate or exacerbate discrimination in subgroups of the population, compromising the requirement of fairness. With regard to datasets, a clear distinction between the clinical study data used for training and those used as validation datasets should be guaranteed, without data shared between the sets and with an adequate representativeness, which should be estimated by statistical tests defining the minimum level of acceptability among datasets.
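
Two such checks could, for illustration, take the following minimal form (Python with SciPy; the choice of the Kolmogorov-Smirnov test, the significance level and the use of patient identifiers are assumptions made only for the example): a distributional comparison of a prognostic variable between the training and validation datasets, and a verification that no record appears in both sets.

```python
from scipy.stats import ks_2samp


def check_representativeness(train_values, valid_values, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test on one prognostic variable;
    returns True when there is no evidence of a distributional shift."""
    statistic, p_value = ks_2samp(train_values, valid_values)
    return p_value >= alpha


def check_independence(train_ids, valid_ids):
    """No patient record may appear in both the training and validation sets."""
    return len(set(train_ids) & set(valid_ids)) == 0
```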

4.3 Transparency, human agency and oversight, accountability

Regarding datasets, it is interesting to note that a meeting organized by the FDA [31] highlighted the importance of explainability, and therefore of transparency for patients, who should have information about the representativeness of the data used to train algorithms and even about their possible changes, particularly with respect to the intended use. The requirements of data transparency and trustworthy AI could help to increase the quality of the data pool, limiting some biases, such as the selection bias related to the representativity of the sample, which consequently also influences discrimination and, by extension, the fairness discussed in the previous section. There is an important link between bias and discrimination, intrinsic to processes such as data gathering, data cleaning and data processing [32]. Furthermore, transparency can contribute to improving the scientific rigor of clinical trials, which can be affected by a critical concern known as publication bias, whereby only positive results of clinical studies are published, generating data that could in turn be used to train algorithms.

Technologies based on algorithms have an intrinsic opacity that reduces the ability to explain the technical processes and to fully understand the reasons driving the generation of complex outputs, even for the scientists who created the algorithm [33]; this characteristic, referred to as the black box [34], causes a lack of predictability and raises the fear of a potential loss of human oversight and, consequently, a reduction of trust in AI. The improvement of knowledge is fundamental for human oversight over the machine, which must not undermine human autonomy: human discretion should be ensured, and users must be able to take an autonomous decision. The oversight mechanisms differ and depend on the clinical trial setting; however, a variety of methods is available, and many of them can provide insights into the decision-making process. In any case, awareness among the health professionals using the AI system that the output is the result of an algorithmic decision is desirable, so that they can make a more informed decision. This means that the investigators should always be able to reject the output of the machine and to prescribe, if considered necessary, any other pharmacological treatment considered optimal for the patient. The definition of the level of autonomy in the assessment process should be considered crucial (in particular with regard to the intended use): different considerations apply if the software is classified as not autonomous or as fully automated [35], because of the different associated risks, which define a different liability in case of medical error [36] and, in general, different legal concerns. Legal issues, which can vary significantly by jurisdiction, and cybersecurity are in any case not specifically addressed in the present manuscript, which focuses mainly on regulatory considerations.

Transparency should also be achieved by highlighting corrections or rectifications of erroneous data; all changes should be traceable [37], and a procedure should therefore be in place to keep an audit trail allowing verification and identification of what data have been changed and how, also including any statistical transformation or handling. Traceability procedures, as well as the accuracy of data, are consistent with the principles of the GDPR, which state that data can be corrected or rectified and that nothing should be hidden, and with the relevant standards for data management and governance throughout the life cycle of the CT. To achieve full transparency, clear statements regarding key points such as the output and the intended use should be openly communicated to RAs.
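
Purely as an illustration of such an audit trail, the sketch below (Python; the field names, file-based storage and function name are assumptions for the example only) records each data correction as an append-only, timestamped entry stating what was changed, by whom and why.

```python
import datetime
import json


def record_correction(audit_log_path, record_id, field, old_value, new_value, reason, user):
    """Append a traceable entry describing what was changed, by whom and why."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "record_id": record_id,
        "field": field,
        "old_value": old_value,
        "new_value": new_value,
        "reason": reason,
        "user": user,
    }
    with open(audit_log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")   # append-only: nothing is overwritten
```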

An initial and unambiguous description helps to avoid the risk of the software being used for other purposes as a result of the self-learning process; moreover, acknowledging the purpose of the tool is fundamental to evaluate the correctness of the data used to reach that aim, avoiding the risk of collecting additional data for different purposes such as marketing, consistently with GDPR principles such as purpose limitation and data minimization. The purpose is also important to justify the appropriateness of the output in terms of clinical relevance and the period of data storage, which must be no longer than necessary to reach the scope, in accordance with the storage limitation principle of the GDPR. The availability of data and information on process management and on the algorithms may be limited insofar as they are kept confidential by companies; the difficulties in allowing data sharing because of commercially confidential information and privacy protection are acknowledged. However, the replicability of results, as well as data access for independent evaluation and for inspections by CAs, can significantly improve the explainability of the software and especially its auditability, in compliance with accountability requirements. In terms of regulatory requirements and confidential commercial information, a fundamental contribution could be provided by the availability of ML-based tools with adequate transparency, equity and fairness and, more generally, with a disclosure mechanism, because this could increase the trust of physicians and patients in new technologies and consequently promote their optimal use. This issue needs to be addressed by all players, through the promotion of early interaction among academia, researchers, enterprises, patients and regulatory bodies, because the lack of shared guidelines increases the discretion of individual CAs, amplifying the differences in interpretation and facilitating heterogeneous evaluation approaches, thereby restraining the translation of research into clinical practice and, ultimately, the protection of public health.

4.4 Privacy and data governance

Trustworthiness could be improved by implementing procedures to ensure the quality of data, with source documents providing evidence and substantiating the integrity of the data collected, as well as procedures to ensure compliance with data protection regulations, consistently with the requirement of privacy and data governance. Given the high complexity of the tasks performed by ML-powered tools, their optimal use strongly depends on the technical skills of the health professionals who have to manage the software; these professionals should have adequate experience to understand the benefits and risks and, in case of damage, must be able to implement an appropriate risk minimization plan. Concerning the digital skills of healthcare professionals, a commitment by public institutions and/or scientific societies is desirable, to promote digital training courses and to increase the confidence of investigators with data-driven technologies.

4.5 Environmental and societal well-being

The last requirement is societal and environmental well-being, which in the case of CTs should consider the potential change in the physician–patient relationship, a fundamental interaction in clinical practice whose alteration could affect patients' physical and mental well-being. Any data or information aimed at supporting such a relationship could increase the trustworthiness of AI.

5 Conclusion

Digital health technologies are triggering a paradigm shift, and RAs should consequently implement new methods and approaches to complement the assessment of the safety and efficacy profile of medicinal products developed using data-driven tools, in which the data used play a central role. Data can either inform adaptive algorithms, which are able to optimize their performance over time, or be used in locked algorithms, which do not update themselves in the presence of new data and always generate the same outputs. In any case, the advantages of optimizing performance with the use of learning algorithms should be balanced against various potential biases, whose evaluation is an emerging issue on which various health institutions and international organizations are currently working. Despite the important efforts and the significant results obtained so far, when an ML-based tool is proposed in a CT setting, additional efforts are needed to achieve a global harmonization of the assessment process. Stemming from our assessment, we propose a concrete starting point, providing regulatory considerations following a bottom-up approach that moves from the point of view of assessors who have already had on their desks, and assessed, CTs impacted by ML methods, to link the key regulatory information to the general principles for a trustworthy AI. The regulatory authorisation of CTs that use ML-based tools is a challenging task for all RAs, which need to identify new methods of assessment and new paradigms. The initial contributions of this paper should be further explored by enlarging the pool of assessors and by extending the collection of feedback to a multistakeholder platform including, among others, additional RAs, Ethics Committees, sponsors of CTs and patients. Our insights show the interaction between the key regulatory points impacted and the requirements for trustworthy AI, as delineated by the potential issues identified during the assessment, highlighting the most impacted issue combination areas. There is clear evidence of the importance of the data used, with their connected level of evidence, directly impacting the technical robustness and safety of the ML tool. The crucial role of data and of algorithm transparency also highlights elements to take into consideration in the regulatory assessment process. Other areas of mutual involvement are those relating to the intended use, the algorithm, the training and validation datasets, and technical robustness and safety. Further areas of interaction may have additional intrinsic value depending on the specific CT design and setting; therefore, even if specific areas of attention are clearly indicated, none of the key regulatory points or requirements for trustworthy AI should be excluded during the assessment of a CT that foresees an ML-based tool.