The Centre for Medical Ethics and Law of the University of Hong Kong (HKU) hosted an international conference on ‘Governance of Medical AI’ from 9 to 11 May 2023. The event was held in collaboration with HKU’s Clinical Trials Centre, the Medical Ethics and Humanities Unit of the LKS Faculty of Medicine of HKU, and the Hong Kong Academy of Medicine. For the purposes of the conference, medical artificial intelligence (AI) was broadly understood as an algorithm, model or software developed with the intent of use in healthcare or health-related research. Technology governance, as defined by the Organisation for Economic Co-operation and Development (OECD 2023), framed the discussions, so that ‘governance’ referred not only to regulation but also to the multitude of institutional and normative requirements, standards and mechanisms that steer technological development.

Deliberations and discussions at the conference acknowledged the ‘Collingridge dilemma’ along its two classical dimensions (Collingridge 1980). First, the impact of AI applications in health-related research and healthcare is difficult to assess (i.e. the information and temporal dimension). Second, these applications may be difficult to control or modify when deployed (i.e. the power dimension). Governance requires distinct capabilities in order to respond effectively to this dilemma (for instance, on data governance capabilities, see Janssen et al. (2020)). As David Guston (2014) explains, governance needs to anticipate harms and problems, as well as provide the opportunity and means for re-examining the public purpose and social value of the technology. In this vein, the conference considered existing and emerging capabilities in regulatory regimes on medical devices and personal data across leading medical AI jurisdictions. It also highlighted the need for a unified approach to AI governance that enables collaboration across different knowledge disciplines and participation of diverse stakeholders. Here, we provide a summary of the key insights from presenters on these themes and welcome readers to view the recordings at

Governance of AI as a Medical Device

Barry Solaiman (in an individual presentation and in a joint presentation with I. Glenn Cohen) observed that between 2015 and 2020, more than a hundred guidelines were adopted in over 60 countries. Many jurisdictions have both ‘soft’ and ‘hard’ law instruments to address challenges that appear to be mostly shared. Additionally, there is a degree of consensus among international organisations (like the World Health Organization, OECD and UNESCO) on ethical principles that apply, as well as the need to prioritise the promotion of human rights and human-centred AI. While these developments are generally welcome, they lack specificity and contextual sensitivity and are hence difficult to implement in governance.

In mainland China, Duan Weiwen explained, an AI device is difficult to govern because it is a ‘boundary object’ (Caccamo et al. 2023) to which different rules or requirements could apply. Various government agencies have issued rules, policies and guidance documents, but there is no overarching framework to ensure common understanding or consistent application of the requirements. On the ethical governance of research, Ji Ping’s empirical findings highlighted the need to build the capacity of ethics committees and institutional review boards to anticipate harms and other problems in the deployment of medical AI. Guidance documents were considered too vague and general to be actionable through the established system of institution-based ethics review. She also observed that although AI companies have established their own ethics committees to guide their work, their understanding of ethical requirements remains rudimentary.

Similar challenges have been observed in Europe and North America. Concerning the European Union and the UK, Colin Mitchell presented policy documents published by the PHG Foundation (2019; 2020) showing that, while AI applications have mostly been regulated as medium- to high-risk medical devices or in vitro diagnostic devices, it remains unclear how certain real-world applications and functionalities (such as the adaptive or deep learning capability of medical AI) can be accommodated within existing regulatory regimes. With regard to Canada, Colleen Flood shared this concern, observing that the provisions of the proposed legislation on Artificial Intelligence and Data (Government of Canada 2023) were narrowly focused on ‘high impact’ AI systems and lacked the specificity needed for governance. As for the United States, I. Glenn Cohen argued that the Food and Drug Administration (FDA) should widen its scope to evaluate medical AI as part of a wider system. This shift in perspective from device to system is crucial to ensuring the safety and efficacy of medical AI, but it would require the FDA’s regulatory remit to be radically altered, since the agency was constituted to regulate products rather than systemic concerns.

Data Governance

Because medical AI is a data construct, data governance regimes like the EU’s General Data Protection Regulation (GDPR) apply to it. However, a study published by the European Parliament (2020) found that the GDPR, while generally applicable to the regulation of AI, does not give data controllers enough direction or guidance. To be effective, the regulatory scope of the GDPR would need to be expanded and its provisions on medical AI clearly set out. Conference discussants implicitly shared this assessment, as their focus was on whether the GDPR and similar data protection regimes facilitate AI innovation. The answer was mixed at best.

Alessandro Blasimme observed that requirements like explainability in the GDPR have steered AI development, but he cautioned against AI exceptionalism: innovation in medical AI will depend on conditions that are not specific to the AI system, such as the reliability of the training data, the trustworthiness of data sources, accountability mechanisms and transparency. In this connection, several presenters cautioned that personal data protection requirements could restrict research and innovation. Deborah Mascalzoni gave an insightful account of how a restrictive reading of the GDPR’s informed consent requirement led to the failure of a biobank in Italy: the need to seek repeated consent from research participants for the use of their biological materials and related data did not promote trustworthiness but instead created distrust that stifled socially valuable research. In mainland China, Haihong Zhang explained the challenges of working at the intersection of the ethical governance of research and the newly introduced legal regime on personal information protection: the legal requirement of re-consent impeded the secondary use of data in research, while the transfer of personal information outside of China was restricted by additional consent and certification requirements. On the latter point, Li Du added that administrative uncertainties further impeded cross-border data sharing. For the EU, Colin Mitchell observed that the decision of the European Court of Justice (2020) in Schrems II created uncertainty about when and how the GDPR applies to international transfers of personal data. While standard contractual clauses could be used, it was unclear how other governance arrangements, such as codes of conduct and health research-specific international transfer agreements, could be applied.

An interim conclusion from the discussion on this theme was that while personal data protection regimes like the GDPR or the Personal Information Protection Law in mainland China can help to promote trustworthiness within a data environment, they do not promote research and development and could even limit such endeavours. This was implicit in Johanna Blom’s presentation on an initiative by the EU-funded FACILITATE project to construct a patient-centric and trustworthy data ecosystem. A similar message emerged from Chih-hsing Ho’s insightful discussion of the decision of the Constitutional Court in Taiwan (2022), which ruled the government’s provision of personal data from the public health insurance database to third parties for secondary use to be partially unconstitutional. She explained that legislative amendments were required to establish an accountable data governance mechanism. This may explain why the European Parliament (2024) considered it necessary to introduce a new law to create a European Health Data Space. As Ciara Staunton explained, the European Health Data Space imposes an obligation on data holders to share electronic health data with data users for secondary purposes, including the training, testing and evaluation of AI algorithms. However, she cautioned that the road to harmonisation is incomplete, as EU member states still have their own data protection laws, which might not be congruent with the GDPR. Other concerns include the legal obligation to share data in contravention of informed consent, the extension of derogations from the right to information, uses of data for AI that might not be socially accepted, and the potential replacement of ethically approved governance arrangements.

Stakeholder Governance

All conference presentations recognised that a wider range of stakeholders should be involved in the governance of medical AI and its deployment in a data ecosystem. In his research on how consumer health applications use health data, Ma’n Zawati introduced three kinds of smartphone-crowdsourced medical data. From an analysis of the consent policies and practices of 18 genomic applications, he and his research team identified the limitations of conventional governance regimes, which involve users only to a limited extent. Returning to the governance of medical AI, Colin Mitchell similarly observed the importance of involving stakeholders such as healthcare providers and patients at the pre-implementation and post-deployment stages of medical AI development, as well as the need to sustain interdisciplinary discussions.

In the context of Hong Kong, Neeraj Mahboobani explained that bringing AI into a hospital setting is not simple. With reference to DeepCT, a set of AI algorithms developed to analyse non-contrast computed tomography (CT) images of the brain in acute settings and for suspected intracranial haemorrhage, he explained that adopting the technology required endorsement from all stakeholders: the chiefs of the hospital’s departments of radiology and of accident and emergency, the senior systems manager of the hospital’s information technology (IT) department, the hospital’s chief executive, and the IT and quality and safety directors of the Hospital Authority (the government agency overseeing the hospital). Also instructive is his proposed eight-part framework for the adoption of AI in a clinical setting, which includes the active involvement of stakeholders like patients and their carers. Beyond the healthcare institution, Alice So and Derrick Au introduced two important public initiatives that have developed governance mechanisms to involve the wider public. As part of the Hong Kong Smart City Blueprint 2.0, Alice So noted that Cyberport has almost 2,000 community members, both onsite and offsite. With reference to the first large-scale genome sequencing project in Hong Kong, Derrick Au explained how stakeholder engagement was developed based on the principle of the common good.


This conference provided an opportunity to examine the distinct capabilities required in the governance of medical AI. It considered existing and emerging capabilities in regulatory regimes on medical devices and personal data across leading medical AI jurisdictions and arrived at a number of observations, among them the need for a unified approach to AI governance that enables collaboration across different knowledge disciplines and the participation of diverse stakeholders. We hope that these observations will facilitate continuing conversations, engagements and collaborations.