13.1 Introduction

The impact of the COVID-19 pandemic on AI development and the possible impact of AI on solving problems created by the outbreak have been multifaceted. More than 18 months after the declaration of the COVID-19 outbreak as a pandemic, AI seems to have been changing the way disease outbreaks are tracked, mitigated and managed at different levels. Since the outbreak of this pandemic, international organisations and scientific centers have been using AI to track the epidemic in real time, accelerate the development of drugs, and articulate an effective and targeted response. Throughout the pandemic, AI proved to be a cross-cutting tool that is used in different ways and can play an essential role in recognizing, explaining, and predicting infection patterns.

In view of the data-driven character of the COVID-19 pandemic, the use of AI applications has been rather intensive in healthcare settings as countries seek to understand, find cures, develop vaccines and perform conventional data analysis that is at the heart of the COVID-19 response. Although most of the machine learning applications were deployed during the COVID-19 pandemic without going through any prior authorization process, their actual impact is likely to have been modest.

AI’s wide-reaching scientific capacity also raises a diverse array of ethical challenges and questions that have disrupted the operation of traditional ethical governance schemes. The ethical challenges caused by the application of AI in this particular public health emergency context relate mostly to AI-powered restrictive enforcement measures, which include domestic containment strategies without due process and the processing of vast amounts of health data within opaque algorithmic decision-making procedures without informed consent. These challenges have been accentuated by the lack of data needed to train algorithms that would be reflective of the needs of local populations, take local patterns into account, and ensure equity and fairness.

13.2 Main Applications

A scan of the technological horizon in the context of COVID-19 illustrates that the number of AI-based applications has increased considerably for different aspects of outbreak response: early warning, data gathering and analysis, monitoring, movement surveillance, automating aspects of diagnosis and prognosis (Malgieri 2020), developing vaccines, and tracing of digital contacts (Bullock et al. 2020). The breadth of applications ranges from piloting of drones that delivered medical supplies to remote regions and of robots to disinfect hospitals (McCall 2020) to the creation of health equipment databases that monitored the availability of assets in national health systems (Van der Schaar et al. 2020).

More concretely, the first warnings about the novel coronavirus were raised by AI systems more than a week before official information about the epidemic was released. Since then, AI systems have been deployed to help detect and diagnose the virus or slow its spread through surveillance and contact tracing (Berditchevskaia and Peach 2020), and to improve early warning tools through the development of AI-powered passenger locator forms (Kritikos 2020a) and the monitoring of body temperature through AI-based fever detection systems (Kritikos 2020b). The use of data in algorithmic processes has also helped many countries to prioritize resources in healthcare settings.

AI has also been used to understand and predict the virus’s RNA secondary structure (Tang et al. 2020) and to accelerate medical research on drugs and treatments. AI’s capacity to search large databases quickly and process vast amounts of medical data has sped up drug development and enabled a quicker and deeper analysis of the genetic sequence of SARS-CoV-2, the virus that causes COVID-19. AI can also process vast amounts of unstructured text data to predict the number of potential new cases by area, identify which populations will be most at risk, and evaluate and optimise strategies for controlling the spread of the epidemic.

This expansive spectrum of AI-supported interventions demonstrates the special role that AI systems have played at different stages of the pandemic. However, by focusing on AI applications, the chapter does not aim to underline techno-solutionism or, in other words, the idea that new and emerging technologies could solve global health problems, such as the current pandemic, on their own. Instead, by examining the way AI has been used throughout the pandemic, it argues that the use of AI in general has been rather intensive across various domains of the pandemic response.

13.3 Ethical Challenges

The ethical challenges related to the deployment of AI solutions at unprecedented speed and scale in the context of the pandemic touch upon the protection of privacy and autonomy, possible algorithmic bias and the informational asymmetries between citizens and governments and big tech companies across Europe and the globe. Striking a balance between the need to protect public health and promote beneficence and at the same time safeguard individual privacy and autonomy has been an extremely difficult and complicated policy exercise in the context of the pandemic.

Matters of security and public safety can end up taking precedence over individual rights in the context of severe health crises and policy-makers need to constantly consider trade-offs between privacy and public health given the dynamic character of the disease. The terms and conditions under which AI applications such as contact tracing, thermal imaging and passenger locator forms need to be deployed illustrate the tensions and the complexity when attempting to protect multiple public interests under extreme time pressure.

At the same time, the responsible use of data has become a major ethical challenge in the COVID-19 pandemic. Use of AI, whether for medical purposes or for epidemiological modelling, can be extremely sensitive, with implications for the personal privacy and security of individuals and groups. The use of AI and data-intensive applications in an emergency context raises a vast range of ethical questions about the necessity, evidence, and proportionality of the respective technological intervention as it may lead to the temporary suspension of fundamental rights and/or a ‘new normal’ of eroded rights and liberties. In addition, there are concerns that data used to fight COVID-19 can be subverted for other non-medical purposes.

The massive use of AI tracking and surveillance tools in the context of this outbreak (Kim 2020), combined with the current fragmentation in the ethical governance of AI, could pave the way for wider and more permanent use of these surveillance technologies, leading to a situation known as ‘mission creep’ whereby public authorities may continue collecting sensitive information well beyond the emergency.

Other ethical considerations that relate to the use of AI in the context of the current pandemic concern the needs of vulnerable populations and issues of fairness and inclusion in the training of AI-based systems. This ethical hazard is, in fact, made worse by the disproportionately harmful effects of the COVID-19 pandemic on disadvantaged and vulnerable communities and the challenges in translating AI models to reflect local healthcare environments (Luengo-Oroz et al. 2020).

Given that the efficacy of AI systems heavily depends on the reliability and relevance of the data available, collecting and processing sufficient data for accurate monitoring, decision-making, and diagnosis has been extremely challenging in the context of the current pandemic. For that purpose, sharing data between governments and international organisations has been a crucial factor for the creation of credible and trustworthy databases.

These challenges have been compounded by the relatively immature state of most AI applications, technical limitations, the lack of supporting ICT infrastructure, and issues of interoperability, security and standardization. Current processes for ethics and risk assessment around uses of AI are still relatively immature, and the urgency of a crisis of this magnitude highlights their limitations. Prevailing gaps in digital maturity across hospitals, regions, and countries may also act as roadblocks to accessing data of sufficient quality and quantity to pick up generalizable and transportable signals from target populations. Moreover, the use of AI in sectors such as radiological imaging is relatively new, and codes of ethics and practice for the use of AI in imaging are only now being contemplated by the medical community.

In addition, AI models may have differential impacts and disproportionate effects across subpopulations and on society in general, with harmful consequences that are difficult to predict in advance. A further concern is that the lack of transparency in AI systems used to aid decision-making around COVID-19 may make it nearly impossible for the decisions of governments and public officials to be subject to public scrutiny and lead to blurred accountability schemes.

13.4 Policy and Legal Responses

The challenges associated with the deployment of AI solutions have exerted undue pressure on traditional legal structures and ethical governance frameworks alike, given the need to strike a balance between a cautious approach and the imperative to deploy technological solutions at scale. Circumstances of a public health crisis such as the COVID-19 pandemic may place processes of deliberatively balancing and prioritizing conflicting or competing values under extreme pressure to yield decisions that generate difficult trade-offs between equally inviolable principles.

Indeed, the international health emergency related to the pandemic had a huge impact on the process of review and authorization of research. The crisis of the COVID-19 pandemic has elicited a number of unprecedented emergency regulatory responses that aim at achieving a deliberate and well-informed balancing of interests. The goals of these emergency procedures are to reduce practical obstacles, save efforts, resources, and time, and ensure a rigorous ethical assessment of COVID-19-related research protocols.

As a response to these ongoing challenges, many organisations have reorganized their procedures and adjusted them to the special COVID-19 circumstances. The main aim of these reforms has been to create a fast-track legal environment for the development and testing of effective and safe means (drugs, vaccines, tests) for the treatment, prevention and diagnosis of SARS-CoV-2 infections. Reforming these procedures was essential in order to ensure that all major public health and privacy ethics principles remain well-protected.

Since the beginning of the COVID-19 pandemic, the World Health Organization (WHO) and the Pan American Health Organization (PAHO) have emphasized the moral duty to conduct ethical research in response to the pandemic and have developed operational strategies and guidance on key ethical issues for ethics review and oversight taking into account the lessons learned from past outbreaks.

The urgency related to the pandemic forced many jurisdictions, as well as the EU, to introduce expedited review procedures for AI-related research protocols. This has also led many data protection authorities to adjust their notification and evaluation procedures to the increased regulatory needs associated with the deployment of novel technological applications. Most European countries have put in place accelerated procedures for the evaluation and authorization of clinical trials related to the management of the pandemic, covering also the ethics review process.

At the EU level, the Commission, in its various COVID-19-related Recommendations and Communications, has emphasized the need for technologies deployed to fight the pandemic to respect fundamental rights, notably privacy and data protection, and to prevent surveillance and stigmatization. Throughout the pandemic, the European Data Protection Board and various national data protection authorities and ethics committees have underlined the need for all deployed technological solutions to respect human rights and the ethical acquis.

In general, despite the involvement of several ethics committees during the deployment of several AI-powered solutions, their normative footprint remained rather weak and their involvement inconsistent and fragmentary. Ethics committee members’ lack of familiarity with the technical features and the potential of these technological applications, the limited time frame within which they had to provide an opinion and their overshadowing by data protection authorities and vaccine advisory committees, appear as the main factors that limited their influence and lasting presence during the various stages of technological response to the pandemic.

13.5 Policy Suggestions

As the COVID-19 pandemic illustrates, times of crisis necessitate rapid deployment of new technologies in order to save lives. However, this urgency both makes it more likely that ethical issues and risks will arise and makes them more challenging to address. Rather than neglecting ethics, we must find ways to address ethical tensions. If ethical practices can be implemented with urgency (Tzachor et al. 2020), the current crisis could provide an opportunity to drive greater application of AI for societal benefit, and to ensure AI is used responsibly without undermining protection of fundamental values and human rights. For that reason, this chapter puts forward a series of policy options that could ensure that AI can be safely and beneficially used in the COVID-19 response and beyond.

First of all, AI applications should be deployed only on the basis of clear, transparent criteria, with sunset clauses for emergency legislation. Ethical safeguards thus need to be embedded in all policy decisions that authorize the use of AI for handling various aspects of the pandemic. The incorporation of such clauses could ensure that the deployment of AI systems remains conditional on continued compliance with ethical norms and could lead to the framing of a reflexive framework of emergency technology ethics advice. Beyond sunset clauses, other safeguards are needed, including purpose limitation, transparency, explainability of data processing operations and constant monitoring, especially for automated tracing tools. Drafting these safeguards requires the meaningful involvement of data protection authorities, local and national ethics committees, and AI designers, as well as their activation in accordance with commonly agreed guidelines. A regular review of the continued need for the processing of personal data for the purposes of combating the COVID-19 crisis should be performed, and appropriate sunset clauses should ensure that the processing does not extend beyond what is strictly necessary for those purposes.

The general AI principles adopted in Europe and elsewhere do not seem to offer sufficient guidance in emergency situations. Where certain values are in conflict, there is an urgent need to support the development of standard operating procedures for emergency response ethical review.

Moreover, AI systems should be designed by taking into account the diversity of socio-economic and healthcare settings. Their development and deployment should be accompanied by community engagement, awareness-raising and digital literacy capacity-building actions, given that the automation of several diagnostic and healthcare tasks could challenge the decision-making and autonomy of healthcare providers and patients. At the same time, policy is needed to ensure that AI systems used in the context of the pandemic are transparent, explainable, robust, secure and safe; actors involved in their development and use should also remain accountable, especially when it comes to temporary measures of population control and monitoring. Within this frame, the principle of explicability becomes particularly important for AI-based decisions about treatment and the allocation of resources, which require improvements in the accuracy and efficacy of AI-based tools for medical detection and treatment. Strict interpretation of public health legal exemptions could be crucial to ensuring the responsible use of this disruptive technology during public health emergencies.

The capacity of ethics review mechanisms to react promptly and thoroughly to pressing and demanding challenges needs to be strengthened, not only in institutional and legal terms but also in terms of their positioning within the entire ecosystem of policy advice. Governments, providers, and designers must work together to address ethics and human rights concerns at every stage of an AI technology’s design, development, and deployment. In the absence of established legal frameworks, policy, or practice standards that specifically guide research ethics review and oversight, it is imperative to acknowledge the need to address gaps in the ethical governance of health emergencies. This may serve as a basis for the development of a treaty framework that will help ethics boards anticipate and address issues uniquely associated with rapid advances in technological capabilities and novel applications. Acknowledging the need to address gaps in preventing, preparing for, and responding to health emergencies from an ethical perspective would safeguard timely and equitable access to vaccines, therapeutics and diagnostics and ensure the ethically sound deployment of digital technologies.

What the COVID-19 crisis has made clear to many in the field of health data science and governance is the need for coordinated, dedicated data infrastructures and ecosystems for tackling dynamic societal and environmental threats, as well as improved governance of rapid data sharing. Towards this direction, common data reporting and interoperability rules and standards are needed to ensure trusted sharing of useful data in times of crisis. There is also a policy need to encourage multi-disciplinary, multi-stakeholder cooperation and data exchange, both nationally and internationally, among the AI community, the medical community, developers and policy-makers: to formulate the problems, identify relevant data and open datasets, share tools and train models, and facilitate the responsible sharing of medical, molecular, and scientific datasets and models on collaborative platforms, helping AI researchers build effective tools for the medical community.

Last, but not least, derogations of human rights, albeit in the interests of the public good, must be temporary; hence, exceptional measures taken by governments for the use of AI must be necessary and proportionate. Restrictions of rights and freedoms that are imposed in an emergency situation—including those implemented through technological surveillance, from mobile devices through to drones and surveillance cameras—need to be removed, and data need to be destroyed, as soon as the emergency is over or the infringements are no longer proportionate. Preventing AI use from contributing to the establishment of new forms of automated social control, which could persist long after the epidemic subsides, must be addressed in ongoing legislative initiatives on AI at EU level, as some AI systems raise concerns about purpose specification and the danger that personal data could be re-used in ways that infringe privacy and other individual rights.

13.6 Concluding Remarks

In the context of the current pandemic, numerous data-collection and location-tracking technological applications have been launched on the basis of emergency laws that involve the temporary suspension of fundamental rights and authorisation of medical devices and vaccines via fast-tracked procedures (Kritikos 2020c). Based on the analysis above, several conclusions can be drawn that relate both to the technological readiness and the preparedness of the ethical governance structures to meet the ever-increasing challenges that the rapid deployment of a wide range of AI applications has brought to society and to the policy sphere.

Firstly, unlike previous public health crises, this one indicates that technology in itself, and in particular AI, is not a technological silver bullet that could contain COVID-19 (Heaven 2020). Rather, it should serve as a means not only to digitize the main tenets of the public health ecosystem and facilitate the use of AI-based health management tools, but also to reinforce the importance of the human factor in the management of public health crises.

Secondly, although the uptake of AI has been limited mostly to certain aspects of the medical and healthcare domains, its use during the pandemic has illustrated its vast potential to play an increasingly critical role in emergency responses. Thirdly, the deployment of AI-powered applications triggers questions about the effects on civil liberties as well as concerns about state authorities maintaining heightened levels of surveillance, even after the pandemic ends (Kritikos 2020c).

Fourthly, the deployment on a global scale of AI-powered systems that had not been previously tested illustrates the limits of the meaningful involvement of ethics governance structures in policy discussions and their rather weak policy impact in the context of this public health emergency. It also reveals a general lack of ethical preparedness to provide policy-relevant guidance and to introduce ethical safeguards that go beyond traditional data protection legal requirements.

Moreover, the current pandemic represents an excellent opportunity for policy-makers and regulators to develop a new international pandemics technology ethics framework that could respond to the need for timely ethics advice. The continuous adoption of AI ethics guidelines and frameworks worldwide can in fact pave the way for the shaping of a common, robust procedural framework for ethics advice under emergency conditions.

The uptake of AI applications will not only depend on their technical capacities but also on how inclusive, privacy-friendly and human-centered their algorithmic procedures will end up being. In fact, building public trust around AI may be particularly challenging in crisis times, where review timelines need to be significantly reduced, without compromising on ethical and legal principles and guidelines. The deployment of AI for various applications requires a paradigm shift in the way ethical principles are taken into account and ethics review procedures are followed. If methodologies to perform ethics assessments of technological applications under time pressure are developed swiftly, the current crisis could provide an opportunity to deploy AI for societal benefit, and to build public trust in such applications.

Therefore, the management of the risks associated with infectious diseases is likely to remain an ongoing challenge for local, national and global efforts to shape a robust and transparent ethical governance framework. Towards this direction, it is essential that processes are put in place in advance to better understand the potential trade-offs involved in deploying an AI system and acceptable ways to resolve them. In other words, emergency ethics preparedness needs to be seen not only as part of the policy response to the current pandemic but also as part of the ongoing discussions to build an ethics-by-design framework for the domain of AI (Kritikos 2020d).