A risk management approach seeks to strengthen each of the steps “identify, protect, detect, defend, recover” in relation to risks, notably of critical infrastructures such as electricity, water, health, cloud services, etc. The approach involves large-scale sensing and monitoring of complex assets; big-data-based threat detection and analysis; real-time response that interprets business, legal, and ethical rules; and managed infrastructure recovery.
In each of these, AI is considered an essential aid and is already becoming big business. Only with AI is it possible to quickly sift through billions of sensor data points so that the responsible CERT¹ can focus on a handful of noteworthy situations only. The New York Stock Exchange is reportedly attacked half a trillion times a day, with 30–40 attacks of consequence.² Providers of AI-based cyber-resilience solutions are already multi-billion-dollar companies.
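To illustrate the kind of triage at stake, the following is a minimal sketch of unsupervised anomaly detection over security sensor readings. The feature set, thresholds, and synthetic data are illustrative assumptions, not a description of any deployed CERT tooling.

```python
# Minimal sketch: unsupervised anomaly detection to reduce a large volume of
# sensor readings to a short list of events worth a CERT analyst's time.
# Feature names and the contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for sensor data: packets/s, failed logins/min, CPU load
normal = rng.normal(loc=[1000, 2, 0.4], scale=[150, 1, 0.1], size=(100_000, 3))
suspicious = rng.normal(loc=[8000, 40, 0.95], scale=[500, 5, 0.02], size=(20, 3))
readings = np.vstack([normal, suspicious])

# Flag roughly the rarest 0.05% of observations for human review
detector = IsolationForest(contamination=0.0005, random_state=0).fit(readings)
flags = detector.predict(readings)   # -1 = anomalous, 1 = normal
flagged = readings[flags == -1]

print(f"{len(readings):,} readings reduced to {len(flagged)} for analyst review")
```

The point is the reduction ratio: the analyst sees only the flagged handful and must decide how far to trust the filter, which is exactly where the ethical issues discussed below arise.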
What are the ethical challenges in cybersecurity risk management, notably when making use of AI? Extensive monitoring and pervasive risk prevention with the help of AI can be highly intrusive and coercive for people, whether employees or citizens. AI can also be so powerful that people feel their sense of being in control is taken away. They may also get a false sense of security. Deep-learning AI is, as of today, not transparent in how it reaches a decision from so many data points, yet an operator may blindly trust that decision. AI can also incite freeriding, as it is tempting to offload responsibility onto ‘the system’.
Risk management is also an approach that accepts a residual risk. Financially this may be offset by cyber insurance, but a political and sovereignty question is how many lost lives are acceptable before the internal legitimacy of the state, and thereby sovereignty, is really at risk (the 2017 WannaCry attack that affected many UK hospitals may have led to the loss of lives). This political question becomes even more sensitive when it is an AI system that autonomously invokes a cyber-defensive strategy, such as shutting down part of the electricity grid, which implies a choice about which people to put at risk and which not.
Technical experts also argue that systems are so complex that they can never be fully protected. The fear is that risk management may not detect the presence of a ‘kill switch’ in a system, which could be activated in international conflict or by accident and shut down a critical infrastructure such as telecommunications (such arguments have been put forward in the 5G/Huawei debate). Alternatively, the fear is of systematic below-the-radar leakage of intellectual property, eroding long-term national competitiveness. The role of malicious AI would be to keep such a kill switch or systematic leakage hidden.
We are therefore confronted with a plethora of ethical issues when combining AI and cybersecurity in a risk management approach to strategic autonomy. They include erosion of individual autonomy, unfair allocation of liability, the fallacy of the human in the loop, and the contestable ethics of mass surveillance and of trading off individual casualties against collective protection.
Internationally, risk management is a fruitful and even the main area for developing norms and values of state behaviour. The UN Group of Governmental Experts has developed norms and principles for stability and restraint (mutual responsiveness), open information and transparency, and compatible governance (Heinl 2019; Timmers 2019b); see Fig. 2.
Likewise, private–public initiatives such as the Paris Call for Trust and Security in Cyberspace have put forward such norms. Norms become concrete through Confidence Building Measures (CBMs).
An example of a restraint norm is ‘do not harm’, i.e. a commitment not to attack each other’s critical infrastructures. Transparency CBMs include information exchange on cyber threats and joint cyber exercises. A much more ambitious transparency CBM would be mutual software code inspection by an independent party. An example of a compatible-governance CBM could be for governments to agree to consult on, say, cyber resilience in the health sector with the WHO, global industry, and civil society, and to ‘compare notes’.
Clearly, a restraint norm like ‘do not harm’ has an ethical basis. Likewise, transparency includes commitment to the ethics of ‘do not deceive’, and compatible governance includes a commitment ‘to be fair, equitable and inclusive’.
In practice, it is hard to successfully implement cyber-CBMs. So far, they only seem to work where strategic autonomy is least at stake, such as in the Global Alliance against Child Abuse and in assistance with awareness raising, training of law enforcement, and national strategy development. In most critical infrastructures, global collaboration on information exchange and CERT-like capacity building is still a dream. Nevertheless, a strong case could be made to at least collaborate, as suggested, on resilience for the most ‘civilian’ of critical infrastructures, namely health.
CBMs also only work where there is a credible guarantee of effectiveness, which again has ethical aspects. Let us consider code inspection. Huawei’s code inspection approach in the UK (involving GCHQ) is claimed to be flawed, amongst other reasons, because it does not include one influential party, namely the Chinese government, which supposedly might compel Huawei to implant vulnerabilities in its products (Drew and Parton 2019).
There is another effectiveness challenge, specific to software: frequent updating, which far outpaces what manual inspection can keep up with. How do you know that the vendor is honest and diligent in these updates? Nevertheless, there may be some light thanks to blockchain-based ‘locking’ of software versions and AI-based software inspection, to ensure that what is deployed remains binary-equivalent to what was inspected.
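As a minimal sketch of what such version ‘locking’ could look like, the following records a cryptographic fingerprint of each inspected software version and later checks the deployed binary against it. The file names, register format, and the use of a plain append-only JSON-lines file in place of a blockchain ledger are illustrative assumptions.

```python
# Sketch: hash-based 'locking' of inspected software versions.
# A plain append-only file stands in for a blockchain ledger (assumption).
import hashlib
import json
from pathlib import Path

REGISTER = Path("version_register.jsonl")  # hypothetical append-only register


def fingerprint(artifact: Path) -> str:
    """Compute a SHA-256 digest of a software artifact (binary or source bundle)."""
    h = hashlib.sha256()
    with artifact.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def lock_version(artifact: Path, version: str) -> None:
    """Record the digest of an inspected version; entries are only ever appended."""
    entry = {"version": version, "sha256": fingerprint(artifact)}
    with REGISTER.open("a") as f:
        f.write(json.dumps(entry) + "\n")


def verify_deployment(artifact: Path, version: str) -> bool:
    """Check that a deployed artifact matches the digest locked at inspection time."""
    digest = fingerprint(artifact)
    with REGISTER.open() as f:
        for line in f:
            entry = json.loads(line)
            if entry["version"] == version:
                return entry["sha256"] == digest
    return False  # this version was never inspected and locked
```

In such a scheme, the inspecting party would record a digest after each inspected update and the operator would verify before rollout. This only locks versions; judging whether an inspected version is itself trustworthy is where AI-based inspection would come in.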
The effectiveness of code control is further challenged. Software algorithms are often proprietary and of high intellectual-property value, and are therefore not independently inspectable. Moreover, neural-network-based AI cannot yet explain how it reaches a decision and is vulnerable to data bias and data poisoning. Effective transparency would then also have to address data input, storage, and transmission. Is it ethical to accept such an effectiveness gap?
In short, a risk management approach to strategic autonomy—even if it is the most followed approach today—leaves us with a host of uncomfortable questions on ethics, not exclusive to the application of AI, but certainly exacerbated by AI.