Abstract
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a net worth of US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.
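The abstract's point that AI defences can themselves be attacked can be made concrete with a minimal, hypothetical sketch of a gradient-sign (FGSM-style) evasion attack. The classifier, weights, and inputs below are invented for illustration and do not come from the chapter: a toy logistic model stands in for a learned intrusion detector, and a small signed perturbation of the input features flips its decision.

```python
import math

# Toy linear "detector": score = w . x + b; class 1 ("benign") if sigmoid > 0.5.
# Weights and inputs are hypothetical, chosen only to illustrate the attack.
w = [2.0, -3.0, 1.0]
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

# A clean input the detector classifies confidently as class 1.
x = [1.0, 0.1, 0.5]
p_clean = predict(x)

# FGSM-style step: for a linear model, d(score)/dx_i = w_i, so stepping each
# feature by -eps * sign(w_i) maximally lowers the score for a given budget.
eps = 0.9
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]
p_adv = predict(x_adv)

print(f"clean prediction: {p_clean:.3f}")  # confidently class 1
print(f"adversarial prediction: {p_adv:.3f}")  # flipped below 0.5
```

With these numbers the clean score is 2.7 (probability ≈ 0.94) and the perturbed score is -2.7 (probability ≈ 0.06), so a bounded feature perturbation reverses the detector's verdict, which is the vulnerability the abstract warns about.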
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Taddeo, M., McCutcheon, T., Floridi, L. (2021). Trusting Artificial Intelligence in Cybersecurity Is a Double-Edged Sword. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol 144. Springer, Cham. https://doi.org/10.1007/978-3-030-81907-1_15
Print ISBN: 978-3-030-81906-4
Online ISBN: 978-3-030-81907-1
eBook Packages: Religion and Philosophy; Philosophy and Religion (R0)