Projecting AI-Crime: A Review of Plausible Threats
Artificial Intelligence (AI) research and regulation seek to balance the benefits of innovation against potential harms and disruption. One unintended consequence of the recent surge in AI research, however, is the potential re-orientation of AI technologies to facilitate criminal acts, which I term AI-Crime (AIC). We already know that AIC is theoretically feasible, thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area of research, spanning socio-legal studies to formal science, there is little certainty about what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing law enforcement agencies and policy-makers with a synthesis of the current problems.
Keywords: AI and Law · AI-Crime · Artificial Intelligence · Dual-Use · Ethics
This article is a shorter version of a longer one. I would like to thank the following coauthors of the longer article for their input and comments on this work: Nikita Aggarwal, Professor Luciano Floridi, and Dr. Mariarosaria Taddeo.
References

- Alaieri, F., and A. Vellino. 2016. Ethical decision making in robots: Autonomy, trust and responsibility. Lecture Notes in Computer Science 9979 (LNAI): 159–168. https://doi.org/10.1007/978-3-319-47437-3_16.
- Alvisi, L., A. Clement, A. Epasto, S. Lattanzi, and A. Panconesi. 2013. SoK: The evolution of Sybil defense via social networks. Proceedings – IEEE Symposium on Security and Privacy (2): 382–396. https://doi.org/10.1109/SP.2013.33.
- Archbold, J. 2018. Criminal pleading, evidence and practice. London: Sweet & Maxwell.
- Ashworth, A. 2010. Should strict criminal liability be removed from all imprisonable offences? Irish Jurist 45: 1–21.
- Bendel, O. 2017. The synthetization of human voices. AI & SOCIETY, Online First.
- Brundage, M., S. Avin, J. Clark, H. Toner, P. Eckersley, B. Garfinkel, et al. 2018. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. Oxford: Future of Humanity Institute.
- Cath, C., S. Wachter, B. Mittelstadt, M. Taddeo, and L. Floridi. 2017. Artificial intelligence and the “good society”: The US, EU, and UK approach. Science and Engineering Ethics 24 (604): 1–23.
- Chantler, A., and R. Broadhurst. 2006. Social engineering and crime prevention in cyberspace. Queensland University of Technology 22: 1–22.
- Chen, Y., P. Chen, R. Song, and L. Korba. 2004. Online gaming crime and security issues – Cases and countermeasures from Taiwan. In Proceedings of the 2nd annual conference on privacy, security and trust.
- Cognitive Security – Watson for Cyber Security | IBM. 2018. Retrieved February 27, 2018, from https://www.ibm.com/security/cognitive.
- Delamaire, L., H. Abdou, and J. Pointon. 2009. Credit card fraud and detection techniques: A review. Banks and Bank Systems 4 (2).
- Europol. 2017. Serious and organised crime threat assessment. European Union. Retrieved from https://www.europol.europa.eu/socta/2017/.
- Ezrachi, A., and M.E. Stucke. 2016. Two artificial neural networks meet in an online hub and change the future (of competition, market dynamics and society). Oxford Legal Studies Research Paper No. 24/2017; University of Tennessee Legal Studies Research Paper No. 323.
- Ferrara, E. 2015. Manipulation and abuse on social media. https://doi.org/10.1145/2749279.2749283.
- Floridi, L., and J.W. Sanders. 2004. On the morality of artificial agents. Minds and Machines 14 (3): 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d.
- Gips, J. 1995. Towards the ethical robot. In Android epistemology, 243–252. Cambridge, MA: MIT Press.
- Graeff, E.C. 2014. What we should do before the social bots take over: Online privacy protection and the political economy of our near future. MIT Media Arts and Sciences. Presented at Media in Transition 8: Public Media, Private Media, MIT, May 5, 2013.
- Kerr, I.R. 2004. Bots, babes and the Californication of commerce. University of Ottawa Law & Technology Journal 1: 287–324.
- Lin, T.C.W., J. Fanto, J. Fisch, J. Heminway, D. Hollis, K. Johnson, et al. 2017. The new market manipulation. Emory Law Journal 66: 1253.
- Marrero, T. 2016. Record Pacific cocaine haul brings hundreds of cases to Tampa court. Tampa Bay Times, September 10.
- Martínez-Miranda, E., P. McBurney, and M.J. Howard. 2016. Learning unfair trading: A market manipulation analysis from the reinforcement learning perspective. In Proceedings of the 2016 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS 2016), 103–109. https://doi.org/10.1109/EAIS.2016.7502499.
- McKelvey, F., and E. Dubois. 2017. Computational propaganda in Canada: The use of political bots. Computational Propaganda Research Project (6): 32.
- Neff, G., and P. Nagy. 2016. Talking to bots: Symbiotic agency and the case of Tay. International Journal of Communication 10: 4915–4931.
- Office for National Statistics. 2016. Crime in England and Wales, year ending June 2016 – Appendix tables (June 2017): 1–60.
- Ratkiewicz, J., M. Conover, M. Meiss, B. Gonçalves, S. Patil, A. Flammini, and F. Menczer. 2011. Truthy: Mapping the spread of astroturf in microblog streams. In Proceedings of the 20th International Conference Companion on World Wide Web (WWW ’11), 249–252. https://doi.org/10.1145/1963192.1963301.
- Sætenes, G.M. 2017. Manipulation and deception with social bots: Strategies and indicators for minimizing impact.
- Seymour, J., and P. Tully. 2016. Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter. Presented at Black Hat USA.
- Spatt, C. 2014. Security market manipulation. Annual Review of Financial Economics 6 (1): 405–418. https://doi.org/10.1146/annurev-financial-110613-034232.
- Twitter. 2018. Impersonation policy. Retrieved January 29, 2018, from https://help.twitter.com/en/rules-and-policies/twitter-impersonation-policy.
- Wang, Y., and M. Kosinski. 2017. Deep neural networks can detect sexual orientation from faces. Journal of Personality and Social Psychology 114: 1–47.
- Wang, G., M. Mohanlal, C. Wilson, X. Wang, M. Metzger, H. Zheng, and B.Y. Zhao. 2012. Social Turing tests: Crowdsourcing Sybil detection. Retrieved from http://arxiv.org/abs/1205.3856.
- Weizenbaum, J. 1976. Computer power and human reason: From judgment to calculation. Oxford: W.H. Freeman & Co.
- Williams, R. 2017. Lords select committee, artificial intelligence committee, written evidence (AIC0206), October 11. Retrieved from http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/70496.html#_ftn13.