Projecting AI-Crime: A Review of Plausible Threats

  • Thomas King
Chapter
Part of the Digital Ethics Lab Yearbook book series (DELY)

Abstract

Artificial Intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, which I term AI-Crime (AIC). We already know that AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. Yet because AIC is still a relatively young and inherently interdisciplinary area of research, spanning fields from socio-legal studies to the formal sciences, there is little certainty about what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing law enforcement and policy-makers with a synthesis of the current problems.

Keywords

AI and Law, AI-Crime, Artificial Intelligence, Dual-Use, Ethics

Acknowledgements

This article is an abridged version of a longer article. I would like to thank the following co-authors of the longer article for their input and comments on this work: Nikita Aggarwal, Professor Luciano Floridi, and Dr. Mariarosaria Taddeo.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Oxford Internet Institute, Digital Ethics Lab, University of Oxford, Oxford, UK
