There are both optimistic and pessimistic scenarios for the use of artificial intelligence. Given the range of possible outcomes of its application, AI may be seen as a double-edged sword.
AI to do good things
As is widely known, AI is nowadays increasingly used in many domains of our lives to help people, e.g., to make decisions, generate predictions, and solve complex problems. There is a myriad of such applications and deployments of AI solutions (discussed, e.g., in [4, 5, 12]). Indeed, given the breadth and complexity of these applications, it would be impossible to enumerate all of them here. Nevertheless, AI technologies are commonly believed to be effective, reliable, created with the best intentions, and used to help people and do good within the framework of regulations and societal expectations.
AI designed to do bad things intentionally
Unfortunately, as with any technology, there is the possibility of misuse for bad purposes. AI technologies may be utilized by criminals to spread fake news, perform cyberattacks, commit computer crimes, launder money, steal data, etc. [2, 1]. The malicious use of AI has become so widespread that the term AI for Crime (AIC) has been introduced.
Therefore, researchers and societies, as well as law enforcement agencies, need to be prepared for these new, modern, and sometimes unprecedented AI-supported crimes and, most importantly, should be aware that such crimes have become a part of the current ecosystem, especially on the internet.
One interesting yet alarming example of AIC is the situation in which criminals or hackers attack (or fool) normally functioning, legitimate machine learning and artificial intelligence solutions, which in turn may cause them to malfunction. Such practices are termed adversarial machine learning; several classes of such attacks on AI systems have already been distinguished, such as evasion attacks, poisoning attacks, and exploratory attacks, among others. As a result, crucial AI systems, such as those used for medical image classification or those applied in intelligent transport and personal cars, could, when attacked, make mistakes, produce faults, or simply be fooled, all of which might result in considerable harm.
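To make the evasion class concrete, the following is a minimal sketch of the well-known fast gradient sign method (FGSM), assuming a PyTorch classifier; the model, the input batch, and the perturbation budget epsilon are illustrative assumptions, not details taken from the works cited here.

```python
import torch
import torch.nn as nn

def fgsm_evasion(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with the fast gradient sign method.

    The perturbation is bounded by `epsilon` in the L-infinity norm, so the
    adversarial input stays close to the original while (often) changing
    the classifier's prediction.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction that increases the loss, i.e. away from the
    # correct class, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Illustrative usage with a hypothetical pretrained classifier `model`
# and a correctly classified batch `(images, labels)`:
#   adversarial = fgsm_evasion(model, images, labels, epsilon=0.03)
#   print(model(images).argmax(1), model(adversarial).argmax(1))
```

The point of the sketch is that the perturbation is computed from the model's own gradients, which is why visually negligible changes can suffice to flip a prediction.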
So far, such attacks have not been common. However, theoretical advances and analyses foresee adversarial attacks as an emerging threat. For example, it has been shown that skillfully crafted inputs can affect artificial intelligence algorithms to sway classification results in a fashion tailored to the adversary's needs, and that successful adversarial attacks can change the results of medical image classification or healthcare systems, as well as other decision support systems.
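The poisoning class distinguished above can be demonstrated just as briefly. The sketch below, a toy experiment assuming scikit-learn and a synthetic dataset (both illustrative choices, not from the cited literature), flips the labels of a fraction of the training set, as an attacker with write access to the training data might, and compares the victim's retrained model against a clean baseline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a toy binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: flip the labels of 30% of the training points, then let the
# victim retrain as usual on the tampered data.
rng = np.random.default_rng(0)
n_poison = int(0.3 * len(y_train))
poisoned_idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poisoned_idx] = 1 - y_poisoned[poisoned_idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean test accuracy:   ", clean.score(X_test, y_test))
print("poisoned test accuracy:", poisoned.score(X_test, y_test))
```

Unlike evasion, which manipulates inputs at inference time, poisoning corrupts the training stage itself, which is one reason such attacks are regarded as an emerging threat for systems that retrain on externally sourced data.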