Radiomics, by moving medical imaging towards a data-driven approach, is therefore fertile ground for adversarial examples, whatever approach is chosen (traditional or deep radiomics). Although traditional radiomics may seem less susceptible to this kind of adversarial example, it is not difficult to conceive of small modifications to the image that suitably change the values of some features, fooling the subsequent data analysis.
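As a minimal, purely illustrative sketch of this point (plain NumPy, no radiomics library; the toy ROI, bin width, and perturbation rule are all hypothetical choices), consider a hand-crafted first-order feature, discretized entropy, and a perturbation that nudges a few pixels across the discretization boundaries by less than one intensity unit:

```python
import numpy as np

def discretized_entropy(roi, bin_width=25.0):
    # Fixed-bin-width discretization followed by first-order (Shannon) entropy.
    bins = np.arange(0.0, roi.max() + bin_width, bin_width)
    counts, _ = np.histogram(roi, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
roi = rng.normal(loc=100.0, scale=30.0, size=(64, 64)).clip(0, 255)  # toy "lesion" ROI

bin_width = 25.0
# "Adversarial" nudge: move only the pixels that sit just below a bin edge across it.
# The per-pixel change (about one intensity unit) is far below typical acquisition noise.
dist_to_edge = bin_width - (roi % bin_width)
perturbed = roi + np.where(dist_to_edge < 1.0, dist_to_edge + 1e-3, 0.0)

print("max per-pixel change :", np.abs(perturbed - roi).max())
print("entropy (original)   :", discretized_entropy(roi, bin_width))
print("entropy (perturbed)  :", discretized_entropy(perturbed, bin_width))
```

The perturbation is visually negligible, yet the discretized feature value shifts, and any downstream analysis consuming that feature shifts with it.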
By analogy with adversarial learning, we introduce the term adversarial radiomics to refer to “adversarial examples in radiomics”.
It is very important to stress the difference between adversarial radiomics and the intrinsic reproducibility problems of radiomics. While these phenomena can of course share common sources and features, some basic properties distinguish them.
While the principal sources of the lack of reproducibility in radiomics can be traced back to different clinical image acquisition settings, pre- and post-processing image transformations, sparsity of data, and analysis algorithms, some of these concerns can in principle be addressed, e.g., by improving data analysis techniques and/or harmonizing the datasets in terms of protocols and processing. Adversarial radiomics, instead, has its roots in the problem of adversarial examples in machine learning, which is more closely tied to the intrinsic nature of the algorithms. Moreover, while standard errors in radiomics can be thought of as due to chance, adversarial attacks can be guided to produce precise wrong results. The idea behind adversarial examples is to find the smallest perturbation (in the extreme case, just one pixel) able to fool the model's classification, yielding a wrong result that may be unpredictable or predictable. The latter is the outcome of what is called a targeted attack, i.e., a deliberate manipulation of clinical images causing a misdiagnosis, motivated by, e.g., insurance fraud, cyber terrorism, sabotaging research, or, in the extreme, stopping a political candidate or even committing murder.
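A minimal sketch of the targeted-attack idea described above, using a fast-gradient-sign (FGSM-style) step in PyTorch; the model, preprocessing, target label, and epsilon are placeholders, and a real attack on clinical images would be adapted to the modality and the deployed pipeline:

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, image, target_label, epsilon=0.01):
    """One signed-gradient step that pushes the prediction towards target_label."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    # Targeted attack: *minimize* the loss with respect to the attacker-chosen label.
    loss = F.cross_entropy(logits, target_label)
    loss.backward()
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage on a deep-radiomics classifier:
# model = MyRadiomicsCNN().eval()
# x_adv = targeted_fgsm(model, scan_tensor, torch.tensor([BENIGN_CLASS]), epsilon=2/255)
```

Here the perturbation magnitude is bounded per pixel by epsilon, so the manipulated image remains visually indistinguishable from the original while the predicted label is steered towards the attacker's choice.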
Adversarial examples are in general hard to defend against, since machine learning models are trained on only a very small fraction of all the possible inputs they might encounter. Moreover, it is difficult to construct a theoretical model of the adversarial-example crafting process.
Awareness of adversarial examples gave rise to defense strategies, which look for algorithms resilient to adversarial attacks and able to make decisions for consistently explainable and appropriate reasons [13, 25]. Some strategies have been proposed, such as using clever data processing to mitigate potential tampering or exposing algorithms to adversarial examples during training [11, 13]. Woods et al. suggest that such attacks may be leveraged to produce cogent explanations of robust networks. However, designing a defense that can protect against a powerful, adaptive adversarial attacker remains an important research area.
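A minimal sketch of the second defense mentioned above, adversarial training: each minibatch is augmented with FGSM-perturbed copies of its images before the usual optimization step. The model, optimizer, and epsilon are hypothetical, and production defenses typically rely on stronger attacks (e.g., projected gradient descent) and careful tuning:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    # Craft untargeted FGSM examples from the current model state.
    images_adv = images.clone().detach().requires_grad_(True)
    loss_adv = F.cross_entropy(model(images_adv), labels)
    loss_adv.backward()
    images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

    # Optimize on clean and adversarial inputs together.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels) + F.cross_entropy(model(images_adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The intent is that the model repeatedly sees worst-case perturbations of its own training data, so small malicious modifications of the kind discussed above are less likely to flip its predictions at test time.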