Key points

  • Radiomics represents a method for the quantitative description of medical images.

  • A step-by-step “how-to” guide is presented for radiomics analyses.

  • Throughout the radiomics workflow, numerous factors influence radiomic features.

  • Guidelines and quality checklists should be used to improve radiomics studies’ quality.

  • Digital phantoms and open-source data help to improve the reproducibility of radiomics.

Background

Like many other areas of human activity, medicine has seen a steady increase over the last decades in the digitalization of the information generated during clinical routine. As more medical data became available in digital format, ever more sophisticated software was developed to analyze them. At the same time, research on artificial intelligence (AI) has reached a point where its methods and software tools have become not only powerful but also accessible enough to leave computer science departments and find applications in an increasing variety of domains. As a consequence, recent years have witnessed a continuous increase of AI applications in the medical sector, aiming to facilitate the repetitive tasks clinicians encounter in their daily workflows and to support clinical decision-making.

The techniques used in AI, mainly machine learning and deep learning algorithms, are especially useful when it comes to the emerging field of “big data”. Big data is defined as “a term that describes large volumes of high velocity, complex and variable data that require advanced techniques and technologies to enable the capture, storage, distribution, management, and analysis of the information.” Footnote 1 Because of the large amount of multi-dimensional information, techniques from the field of AI are needed to extract the desired information from these data.

In medicine, various ways to generate big data exist, including the widely known fields of genomics, proteomics, and metabolomics. Similar to these “omics” clusters, imaging is increasingly being used to generate a dedicated omics cluster of its own, called “radiomics”. Radiomics is a quantitative approach to medical imaging that aims to enhance the data available to clinicians by means of advanced, and sometimes non-intuitive, mathematical analysis. The concept of radiomics, which has most broadly (but not exclusively) been applied in the field of oncology, is based on the assumption that biomedical images contain information on disease-specific processes [1] that is imperceptible to the human eye [2] and thus not accessible through traditional visual inspection of the generated images. Through mathematical extraction of the spatial distribution of signal intensities and pixel interrelationships, radiomics quantifies textural information [3, 4] using analysis methods from the field of AI. In addition, visually appreciable differences in image intensity, shape, or texture can be quantified by means of radiomics, thus overcoming the subjective nature of image interpretation. Radiomics therefore does not imply any automation of the diagnostic process; rather, it provides existing processes with additional data.

Radiomics analysis can be performed on medical images from different modalities, allowing for an integrated cross-modality approach that exploits the potentially additive value of imaging information extracted, e.g., from magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), instead of evaluating each modality on its own. However, the current state of the art still shows a lack of stability and generalizability, and the specific study conditions and the authors’ methodological choices still have a great influence on the results.

In this work, we present the typical workflow of a radiomics analysis, discuss the current limitations of this approach, suggest potential improvements, and comment on relevant literature on the subject.

Radiomics–how to?

The following section gives practical advice on “how to do radiomics” by illustrating each of the required steps in the radiomics pipeline (Fig. 1) and highlighting important points.

Fig. 1 The radiomics workflow. Schematic illustration of the patient journey including image acquisition, analysis utilizing radiomics, and derived patient-specific therapy and prognosis. After image acquisition and segmentation, radiomic features are extracted. High-level statistical modeling involving machine learning is applied for disease classification, patient clustering, and individual risk stratification.

Step 1: image segmentation

For any radiomics approach, delineation of the region of interest (ROI) in two-dimensional (2D) or of the volume of interest (VOI) in three-dimensional (3D) approaches is the crucial first step in the pipeline. ROIs/VOIs define the region in which radiomic features are calculated.

Image segmentation can be done manually, semi-automatically (using standard image segmentation algorithms such as region growing or thresholding), or fully automatically (nowadays using deep learning algorithms). A variety of software solutions, either open-source or commercial, are available, such as 3D Slicer Footnote 2 [5], MITK Footnote 3, ITK-SNAP Footnote 4, MeVisLab Footnote 5, LifEx Footnote 6, or ImageJ Footnote 7 [6], to name only some frequently used open-source tools. For reviews of image segmentation tools, please refer to [7, 8].

Manual and semi-automated image segmentation (usually with manual correction) are the most frequently encountered methods but have several drawbacks. First, manual segmentation is time-consuming, depending on how many images and datasets have to be segmented. Second, manual and semi-automated segmentation introduce considerable observer bias, and studies have shown that many radiomic features are not robust against intra- and inter-observer variations in ROI/VOI delineation [9]. Consequently, studies using manual or semi-automated image segmentation with manual correction should assess the intra- and inter-observer reproducibility of the derived radiomic features and exclude non-reproducible features from further analyses.

Deep learning-based image segmentation (often using some variant of the U-Net [10]) is rapidly emerging, and many algorithms have already been trained for segmentation tasks of various organs (currently, most of them are useful for segmenting entire organs rather than dedicated tumor regions), several of which have been published as open-source. Possibilities for integrating such algorithms into platforms like 3D Slicer or MITK have also recently become available. Automated image segmentation is certainly the best option, since it avoids intra- and inter-observer variability of radiomic features. However, the generalizability of trained algorithms is currently a major limitation, and applying these algorithms to a different dataset often results in complete failure. Thus, further research has to be devoted to the development of robust and generalizable algorithms for automated image segmentation.

Step 2: image processing

Image processing takes place between the image segmentation and feature extraction steps. It represents the attempt to homogenize the images from which radiomic features will be extracted with respect to pixel spacing, grey-level intensities, bins of the grey-level histogram, and so forth. Preliminary results have shown that the test-retest robustness of the extracted radiomic features largely depends on the image processing settings used [11,12,13,14,15]. To allow for reproducible research, it is therefore important to report every detail of the image processing step.

Several of the above-mentioned software platforms (namely, 3D Slicer and LifEx) offer integrations for radiomics analyses. 3D Slicer provides an installable plugin for the open-source pyRadiomics package [16] (which can otherwise be used within a standalone Python environment), whereas LifEx is a stand-alone platform with integrated segmentation and texture analysis tools and a graphical user interface. The image processing step in the pyRadiomics package (currently one of the most commonly used packages for radiomics analyses) can be defined by writing a so-called parameter file (a structured text file in YAML or JSON format). This parameter file can be loaded into 3D Slicer or incorporated into a Python framework. Example parameter files for different modalities can be found in the pyRadiomics GitHub repository Footnote 8.
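As an illustration of such a configuration, the following minimal sketch sets up the pyRadiomics extractor directly from Python using an in-code settings dictionary; the same key-value pairs could equally be placed in a YAML/JSON parameter file. The specific values shown (voxel spacing, bin width, enabled feature classes) are placeholders for illustration, not recommendations.

```python
# Minimal sketch: configuring the pyRadiomics feature extractor from Python.
# The setting values below are illustrative placeholders, not recommendations.
from radiomics import featureextractor

settings = {
    "resampledPixelSpacing": [1.0, 1.0, 1.0],  # interpolate to isotropic voxel spacing (mm)
    "interpolator": "sitkBSpline",             # interpolation algorithm for the image
    "binWidth": 25,                            # fixed bin size discretization (e.g., CT, PET)
    "normalize": False,                        # consider True (with outlier removal) for MRI
}

extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
# Alternatively, load an equivalent configuration from a parameter file:
# extractor = featureextractor.RadiomicsFeatureExtractor("Params.yaml")

extractor.enableImageTypeByName("Wavelet")        # optionally add filtered image types
extractor.disableAllFeatures()                    # start from an empty feature set ...
extractor.enableFeatureClassByName("firstorder")  # ... and enable selected classes only
extractor.enableFeatureClassByName("glcm")
```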

Interpolation to isotropic voxel spacing is necessary for most texture feature sets to become rotationally invariant and to increase reproducibility between different datasets [17]. Currently, there is no clear recommendation on whether upsampling or downsampling should be preferred. In addition, data from different modalities might need different approaches to image interpolation: CT, for example, usually delivers isotropic datasets, whereas MRI often delivers non-isotropic data requiring different interpolation approaches. After applying an interpolation algorithm to the image, the delineated ROI/VOI should also be interpolated. For a detailed description of image interpolation and different interpolation algorithms, please refer to [17].
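As a sketch of what such an interpolation step might look like in code, the snippet below resamples an image and its segmentation mask to isotropic voxel spacing with SimpleITK; the 1 mm target spacing and the choice of B-spline interpolation for the image (nearest neighbour for the mask) are assumptions for illustration only.

```python
# Sketch: resampling an image and its mask to isotropic voxel spacing with SimpleITK.
import SimpleITK as sitk

def resample_isotropic(image, new_spacing=(1.0, 1.0, 1.0), interpolator=sitk.sitkBSpline):
    old_spacing, old_size = image.GetSpacing(), image.GetSize()
    new_size = [int(round(osz * ospc / nspc))
                for osz, ospc, nspc in zip(old_size, old_spacing, new_spacing)]
    resampler = sitk.ResampleImageFilter()
    resampler.SetOutputSpacing(new_spacing)
    resampler.SetSize(new_size)
    resampler.SetOutputOrigin(image.GetOrigin())
    resampler.SetOutputDirection(image.GetDirection())
    resampler.SetInterpolator(interpolator)
    return resampler.Execute(image)

# image = resample_isotropic(sitk.ReadImage("image.nii.gz"))
# mask  = resample_isotropic(sitk.ReadImage("mask.nii.gz"),
#                            interpolator=sitk.sitkNearestNeighbor)  # preserve label values
```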

Range re-segmentation and intensity outlier filtering (normalization) are performed to remove pixels/voxels from the segmented region that fall outside a specified range of grey levels [17]. Whereas range re-segmentation is usually required for CT and PET data (e.g., to exclude pixels/voxels of air or bone within a tumor ROI/VOI), it is not possible for data with arbitrary intensity units such as MRI. For MRI data, intensity outlier filtering is applied instead; the most commonly used method is to calculate the mean μ and standard deviation σ of the grey levels within the ROI/VOI and to exclude grey levels outside the range μ ± 3σ [17,18,19].
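The two operations can be expressed in a few lines; the sketch below assumes the grey levels inside the ROI/VOI are available as a NumPy array, and the CT re-segmentation window shown is a placeholder, not a recommendation.

```python
# Sketch: range re-segmentation (e.g., CT/PET) and mu +/- 3 sigma outlier filtering (e.g., MRI).
import numpy as np

def resegment_range(values, low=-50.0, high=200.0):   # HU window is a placeholder
    return values[(values >= low) & (values <= high)]

def filter_outliers(values, n_sigma=3.0):
    mu, sigma = values.mean(), values.std()
    return values[(values >= mu - n_sigma * sigma) & (values <= mu + n_sigma * sigma)]

# roi_values = image_array[mask_array > 0]   # grey levels within the segmented region
```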

The last image processing step is the discretization of image intensities inside the ROI/VOI (Fig. 2). Discretization consists of grouping the original values into specific range intervals (bins); the procedure is conceptually equivalent to the creation of a histogram. This step is required to make feature calculation tractable [20].

Fig. 2 Image intensity discretization. Original data (a) and a generic discretized version (b).

Three parameters characterize discretization: the range of the discretized quantity, the number of bins, and their width (size). The range equals the product of the bin number and the bin width; therefore, only two of these parameters can be set freely. Different combinations can lead to different results, and the choice of parameters is usually influenced by the context, e.g., to simplify comparison with other works using a particular binning:

  • The range is usually preserved from the original data, but exceptions are not uncommon, e.g., when the discretized data are to be compared with a reference dataset or when ROIs with a much smaller range than the original image have to be analyzed. It is worth mentioning that when the range is not preserved and the number of bins is particularly small, the choice of the range boundaries can have a strong impact on the results;

  • Fixing the bin number (as is the case when discretizing grey-level intensities) normalizes images and is especially beneficial for data with arbitrary intensity units (e.g., MRI) and where contrasts are considered important [17]. Thus, it is the recommended discretization method for MRI data, although this recommendation is not without controversy (for further discussion, please refer to the relevant pyRadiomics documentation Footnote 9). Fixed bin number discretization is thought to make radiomic features more reproducible across different samples, since the absolute values of many features depend on the number of grey levels within the ROI/VOI;

  • Fixing the bin size gives direct control over the absolute intensity range represented by each bin, so the bin sequence retains an immediate relationship with the original intensity scale (such as Hounsfield units or standardized uptake values). This approach makes it possible to compare discretized data with different ranges, since the bins belonging to the overlapping range represent the same data intervals. For this reason, previous work recommends the use of a fixed bin size for PET images [14]. It is recommended to use identical minimum values for all samples, defined by the lower bound of the re-segmentation range.

A still open question is the optimal bin number/bin width to be used in this discretization step. This question becomes particularly important when considering that discretization is equivalent to averaging the values within each bin, with an effect similar to applying a smoothing filter to the data distribution. When the bins are too wide (too few), features can be averaged out and lost; when the bins are too narrow (too many), features can become indistinguishable from noise. A balance is reached when discretization filters out the noise while preserving the interesting features; unfortunately, this implies that the optimal choice of binning is highly dependent on both the data acquisition parameters (noise) and the data content (features). As an example, previous preliminary work has shown that different MRI sequences might need different bin numbers to obtain robust and reproducible radiomic features [11]. Moreover, a small number of bins can generate undesired dependencies on the particular choice of range and bin boundaries, thus undermining the robustness of the analysis. The present recommendation is to always start by inspecting the histogram of the data from which radiomic features are to be extracted and to decide on a reasonable set of discretization parameters based on experience.
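For illustration, the two discretization strategies discussed above can be written compactly as follows; the sketch follows the IBSI-style definitions, and the bin number of 64 and bin width of 25 are arbitrary placeholders.

```python
# Sketch: fixed bin number vs. fixed bin size discretization of ROI/VOI grey levels.
import numpy as np

def discretize_fixed_bin_number(values, n_bins=64):
    """Map intensities to 1..n_bins over the ROI/VOI range (commonly used for MRI)."""
    scaled = n_bins * (values - values.min()) / (values.max() - values.min())
    return np.clip(np.floor(scaled) + 1, 1, n_bins).astype(int)

def discretize_fixed_bin_size(values, bin_width=25.0, minimum=None):
    """Map intensities to bins of fixed width, anchored at a common minimum (e.g., CT, PET)."""
    if minimum is None:
        minimum = values.min()   # ideally the lower bound of the re-segmentation range
    return (np.floor((values - minimum) / bin_width) + 1).astype(int)
```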

Step 3: feature extraction

After image segmentation and processing, the extraction of radiomic features can finally be performed. Feature extraction refers to the calculation of features as the final processing step, where feature descriptors are used to quantify characteristics of the grey levels within the ROI/VOI [17]. Since many different ways and formulas exist to calculate these features, adherence to the Image Biomarker Standardization Initiative (IBSI) guidelines [17] is recommended. These guidelines offer a consensus for standardized feature calculation from all radiomic feature matrices. Different types (i.e., matrices) of radiomic features exist, the most frequently encountered being intensity (histogram)-based, shape, texture, transform-based, and radial features. In addition, different types of filters (e.g., wavelet or Gaussian filters) are often applied during the feature extraction step. In practice, feature extraction often amounts to simply pressing the “run” button (or executing a script) and waiting for the computation to finish.
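With a configured extractor (see the sketch in Step 2), this amounts to a single call; the file paths below are placeholders, and the diagnostic metadata returned alongside the feature values are filtered out for clarity.

```python
# Sketch: running the pyRadiomics feature extraction on one image/mask pair.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor("Params.yaml")   # or pass settings
result = extractor.execute("image.nii.gz", "mask.nii.gz")               # ordered dict of results
radiomic_features = {name: value for name, value in result.items()
                     if not name.startswith("diagnostics_")}            # drop metadata entries
```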

Step 4: feature selection/dimension reduction

Depending on the software package used for feature extraction and the number of filters applied during the process, the number of extracted features to deal with during the subsequent statistical analysis and machine learning ranges from a few to, in theory, an unlimited number. The higher the number of features/variables in a model and/or the lower the number of cases per group (e.g., in a classification task), the higher the risk of model overfitting.

Consequently, reducing the number of features used to build statistical and machine learning models, in a step called feature selection or dimension reduction, is of crucial importance for generating valid and generalizable results. Several “rules of thumb” exist for defining the optimal number of features for a given sample size, but there is no true evidence for these rules in the literature. For guidance regarding study design or sample size calculation, please consider reference [21]. Dimension reduction is a multi-step process leading to the exclusion of non-reproducible, redundant, and non-relevant features from the dataset.

Multiple approaches to dimension reduction and feature selection exist. The following steps reflect our personal experience and have been applied in several clinical studies so far [2, 22,23,24,25,26,27] (Fig. 3).

Fig. 3 Dimension reduction and feature selection workflow.

The first step should be the exclusion of non-reproducible features if manual or semi-automated ROI/VOI delineation was used during the image segmentation step. A feature that suffers from high intra- or inter-observer variability is not likely to be informative, e.g., for assessing therapeutic response. Similarly, the test-retest robustness of the extracted features should be assessed (e.g., using a phantom). Non-robust features should also be excluded if the study aim is the evaluation of longitudinal data, although it is important that the relevant change of features over time is incorporated into the selection procedure [28]. Simply assessing reproducibility/robustness by calculating intra-class correlation coefficients (ICCs) might not be sufficient, since ICCs are known to depend on the natural variance of the underlying data. Recommendations for assessing reproducibility, repeatability, and robustness can be found in [29].
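One possible implementation of such a reproducibility screen is sketched below using a two-way ICC from the pingouin package; the long-format table layout, the ICC(2,1) variant, and the 0.90 cut-off are assumptions chosen for illustration and should be adapted (and, as noted above, complemented by other robustness measures).

```python
# Sketch: screening features for inter-observer reproducibility with an ICC cut-off.
import pandas as pd
import pingouin as pg

def reproducible_features(measurements: pd.DataFrame, icc_threshold=0.90):
    """measurements columns: 'feature', 'case', 'reader', 'value' (one row per measurement)."""
    keep = []
    for feature_name, sub in measurements.groupby("feature"):
        icc = pg.intraclass_corr(data=sub, targets="case", raters="reader", ratings="value")
        icc21 = icc.loc[icc["Type"] == "ICC2", "ICC"].iloc[0]   # two-way random, single rater
        if icc21 >= icc_threshold:
            keep.append(feature_name)
    return keep
```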

The second step in the feature selection process is the selection of the most relevant variables for the respective task. Various approaches, often relying on machine learning techniques, can be used for this initial feature selection step, such as knock-off filters, recursive feature elimination, or random forest algorithms.

Since these algorithms often do not account for collinearities and correlations in the data, building correlation clusters represents the logical next (third) step in the dimension reduction workflow. In some cases, this step can be combined with the previous (second) step, since a few machine learning techniques are able to account for correlations within the data; the majority, however, are not. Correlation clusters (for an example, see Fig. 3) visualize groups of highly correlated features in the data and allow selection of only one representative feature per cluster. This selection might again be based on machine learning algorithms and/or on conventional statistical methods and data visualization. As a general principle, the variable with the highest biological-clinical variability in the dataset should be selected, since it is likely to be most representative of the variations within the specific patient cohort. Data visualization is also of high importance once the dimensionality of the data has been reduced.
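A simple stand-in for the correlation-cluster and relevance-selection steps is sketched below, using pairwise Spearman correlations and random forest importances from scikit-learn; the 0.9 correlation threshold, the number of retained features, and the choice to keep the first feature of a correlated pair (rather than the most variable one, as recommended above) are simplifying assumptions.

```python
# Sketch: removing highly correlated features and ranking the remainder by relevance.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def drop_correlated(features: pd.DataFrame, threshold=0.9) -> pd.DataFrame:
    corr = features.corr(method="spearman").abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))   # upper triangle only
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return features.drop(columns=to_drop)

def rank_by_importance(features: pd.DataFrame, labels, n_keep=10):
    model = RandomForestClassifier(n_estimators=500, random_state=0).fit(features, labels)
    order = np.argsort(model.feature_importances_)[::-1]
    return features.columns[order][:n_keep].tolist()
```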

Finally, the remaining non-correlated and highly relevant features can be used to train the model for the respective classification task. Although the present review does not aim to cover the model training and selection process, the importance of splitting the dataset into a training dataset and at least one independent test dataset (under optimal conditions, an additional validation dataset) cannot be stressed enough [30]. This is especially relevant given the limitations currently encountered in the field of radiomics, as discussed in the following section.
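As a reminder of that point, a minimal sketch of the split is given below; `features` and `labels` are assumed to be the feature table and outcome vector from the preceding steps, and the split ratio is arbitrary. Feature selection and model tuning should then use only the training portion.

```python
# Sketch: holding out an independent test set before any feature selection or model tuning.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, stratify=labels, random_state=0)
```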

Current limitations in radiomics

Although radiomics has shown its potential for diagnostic, prognostic, and predictive purposes in numerous studies, the field is facing several challenges. The existing gap between knowledge and clinical needs results in studies lacking clinical utility. Even when a clinically relevant question is addressed, the reproducibility of radiomics studies is often poor, owing to a lack of standardization, insufficient reporting, or limited open-source code and data. In addition, the lack of proper validation and the resulting risk of false-positive results hamper the translation to clinical practice [31]. Moreover, the limited interpretability of the features (especially those derived from texture matrices and/or after filtering), mistakes in the interpretation of results (e.g., confusing correlation with causation), and the lack of comparison with well-established prognostic and predictive factors lead to reservations towards the use of radiomics in clinical decision support systems. Furthermore, radiomics studies are often based on retrospectively collected data and thus provide a low level of evidence, mainly serving as proof-of-concept; prospective studies are required to confirm the value of radiomics.

Due to the retrospective nature of most radiomics studies, imaging protocols, including acquisition and reconstruction settings, are often not controlled or standardized. For each imaging modality, multiple studies have assessed the impact of these settings on radiomic features or attempted to minimize their influence by eliminating features that are sensitive to these variations. Although such studies are valuable for creating awareness of the influencing factors, the resulting information is often not directly transferable to future studies. The reproducibility of radiomic features is not necessarily generalizable to different disease sites, modalities, or scanners; e.g., features that are robust in one disease site are not necessarily robust in another [32]. Moreover, when robust radiomic features are selected using cut-off values of correlation coefficients, one should be aware that these cut-offs are often chosen arbitrarily and that the number of “robust” features depends on the number of subjects involved. Furthermore, for the generalizability of robustness studies, it is important that the radiomic feature calculations comply with the IBSI guidelines [17].

Apart from variations in scanners and settings, radiomic feature values are also influenced by patient-related variability, e.g., geometry, which affects the level of noise and the presence of artifacts in an image. A recent study therefore aimed to quantify these so-called “non-reducible technical variations” and to stabilize the radiomic features accordingly [33].

The next sections summarize the studies that assessed radiomic feature robustness for different acquisition and reconstruction settings of CT, PET, and MRI, as well as for ROI delineation and image pre-processing steps. Figure 4 provides an overview of the factors whose influence on radiomic feature values has been investigated in the literature. In Tables 1, 2, and 3, these studies are collected in one overview for the three modalities considered in this review: CT, PET, and MRI, respectively. A recent review provides an overview of existing phantoms that have been used for radiomics across all three modalities [120].

Fig. 4 Factors influencing radiomics stability. Summary of technical factors in each step of the radiomics workflow potentially decreasing radiomic feature robustness, reproducibility, and classification performance.

Table 1 Literature review for oncologic imaging or phantom studies with computed tomography
Table 2 Literature review for oncologic imaging or phantom studies with positron emission tomography
Table 3 Literature review for oncologic imaging or phantom studies with magnetic resonance imaging

CT and PET

Multiple studies (16 were identified in this review) have investigated the stability of CT radiomics in test-retest scenarios (Table 1), often evaluating the publicly available RIDER Lung CT collection [121]. For PET, only a few test-retest studies have been performed, either on a phantom or on lung cancer data (Table 2). Recently, an extensive review of factors influencing PET radiomics was published [122].

Voxel size was the most frequently investigated reconstruction factor for CT, whereas for PET it was the full-width at half maximum (FWHM) of the Gaussian filter. Four and twelve studies were identified that investigated the influence of image discretization on CT and PET radiomic features, respectively.

MRI

For MRI, the impact of test-retest scenarios, acquisition and reconstruction settings, segmentation, and image pre-processing has been explored less extensively to date than for PET and CT. Only four studies were found that investigated the influence of reconstruction settings, of which one included patient images. The influence of segmentation on MRI radiomic features has been studied more extensively, for a variety of tumor sites. Table 3 summarizes the present literature on factors influencing radiomic features in MRI.

Reduce radiomics’ dependency

Recent literature on robustness against different acquisition and reconstruction settings, ROI delineation, and image pre-processing steps shows that the most common approach is to eliminate radiomic features that are not robust against these factors. The drawback of this method is that potentially relevant information may be removed, as stability does not necessarily imply informativeness. A few solutions have been proposed to reduce the influence of the aforementioned factors on radiomics studies. One proposed solution is to eliminate the dependency of features on a certain factor by modeling the relationship and applying corrections accordingly; this has recently been explored for different CT exposure settings [123]. Another method is to convert images using deep learning in order to simulate reconstruction with different settings, which was shown to improve the reproducibility of CT radiomics for images reconstructed with different kernels [62]. This approach has the potential to address other radiomics dependencies and improve robustness in the future. In contrast to such image-wise corrections, post-reconstruction batch harmonization, known as ComBat, has been proposed to harmonize radiomic feature sets originating from different institutions [124,125,126]. Furthermore, a recent study investigated data augmentation, instead of feature elimination, as a way to incorporate knowledge about influencing factors on radiomic features [127].
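To convey the idea behind such batch harmonization, the sketch below aligns the per-institution (batch) location and scale of each feature; this is a deliberately simplified illustration, not the actual ComBat method, which additionally preserves biological covariates and pools the batch estimates with an empirical Bayes approach.

```python
# Simplified illustration of batch harmonization: per-batch location/scale alignment.
# Note: real ComBat also models biological covariates and uses empirical Bayes shrinkage.
import pandas as pd

def location_scale_harmonize(features: pd.DataFrame, batch: pd.Series) -> pd.DataFrame:
    # features: rows = patients, columns = radiomic features; batch: institution/scanner label
    return features.groupby(batch).transform(lambda col: (col - col.mean()) / col.std())
```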

Open-source data

Publicly available datasets like the RIDER dataset Footnote 10 help to gain knowledge about the impact of varying factors in radiomics [121]. The availability of a public phantom dataset intended for radiomics reproducibility tests on CT could also help to further assess the influence of acquisition settings in order to eliminate non-robust radiomic features [128]. However, studies are needed to show whether robustness data acquired on a phantom can be translated to humans. Similar initiatives for PET and MRI would help in understanding the impact of changes in settings on radiomics. In short, open-source data play an important role in the future improvement of radiomics.

Solution: quality control and standardization

In order to increase the chance of clinically relevant and valuable radiomics studies, we recommend verifying that the following questions can be answered with “yes” before commencing a study:

  • Is there an actual clinical need which could potentially be answered with (the help of) radiomics?

  • Is there enough expertise in the research team, preferably from at least two different disciplines, to ensure high quality of the study and potential of clinical implementation?

  • Is there access to enough data to support the conclusions with sufficient power, including external validation datasets?

  • Is it possible to retrieve all other non-imaging data known to be relevant for the research question (e.g., biological information, demographics)?

  • Is information on the acquisition and reconstruction of the images available?

  • Are the imaging protocols standardized and if not, is there a solution to harmonize images or to ensure minimal influence of varying settings on the modeling?

Besides these general questions, which should be asked before the start of a study, there are some recent contributions in the field that aim to facilitate the execution of radiomics studies with higher quality: (1) the IBSI: harmonization of radiomics implementations and guidelines on the reporting of radiomics studies [17, 129]; (2) the Radiomics Quality Score (RQS): a checklist to ensure the quality of radiomics studies [130]; and (3) the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement: guidelines for the reporting of prediction models for prognosis or diagnosis [30]. For the radiomic feature calculation, we recommend using an implementation that is IBSI compliant, which can be verified using the publicly available digital phantom [129, 130]. Regarding choices for image discretization and resampling, we also recommend following the IBSI guidelines. Beyond that, it is important to be consistent and transparent; detailed reporting of the applied pre-processing steps is needed to ensure the reproducibility and repeatability of radiomics studies.

A recent study evaluated the quality of 77 oncology-related radiomics studies using RQS and TRIPOD, and concluded that “the overall scientific quality and reporting of radiomics studies is insufficient,” showing the importance of guidelines and criteria for future studies [131].

Outlook: workflow integration

While many current research efforts aim at the standardization of radiomics, translation into clinical practice will also require adequate integration of radiomics analyses into the clinical workflow once the standardization issues have been addressed and clinical utility has been proven in prospective clinical trials.

A useful radiomics tool should seamlessly integrate into the clinical radiological workflow and be incorporated into or interfaced with existing RIS/PACS systems. Such systems should provide segmentation tools or ideally deep learning-based automated segmentation methods as well as standardized feature extraction algorithms and modality-adjusted image processing adhering to the standards described above. In case of fully automated segmentation, the possibility to inspect and manually correct the segmentation results should be incorporated.

In a future workflow, known important radiomic features could then be displayed alongside other quantitative imaging biomarkers and the images themselves. The radiologist could then use all this information to support their clinical judgement or, where possible, to estimate, e.g., prognostic factors.

It is, however, important to note that radiomics should only be viewed as an additional tool and not as a standalone diagnostic algorithm. Certainly, many challenges lie ahead before radiomics can be integrated into daily routine: from the above-mentioned issues surrounding image standardization to the legal and regulatory questions that will certainly arise. Nonetheless, it could prove a valuable, if not critical, step towards a more integrated approach to healthcare.

Conclusions

Throughout the radiomics workflow, multiple factors have been identified that influence radiomic feature values, including random variations in scanners and patients, image acquisition and reconstruction settings, ROI segmentation, and image pre-processing. Several studies have proposed to either eliminate unstable features, correct for influencing factors, or harmonize datasets in order to improve the robustness of radiomics. Recently published guidelines and checklists aim to improve the quality of future radiomics studies, but transparency has been recognized as the most important factor for reproducibility. Assessment of clinical relevance and impact prior to study commencement, an increased level of evidence from studies with sufficiently large datasets and external validation, and the combination of radiomics with established methods will help move the field towards clinical implementation.