Background

One of the main goals of genetic epidemiology is the identification and characterization of polymorphisms that confer an increased risk of disease. It is increasingly accepted that complex diseases are the result of a myriad of genetic and environmental risk factors [1, 2]. This complex etiology limits the utility of traditional, parametric statistical approaches in genetic association studies [3, 4]. The ubiquitous nature of gene-gene and gene-environment interactions [1, 5, 6] has inspired the development of novel statistical approaches designed to detect epistasis [7–9].

Multifactor Dimensionality Reduction (MDR) is one such method [10]. MDR was designed to detect interactions among categorical independent variables that influence a dichotomous dependent variable (e.g. case/control status or drug treatment response/non-response). MDR performs an exhaustive search of all possible single-locus through n-locus interactions (as computationally feasible), evaluating all possible high/low-risk models of disease, and selects a single optimal model for each level of interaction. Permutation testing (PT) is used to determine the significance of these models. MDR is nonparametric and model-free: no hypothesis is made concerning the value of any statistical parameter or any model of genetic inheritance [10]. MDR has successfully identified interactive effects in simulated data as well as in real data from diseases such as hypertension [3, 11, 12], cancer [10, 13, 14], and atrial fibrillation [15, 16].
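To make the core dimensionality-reduction step concrete, the following sketch labels each multilocus genotype cell as high or low risk by comparing its case:control ratio to the overall ratio, and scores the resulting one-dimensional attribute by classification error. This is an illustrative sketch only, not the published MDR software; the function names and data layout are hypothetical, and the real implementation combines this step with cross-validation to estimate prediction error.

```python
from collections import Counter

def mdr_risk_labels(genotypes, status, loci):
    """Label each multilocus genotype cell as high (1) or low (0) risk.

    genotypes: sequence of per-individual genotype tuples (0/1/2 per SNP)
    status:    sequence of 0/1 case-control labels, same length
    loci:      indices of the SNPs in the candidate interaction
    A cell is high risk when its case:control ratio exceeds the overall ratio.
    """
    cases, controls = Counter(), Counter()
    for g, s in zip(genotypes, status):
        cell = tuple(g[i] for i in loci)
        (cases if s == 1 else controls)[cell] += 1
    n_cases = sum(status)
    overall_ratio = n_cases / max(len(status) - n_cases, 1)
    return {cell: int(cases[cell] / max(controls[cell], 1) > overall_ratio)
            for cell in set(cases) | set(controls)}

def classification_error(genotypes, status, loci, labels):
    """Fraction of individuals misclassified by the high/low-risk labels."""
    wrong = sum(labels.get(tuple(g[i] for i in loci), 0) != s
                for g, s in zip(genotypes, status))
    return wrong / len(status)
```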

The ultimate goal of an MDR analysis is hypothesis generation (or hypothesis refinement within candidate gene strategies) [17]. Hypothesis testing is used within the MDR analysis framework to determine whether the resulting models are significantly different from what would be expected by chance. Significance of a model is intended to flag an interesting model that should be followed up in replication cohorts or functional studies. Recent work has placed more emphasis on selecting all statistically significant models [17] in order to avoid missing a true signal (false negatives), in exchange for risking the selection of a few false positives. This generation of multiple hypotheses raises questions about the PT procedure used to ascribe significance to this final set of models.

PT is a commonly used non-parametric statistical procedure that involves re-sampling the data without replacement to empirically construct the distribution of the test statistic under the null hypothesis, rather than making specific distributional assumptions. If the value of the test statistic based on the original samples is extreme relative to this distribution (i.e. if it falls far into the tail of the distribution), then the null hypothesis is rejected [18]. The validity of PT relies only on the property of exchangeability under the null hypothesis: the joint distribution of the data samples must remain invariant to permutations of the data subscripts. Permutation tests therefore remain applicable under a much broader range of data and research conditions than most parametric tests [19]. In addition, PT requires minimal assumptions about the data being examined, yet often has power equal to, or even greater than, parametric counterparts that require stronger, and sometimes untenable, data assumptions [20]. Unlike many parametric and other nonparametric tests, the results of permutation tests (the p-values) are unbiased [18]. The chief drawback of the method is that it is computationally expensive, but the ready availability of fast computing has made it a practical approach even for large datasets.
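For readers unfamiliar with the procedure, a minimal two-group permutation test might look like the following sketch. It is purely illustrative and not specific to MDR; the difference-in-means statistic and the one-sided comparison are assumptions made for the example.

```python
import numpy as np

def permutation_p_value(statistic, group_a, group_b, n_perm=1000, seed=None):
    """Two-group permutation test.

    Shuffles the pooled observations (re-sampling without replacement) to
    build the null distribution of `statistic`, then returns the one-sided
    empirical p-value of the observed statistic.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = statistic(np.asarray(group_a), np.asarray(group_b))
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)            # relabel the samples
        null[i] = statistic(perm[:n_a], perm[n_a:])
    # proportion of null statistics at least as extreme as the observed value
    return (1 + np.sum(null >= observed)) / (n_perm + 1)

# Example usage: difference in means between cases and controls
# p = permutation_p_value(lambda a, b: a.mean() - b.mean(), cases, controls)
```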

MDR implements PT to statistically test to the best model(s) [21]. Typically, omnibus PT is used, where a single null distribution is generated from the best model of each of at least one thousand randomized datasets. With a focus on selecting all potentially interesting models from the final MDR set, this omnibus method may be too conservative. n-locus PT is an alternative, where a separate null distribution is created for each n-level of interaction. So if single-locus through five-way interactions were evaluated in an original MDR analysis, a separate distribution would be created for the single-locus model, for the two-locus model, etc (for a total of five null distributions).
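The distinction between the two strategies can be sketched as follows, assuming a hypothetical `run_mdr` helper that returns the prediction error of the best model at a given interaction order; the actual MDR software is organized differently, so this is a sketch of the idea rather than its implementation.

```python
import numpy as np

def build_null_distributions(genotypes, status, max_order, run_mdr,
                             n_perm=1000, seed=None):
    """Build omnibus and per-order (n-locus) null distributions.

    run_mdr(genotypes, status, order) is assumed to return the prediction
    error of the best `order`-locus model for the given data (hypothetical).
    """
    rng = np.random.default_rng(seed)
    status = np.asarray(status)
    omnibus = []                                   # one value per permuted dataset
    per_order = {k: [] for k in range(1, max_order + 1)}
    for _ in range(n_perm):
        permuted = rng.permutation(status)         # break genotype-phenotype link
        errors = {k: run_mdr(genotypes, permuted, k)
                  for k in range(1, max_order + 1)}
        for k, e in errors.items():
            per_order[k].append(e)                 # n-locus: separate null per order
        omnibus.append(min(errors.values()))       # omnibus: best model over all orders
    return np.array(omnibus), {k: np.array(v) for k, v in per_order.items()}
```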

In the current study, we compare the significance cut-offs, power, and false positive rates of omnibus PT and n-locus PT as implemented in MDR for a wide range of disease models. We also examine the overall false positive rate of the MDR method using both types of PT. As the MDR method gains acceptance and is increasingly used in the genetics community, it is important that users understand how to properly apply PT.

Multifactor Dimensionality Reduction (MDR)

Figure 1 (adapted from [10]) outlines the MDR procedure. Details of the algorithm and of the alternative PT strategies implemented in the current study can be found in Additional file 1.

Figure 1. An overview of the MDR method. Steps correspond to those described in the supplemental information.

Data Simulations and Analysis

Simulated datasets exhibiting gene-gene interactions were generated to evaluate the power and false positive rates of MDR using either omnibus or n-locus PT. Multiple disease models, as well as null data with no disease model, were generated with varying allele frequencies, heritabilities, and numbers of interacting functional polymorphisms. Details of the simulations and analysis are found in Additional file 1.
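As a rough illustration of the general approach (the actual simulation software and penetrance models are those described in Additional file 1, not the code below), a penetrance-table-based simulation of a two-locus epistatic model with additional noise SNPs might be sketched as follows. The shared minor allele frequency and the rejection-sampling scheme are simplifying assumptions made for the example.

```python
import numpy as np

def simulate_epistatic_dataset(penetrance, maf, n_cases, n_controls,
                               n_noise_snps, seed=None):
    """Simulate a case-control dataset with two interacting functional SNPs.

    penetrance:  3x3 array, P(disease | genotypes at the two functional SNPs)
    maf:         minor allele frequency (shared by all SNPs for simplicity)
    Noise SNPs are drawn independently of disease status.
    """
    rng = np.random.default_rng(seed)
    penetrance = np.asarray(penetrance)
    geno_probs = np.array([(1 - maf) ** 2, 2 * maf * (1 - maf), maf ** 2])

    data, labels = [], []
    cases = controls = 0
    while cases < n_cases or controls < n_controls:
        g1, g2 = rng.choice(3, p=geno_probs), rng.choice(3, p=geno_probs)
        affected = rng.random() < penetrance[g1, g2]
        if affected and cases < n_cases:
            cases += 1
        elif not affected and controls < n_controls:
            controls += 1
        else:
            continue                                # quota for this group is full
        noise = rng.choice(3, size=n_noise_snps, p=geno_probs)
        data.append([g1, g2, *noise])
        labels.append(int(affected))
    return np.array(data), np.array(labels)
```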

Results

Null data were used to check the false positive rates of both permutation-testing strategies in the absence of any signal in the data. Best models were chosen for each dataset based on low prediction error and high cross-validation consistency and were compared to the appropriate permutation distribution. Table 1 shows the false positive rate for each permutation distribution at alpha = 0.05, where the false positive rate is estimated as the proportion of the 100 datasets analyzed in which a model was declared significant by PT. These results demonstrate that the false positive rate is nominal for each permutation distribution: the error rate is at or below the selected alpha level.

Table 1 False positive rate for null data

Both omnibus and n-locus permutation distributions were created for each model, and the highest prediction error that would be ascribed statistical significance at the alpha = 0.05 level was recorded (the cut-off for significance). Table 2 lists these cut-offs for omnibus testing and for each possible n-locus distribution. For each model, the most conservative cut-off is highlighted in bold. These results demonstrate that omnibus PT consistently provides the most conservative cut-offs, as its cut-off prediction errors are the lowest. There is also a general trend within the n-locus PT distributions that the cut-off value increases with n, demonstrating that n-locus PT becomes more liberal as the level of dimensionality increases.

Table 2 Permutation testing significance cut-offs.
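Because lower prediction error indicates a better model, the cut-off reported in Table 2 corresponds to the lower alpha-tail of the null prediction-error distribution. A minimal sketch of that calculation, assuming the null distribution of prediction errors has already been generated (e.g. as in the earlier permutation sketch), is shown below; the function names are illustrative.

```python
import numpy as np

def significance_cutoff(null_errors, alpha=0.05):
    """Highest prediction error still declared significant at level alpha.

    Lower prediction error means a better model, so significance corresponds
    to the lower alpha-tail of the null prediction-error distribution.
    """
    return np.quantile(np.asarray(null_errors), alpha)

def is_significant(model_error, null_errors, alpha=0.05):
    """Empirical p-value test: is the observed error unusually low?"""
    null_errors = np.asarray(null_errors)
    p = (1 + np.sum(null_errors <= model_error)) / (len(null_errors) + 1)
    return p <= alpha
```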

While the anti-conservative nature of n-locus PT could potentially increase power, it is undesirable if it results in an increased false positive rate. To evaluate this, we investigated the false positive rate of each n-locus permutation distribution for each model. MDR analysis was performed on each dataset for all single-locus through five-locus combinations, and a best model was chosen for each level of interaction. For each dataset, the best model at each level of interaction was compared to the appropriate n-locus permutation distribution. A false positive was defined as any incorrect model (containing only incorrect loci, or correct loci together with additional false positive loci) that was found statistically significant according to the appropriate permutation distribution. As summarized in Table 3, under this definition the false positive rate is extremely high for any n-level of interaction above the true genetic model. For example, for the two-locus interaction model with 0.2 minor allele frequency and 3% heritability, all three-, four-, and five-locus models were statistically significant.

Table 3 False positive rates for n-locus permutation distributions.

To better understand this trend, we estimated power for each model as the proportion of the 100 simulated datasets per model in which all functional loci (with or without additional, false positive loci) were identified within the best model for a given n-level of interaction and that model was called significant according to the corresponding permutation distribution. Table 4 summarizes these results, which suggest that the false positive rates shown in Table 3 may be driven by the inclusion of functional loci in higher-level interactions. This trend was seen for all models but is especially apparent in the higher heritability models. This suggests that, particularly when the signal in the data is relatively strong, merely containing the correct loci is enough to drive a model to statistical significance under a more liberal PT procedure. This explanation is consistent with the nominal false positive rates demonstrated for null data (Table 1).

Table 4 Power (with or without additional loci) for n-locus permutation distributions.
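The power definition used for Table 4 can be made concrete with a small counting sketch. The `results` structure here is hypothetical, standing in for per-dataset MDR output; a hit requires every functional locus to appear in a significant best model, with or without extra loci.

```python
def estimate_power(results, functional_loci):
    """Estimate power over replicate datasets, mirroring the Table 4 definition.

    results: per-dataset records with 'loci' (indices in the best model) and
             'significant' (bool, from the chosen permutation distribution).
    """
    functional = set(functional_loci)
    hits = sum(1 for r in results
               if r['significant'] and functional <= set(r['loci']))
    return hits / len(results)

# Example: power = estimate_power(per_dataset_results, functional_loci=(0, 1))
```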

Table 4 shows that the power of MDR is relatively high for lower order models, especially at the n-locus level of analysis. Interpretation of an MDR analysis using n-locus PT is complicated, however, by the high level of false positives. Even though the functional loci are included in the significant models, choosing the correct order of interaction is difficult when n-locus PT is applied at each level of interaction.

Having established that omnibus PT is the more conservative option, we investigated its impact on both the overall power and the false positive rate of the MDR method. Table 5 summarizes these results for each disease model. First, we compared the power of MDR to detect the correct model as the final model (through minimization of prediction error and maximization of cross-validation consistency) without considering statistical significance. Power was estimated as the number of times the functional/disease-associated loci were chosen as the best model across the 100 replicates, with no false positive loci included in the model. Because no PT is applied, this is the least conservative estimate possible, equivalent to the most liberal cut-off achievable with n-locus PT: applying any significance test can only convert results that count toward "power" under this definition into non-significant ones, so the power cannot be higher. These results are summarized in the column of Table 5 labeled "Power Without Permutation Testing". They are then compared to the power of MDR to not only find the correct model but also to ascribe statistical significance to that model through omnibus PT, presented in the column of Table 5 labeled "Power With Permutation Testing". The power with and without permutation testing is similar; omnibus PT does not severely limit the power of the method.

Table 5 Power of MDR with and without omnibus permutation testing and false positive rate with permutation testing.

Finally, we evaluated the impact of omnibus PT on the false positive rate. The false positive rate was estimated for each model as the number of incorrect final models that were statistically significant using omnibus PT. This calculation included significance testing at each level of interaction – not just a single test for one overall best model. As Table 5 shows, the false positive rates are near the expected 5% level.

From the results presented in Table 5, we conclude that omnibus PT controls for false positives while preserving power.

Conclusion

In this study we confirmed that the overall false positive rate of MDR is as expected according to the selected alpha level. Additionally, we demonstrated the conservative nature of omnibus testing in comparison to an n-locus strategy.

We also demonstrated that omnibus PT is preferable to n-locus PT because it controls false positives without limiting power. While MDR has high power under either permutation-testing strategy, final model selection is complicated by the more liberal n-locus PT. Although the final goal of MDR is hypothesis generation, and a user may prefer the risk of false positives to the risk of missing a true signal, we recommend that significance be assigned to one or more models from the final set using the omnibus permutation distribution rather than the corresponding n-locus tests.

While these results are most immediately applicable to genetic epidemiologists using MDR, they may generalize to any computational method that involves PT. Additionally, as MDR gains acceptance and becomes more widely used, it is important that the consequences of alternative permutation strategies be explored and understood. Recent work is also implementing alternative hypothesis-testing strategies for MDR that are computationally feasible for extremely large-scale datasets [22].