Background

Microarray technology has become a standard experimental method in biomedical research. One of the most fundamental tasks in the analysis of microarray data is the identification of differentially expressed genes while controlling false positives and minimizing false negatives. This is a multiple hypothesis testing problem in which thousands or tens of thousands of genes are analyzed simultaneously. In these tests we often need to control the proportion of false discoveries among the rejected hypotheses at a pre-specified level while maintaining maximal power. There is thus a trade-off in the control of the type-I error between rejecting true null hypotheses (false discoveries) and accepting true alternative hypotheses (false negatives).

Traditional Bonferroni correction procedures are designed to control the Family-Wise Error Rate (FWER), which guards against making one or more type I errors among a family of hypothesis tests. However, these procedures can be excessively conservative for microarray analysis, where the number of hypotheses is very large and a substantial fraction of the genes are differentially expressed [1]. A more appropriate approach is to control the False Discovery Rate (FDR), the proportion of type I errors among all rejected hypotheses [2, 3]. This approach is particularly useful in exploratory analyses, where the objective is to maximize the discovery of true positives rather than to guard against one or more false positive results.

A number of methods have been proposed to control the FDR given a population of hypothesis tests. These methods usually assume that the distribution of the test statistics, f, can be modeled by a mixture of two components [4]:

f(x) = π0·f0(x) + (1 − π0)·f1(x),   π0 = m0/m

where f0 is the distribution of the test statistics under H0, which by definition is the uniform distribution on [0, 1] when the statistics are p-values and the tests are independent; f1 is the distribution of the test statistics under H1; m0 is the number of true H0; m is the total number of hypotheses under consideration; and π0 is the proportion of true H0. The methods proposed by Benjamini et al [2, 3] to control the FDR do not estimate π0 (implicitly treating it as 1); therefore, they provide the strongest control of the FDR but have the lowest power compared to methods that do estimate it.

In many actual applications where a considerable number of genes are differentially expressed, assuming π0 = 1 may be too conservative, causing a loss of power. Several alternative methods have been proposed that estimate π0 by pooling test statistics and control the FDR based on the estimated π0, such as the nonparametric empirical Bayesian pFDR criterion and its p-value equivalent, the q-value method [1, 5, 6], the bin-wise model [7-9], the local FDR method [10], parametric beta-uniform mixture models [11-14], the Lowest Slope estimator (LSL) [15], the Spacing LOESS Histogram (SPLOSH) method [16], the nonparametric MLE method [17], the moment generating function approach [18], and the Poisson regression approach [18-20].

In these methods, one of the critical steps is estimating the proportion of null hypotheses, π0. When using p-values, these estimates usually depend on the assumption that f0 follows a uniform distribution. This assumption, which is of critical importance for methods of statistical inference that pool test statistics across genes [21], is valid when all test hypotheses are independent and identically distributed. Furthermore, when there are only weak correlations, or "clumpy" correlations (a large number of groups, each containing a small number of genes with high correlation within groups but no correlation between groups [21, 22]), the uniform assumption is not strongly violated and these methods remain adequate. However, in datasets with large-scale strong correlations, the joint distribution of the test statistics is no longer the product of the marginal distributions, and the observed f0 can deviate severely from uniform, making current π0 estimation methods very unstable. Increased variation and bias of π0, as well as of the FDR, were also observed by Wu et al [14] in datasets with strong local correlations.
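To make the role of the uniform assumption concrete, the following is a minimal R sketch (ours, not the authors' published code) of a Storey-type π0 estimator of the kind underlying the q-value method: under a uniform f0, roughly π0·m·(1 − λ) of the m p-values should exceed a tuning parameter λ.

```r
# Minimal Storey-type pi0 estimator (illustrative sketch only).
# If null p-values are uniform on [0, 1], then for a tuning parameter
# lambda, about pi0 * m * (1 - lambda) of the m p-values exceed lambda.
estimate_pi0 <- function(p, lambda = 0.5) {
  pi0 <- mean(p > lambda) / (1 - lambda)
  min(pi0, 1)  # pi0 is a proportion, so truncate at 1
}
```

Strong gene-gene correlation inflates the variance of the count of p-values above λ, which is precisely what destabilizes estimators of this form.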

The effect of correlation on simultaneous significance tests has been discussed theoretically [23-25], and a number of permutation-based FDR control methods have been proposed, such as SAM [26], dChip [27], Ge et al [28], Meinshausen et al [24] and Efron [25]. In these methods, the distribution f0 is modeled empirically through permutations, which naturally accounts for the correlation. However, like the methods of Benjamini et al [2, 3], these methods do not estimate π0; therefore, in datasets with a large number of differentially expressed genes, the FDR control may be overly conservative, with a loss of power.

We therefore proposed two re-sampling schemes, similar to model averaging in bagging methods, to reduce the variation in estimating π0 in datasets with strong correlation between gene expression values. Our methods produced a more stable and conservative estimate of π0 and therefore provided stronger control of the False Discovery Rate with only a minor sacrifice of power.

Implementation

Creating simulated data sets

To test the performance of various algorithms in estimating π0, we generated two types of simulated datasets. Both types had strong correlations between subsets of genes and a known proportion of true null hypotheses, to represent log-transformed microarray expression data.

Data-B

The first simulation method is similar to the methods used in Qiu et al [21] and Wu et al [14]. Assume n samples and m genes, with n/2 samples per class. The m genes were divided into K blocks, each consisting of m/K genes. Assume independence between blocks and a constant correlation coefficient between genes within each block. For block l and sample j, we first created a block center vector

b_lj = d_l·x_j + δ_lj,   l = 1, ..., K,   j = 1, ..., n

where d_l is the mean difference between groups, equal to 0 (simulating non-differentially expressed genes) with probability π0 and otherwise drawn from a beta distribution with parameters (4, 20); x_j is a group indicator; and δ_lj is i.i.d. N(0, 1), representing sample-specific noise. The expression value of gene i in sample j in that block, Y_lji, was then generated by

Y_lji = ρ·b_lj + (1 − ρ)·e_lji,   i = 1, ..., m/K

where ρ is the correlation constant, taking values in [0, 1] and determining the correlation coefficient between genes within the block, and e_lji is i.i.d. N(0, 1). The K blocks of genes were generated independently of each other and then pooled to form the whole dataset. We call this type of dataset, which contains blocks of highly correlated genes, Data-B in our experiments.
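The following R sketch illustrates the Data-B generator described above; the function and argument names are ours, chosen for illustration.

```r
# Sketch of the Data-B generator: n samples (n/2 per class), m genes in
# K blocks, within-block correlation set by rho, true null proportion pi0.
simulate_data_b <- function(n = 20, m = 1000, K = 10, rho = 0.5, pi0 = 0.8) {
  x <- rep(c(0, 1), each = n / 2)                    # group indicator x_j
  Y <- matrix(0, nrow = m, ncol = n)
  gpb <- m / K                                       # genes per block
  for (l in 1:K) {
    d <- if (runif(1) < pi0) 0 else rbeta(1, 4, 20)  # mean difference d_l
    b <- d * x + rnorm(n)                            # block center b_lj
    for (i in 1:gpb) {
      Y[(l - 1) * gpb + i, ] <- rho * b + (1 - rho) * rnorm(n)  # Y_lji
    }
  }
  list(Y = Y, class = x)
}
```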

Data-M

The above Data-B model is over-simplified in many respects and still has a "clumpy" structure, although the clumpiness can be pronounced. To mimic more realistic situations, we generated a second type of simulated data based on an actual human breast cancer microarray dataset [29] obtained with Affymetrix U133 Plus 2.0 microarrays. The dataset contains 65 estrogen receptor positive (ER+) cases and 46 estrogen receptor negative (ER-) cases. The data were normalized by the GCRMA algorithm [30, 31], and the gene (probe-set) expression levels were log2-transformed. According to the published literature [32, 33], ER status is one of the most predominant partitioning factors for the molecular classification of breast cancer. We therefore took some of the genes differentially expressed between the two classes as "truly H1" genes. We selected 8,778 genes whose difference in mean log-transformed expression level between the two classes was greater than 0.58 (equivalent to a 1.5-fold change). From these genes, each time we randomly chose 1,000 to form a simulated dataset and then randomly picked a proportion π0 of the 1,000 genes to serve as H0 genes by scrambling them together between samples. Thus, we obtained a simulated dataset with a known number of null hypotheses in which the correlations among both the differentially and non-differentially expressed genes were maintained. We call this type of dataset Data-M in our experiments [34].
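A sketch of the Data-M construction in R follows; here `expr` stands for a genes-by-samples matrix of log2 GCRMA expression values for the candidate differentially expressed genes, and the names and defaults are ours.

```r
# Sketch of the Data-M construction (illustrative, not the authors' code).
simulate_data_m <- function(expr, n_genes = 1000, pi0 = 0.7) {
  sub  <- expr[sample(nrow(expr), n_genes), ]    # choose 1,000 genes
  h0   <- sample(n_genes, round(pi0 * n_genes))  # genes forced to be H0
  perm <- sample(ncol(sub))                      # one permutation of samples
  # Scrambling the chosen genes together between samples removes their
  # class differences but preserves the correlations among them.
  sub[h0, ] <- sub[h0, perm]
  list(Y = sub, true_null = h0)
}
```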

Estimating π0 by re-sampling strategy

To obtain a better estimate of π0 in datasets with strong correlations, we proposed two re-sampling methods to replace the original π0 estimation step in the q-value method, which estimates π0 directly from the p-value distribution [6].

The first method, termed SampS, re-samples without replacement 2/3 of the samples from each class, calculates a p-value for each gene, and then estimates π0 from the p-values. For each dataset, we performed this procedure 100 times and used the upper quartile of the estimates to replace the π0 estimated by the q-value method.

The second method, termed SampG, re-samples without replacement 2/3 of the expression values from each class for each gene independently, calculates a p-value for each gene, and then estimates π0. We again performed this procedure 100 times for each dataset and used the upper quartile of the estimates to replace the π0 estimated by the q-value method.

After each re-sampling step, the p-values are fed into a π0 estimator. This estimator could be the π0 estimation step of the q-value method, the mgf method [18], or any other unbiased π0 estimator. In this paper, we used both q-value and mgf as the plug-in estimator, calling the resulting procedures SampS.q and SampG.q when using q-value as the plug-in estimator, and SampS.m and SampG.m when using mgf.
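The two schemes can be sketched in R as follows, shown here with the q-value plug-in (the qvalue package's qvalue() function); the per-gene p-values come from a two-sample t-test, and the helper names are ours.

```r
library(qvalue)  # provides qvalue(); assumed here as the plug-in estimator

# Per-gene two-sample t-test p-values; cls is a 0/1 class vector.
gene_pvals <- function(Y, cls) {
  apply(Y, 1, function(y) t.test(y[cls == 0], y[cls == 1])$p.value)
}

# SampS: re-sample 2/3 of the samples from each class, B times,
# then take the upper quartile of the resulting pi0 estimates.
samp_s <- function(Y, cls, B = 100) {
  est <- replicate(B, {
    keep <- c(sample(which(cls == 0), round(2 / 3 * sum(cls == 0))),
              sample(which(cls == 1), round(2 / 3 * sum(cls == 1))))
    qvalue(gene_pvals(Y[, keep], cls[keep]))$pi0
  })
  unname(quantile(est, 0.75))
}

# SampG: re-sample 2/3 of the values from each class independently for
# each gene, which partially breaks the correlations between genes.
samp_g <- function(Y, cls, B = 100) {
  est <- replicate(B, {
    p <- apply(Y, 1, function(y) {
      t.test(sample(y[cls == 0], round(2 / 3 * sum(cls == 0))),
             sample(y[cls == 1], round(2 / 3 * sum(cls == 1))))$p.value
    })
    qvalue(p)$pi0
  })
  unname(quantile(est, 0.75))
}
```

Swapping in the mgf estimator for qvalue()$pi0 yields the SampS.m and SampG.m variants.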

Results

Variation of the π0 estimation when genes are correlated

To evaluate the impact of correlation structure in large datasets on the estimation of the proportion of true null hypotheses (π0), we first evaluated current methods on a published microarray dataset [29]. This dataset consists of 65 estrogen receptor positive (ER+) and 46 estrogen receptor negative (ER-) breast cancers. The gene expression levels were normalized by the GCRMA algorithm [30, 31] and log2-transformed. Expression values were filtered to eliminate low-expression genes (mean expression below 5) and near-constant genes (coefficient of variation below 0.1). A total of 9,993 genes passed this filtering criterion. We bootstrapped 200 datasets from this microarray data and used the q-value [6] and twilight [9] methods to estimate the proportion of null hypotheses. The π0 estimates from the two methods were similar on each bootstrapped dataset; however, for both methods the estimates varied widely across the bootstrapped datasets, ranging from 0.36 to 0.83 (Figure 1).

Figure 1. Estimating π0 in an actual microarray dataset. Box plot of π0 estimated by the q-value and twilight methods on 200 bootstrapped breast cancer datasets.

To further investigate the influence of gene correlation on the estimation of π0, we generated the simulated dataset Data-B, which contained blocks of highly correlated genes but in which all genes were non-differentially expressed. In this simulation we set the correlation constant ρ to 0.5 and created datasets containing 100, 1,000 or 10,000 genes with 1, 10, 100 or 1,000 genes per block. After calculating the p-value for each gene, we calculated the coefficient of variation (CV) of the bar heights in the histogram of p-values, splitting the p-values into 20 bins of width 0.05 on [0, 1]. Figure 2(a) shows the histogram of p-values for one simulation with 10,000 mutually independent genes, and Figure 2(b) shows the histogram for another simulation with 10,000 genes and 1,000 genes per block. Comparing Figures 2(a) and 2(b), we can see that with highly correlated genes the distribution of p-values deviated markedly from uniform, although none of the genes were differentially expressed. We created 100 simulations for each type of data, calculated the CV of the bar heights in each p-value histogram, and plotted box plots of the CVs in Figure 3.
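The CV statistic is straightforward to compute; a minimal R sketch, assuming p is the vector of per-gene p-values:

```r
# CV of the p-value histogram: 20 bins of width 0.05 on [0, 1],
# then sd/mean of the bin counts. Near 0 for a flat (uniform) histogram.
pval_hist_cv <- function(p) {
  counts <- hist(p, breaks = seq(0, 1, by = 0.05), plot = FALSE)$counts
  sd(counts) / mean(counts)
}
```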

Figure 2. Histograms of p-values in simulated datasets where all genes were non-differentially expressed. (a) A dataset of 10,000 mutually independent genes; (b) a dataset of 10,000 genes with 1,000 genes per block.

Figure 3. Box plots of the CV of the p-value histogram under different correlation structures. (a) Datasets of 100 genes with 1 or 10 genes per block. (b) Datasets of 1,000 genes with 1, 10 or 100 genes per block. (c) Datasets of 10,000 genes with 1, 10, 100 or 1,000 genes per block. The histograms were calculated by splitting the p-values into 20 bins of width 0.05 on [0, 1]. The horizontal lines represent the expected CV when genes are independent.

In this simulation, since all genes were non-differentially expressed, the p-values should follow a uniform distribution, and the histogram of p-values should be flat if the genes were independent of each other. When the number of correlated genes in each block was small, for example 1 (independent) or 10 (weak correlation) genes per block, the distribution of p-values approximated a uniform distribution and the CV of the p-value histogram was close to that expected under the independence assumption. However, the CV of the p-value histogram increased significantly as the correlated blocks grew. In other words, although none of the genes were differentially expressed, the distribution of p-values deviated increasingly from uniform when large groups of genes were correlated. We also calculated the CV of the p-value histogram in our microarray dataset after randomly permuting the class labels, which makes all genes non-differentially expressed but leaves them correlated, and after randomly permuting all expression values across genes and samples, which makes the genes non-differentially expressed and also independent. The results of 100 permutations showed a dramatically higher CV for the p-value histograms of the datasets with only class labels permuted, i.e., with gene-gene correlations intact (Figure 4).

Figure 4. Box plots of the CV of the p-value histogram for permuted microarray datasets. PermLab: breast cancer dataset with class labels randomly permuted; all genes become non-differentially expressed but the gene-gene correlations remain intact. PermAll: all expression values randomly permuted across the 111 samples and 9,993 genes; all genes become non-differentially expressed and mutually independent. Each type of permutation was performed 100 times, and the p-values were split into 20 bins of width 0.05 on [0, 1]. The horizontal line represents the expected CV when genes are independent.

Comparing Figures 2, 3 and 4, it is apparent that increased correlation among genes greatly increases the deviation of the p-value distribution from uniform. Strong correlation structures will therefore increase the variation of the estimated π0, and subsequent statistical inferences and false discovery control procedures will inevitably be influenced by this unstable π0 estimate.

Improving the π0 estimation by re-sampling strategies

To improve the estimation of π0 in datasets with strong correlations, we proposed two methods, termed SampS and SampG, to replace the original π0 estimation step in the q-value method. The SampS algorithm uses a model averaging strategy: we repeatedly sampled 2/3 of the samples from each class without replacement, calculated the p-values for the genes, estimated π0 from the p-value distribution, and finally used the upper quartile of the re-sampled estimates in the subsequent statistical inferences. The SampG algorithm further breaks down the correlations between genes to stabilize the π0 estimation: we repeatedly sampled without replacement 2/3 of the expression values from each class independently for each gene, calculated the p-values, estimated π0, and then used the upper quartile of the re-sampled estimates in the subsequent analysis. For the plug-in π0 estimator, we used both the q-value [6] and moment generating function [18] methods, calling the resulting procedures SampG.q, SampS.q and SampG.m, SampS.m, respectively.

To test the variation in π0 estimation induced by strong correlation structures, and the performance of our proposed SampS and SampG methods, we created simulated datasets Data-B and Data-M with true π0 equal to 0.9, 0.8, 0.7 and 0.6, and with the correlation constant ρ equal to 0.3, 0.5 and 0.7. We created 100 datasets for each combination of these control parameters. In Data-B, we created 1,000 genes forming 10 blocks with 100 genes per block, to simulate large-scale correlation between genes. In Data-M, we randomly selected 1,000 genes with differences in mean log-transformed expression level greater than 0.58, then randomly scrambled 90%, 80%, 70% or 60% of them to create datasets with a known proportion of true null hypotheses and strong gene-gene correlations. We applied the SampS and SampG strategies to estimate π0 for each of the simulated datasets and compared our results to those of a number of other methods.

The π0 estimation methods we used and their corresponding R functions are:

1. The Lowest Slope estimator (LSL) [15] with parameter determined via bootstrap (bootstrap) [5]; function fdr.estimate.eta0

2. The q-value method with tuning parameter chosen by the smoother method (Qvalue) [6]; function qvalue

3. The Spacing LOESS Histogram (SPLOSH) [16]; function splosh

4. The beta-uniform mixture model (BUM) [12]; function bum.mle

5. The nonparametric MLE method (convest) [17]; function convest

6. The Poisson regression approach (PRE) [18-20]; function p0.mom

7. The moment generating function approach (mgf) [18]; function p0.mom

8. The Lowest Slope estimator (LSL) [15] with parameter determined adaptively (adaptive) [35]; function fdr.estimate.eta0

9. SampG method with q-value plug-in, 2nd quartile (SampG.q Q2)

10. SampG method with q-value plug-in, 3rd quartile (SampG.q Q3)

11. SampS method with q-value plug-in, 2nd quartile (SampS.q Q2)

12. SampS method with q-value plug-in, 3rd quartile (SampS.q Q3)

13. SampG method with mgf plug-in, 2nd quartile (SampG.m Q2)

14. SampG method with mgf plug-in, 3rd quartile (SampG.m Q3)

15. SampS method with mgf plug-in, 2nd quartile (SampS.m Q2)

16. SampS method with mgf plug-in, 3rd quartile (SampS.m Q3)

The π0 estimates are shown as box plots in Figure 5, and the mean square errors (MSE) are listed in Table 1.

Figure 5. Box plots of π0 estimated by various methods on the simulated datasets. The methods are (from left to right in each panel): LSL, Qvalue, SPLOSH, BUM, convest, PRE, mgf, adaptive, SampG.q Q2, SampG.q Q3, SampS.q Q2, SampS.q Q3, SampG.m Q2, SampG.m Q3, SampS.m Q2, and SampS.m Q3. A-D: Data-B with ρ = 0.3 and π0 varying from 0.9 to 0.6. E-H: Data-B with ρ = 0.5 and π0 varying from 0.9 to 0.6. I-L: Data-B with ρ = 0.7 and π0 varying from 0.9 to 0.6. M-P: Data-M with π0 varying from 0.9 to 0.6. Horizontal lines represent the true π0 in each simulated case.

Table 1 Mean Square Error of π0 estimation by different methods

Table 1 lists the MSE of the π0 estimates obtained by the various methods. To better characterize the methods tested, we list both the 2nd and 3rd quartiles of the SampS and SampG estimates alongside the other methods; later, we use only the 3rd quartile, to provide strong control of the FDR. From Figure 5 and Table 1 we can see that the bootstrap, Qvalue and SPLOSH methods are very sensitive to correlation and have higher MSE; in particular, SPLOSH tends to under-estimate high π0 but over-estimate low π0. The BUM, convest and PRE methods tend to under-estimate π0 in most of the simulated datasets with strong correlations, which is unfavorable for FDR control. The adaptive method tends to over-estimate π0 when the correlation is not very strong, as also observed in [18] for independent and weakly correlated data, but it performed better in the strongly correlated cases, where its MSE was the lowest among all tested methods. The mgf method was generally the second best, with relatively small bias and variation across all simulated cases. For the SampS and SampG methods, both the 2nd and 3rd quartiles outperformed the corresponding plug-in estimator, because model averaging smooths the variation; SampG performed better than SampS, with smaller MSE, because SampG breaks down the correlation between genes whereas SampS does not. And since mgf outperformed the q-value method, SampG and SampS with the mgf plug-in also outperformed SampG and SampS with the q-value plug-in.

We then selected genes with the FDR controlled at the 0.05 level based on the π0 estimated by our methods, calculated the actual false discovery rate and power, and compared our results to those of the FDR-controlling method of Benjamini et al that accounts for correlations (BY; function p.adjust with method "BY") [3], the permutation-based FDR-controlling method (howmany; function howmany_dependent) [24], and the q-value method [6]. Box plots of the actual false discovery rate and power on the four types of Data-M simulations are shown in Figure 6. It can be seen that the BY and howmany methods both provided strong control of the FDR: their actual FDR was much lower than 0.05, and their power to detect true alternatives was much lower than that of methods in which the proportion of true null hypotheses was estimated and used in the FDR control. Comparing the q-value method with our proposed SampS and SampG methods, in the majority of cases the actual FDR was still controlled below the 0.05 level, although some outliers with actual FDR as high as 0.4-0.6 occurred, due to unstable π0 estimates. The SampS and SampG methods, especially when using mgf as the plug-in π0 estimator, tended to have lower FDR on those outlier cases than the q-value method. The differences in power between the q-value method and our proposed methods were very minor compared to the difference between the q-value and BY methods. We also tested the Data-B simulations and obtained the same result. Since we used the 3rd quartile of the π0 estimates from SampS and SampG, our estimates were biased but conservative: at the same FDR control level, our methods make fewer rejections than the q-value method, so both the actual FDR and the power of the selected genes were lower than those of the q-value method. This is shown by the p-values of the pair-wise comparisons of actual FDR and power between the re-sampling based methods and the q-value method, where all FDR comparisons and most power comparisons were significant (Table 2, Table 3). Interestingly, comparing the mean FDR and power (Tables 4 and 5), we found that the SampS and SampG methods reduced the average FDR by up to 40% relative to the q-value method, with a decrease in average power of less than 1% in most cases. In fact, at the highest correlation constant we tested, and in most cases for the SampG.m method, the decrease in power was not even significantly different from the q-value method. In contrast, the most conservative method, BY, reduced the average FDR by more than 90% compared to the q-value method, but also reduced the average power by approximately 10% in cases with strong correlation.
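To illustrate how an estimated π0 enters the gene selection step, the q-value-style rule scales the BH-adjusted p-values by π0; the sketch below uses base R's p.adjust, and the function name is ours.

```r
# Plug-in FDR control: a q-value-style adjustment multiplies the
# BH-adjusted p-values by the estimated pi0, then rejects at level alpha.
select_genes <- function(p, pi0, alpha = 0.05) {
  q <- pi0 * p.adjust(p, method = "BH")
  which(q <= alpha)
}
```

A conservative (larger) π0, such as the upper quartile from SampS or SampG, yields larger q-values and hence fewer, safer rejections.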

Figure 6. Box plots of actual FDR and power for various methods on Data-M. The methods are (from left to right in each panel): BY, howmany, Qvalue, SampG.q, SampS.q, SampG.m, SampS.m. A-D: box plots of FDR on Data-M with π0 varying from 0.9 to 0.6. E-H: box plots of power on Data-M with π0 varying from 0.9 to 0.6. Horizontal lines represent the 0.05 FDR level.

Table 2. P-values comparing FDR by different methods.
Table 3. P-values comparing power by different methods.
Table 4. Percentage decrease in mean FDR.
Table 5. Percentage decrease in mean power.

Comparing the re-sampling strategy to bootstrapping p-values directly

When there is strong correlation between genes, and thus between p-values, bootstrapping the p-values does not change the correlation structure, and the resulting estimates remain unstable. In contrast, re-sampling samples or gene expression values, as we propose, can address the variation induced by the correlation structure and thereby smooth the estimation.
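For contrast, bootstrapping p-values directly amounts to the following sketch, assuming p is the vector of per-gene p-values and using the qvalue plug-in as before:

```r
library(qvalue)

# Resampling the p-values themselves leaves their joint dependence
# structure untouched, so the spread of these estimates reflects the
# bootstrap itself rather than removing correlation-induced instability.
boot_pi0 <- replicate(100, qvalue(sample(p, replace = TRUE))$pi0)
quantile(boot_pi0, c(0.25, 0.50, 0.75))
```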

To illustrate this, for each of the simulated datasets we bootstrapped the p-values 100 times and then estimated π0 by the q-value method. Figure 7 shows scatter plots of the π0 estimated by the q-value method versus the 1st, 2nd and 3rd quartiles of the SampS and SampG estimates, as well as the estimates from bootstrapping p-values, on the 100 Data-M simulations with true π0 equal to 0.7. For the bootstrap method, as expected, the median estimate of π0 was close to the estimate from the original p-values in all cases. For the SampS and SampG methods, because they smooth the variation induced by correlation between p-values, the variation was much smaller than that of the q-value method. In particular, in cases where the q-value method severely under- or over-estimated π0, the estimates from the SampS and SampG methods were closer to the true value. We observed the same behavior in all the simulated datasets we tested (data not shown).

Figure 7. Scatter plots of π0 estimated by the q-value method versus the three quartiles from SampG, SampS and bootstrapping p-values. The x-axis represents the π0 estimated by the q-value method on the 100 Data-M simulations; the y-axis represents the 1st, 2nd and 3rd quartiles of π0 estimated by the SampG method, the SampS method and the bootstrapped p-values in the three plots, respectively. Circles are the 3rd quartile, triangles the median and diamonds the 1st quartile. The lines are the 45-degree diagonal and the horizontal line corresponding to the true π0 of 0.7. The q-value plug-in was used for SampG and SampS in this figure.

Discussion

In actual microarray datasets, gene expression is often correlated due to co-regulation, sharing of transcription factor binding motifs, or technical reasons such as sequence similarity, cross-hybridization or signal leak during hybridization. This is of critical importance for statistical inferences that rely on pooling test statistics across genes [21]. The distribution of the p-values of correlated genes can be viewed under a mixture model in which groups of highly correlated genes share similar p-values, so the whole distribution is a mixture of components corresponding to these groups. The effect of strong, large-scale correlation is equivalent to reducing the total number of independent components in this mixture. Storey [22] argued that subsets of genes fall into small but highly correlated groups due to co-regulation or cross-hybridization, but that these groups are small in size and nearly independent of each other ("clumpy dependency"), so the uniformity of the p-value distribution of true null genes would not be strongly affected. However, other researchers, such as Qiu et al [21], have found that the impact of correlation can be quite strong. Likewise, in our permutation study of a breast cancer microarray dataset, the distribution of p-values was extremely far from uniform due to gene-gene correlation.

Systems biology research has shown that biological gene networks have a scale-free [36], hierarchical structure [37, 38], in which most genes are connected to a small number of other genes, forming small complexes, while some "hub" genes may be connected to large numbers of peripheral genes. The distribution of connectivity degree (the number of genes connected to a given gene) decays as a power law, much more slowly than the exponential decay expected in a random network [36]. These gene-gene interactions may be reflected by co-regulation or correlated expression under certain conditions, and given the scale-free structure, the possible scale of gene interaction is unbounded. Large-scale correlation of gene expression levels is therefore not surprising in microarray studies.

We have shown in our simulation study that as correlation structures grow, the p-value distribution of H0 genes deviates increasingly from the uniform distribution. This can affect the estimation of π0 and the subsequent statistical inferences. The effect of strong correlation has also been discussed by other researchers [14]. To address this problem, we proposed the SampS and SampG methods. These algorithms replace the unbiased but unstable π0 estimation step in the q-value method with a model averaging procedure that re-samples the samples or, going further, re-samples independently for each gene to partially break the correlations between genes. Strong correlation between genes will inevitably increase the variation of the π0 estimation, even though this variation can be partially smoothed by the re-sampling strategies proposed in this paper. It is therefore necessary to compromise between safety and efficiency; in this case, we preferred to shrink the estimate toward 1, away from an unbiased estimate, to guarantee strong control of the FDR. That is why we used the upper quartile instead of the median of the re-sampled estimates, although the medians had a smaller MSE for estimating π0 in our simulation studies. We showed in our simulations that these plug-in revisions, compared to the q-value method, can greatly reduce the variation of the π0 estimation under strong gene-gene correlations, and can enhance FDR control by reducing the false discovery rate by up to 40% with a reduction in power of less than 1% relative to the q-value method.

In our study, to create datasets with a known proportion of true null hypotheses and a correlation structure similar to that of actual microarray datasets, we developed two strategies for generating simulated data. The first, Data-B, is simply a block-wise structure with arbitrary block size and intra-block correlation but independence between blocks. When the block size is small, this resembles the "clumpy" hypothesis [22]. When the block size becomes larger, we showed that the correlations influence the π0 estimation, and that the re-sampling strategies we proposed improve the performance of gene selection by significantly reducing the FDR with a minor reduction in power. To mimic a more realistic scenario, we also developed the Data-M strategy, which generates simulated data from an actual microarray dataset by permuting an arbitrarily chosen proportion of the genes. This permutation breaks the correlation between the arbitrarily assigned differentially and non-differentially expressed genes, but maintains the correlation within each of the two groups of genes.

Conclusion

The SampS and SampG methods we proposed and tested in this paper are simple revisions, but they greatly improve the π0 estimation. The same approach, using independent re-samples of expression values to estimate π0 and then using the upper quartile of the re-sampled estimates in FDR control, could be applied to other FDR algorithms for data with strong correlation between hypotheses. In this paper we considered the typical two-sample comparison problem with a reasonable number of independent replicates. For more complex problems, such as time-course studies, the SampG method (re-sampling gene expression values independently for each gene) may not be directly applicable without a reasonable number of replicates at each time point, but the SampS method (re-sampling samples) may still be applicable if there is sound reason to assume stationary time patterns in the biological system under investigation.

Availability and requirements

The R code for the SampG and SampS methods, and the R code to generate the simulated datasets Data-B and Data-M, are included as Additional file 1.

Requirements: R environment.

Operating systems: Windows XP or Linux.

License: free.