Background

The development of high-throughput measurement technologies and associated computational analysis tools allow tumors to be identified based on a profile of mRNA expression levels [113]. Currently, most DNA chips contain more than 20 000 probe sets. These expression profiles obtained from biopsies or fine needle aspirations can then be correlated with traditional tumor characteristics (size, grade) and behaviour (recurrence, sensitivity to treatment). In breast cancer, neoadjuvant chemotherapy, which is treatment provided prior to surgery, allows breast tumor chemosensitivity to be tested in vivo [11, 14–16]. A pathologic complete response (PCR) at surgery correlates with an excellent outcome, whereas residual disease (NoPCR) is associated with a poor outcome. Investigators have reported the use of gene profiling of tumors and multigene predictors (signatures) to predict response to treatment. Accurate prediction of tumor sensitivity to preoperative chemotherapy is important because NoPCR patients could be spared ineffective treatment and instead be administered alternative treatments. Therefore, such predictors allow for the delivery of individualized treatments [9, 16, 17].

The design of such a multigene predictor of patient class (PCR or NoPCR) involves the use of a learning data set, in which the cases have been divided into two groups according to the known outcome of the treatment, and of an independent validation set.

Three main challenges arise when designing such predictors [10, 12, 18–22]:

  • selecting subsets of DNA probe sets relevant to the pathology and to the preoperative chemotherapy

  • combining the mRNA expression levels of these subsets of DNA probe sets in order to get a reliable prediction of the efficacy of the preoperative chemotherapy

  • ensuring that the performance of the predictor is independent of the learning data set (in other words, estimating the accuracy of future predictions)

The most commonly used methods for selecting a subset of DNA probe sets identify probes that deviate most from a random distribution of expression levels or that are the most differentially expressed between the PCR and NoPCR learning cases. In the former approach, statistical analysis is used to rank genes based on the calculated p-values of the probe sets, and this ranking provides the basis for gene selection [18, 20–23].

In this study, we hypothesized that multigenic predictor performance could be improved if it were based on probe sets whose individual predictions were close to those of a hypothetical ideal probe set. First we considered single probe sets and their individual predictions of treatment outcomes. Then we used a valuation function to assign high values to probe sets that correctly predicted many cases in the learning set, and that predicted relative proportions of PCR and NoPCR cases close to those of the whole learning set.

We compared the performance of multigenic predictors built from the 30 probe sets with the most significant (lowest) p-values in t tests and from the 30 probe sets with the highest values of the valuation function.

Results

Top-ranked probe sets

We calculated the valuation of the 22 283 probe sets contained in the microarrays and ranked the probe sets according to their v(s) values. Table 1 gives, for each of the 30 top ranked probe sets, the corresponding gene, the probe set valuation, the number p(s) of correctly predicted PCR learning cases, the number n(s) of correctly predicted NoPCR learning cases, and the total number c(s) = p(s) + n(s) of correctly predicted learning cases.

Table 1 Top 30 probes. The thirty probes of highest value v(s). 1st column: gene name in HUGO Gene Nomenclature; 2nd column: reference of the Affymetrix DNA probe; 3rd column, v(s): probe valuation; 4th to 6th columns, p(s), n(s), c(s): numbers of correct pcr and nopcr predictions and total number c(s) = p(s) + n(s) of correct predictions for the 21 PCR and 61 NoPCR cases of the learning set.

For instance, each of the two probe sets of gene BTG3 correctly predicted the outcome of 12 of 21 PCR learning cases and 40 of 61 NoPCR cases.

Since the valuation function is based on the mean and standard deviation of gene expression levels, we used a t test to determine the p-values of differential expression between PCR and NoPCR cases for the 30 top-ranked probe sets. The p-values ranged from 2.56 × 10⁻⁸ to 0.008.

Among the 22 283 probe sets, the p-value ranks of the 30 probe sets with the highest valuation functions ranged from 3 to 1062 (median: 124). The 30 probe sets with the most significant p-values had eight probe sets in common with our top 30 (Table 2).

Table 2 Valuation of the probes selected by Hess et al. Top 30 probes of Hess et al. [1]. 3rd and 4th columns: probes' valuations v(s) and their ranks in this valuation. 5th and 6th columns: numbers of pcr and nopcr predictions of the probes. Total numbers of pcr and nopcr predictions: 123 and 894 (ratio = 0.13).
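The t-test ranking used for this comparison can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function and variable names are ours, and we assume a log-scale expression matrix with probe sets in rows and patient cases in columns.

```python
import numpy as np
from scipy import stats

def rank_probes_by_ttest(expr, labels):
    """expr: (n_probes, n_cases) log-scale expression matrix;
    labels: boolean array, True for PCR cases.
    Returns the per-probe p-values and the probe indices sorted
    from most to least significant."""
    pcr = expr[:, labels]
    nopcr = expr[:, ~labels]
    # Two-sample t test per probe set, comparing PCR vs NoPCR cases
    _, pvals = stats.ttest_ind(pcr, nopcr, axis=1)
    order = np.argsort(pvals)  # smallest p-value (most significant) first
    return pvals, order
```

The top 30 probe sets of a t-test selection would then simply be `order[:30]`.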

We then studied whether there was a correlation between the level of expression of the probe sets and their valuation. As shown in Figure 1A, we did not find any correlation (r = 0.1), suggesting that the valuation function does not depend on the level of expression of the probe set.

Figure 1
figure 1

A. Correlation between the level of expression of the probe sets and the valuation function; B. Correlation of the valuation function in 30 replicates.

We also studied the correlation of the valuation function in 30 replicates. In Figure 1B, we report the correlation of the valuation function for the 30 top-ranked probe sets. We did not re-rank all the probe sets because the number of PCR cases (6) was relatively low in this particular subset. The correlation between original samples and replicates was high: r = 0.81. When the p-values obtained by t-test were compared between replicates, the correlation was r = 0.87. This indicates that our method is reproducible.

Bi-informative and mono-informative probe sets

The definition of the pcr and nopcr predictions of a probe set leads to three different kinds of probe sets:

  • the bi-informative probe sets: each of them predicts at least one PCR learning case and one NoPCR learning case, i.e. p(s)>0 and n(s)>0;

  • the mono-informative probe sets: each of them is informative of a single class of patient cases:

◦ PCR-probe sets: p(s)>0 and n(s) = 0,

◦ NoPCR-probe sets: n(s)>0 and p(s) = 0;

  • the non-informative probe sets: p(s) = n(s) = 0.
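The three categories reduce to a trivial decision rule on the counts p(s) and n(s); a minimal sketch (the function name is ours):

```python
def probe_category(p_s, n_s):
    """Classify a probe set from its counts of correct pcr predictions
    p(s) and correct nopcr predictions n(s)."""
    if p_s > 0 and n_s > 0:
        return "bi-informative"
    if p_s > 0:                 # n(s) == 0
        return "PCR-probe set"
    if n_s > 0:                 # p(s) == 0
        return "NoPCR-probe set"
    return "non-informative"    # p(s) == n(s) == 0
```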

Figures in Additional files 1 and 2 illustrate a bi-informative probe set, a PCR probe set, and a NoPCR probe set. In the upper part of Figure 2, the expression levels of a bi-informative probe set, probe s = 213134_x_at of gene BTG3, are shown for the 82 cases of the learning set.

Figure 2
figure 2

Expression levels of a bi-informative probe set, probe s = 213134_x_at of gene BTG3, for the 82 cases of the learning set.

The expression levels of the 21 PCR patient cases are plotted with the character "+" and those of the 61 NoPCR patient cases with the character "-". The interval of PCR expression levels I p (s) is represented by three lines of height m p (s), m p (s)-sd p (s) and m p (s)+sd p (s) drawn based on the expression levels of the PCR learning cases.

The interval of NoPCR expression levels I n (s) is represented by lines of heights m n (s), m n (s)-sd n (s) and m n (s)+sd n (s) drawn based on the expression levels of the NoPCR learning cases.

The lower part of Figure 2 shows the pcr and nopcr predictions of the probe set. One can see that the probe set of gene BTG3 predicted the treatment outcome of 12 PCR learning cases and 40 NoPCR learning cases.

Using the same representation system, Additional file 1 reports the PCR probe set s = 213033_s_at of gene NFIB, and Additional file 2 reports the NoPCR probe set s = 203928_x_at of gene MAPT. The former predicted the treatment outcome of 13 PCR learning cases; the latter predicted the outcome of 28 NoPCR learning cases.

For the learning data set, the proportions of the 22 283 probe sets belonging to each of the three types were as follows:

  • mono-informative probe sets: 71% (59% NoPCR probe sets and 12% PCR probe sets);

  • bi-informative probe sets: 25%;

  • non-informative probe sets: 4%.

In spite of the high proportion of mono-informative probe sets, none of them was found among the set of 30 top-ranked probe sets. In fact, the first mono-informative probe set was ranked at position 63: NoPCR probe set s = 207067_s_at of gene HDC.

This is a direct consequence of the valuation function v(s), not an arbitrarily imposed requirement of our analysis.

We investigated the informativity of the probe sets with the most significant t-test p-values, and it appeared that selecting exclusively bi-informative probe sets is a characteristic of our method. In the study of Hess et al. [1], which provided the data for the present work, the probe sets were ranked according to the p-value calculated from a t test. Of the 30 highest-ranking probes in that study, 11 are NoPCR-probe sets (Table 2).

Ratio of pcr to nopcr correct predictions

The ratio of the PCR to NoPCR cases (P/N) in the learning set was

P/N = 21/61 = 0.34

This ratio is in excellent agreement with the ratios of the total number of pcr and nopcr correct predictions using the k top-ranked probe sets; the ratios for k = 1 to 50 lay between 0.30 and 0.38 (Figure 3). This result confirmed that the predictions of the high-ranking probe sets were close to those of the ideal probe set not only in the number of learning cases they correctly predicted, but also in the ratio of PCR to NoPCR cases among these predicted cases.

Figure 3
figure 3

Ratios of pcr to nopcr predictions of the k top-ranked probes, 1 ≤ k ≤ 50.

These results also seemed to be a particular feature of our method of probe selection: in the study reported by Hess et al. [1], the ratio of pcr to nopcr predictions of the 30 probe sets with the most significant p-values was 123/894 = 0.13, three times lower than the ratio of PCR to NoPCR cases in the learning set. This ratio was, in turn, very close to the ratio of correct pcr to nopcr predictions over all probe sets: 22 925/180 874 = 0.13.

Multigenic predictors

Internal validation of k probe set predictors

We first evaluated the performance of multigenic predictors in a leave-one-out cross-validation and in a k-fold cross-validation (k = 3). For the leave-one-out cross-validation, we repeated the probe selection at each iteration. For the k-fold cross-validation, we used the 30 highest-ranking probes in order to investigate whether probe selection matters for every method used to construct the multigenic predictor. We investigated DLDA and nearest centroids. The p-values of the methods in the cross-validation procedures were based on 1000 random permutations.

The results in Table 3 show that DLDA performed similarly with t-test probe sets and with bi-informative probe sets (mean percentage of correctly classified tumors: 82% in LOOCV and 83% in 3-fold cross-validation). Bi-informative probes improved the nearest centroid method.

Table 3 Cross-validation of multigenic predictors
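For readers unfamiliar with DLDA, the following is a hedged sketch of the standard diagonal linear discriminant rule: each sample is assigned to the class whose centroid is nearest in a distance scaled by a pooled per-feature variance (diagonal covariance). It illustrates the classifier family used here, not the authors' exact implementation.

```python
import numpy as np

class DLDA:
    def fit(self, X, y):
        """X: (n_samples, n_features); y: class labels."""
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # Pooled per-feature variance: diagonal covariance shared by classes
        resid = X - self.means_[np.searchsorted(self.classes_, y)]
        self.var_ = resid.var(axis=0) + 1e-12  # small floor avoids divide-by-zero
        return self

    def predict(self, X):
        # Squared distance to each class centroid, scaled by per-feature variance
        d = ((X[:, None, :] - self.means_[None, :, :]) ** 2 / self.var_).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]
```

With 30 selected probe sets, X would simply be the (n_cases, 30) matrix of their expression levels.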

External validation: independent datasets

The first validation set (same patient characteristics, same treatment) contained data from 51 patients; the response to treatment was PCR for 13 patients and NoPCR for 38. Hence, the ratios of PCR to NoPCR patient cases were the same in the learning and validation datasets. Figure 4 depicts the sensitivity and specificity of the 51 k probe set majority vote predictors (0 ≤ k ≤ 50). The 29 and 30 probe set predictors matched the performance of the 27 probe set predictor. Table 4 shows the numbers of false positives and false negatives for the k probe set predictors (0 ≤ k ≤ 50) on the external validation data. The 27 probe set predictor misclassified a total of seven patient cases: one false negative and six false positives. This result confirms that, in this population, a predictor of about 30 probe sets provides the most accurate results [1].

Figure 4
figure 4

Sensitivity and specificity of the 51 k probe set majority vote predictors (0 ≤ k ≤ 50).

Table 4 Numbers of false positives and false negatives for the k probe sets predictors (0 ≤ k ≤ 50) for the external validation data (test set 1)

In the present article, we have decided to use a very simple classification criterion for defining the k probe predictors, namely, unweighted majority voting among the predictions of the probes. Many other classifiers could be developed for the selected probes, and countless studies have been devoted to this issue. Hess et al. [1] studied several of them using varying numbers of probes, for a total of 780 classifiers (sets of genes and classifying methods). These classifiers were composed of probes selected according to their p-value calculated from a t test. The researchers showed that, among these predictors, the one showing the best performance for these particular data was diagonal linear discriminant analysis with 30 probe sets (the DLDA-30 predictor).
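The unweighted majority vote among probe predictions can be sketched as follows; the tie-handling shown (returning unspecified) is our assumption, since the article does not rely on it here:

```python
def majority_vote(votes):
    """votes: iterable of 'pcr', 'nopcr', or 'unspecified', one per
    selected probe. Returns the winning class, or 'unspecified' on a
    tie or when no probe votes."""
    pcr = sum(v == "pcr" for v in votes)
    nopcr = sum(v == "nopcr" for v in votes)
    if pcr > nopcr:
        return "pcr"
    if nopcr > pcr:
        return "nopcr"
    return "unspecified"
```

A k probe predictor then simply applies this function to the k discrete per-probe predictions of a patient case.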

We have evaluated the DLDA classifier composed of the 30 probes with the highest valuation functions, and compared it to a DLDA classifier composed of the 30 probe sets with the most significant p-values. The discriminations of the two classifiers are represented in Figure 5: the AUCs obtained with the bi-informative probes and the t-test probes were 0.87 ± 0.07 and 0.90 ± 0.06, respectively. The performance metrics analysis (Figure 6) showed that the DLDA classifier built from the bi-informative probes had a better accuracy (0.863, 95% confidence interval: 0.737–0.943) than the t-test DLDA classifier (0.824, 95% confidence interval: 0.691–0.916). Interestingly, the misclassified cases were similar between the two probe sets. The DLDA predictor built from the bi-informative probes correctly classified two additional patients compared to the DLDA predictor built with the t-test probes, and had a better sensitivity (0.923, 95% confidence interval: 0.640–0.998) than the t-test DLDA classifier (0.846, 95% confidence interval: 0.546–0.981).

Figure 5
figure 5

Discriminations of the two DLDA classifiers (30 probes with the highest valuation functions, and the 30 probe sets with the most significant t-test p-values) in the independent test set 1.

Figure 6
figure 6

Performance metrics of the two DLDA classifiers (30 probes with the highest valuation functions, and the 30 probe sets with the most significant t-test p-values) in the independent test set 1.

The second validation set consisted of 147 patients treated with the same chemotherapy regimen as the learning set, but very few of these patients had a tumor with HER2 amplification. The discriminations of the two classifiers are represented in Figure 7: the AUCs obtained with the bi-informative probes and the t-test probes were 0.736 ± 0.058 and 0.709 ± 0.060, respectively. The performance metrics analysis (Figure 8) showed that the DLDA classifier built with the bi-informative probes was slightly more sensitive (0.741, 95% confidence interval: 0.537–0.889) than the t-test DLDA classifier (0.667, 95% confidence interval: 0.460–0.835). This suggests that the "positive" informativity of bi-informative probes may translate into greater sensitivity.

Figure 7
figure 7

Discriminations of the two DLDA classifiers (30 probes with the highest valuation functions, and the 30 probe sets with the most significant t-test p-values) in the independent test set 2.

Figure 8
figure 8

Performance metrics of the two DLDA classifiers (30 probes with the highest valuation functions, and the 30 probe sets with the most significant t-test p-values) in the independent test set 2.

The third validation set consisted of 50 patients treated with anthracycline-based neoadjuvant chemotherapy. Discriminations were poorer than in the previous validation sets (see Additional file 3): the AUCs obtained with the bi-informative probes and the t-test probes were 0.654 ± 0.078 and 0.643 ± 0.079, respectively. The performance metrics analysis showed that the DLDA classifier built with the bi-informative probes was more accurate (0.54, 95% confidence interval: 0.39–0.68) than the t-test DLDA classifier (0.52, 95% confidence interval: 0.37–0.68). The sensitivity was the same for both probe set selection methods (0.875, 95% confidence interval: 0.676–0.973). This combination of low accuracy and high sensitivity suggests that multigenic predictors are at least partly specific to a chemotherapy regimen, and that they are sensitive to the ratio of PCR to NoPCR cases.

P-value of the majority vote predictors

The p-values of the 27, 29, and 30 probe set predictors were less than 1.12 × 10⁻¹², based on the null hypothesis of a predictor composed of random probe sets. Individual probe set predictions were chosen at random among the three possible values, pcr, nopcr, and unspecified, with probabilities derived from the validation set data. The details of the computation of the upper bound of the p-values are given in the Appendix "P-value of the predictors" [see Additional file 4].

Weighting the predictions of probes

We defined a family of valuation functions, v α (s), parameterized by the real number α ∈ [0, 1]:

v α (s) = α × (p(s)/P) + (1 − α) × (n(s)/N)

where P = 21 and N = 61 are the numbers of PCR and NoPCR learning cases.

The valuation function v(s) previously defined is the particular case v 0.5 (s). High parameter values favor probes with high numbers of pcr predictions p(s), and vice versa. The valuation of a probe depends on the parameter α; hence its rank, the set of k top-ranked probes, and the resulting k probe predictors depend on α as well. For each value α ∈ {0, 0.1,...,1.0}, Additional file 5 gives the set of 30 top-ranked probes and the performances of the predictor composed of these 30 probes, based on the valuation function v α (s). Additional file 6 gives the ratios of pcr to nopcr predictions for these weighted valuation functions. Additional file 7 provides the sets of top 30 probes for the weighted valuation functions. Figure 9 depicts how the sensitivity and specificity of the first 50 k probe predictors varied with the weighting parameter α ∈ {0, 0.1,...,1.0}. The parameter value α = 1 grants all the weight to the pcr predictions p(s) of the probes; for α = 1, all k probe predictors with k ≥ 15 classified every patient case as PCR (sensitivity = 1, specificity = 0). In contrast, the parameter value α = 0 grants all the weight to the nopcr predictions n(s); the specificity of the resulting predictors was 1 (all the NoPCR cases were correctly predicted) and their sensitivity was very low (almost all the PCR patient cases were misclassified). For this set of 11 parameter values, α = 0.5, which corresponds to the initial valuation function v(s), provided the best 30 probe predictor. Only one other value, α = 0.4, yielded a 30 probe predictor with the same accuracy of 0.86, but with a lower sensitivity.
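A minimal sketch of the weighted valuation and of selecting the top k probes by it, using the learning-set sizes P = 21 and N = 61 from this study (function and variable names are illustrative, not the authors' code):

```python
def v_alpha(p_s, n_s, alpha, P=21, N=61):
    """Weighted valuation v_alpha(s) = alpha*(p(s)/P) + (1-alpha)*(n(s)/N)."""
    return alpha * (p_s / P) + (1 - alpha) * (n_s / N)

def top_k(probes, alpha, k=30, P=21, N=61):
    """probes: dict probe_id -> (p(s), n(s)).
    Returns the k probe ids with the highest valuation."""
    ranked = sorted(probes,
                    key=lambda s: v_alpha(*probes[s], alpha, P, N),
                    reverse=True)
    return ranked[:k]
```

With alpha = 0.5 this reproduces the original valuation v(s); alpha near 1 favors probes with many pcr predictions, alpha near 0 favors probes with many nopcr predictions.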

Figure 9
figure 9

Sensitivity and specificity of the predictors with weighted valuation function v α (s) . Sensitivity and specificity of the k-probes predictors, 0 ≤ k ≤ 50, for weightings α, α ∈ {0, 0.1,...,1.0}. Sensitivity: continuous lines, specificity: broken lines.

The explanation of these results lies in the ratio R(α) of the numbers of pcr to nopcr predictions for the top 30 probes in the ranking of the function v α (s). The values of these ratios R(α) should be compared to the ratio R = P/N = 0.34 of the numbers of PCR to NoPCR learning cases. Additional file 6 gives these ratios R(α) for α ∈ {0, 0.1,...,1.0}. The set S α of the top 30 probes for the weighting parameter α = 0.5 had a ratio R(α) = P(α)/N(α) = 0.33, where P(α) and N(α) are the total numbers of pcr and nopcr predictions of the set of probes S α. This value was the closest to the ratio R = 0.34. The ratio R(α) increases with the parameter α, from R(0) = 0.036 to R(1) = 1.015, which are far lower and far higher, respectively, than the ratio R = 0.34. These values explain the performances of the respective predictors. The conclusion is that the valuation function v(s) = v 0.5 (s) gives the best predictor. Nevertheless, the weighting can be used to favor the specificity or the sensitivity of the predictors.

Discussion

We have introduced a new procedure to select probes for use in multigenic predictors. This procedure selects probes that convey information on both positive and negative outcomes. Using cross-validation, we have confirmed that predictors built with bi-informative probes provide results similar to those of predictors built with probes selected using a t test. Predictors with bi-informative probes perform better on independent datasets.

One crucial problem in predicting cancer prognosis based on microarray data is that of building prediction models based on ~50 genes that are stable in both the learning set and the actual sample set. The most common approach is to consider each probe set individually and test whether it distinguishes samples with different class labels by using Student's t test (univariate parametric significance level). This is a simple method for testing whether two variables have identical Gaussian distributions. Generally, a significance level is chosen for determining the genes that will be included in the predictors; genes that are differentially expressed between the classes at less than the specified threshold are included. Therefore, all genes are ranked according to this result and the top k genes are selected as the feature subset to be used. In our study, for example, we selected the 30 top-ranked genes because Hess et al. [1] determined that 30 probe sets were optimal.

Michiels et al. [24] have analyzed seven published microarray cancer data sets, and highlighted the difficulties inherent to this approach. Examining different prostate cancer data sets, Wang et al. [10] found that misclassification rates strongly depended on which samples were used for training and which probes were selected for predictor construction. There is therefore a need to find robust gene selection methods for multigenic predictors.

There are several methods to select probes that could be of interest for a multigenic predictor: these methods may be based either on biological aspects, computational aspects or, as in the present study, on the samples themselves.

Paik et al. [6], for example, have selected probes for genes previously demonstrated to be important in breast cancer in order to predict survival and response to adjuvant chemotherapy. We tried to use these genes, but obtained unsatisfactory results (data not shown). Indeed, most of these genes proved mono-informative and captured a very particular aspect of breast tumors. Because probe sets may be redundant, some authors have reported a way to remove redundancy in the selected gene set that is compatible with any method [25]. These biological aspects, while interesting when considering targeted therapy such as estrogen receptor expression for hormone therapy or HER2 amplification for trastuzumab, do not provide more information than classic biomarkers in the case of non-targeted therapy such as chemotherapy.

Other groups have reduced the dimensionality by singular value decomposition (SVD), also referred to as principal component analysis (PCA), using, for example, the first ten principal components or metagenes to build predictors [26, 27]. Bo and Jonassen have developed the "greedy-pairs method" for selecting genes [28]. In this approach, all genes are ranked based on their individual t-scores on the training set. The procedure selects the highest-ranked gene gi and finds the one other gene gj that, together with gi, provides the best discrimination. This is measured using the distance between centroids of the two classes with regard to the two genes when projected on the diagonal linear discriminant axis. These two selected genes are then removed from the gene set and the procedure is repeated on the remaining set until a specified number of genes has been selected. This method attempts to select pairs of genes that work well together to discriminate the classes. It is computationally efficient, but it rarely reveals biological characteristics.
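The greedy-pairs procedure described above can be sketched roughly as follows. This is our simplified reading of Bo and Jonassen's method, not their original code: genes are ranked by an individual t-like score, and the pair score projects the two candidate genes on their diagonal linear discriminant axis and measures a t-like separation of the projected classes.

```python
import numpy as np

def pair_score(X, y, gi, gj):
    """t-like class separation after projecting the gene pair (gi, gj)
    on its diagonal linear discriminant axis."""
    Z = X[:, [gi, gj]]
    m0, m1 = Z[~y].mean(axis=0), Z[y].mean(axis=0)
    w = (m1 - m0) / (Z.var(axis=0) + 1e-12)   # diagonal LD direction
    proj = Z @ w
    a, b = proj[y], proj[~y]
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)) + 1e-12
    return abs(a.mean() - b.mean()) / se

def greedy_pairs(X, y, n_genes=4):
    """X: (n_samples, n_genes_total); y: boolean class labels.
    Selects genes two at a time: the best remaining individual gene,
    plus the partner that best discriminates jointly with it."""
    m0, m1 = X[~y].mean(axis=0), X[y].mean(axis=0)
    t_like = np.abs(m1 - m0) / np.sqrt(X.var(axis=0) + 1e-12)
    remaining = list(np.argsort(-t_like))     # best individual genes first
    selected = []
    while len(selected) < n_genes and len(remaining) >= 2:
        gi = remaining.pop(0)
        gj = max(remaining, key=lambda g: pair_score(X, y, gi, g))
        remaining.remove(gj)
        selected += [gi, gj]
    return selected
```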

In order to maximize the information provided by the training set, some methods are based on results provided by internal cross-validation. Wang et al. have reported a greedy robust feature selection approach built on the leave-one-out cross-validation procedure to retain the most frequently identified genes for building a predictive model [10]. Leaving out one sample at a time, they used a greedy-LDA to identify a set of predictive genes. They counted the number of times a gene was selected, and retained only the most frequently identified genes as the selected features. Similarly, Michiels et al. proposed performing repeated random sampling to improve the stability of predictors [24]. Jiang et al. proposed a gene shaving method based on Random Forests and another method based on Fisher's Linear Discrimination, leading the researchers to discover marker genes with expression patterns that could differentiate lung cancer cells from normal cells [29]. Fisher's Linear Discrimination is a traditional classification method that is computationally efficient, while Random Forests is based on growing an ensemble of trees (classifiers) on bootstrapped samples, which significantly improves the classification accuracy.

Our approach is somewhat different because it favors probes that convey information on samples with both positive and negative outcomes. All selected probes are differentially expressed between the two subsets of patients; in fact, our method assumes that samples of different classes have Gaussian distributions, as supported by the p-values calculated by t test. The probes are subjected to a more stringent criterion because the intersection of the confidence intervals tends to be small for selected probes. Moreover, the ratio between negative and positive outcomes provides another criterion. Each probe could theoretically be used as a unigenic predictor. The α parameter that we introduced to account for positive and negative outcomes can be adjusted to favor the sensitivity or the specificity of the multigenic predictor. In the case of a predictor based on a majority vote, the α parameter should be 0.5 to weight the two classes equally.

Ideally, these probe selection methods could be combined to identify the most relevant probes, because each method takes advantage of particular strengths in probe selection [12]. Additional studies should be carried out on multiple datasets to investigate complementary methods. The discrepancies between the microarray data publicly available from pharmacogenomic programs (different platforms, different regimens, different methods of response assessment) precluded any additional validation. In further studies, we plan to test other prediction problems such as molecular classification or survival.

Conclusion

In this study, we propose a valuation function that assigns the highest values to probes that correctly predict cases across the whole learning set, such that each probe not only successfully predicts a large number of cases, but also predicts PCR and NoPCR cases in approximately the same ratio as in the whole set. In addition to improving the prediction of patient response to breast cancer preoperative chemotherapy, our approach has made it possible to classify probes as bi-informative and mono-informative.

Methods

Patients

The clinical trial was conducted at the Nellie B. Connally Breast Center of The University of Texas M.D. Anderson Cancer Center. Patients with stage I-III breast cancer requiring neoadjuvant chemotherapy were asked to undergo single-pass, fine-needle aspiration (FNA) of the primary breast tumor or ipsilateral axillary metastasis before starting chemotherapy as part of an ongoing pharmacogenomic marker discovery program [30]. Neoadjuvant chemotherapy consisted of weekly paclitaxel administration, followed by chemotherapy with 5-fluorouracil, doxorubicin, and cyclophosphamide (T/FAC). Gene expression profiling was performed using oligonucleotide microarrays (Affymetrix U133A) on FNA specimens taken prior to treatment.

Patient cases were separated into a learning set (82 cases) and three validation/test sets (51, 147 and 50 patients). Characteristics of patients [see Additional file 8] in the first validation set were similar to those of learning set patients, and the two patient groups received similar chemotherapy regimens. Patients in the second validation set received a similar chemotherapy regimen as the learning set patients, but they showed different characteristics; for example, few patients in the second validation set showed HER2 amplification, (these patients received trastuzumab and were not included in the pharmacogenomic study). The third validation set comprised patients who had been treated at Gustave Roussy Institute, who had preoperative biopsies rather than FNA, and who underwent an anthracycline-based chemotherapy regimen without taxanes. These latter patients were matched pairs of patients with PCR and residual macroscopic disease.

At the completion of neoadjuvant chemotherapy, all patients underwent surgical resection of the tumor bed, with negative margins. Pathologic complete response (PCR) was defined as no histopathologic evidence of any residual invasive cancer cells in the breast, whereas residual disease (NoPCR) was defined as presence of residual cancer cells after histopathologic study [31].

The low-level treatment of the microarray data was performed using dCHIP V1.3 [32] to generate probe level intensities. This program normalizes all arrays to one standard array that represents a chip of median overall intensity. This reference chip and the normalization procedure are available online at [33]. Normalized gene expression values were transformed to the log10 scale for analysis [1].

The ideal probe set

Consider a hypothetical ideal probe set s* that would be accurate enough to classify the patients of the learning set into PCR and NoPCR groups. Knowing the expression level of this probe set in each patient would be enough to predict response to chemotherapy. Hence, the interval of expression levels I p (s*) contains the expression levels of the PCR patients in the learning set, while the interval I n (s*) contains the expression levels of the NoPCR cases of the learning set. Since the probe set s* is supposed to classify all the learning cases, these two intervals are disjoint, I p (s*) ∩ I n (s*) = ∅; otherwise at least one treatment outcome could not be predicted for learning-set cases from the observation of the ideal probe set's expression level. Given these definitions, any patient in the learning set belongs to one or the other interval; i.e., the expression level of any PCR learning case would be in the interval I p (s*) and that of any NoPCR case in the interval I n (s*).

Minimum sets of expression levels of the actual probe sets

For actual probe sets, the intervals of expression levels for PCR and NoPCR learning cases are not disjoint, and the expression levels are blurred by noise. Therefore, we decided to attach two minimum sets of expression levels to any probe set s. These sets, E p (s) and E n (s), were computed from the learning set.

Intervals of PCR and NoPCR expression levels

Let m p (s) and sd p (s) be the mean and standard deviation of the expression levels of probe set s for the PCR learning cases. The interval of 'PCR expression levels' of the probe set s, denoted I p (s), is that of length 2 × sd p (s), centered on the mean m p (s):

I p (s) = [m p (s) - sd p (s), m p (s) + sd p (s)]

In the same way, let m n (s) and sd n (s) be the mean and standard deviation of the expression levels of probe set s for the subset of NoPCR learning cases. The interval of 'NoPCR expression levels' of the probe set s is:

I n (s) = [m n (s) - sd n (s), m n (s) + sd n (s)]
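For concreteness, the two intervals can be computed directly from the definitions above. The example expression levels are hypothetical, and the choice of population versus sample standard deviation is an assumption (the text does not specify it):

```python
import numpy as np

def interval(levels):
    """Interval of length 2*sd centered on the mean: [m - sd, m + sd].
    Uses the population standard deviation (ddof=0); the sample
    standard deviation is an equally plausible reading of the text."""
    m, sd = np.mean(levels), np.std(levels)
    return (m - sd, m + sd)

# hypothetical expression levels of one probe set s
pcr_levels   = [2.0, 2.2, 1.8, 2.1]   # PCR learning cases
nopcr_levels = [3.0, 3.4, 2.8, 3.2]   # NoPCR learning cases

I_p = interval(pcr_levels)    # interval of PCR expression levels
I_n = interval(nopcr_levels)  # interval of NoPCR expression levels
```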

Minimum sets of PCR and NoPCR expression levels

Since the intervals of expression levels I p (s) and I n (s) are not disjoint in general, we defined the minimum set of the PCR expression levels of the probe set s, denoted E p (s), as the interval of PCR expressions I p (s) minus its intersection with the interval I n (s). Conversely, we defined the minimum set of NoPCR expression levels, denoted E n (s), as the interval of NoPCR expression levels I n (s) minus its intersection with the interval I p (s):

  • minimum set of PCR expression levels of the probe set s:

    E p (s) = I p (s) \ (I p (s) ∩ I n (s))

  • minimum set of NoPCR expression levels of the probe set s:

    E n (s) = I n (s) \ (I p (s) ∩ I n (s))
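Because I p (s) and I n (s) are intervals, removing their intersection leaves zero, one, or (if one interval is strictly inside the other) two pieces. A small sketch of this set difference, with names chosen for illustration:

```python
def minimum_set(own, other):
    """Remove from the interval `own` its intersection with `other`.
    Intervals are (lo, hi) pairs; the result is a list of 0, 1 or 2
    (lo, hi) pieces, since subtracting one interval from another can
    split it in two."""
    lo, hi = own
    olo, ohi = other
    if ohi <= lo or olo >= hi:        # disjoint: own is kept unchanged
        return [(lo, hi)]
    pieces = []
    if lo < olo:                      # part of own below the intersection
        pieces.append((lo, olo))
    if ohi < hi:                      # part of own above the intersection
        pieces.append((ohi, hi))
    return pieces
```

For example, with overlapping intervals I_p = (0, 2) and I_n = (1, 3), the minimum set of PCR expression levels would be the single piece (0, 1).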

Discrete prediction of a probe set

We define the prediction of a single probe set as a discrete value taken in the set {pcr, nopcr, unspecified} as follows: if patient p belongs to the learning set and the expression level of the probe set s for this patient p is e(s, p), then the discrete prediction of the single probe set s is pcr if the expression level is in the minimum set of PCR expression levels: e(s, p) ∈ E p (s). On the other hand, the prediction is nopcr if the expression level is in the minimum set of NoPCR expression levels: e(s, p) ∈ E n (s). If neither case holds, the prediction is unspecified. A pcr prediction is correct when the learning case is a PCR case, and likewise a nopcr prediction is correct when the learning case is a NoPCR case.
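This three-valued rule translates directly into code. The following sketch assumes the minimum sets are given as lists of (lo, hi) pieces; the function names are illustrative, not the authors':

```python
def in_pieces(x, pieces):
    """True if x falls inside any (lo, hi) piece of a minimum set."""
    return any(lo <= x <= hi for lo, hi in pieces)

def discrete_prediction(e, E_p, E_n):
    """Discrete prediction of a single probe set for expression level e:
    'pcr' if e is in the minimum set of PCR expression levels,
    'nopcr' if e is in the minimum set of NoPCR expression levels,
    'unspecified' otherwise."""
    if in_pieces(e, E_p):
        return "pcr"
    if in_pieces(e, E_n):
        return "nopcr"
    return "unspecified"
```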

Valuation of a probe set

From the definition of the pcr and nopcr predictions of a probe set, we define its valuation v(s). Let p(s) be the number of PCR learning cases correctly predicted by the probe set, let n(s) be the number of NoPCR learning cases correctly predicted by the probe set, and let P and N be the respective numbers of PCR and NoPCR learning cases. The valuation function v(s), whose values lie in the real interval [0, 1], is:

v(s) = 0.5 × (p(s)/P + n(s)/N)

(The coefficient 0.5 serves only to ensure that the values of the function lie within the unit interval.)

This function assigns a value of 1 to a hypothetical ideal probe set and a value of 0 to a non-informative one. This very simple function takes into account the proportion of PCR and the proportion of NoPCR learning cases correctly predicted by the probe set, rather than simply the overall proportion of correctly predicted learning cases. The reason for this choice is that the overall proportion, regardless of class, would obviously be biased by the unequal numbers of PCR and NoPCR cases in the learning set. It is worth noting that the valuation function v(s) takes into account neither incorrect nor unspecified predictions of the probe set.
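The valuation function is a one-liner; the example below also shows why averaging the two per-class proportions avoids the class-imbalance bias mentioned above (parameter names are illustrative):

```python
def valuation(p_correct, n_correct, P, N):
    """v(s) = 0.5 * (p(s)/P + n(s)/N), in [0, 1].
    p_correct, n_correct: numbers of correctly predicted PCR and NoPCR
    learning cases; P, N: total numbers of PCR and NoPCR learning cases.
    The 0.5 coefficient only keeps the value inside the unit interval."""
    return 0.5 * (p_correct / P + n_correct / N)
```

With an imbalanced learning set of P = 10 and N = 40, a probe set predicting all NoPCR cases but no PCR case gets v = 0.5 rather than the misleadingly high overall accuracy 40/50 = 0.8.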

Validation of reproducibility

The reproducibility and robustness of the present method were tested in 30 replicate experiments in which the same RNA was hybridized twice, several months apart and in two different laboratories [34]. The valuation functions of the 30 highest-ranking probe sets were correlated between replicate experiments.
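Such a reproducibility check amounts to correlating the two vectors of valuation scores obtained for the same probe sets in the two replicate experiments. A minimal sketch using the Pearson correlation coefficient (the text does not state which correlation measure was used, so this is an assumption):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two score vectors,
    e.g. valuation scores of the same probe sets in two replicates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

A coefficient close to 1 would indicate that the probe set valuations are reproducible across laboratories and hybridization dates.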

Multigenic predictors

We tested several multigenic predictors. First, we developed a majority vote predictor that takes advantage of the discrete predictions of the probe sets. For any patient, the prediction of a probe set was 'pcr' if the expression level of this probe set for this patient was in its minimum set of PCR expression levels, 'nopcr' if the expression level was in its minimum set of NoPCR expression levels, and unspecified otherwise. In the first two cases, the prediction of a probe set can be either correct or incorrect.

We defined the k-probe-set predictor as, on the one hand, the set S k composed of the k top-ranked probe sets and, on the other hand, a decision criterion, namely the majority decision. Let p be a patient case, let pcr(k, p) be the number of probe sets of S k whose predictions are pcr for this patient, and let nopcr(k, p) be the corresponding number of nopcr predictions. The k-probe-set predictor then returns the following response for patient p:

  • if pcr(k, p) > nopcr(k, p) then PCR;

  • if pcr(k, p) < nopcr(k, p) then NoPCR;

  • if pcr(k, p) = nopcr(k, p) then UNSPECIFIED.
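The majority decision above can be sketched as follows, given the list of discrete predictions of the k top-ranked probe sets for one patient (names are illustrative):

```python
def majority_vote(predictions):
    """Combine the discrete predictions ('pcr' / 'nopcr' / 'unspecified')
    of the k top-ranked probe sets for one patient by majority decision.
    Unspecified votes count for neither side."""
    pcr = predictions.count("pcr")
    nopcr = predictions.count("nopcr")
    if pcr > nopcr:
        return "PCR"
    if pcr < nopcr:
        return "NoPCR"
    return "UNSPECIFIED"
```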

When evaluating the performances of the predictor, a false negative is a PCR patient case predicted to be NoPCR or UNSPECIFIED; a false positive is a NoPCR patient case predicted to be PCR or UNSPECIFIED.

Diagonal linear discriminant analysis (DLDA) and nearest centroid predictors

We evaluated the performance of the predictors on the training dataset by leave-one-out cross-validation (LOOCV) and by k-fold cross-validation with k = 3, performing 1000 permutations to determine p-values. These multigenic predictor experiments were performed using BRB-ArrayTools [35].
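The LOOCV scheme itself is simple to state in code: each case is held out in turn, the model is refit on the remaining cases, and the held-out case is predicted. The sketch below uses a toy nearest-mean classifier as a stand-in for the actual predictors; all names and the example data are assumptions, not BRB-ArrayTools internals:

```python
def loocv_accuracy(cases, labels, fit, predict):
    """Leave-one-out cross-validation: hold out each case in turn,
    train on the rest, predict the held-out case, and report accuracy."""
    correct = 0
    for i in range(len(cases)):
        train_x = cases[:i] + cases[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        model = fit(train_x, train_y)
        if predict(model, cases[i]) == labels[i]:
            correct += 1
    return correct / len(cases)

def fit_nearest_mean(xs, ys):
    """Toy one-dimensional nearest-mean classifier: store per-class means."""
    means = {}
    for label in set(ys):
        vals = [x for x, y in zip(xs, ys) if y == label]
        means[label] = sum(vals) / len(vals)
    return means

def predict_nearest_mean(means, x):
    """Predict the class whose mean is closest to x."""
    return min(means, key=lambda label: abs(means[label] - x))
```

Three-fold cross-validation follows the same pattern with three held-out partitions instead of single cases.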

External validation

We evaluated the performance of a DLDA multivariate predictor in independent datasets. The 30 top-ranked probe sets according to t-test p-values, or the 30 probe sets scoring highest on the valuation function, were used to construct two multivariate predictors with a DLDA machine learning algorithm. The DLDA prediction models were tested on the three independent validation sets, and their performance was compared using receiver-operating characteristic (ROC) curve analysis and performance metrics such as sensitivity. Classifier performance (discrimination) on the validation data was assessed using the area under the ROC curve (AUC). The ROC curve is a graphical display of the true-positive rate against the false-positive rate under multiple classification rules; it arises when a continuous predictor value is calculated for each subject and compared with a broad range of thresholds. A case is called test-positive (predicted to have PCR) if its predictor value is above a defined threshold. The total area under the ROC curve is a summary measure of the test's ability to correctly classify those with and without the outcome of interest. An AUC of 1 represents a perfect test; an AUC of 0.5 represents a test no better than random prediction.
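The AUC described above has a convenient probabilistic reading: it equals the probability that a randomly chosen positive case receives a higher predictor value than a randomly chosen negative case. A minimal sketch of this computation (illustrative names, not the ROC software used in the study):

```python
def roc_auc(scores, labels):
    """Area under the ROC curve, computed as the probability that a
    positive case (label 1) scores higher than a negative case (label 0);
    ties count one half. Equivalent to integrating the ROC curve over
    all thresholds."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Perfectly separated scores give an AUC of 1, while scores unrelated to the labels give an AUC near 0.5.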