Background

The high prevalence of breast cancer motivates the development of better screening and diagnostic technologies. To complement mammography screening, which has moderate sensitivity [1] and specificity [2–4] and low positive-predictive value in younger women [1], researchers have investigated the efficacy of detecting breast cancer using proteins. Proteins offer detailed information about tissue health conditions, allowing the identification of cancer type and risk, and thereby prompting potentially better-targeted and more effective treatment. Some studies have correlated breast cancer prognosis with proteins in the tumor [5], such as hormone receptors, HER-2, urokinase plasminogen activator, and plasminogen activator inhibitor 1 [6, 7], and caPCNA [8]. However, accessing these local proteins requires biopsies, which are not practical for a screening regimen. We therefore consider serum proteins. Serum and plasma protein-based screening tests have already been developed for many diseases, such as Alzheimer's disease [9], cardiovascular disease [10], prostate cancer [11], and ovarian cancer [12].

For breast cancer, however, there are currently very few serum markers used clinically. Some studies have identified as possible breast cancer markers the proteins CA 15.3 [13–15], BR 27.29, tissue polypeptide antigen (TPA), tissue polypeptide specific antigen (TPS), shed HER-2 [15], and BC1, BC2, and BC3 [16, 17]. However, other studies found a lack of sufficient diagnostic ability in serum proteins, including CA 15.3 [13, 17–20], CA 125 [20], CA 19.9 [20], BR 27.29 [13, 18], and carcinoembryonic antigen (CEA) [13]. The European Group on Tumor Markers identified the MUC-1 mucin glycoproteins CA 15.3 and BR 27.29 as the best serum markers for breast cancer, but they could not recommend these proteins for diagnosis due to low sensitivity [18].

Cancer biomarkers are valued according to their specificity and sensitivity, suiting them for different clinical roles. For example, general population screening requires high sensitivity but not necessarily high specificity, if a low-cost secondary screen is available. Conversely, disease recurrence monitoring requires high specificity but not necessarily high sensitivity, if a more sensitive secondary test is available. Therefore, to optimize clinical utility, it is important to measure the sensitivity and specificity of any proposed biomarker. Because solid tumors cause many changes in the surrounding tissue, it is likely that some potential biomarkers measure the body's response to cancer rather than the cancer itself. Cancer marker levels may be increased due to various secondary factors, such as therapy-mediated response [21–23] and benign diseases. For example, CA 15.3 levels increase in chronic active hepatitis, liver cirrhosis, sarcoidosis [24], hypothyroidism [25], megaloblastic anemia [26], and beta-thalassaemia [27]. To help measure biomarker specificity in our study, we included normal, benign, and malignant samples.

In general, breast cancer biomarker studies have found individual circulating proteins, but it is important to consider covariation of multiple protein levels. It is useful to know which combinations of proteins may yield high diagnostic performance, even though each protein individually might yield low performance [28, 29]. Some studies have detected discriminatory breast cancer biomarkers using mass spectrometry [16, 30–36]. Because of mass spectrometry's peak-interpretation and reproducibility challenges [37, 38], scientists have searched for breast cancer biomarkers from predefined collections of known candidate markers using newer multiplex technologies, such as reverse-phase protein microarray [39]. To our knowledge, ours is the first study to assess a large set of serum proteins collectively by a sensitive and specific multiplex assay in order to identify the most promising proteins for breast cancer detection.

Although studies have shown correlations between serum proteins and breast cancer, it is often unclear how these correlations translate into clinical applicability for diagnosis. To quantify the diagnostic performance of a set of proposed biomarkers, we have implemented many statistical and machine-learning models. We have focused on Bayesian models, which provide full posterior predictive distributions. This detailed information would help a physician to judge how much emphasis to place on the classifier's diagnostic prediction.

The goal of this study was to quantify the ability of serum proteins to detect breast cancer. For measuring predictive performance on this noisy data set, we used many statistical classification models. To better understand the cancer-specificity of the screening test, we ran the classifiers on proteins from malignant lesions, benign lesions, and normal breast tissue. Our data indicated that some serum proteins can detect moderately well the presence of a breast lesion, but they could not distinguish benign from malignant lesions.

Methods

Enrolling subjects

This study enrolled 97 subjects undergoing diagnostic biopsy for breast cancer and 68 normal controls at Duke University Medical Center from June 1999 to October 2005. Women donating blood to this study were either undergoing image-guided biopsy to diagnose a primary breast lesion (benign and cancer) or were undergoing routine screening mammography and had no evidence of breast abnormalities (normal). All subjects were enrolled only after obtaining written informed consent for this Duke IRB-approved study (P.I., JRM, current registry number, 9204-07-11E1ER, Early Detection of Breast Cancer Using Circulating Markers). All subjects were premenopausal women. Table 1 shows the demographics of the study population. Additional file 1 shows subjects' ages and histology findings.

Table 1 Subject demographics

Measuring serum protein levels with ELISA

Blood sera were collected under the HIPAA-compliant protocol "Blood and Tissue Bank for the Discovery and Validation of Circulating Breast Cancer Markers." Blood was collected from subjects prior to surgical resection. All specimens were collected in red-stoppered tubes, processed generally within 4 hours (but not greater than 12 hours) after collection, and stored at -80°C. Sera were assayed using the Enzyme-Linked ImmunoSorbent Assay (ELISA, Luminex platform) and reagents in the Luminex Core Facility of University of Pittsburgh Cancer Institute. The Luminex protocol was optimized for analytical performance as described by Gorelik et al. [12]. One replicate per patient sample was performed, with reactions from 100 beads measured and averaged. All samples were analyzed on the same day using the same lot of reagents. Complete information about characteristics of individual assays, including inter- and intra-assay coefficients of variation (CVs), is available from the manufacturers of assays [see Additional file 2] and from the Luminex Core website http://www.upci.upmc.edu/luminex/sources.cfm. Based on our analysis of assays performed monthly for 3 months using the same lot of reagents, the intra-assay CV for different analytes was in the range of 0.7–11% (typically < 5%) and the inter-assay CV was 3.7–19% (< 11% for same-lot reagents).

Biomarkers were selected based on the known literature reports about their association with breast cancer. The 98 assayed proteins are shown in Table 2, with further details in Additional file 2. In addition to the protein levels, patient age and race were also recorded.

Table 2 List of the 98 serum proteins measured by ELISA assay (Luminex platform)

Regression with variable selection

In order to incorporate these proteins into a breast cancer screening tool, we built statistical models linking the protein levels to the probability of malignancy. We used the following three common regression models: linear regression, Y_i = X_i β + ε_i with ε_i ~ N(0, σ²); logistic regression, Pr(Y_i = 1 | β) = exp(X_i β)/(1 + exp(X_i β)); and probit regression, Pr(Y_i = 1 | β) = Φ(X_i β), where Y is the response vector (breast cancer diagnosis), X is the matrix of observed data (protein levels), β is the vector of coefficients, ε is additive noise, and Φ(·) is the cumulative distribution function of the standard normal distribution. These classical models become unstable and predict poorly if there are relatively few observations (curse of dataset sparsity [40]) and many features (curse of dimensionality [41]). It is better to choose a subset of useful features. We used stepwise feature selection [42–46] to choose the set of proteins that optimized model fit.
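As a concrete illustration of the logistic model, the following minimal Python sketch (not the R code used in the study) fits the coefficients by gradient ascent on the log-likelihood:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_iter=2000):
    """Fit Pr(Y=1 | beta) = 1 / (1 + exp(-X beta)) by gradient ascent."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, float)])  # intercept column
    y = np.asarray(y, float)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))      # current predicted probabilities
        beta += lr * X.T @ (y - p) / len(y)      # gradient of the mean log-likelihood
    return beta

def predict_proba(X, beta):
    """Predicted probability of malignancy for each row of X."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    return 1.0 / (1.0 + np.exp(-X @ beta))
```

In practice one would use a library fitter (e.g., R's glm with a binomial family, as the BMA machinery below does internally); this sketch only makes the model's functional form explicit.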

However, choosing only one feature subset for prediction comes with an inherent risk. When multiple possible statistical models fit the observed data similarly well, it is risky to make inferences and predictions based only on a single model [47]. In this case predictive performance suffers, because standard statistical inference typically ignores model uncertainty.

Accounting for model uncertainty with Bayesian model averaging

We accounted for model-selection ambiguity by using a Bayesian approach to average over the possible models. We considered a set of models M_1, ..., M_B, where each model M_k consists of a family of distributions p(D | θ_k, M_k) indexed by the parameter vector θ_k, where D = (X, Y) is the observed data. Y is the response vector (breast cancer diagnosis), and X is the matrix of observed data (protein levels). Using a Bayesian method to average over the set of considered models [47–52], we first assigned a prior probability distribution p(θ_k | M_k) to the parameters of each model M_k and a prior probability p(M_k) to each model. This formulation allows a conditional factorization of the joint distribution,

p(D, θ_k, M_k) = p(D | θ_k, M_k) p(θ_k | M_k) p(M_k)     (1)

Splitting the joint distribution in this way allowed us to implicitly embed the various models inside one large hierarchical mixture model. This form allowed us to fit these models using the computational machinery of Bayesian model averaging (BMA).

BMA accounts for model uncertainty by averaging over the posterior distributions of multiple models, allowing for more robust predictive performance. If we are interested in predicting a future observation D f from the same process that generated the observed data D, then we can represent the predictive posterior distribution p(D f |D) as an average over the models, weighted by their posterior probabilities [47, 53, 54]:

p(D_f | D) = Σ_{k=1..B} p(D_f | D, M_k) p(M_k | D)     (2)

where the first factor in each summand, p(D_f | D, M_k), is a posterior-weighted mixture of conditional predictive distributions

p(D_f | D, M_k) = ∫ p(D_f | θ_k, M_k) p(θ_k | D, M_k) dθ_k     (3)

and the second factor, p(M_k | D), is the model's posterior probability

p(M_k | D) = p(D | M_k) p(M_k) / Σ_{l=1..B} p(D | M_l) p(M_l)     (4)

which incorporates the model's marginal likelihood

p(D | M_k) = ∫ p(D | θ_k, M_k) p(θ_k | M_k) dθ_k     (5)
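In practice the marginal likelihood in Equation 5 is rarely computed exactly; BMA implementations such as the BMA R package approximate model weights from BIC values, since p(D | M_k) ≈ exp(-BIC_k / 2) up to a constant under equal model priors. A Python sketch of that weighting and of the model-averaged prediction in Equation 2 (an illustration, not the study's R code):

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities p(M_k | D) from BIC values,
    using p(D | M_k) ~ exp(-BIC_k / 2) and equal model priors p(M_k)."""
    best = min(bics)
    w = [math.exp(-(b - best) / 2.0) for b in bics]  # shift by the best BIC for numerical stability
    total = sum(w)
    return [x / total for x in w]

def bma_predict(model_preds, weights):
    """Model-averaged prediction: sum over k of p(D_f | D, M_k) p(M_k | D)."""
    return sum(p * w for p, w in zip(model_preds, weights))
```

A model whose BIC is 2 units worse than the best receives roughly e^(-1) ≈ 0.37 times the best model's weight, so poorly fitting models contribute little to the average.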

Promoting computational efficiency with iterated BMA

BMA allows us to average over all possible models, containing all possible subsets of features. However, considering many models would require extensive computations, especially when computing the posterior predictive distributions. Such computations would be prohibitively long for a quick screening tool that is intended not to impede clinicians' workflow during routine breast cancer screening. Because it was computationally infeasible to consider all 2^100 ≈ 1.27 × 10^30 possible models (one for each subset of the 100 features), we first chose a set of the best-fitting models. For computational efficiency in model selection, this study followed Yeung et al. [55] and used a deterministic search based on an Occam's window approach [54] and the "leaps and bounds" algorithm [56] to identify models with higher posterior probabilities.
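Occam's window itself is simple to state: discard any model whose posterior probability falls too far below the best model's. A minimal Python sketch (the cutoff C = 20, i.e. posterior odds of 1:20, is a common default rather than the study's specific setting):

```python
def occams_window(post_probs, C=20.0):
    """Keep only models whose posterior probability is within a factor C of the
    best model's probability; returns the indices of the surviving models.
    Assumes all post_probs are strictly positive."""
    best = max(post_probs)
    return [i for i, p in enumerate(post_probs) if best / p <= C]
```

The surviving models are the ones whose weights and predictions enter the BMA sums above; everything outside the window is treated as having negligible posterior mass.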

In addition to choosing the best models, we also chose the best proteins. We applied an iterative adaptation of BMA [55]. This method initially ranks each feature separately by the ratio of between-group to within-group sum of squares (BSS/WSS) [57]. For protein j the ratio is

BSS_j / WSS_j = [ Σ_i Σ_g I(Y_i = g) (x̄_gj − x̄_•j)² ] / [ Σ_i Σ_g I(Y_i = g) (X_ij − x̄_gj)² ]     (6)

where I(·) is an indicator function, Y_i is the group label of sample i, X_ij is the level of protein j in sample i, x̄_gj is the average level of protein j in group g (the normal and cancer groups, respectively), and x̄_•j is the average level of protein j over all samples.
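A small Python sketch of this ranking statistic, written as a direct transcription of Equation 6 for a single feature (an illustration, not the study's R code):

```python
import numpy as np

def bss_wss(x, y):
    """Between-group to within-group sum-of-squares ratio for one feature.
    x: feature values across samples; y: group labels (e.g., 0 = normal, 1 = cancer)."""
    x, y = np.asarray(x, float), np.asarray(y)
    grand = x.mean()                               # average over all samples
    bss = wss = 0.0
    for g in np.unique(y):
        xg = x[y == g]
        bss += len(xg) * (xg.mean() - grand) ** 2  # between-group spread
        wss += ((xg - xg.mean()) ** 2).sum()       # within-group spread
    return bss / wss
```

A feature whose group means are far apart relative to its within-group noise gets a large ratio and is offered to the BMA models first.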

Ordered by this BSS/WSS ranking, the features were iteratively fit into BMA models, which generated posterior probability distributions for the proteins' coefficients. We then discarded proteins that had low posterior probabilities of relevance, Pr(b_j ≠ 0 | D) < 1%, where

Pr(b_j ≠ 0 | D) = Σ_{M_k ∈ Γ} Pr(M_k | D)     (7)

where Pr(b_j ≠ 0 | D) is the posterior probability that protein j's coefficient is nonzero, and Γ is the subset of the considered models M_1, ..., M_B that include protein j. By discarding proteins that have small influence on classification, this iterative procedure keeps only the most relevant proteins.
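The inclusion probability in Equation 7 and the pruning rule can be sketched in a few lines of Python; the model posteriors and per-model feature sets here are hypothetical inputs that an actual BMA fit would supply:

```python
def posterior_inclusion_prob(model_posteriors, model_features, j):
    """Pr(b_j != 0 | D): the summed posterior probability of the models
    whose feature set includes feature j."""
    return sum(p for p, feats in zip(model_posteriors, model_features) if j in feats)

def prune_features(features, model_posteriors, model_features, threshold=0.01):
    """Discard features whose posterior inclusion probability falls below
    the threshold (1% in the iterated BMA procedure described above)."""
    return [j for j in features
            if posterior_inclusion_prob(model_posteriors, model_features, j) >= threshold]
```

Each pruning pass frees slots in the active model window, which the iteration then fills with the next features in the BSS/WSS ranking.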

Other models for high-dimensional data

To compare iterated BMA's classification and generalization performance, we also classified the data using two other dimensionality-reducing methods: a support vector machine (SVM) [58] with recursive feature selection [59, 60] and least-angle regression (LAR, a development of LASSO) [61].

All modeling was performed using the R statistical software (version 2.6.2), and specifically the BMA package (version 3.0.3) for iterated BMA, the packages e1071 (version 1.5–16) and R-SVM [31] for the SVM with recursive feature selection, the lars package (version 0.9–5) for least-angle regression, and the ROCR package (version 1.0–2) for ROC analysis. We extended the BMA package to compute the full predictive distributions (see Equations 2–5) within cross-validation using an MCMC approach. Additional file 3 contains the R code and Additional file 4 contains the data in comma-delimited format.

Evaluating classification performance

The classifiers' performances were analyzed and compared using receiver operating characteristic (ROC) analysis [62]. ROC curve metrics were compared statistically using a nonparametric bootstrap method [63]. To estimate generalization performance on future cases, we first defined independent train and test sets by randomly choosing 70% of samples for training and optimizing the classifiers, and then we tested the classifiers on the remaining 30% of the samples. To compare with the train/test split, we also performed leave-one-out cross-validation (LOOCV). Feature selection was performed within each fold of the cross-validation.
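The area under an ROC curve equals the Mann-Whitney probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal Python sketch of that equivalence (the study itself used the ROCR package):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the fraction of
    positive/negative pairs in which the positive scores higher (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to guessing and 1.0 to perfect ranking, which is why the pairwise formulation is threshold-free: it depends only on how the classifier orders the cases.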

Evaluating feature-selection methods

We compared feature-selection techniques using two methods: feature concentration and classifier performance. For feature concentration, we performed feature selection within each fold of a LOOCV. We recorded how many times each feature was chosen. This method distinguished the feature-selection methods that chose few versus many features. Using classifier performance, we investigated the effect of feature selection and sampling. We compared linear models run on the data with these four techniques: 1) no feature selection (using all the proteins in the model), 2) preselecting the features (using all the data to choose the best features, and then running the model using only those preselected features in LOOCV), 3) stepwise feature selection, and 4) iterated BMA. For each case, we ran the classifier in LOOCV.
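The key discipline here, repeating feature selection inside every fold so the held-out sample never informs the selection, can be sketched as follows in Python; select_features, fit, and predict are hypothetical plug-in functions standing in for the actual selection and classification routines:

```python
import numpy as np

def loocv_predictions(X, y, select_features, fit, predict):
    """Leave-one-out cross-validation with feature selection repeated inside
    every fold, so the held-out sample never influences which features are chosen."""
    preds = np.empty(len(y), float)
    for i in range(len(y)):
        train = np.arange(len(y)) != i                 # mask out the held-out sample
        cols = select_features(X[train], y[train])     # selection sees the training fold only
        model = fit(X[train][:, cols], y[train])
        preds[i] = predict(model, X[i, cols])          # score the held-out sample
    return preds
```

Selecting features on the full data before cross-validating leaks information about every held-out sample into the model, which is exactly the optimistic bias the "preselected" condition above was designed to expose.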

Results

Classifier performance

The classifiers achieved moderate classification performance for both normal vs. malignant (AUC = 0.77 ± 0.02 on the test set, and AUC = 0.80 ± 0.02 for LOOCV) and normal vs. benign (AUC = 0.75 ± 0.03 on the test set, and AUC = 0.77 ± 0.02 for LOOCV), but very poor performance for malignant vs. benign tumors (AUC = 0.57 ± 0.05 on the test set, and AUC = 0.53 ± 0.03 for LOOCV). The classification performance is shown as ROC curves in Figure 1. Whereas the ROC curves show the classification performance over the entire range of prediction thresholds, we also considered the threshold of Pr(Y_i = 1 | β) = 0.5 in particular. For this threshold, Table 3 shows the classification errors. The models performed similarly, with approximately 20 false negatives and 10 false positives. All classifiers in Table 3 were run with leave-one-out cross-validation (LOOCV), and within each fold the classifiers chose the best subsets of proteins for classification.

Table 3 Cross-validation classification errors, normal versus cancer
Figure 1
figure 1

ROC curves showing the classification performance of statistical models using the serum protein levels. The models were run with a 70% train and 30% test split of the data set (A-C) and also with leave-one-out cross-validation (LOOCV) (D-F). The classifiers performed similarly, with moderate classification results for normal vs. malignant or benign lesions (A, B, D, E) and poor classification results for malignant vs. benign lesions (C, F).

Figure 2 plots the full posterior predictive distributions for BMA of probit models run with LOOCV. In general, the predictive distributions were more "decided" (concentrated further from the 0.5 probability line) for the tasks of normal vs. cancer and normal vs. benign, but they were less "decided" (concentrated closer to the 0.5 probability line) for benign vs. cancer. This trend indicated that the serum protein levels were very similar for malignant and benign lesions.

Figure 2
figure 2

Posterior predictions of Bayesian model averaging (BMA) of probit models, run with a 70% train and 30% test split of the data set (A-C) and also with leave-one-out cross-validation (LOOCV) (D-F). The classifiers achieved moderate classification results for normal vs. malignant or benign lesions (A, B, D, E) and poor classification results for malignant vs. benign lesions (C, F).

Selected serum proteins

The iterated BMA algorithm chose the best-fitting probit models. The chosen models and their proteins are shown in Figure 3. The best proteins for each classification task are listed in Table 4. The top protein for both normal vs. cancer and normal vs. benign was macrophage migration inhibitory factor (MIF), a known inflammatory agent [64–66]. Other selected proteins also play roles in inflammation and immune response, such as MMP-9 [67, 68], MPO [69], sVCAM-1 [70], ACTH [71], MICA [72], IL-5 [73], IL-12 p40 [74–76], MCP-1 [77], and IFNa [78–80]. For benign vs. cancer, the top protein was CA-125, which is used as a biomarker for ovarian cancer [12, 81–83]. However, the greater presence of CA-125 in cancer tissue was still too subtle to allow the classifiers to achieve good classification performance.

Table 4 Proteins chosen by BMA of linear models
Figure 3
figure 3

Models selected by BMA of linear models. Features are plotted in decreasing posterior probability of being nonzero. Models are ordered by selection frequency, with the best, most frequently selected models on the left and the weakest, most rarely selected on the right. Coefficients with positive values are shown in red and negative values in blue. Strong, frequently selected features appear as solid horizontal stripes. A beige value indicates that the protein was not selected in a particular model.

Complementary to the models' matrix plots for feature strength are the coefficients' marginal posterior probability density functions (PDFs), which the BMA technique calculates by including information from all considered models. Figure 4 shows the marginal posterior PDFs for the top coefficients for the BMA models. The coefficients' distributions are mixture models of a normal distribution and a point mass at zero. This point mass is much larger for the benign vs. cancer models than for normal vs. cancer and normal vs. benign models. The higher weight at zero indicates that the proteins are less suitable for distinguishing benign from malignant lesions than they are for distinguishing lesions from normal tissue.

Figure 4
figure 4

Posterior distributions of the model coefficients for the proteins. The distributions are mixtures of a point mass at zero and a normal distribution. The height of the solid line at zero represents the posterior probability that the coefficient is zero. The nonzero part of the distribution is scaled so that the maximum height is equal to the probability that the coefficient is nonzero.

Feature selection and Bayesian model averaging

While doing feature selection for normal vs. cancer within LOOCV, we recorded the counts of how many times each protein was selected. The selection frequencies are shown as a heatmap in Figure 5. Iterated BMA and least-angle regression selected the fewest proteins, whereas stepwise feature selection chose many more proteins. The strongest proteins were chosen consistently across all feature selection techniques, as shown by the horizontal lines in the figure.

Figure 5
figure 5

Heatmap of normalized frequencies of selected features, normal vs. cancer. The feature selection frequencies were averaged over all folds of the LOOCV. For comparison across techniques, the frequencies in each column were scaled to sum to one. Less-frequently selected features appear as cooler dark blue colors, whereas more frequently selected features appear as hotter, brighter colors. Models that used fewer features appear as dark columns with a few bright bands, whereas models that used more features appear as denser smears of darker bands.

We also investigated the effect of feature selection upon classifier generalization. Figure 6 shows the ROC and accuracy curves for linear models with various feature selection strategies. Preselecting the features generated a very optimistically biased classification performance. When the same feature selection technique (stepwise) was applied within each fold of LOOCV, the classification performance fell dramatically. Using no feature selection (all proteins) yielded extremely poor classification performance, no better than guessing. The poor performance demonstrated that the linear model needs feature selection for good classification performance when the number of features is roughly the same as the number of samples in noisy data. Iterated BMA of linear models significantly outperformed the stepwise, single-model method. This performance increase demonstrated the better predictive ability of the BMA models; averaging a set of the most promising models was better than using only the single best model.

Figure 6
figure 6

ROC and accuracy curves for linear models with four feature selection techniques. 1) Preselected: all the data were used to choose the best features, and the model was then run in LOOCV using only those preselected features; 2) BMA: iterated Bayesian model averaging; 3) Stepwise: stepwise feature selection within each fold; 4) All features: all the proteins were used in the model, with no feature selection.

Discussion

The serum proteins allowed the classifiers to detect lesions (malignant or benign) from normal tissue moderately well, but they were very poor for distinguishing benign from malignant lesions. These classification results were consistent in both the test set and the leave-one-out cross-validation. This consistency implies that the classification results are not highly dependent on the sampling scheme but rather highlight consistent trends in our data set. The classification results show that the proteins were not specific for cancer and suggest that they may indicate states of biological or immunological stress. A good candidate for the dominating biological effect is inflammation, since the top protein selected for both normal vs. cancer and normal vs. benign was macrophage migration inhibitory factor (MIF), which is active in inflammation [64–66]. The best protein for distinguishing malignant from benign tumors was cancer antigen 125 (CA-125), which is a prominent serum marker for ovarian cancer [12, 81, 83]. However, CA-125 levels are influenced by other factors, such as age, hormone-replacement therapy, smoking habits, and use of oral contraceptives [84]. In general it is very difficult to ascertain whether biomarker proteins are generated by the cancer itself or by secondary effects, such as immune response. Once potential biomarker proteins are identified in initial studies, follow-up studies can focus on those proteins to discover their origin and role in the disease process. Helpful experimental study designs would control for known secondary causes of biomarker activity and would collect enough samples to average over unintended secondary causes. Longitudinal studies would also lessen the effect of transient secondary causes.

To quantify and compare classification performances, we used ROC analysis, which fairly compares classifiers that may be operating at different sensitivities due to arbitrary decision thresholds applied to the classifiers' output values. Although our data set comprised three classes (normal, benign, and cancer), current ROC methods required us to split an inherently three-class classification problem into three different two-class tasks: normal vs. benign, normal vs. cancer, and benign vs. cancer. The field of ROC analysis is still in development for the three-class problem; no consensus has yet been reached about how to quantitatively score the resulting six-dimensional ROC hypersurface [85–88]. However, for other methods of classifier comparison, such as the generalized Brier score or discrete counts of classification errors, full three-class models could have been used, albeit with decision thresholds.

This study investigated a group of 98 serum proteins (Table 2), which is a relatively small sample of all detectable serum proteins. Future studies may identify other proteins with stronger relationships to breast cancer. Upcoming protein technologies will allow the screening of large populations with protein-based tests that draw on a larger set of proteins rather than relying on only a few. Microfluidic chips would simplify the process of automating blood tests in a high-throughput fashion. However, with current assay technology and cost-benefit analysis of screening programs, the fixed cost per protein assayed essentially limits the number of proteins that can be used for screening. To lower screening costs, we chose small subsets of the features via feature-selection methods. Iterated BMA and least-angle regression were able to classify well using a far smaller set of features than those chosen by stepwise feature selection.

High feature correlation impedes many feature-selection techniques. For stochastic feature-selection methods, two highly correlated features are each likely to be chosen in alternation. Similarly, a cluster of highly correlated features causes the feature selection technique to spread the feature selection rate among each feature in the cluster, essentially diluting each feature's selection rate. Severe dilution of selection rates can cause none of the cluster's features to be selected. Future work will entail adding cluster-based methods to the iterated BMA algorithm.

The currently proposed serum biomarkers for breast cancer are not sensitive or specific enough for breast cancer screening. However, better biomarkers may be identified with newer protein assay technology and larger data sets. A protein's subtle diagnostic ability may be enhanced by the assimilation of other medical information, such as gene expression and medical imaging. The proteins will boost diagnostic performance only if they provide information that is complementary to, and non-redundant with, the current clinical practice of mammograms, sonograms, and physical examination. The relationship of medical imaging and protein screening remains to be determined in future work.

Conclusion

We have performed feature-selection and classification techniques to identify blood serum proteins that are indicative of breast cancer in premenopausal women. The best features to detect breast cancer were MIF, MMP-9, and MPO. While the proteins could distinguish normal tissue from cancer and normal tissue from benign lesions, they could not distinguish benign from malignant lesions. Since the same protein (MIF) was chosen for both normal vs. cancer and normal vs. benign lesions, it is likely that this protein plays a role in the inflammatory response to a lesion, whether benign or malignant, rather than a role specific to cancer. While the current set of proteins shows moderate ability for detecting breast cancer, their true usefulness in a screening program remains to be seen in their integration with imaging-based screening practices.