A number of urine and blood-based biomarker tests have been described for prostate cancer, although to date there has only been a limited exploration of the methodology behind the validation studies that underpin these tests.
In this review, a selection of commercially available urine and blood-based biomarker tests for prostate cancer is described, and the underlying key validation studies for each test are critically appraised using the Standards for Reporting Diagnostic Accuracy (STARD) 2015 statement.
The ExoDx Prostate Intelliscore, SelectMDx, Progensa PCA3, Mi-Prostate Score, 4K Score, and Prostate Health Index (PHI) tests were reviewed. Most of the validation studies supporting these tests perform exploratory analyses to determine cut-off values in a post hoc manner, comprise cohorts that are primarily Caucasian, report receiver operating characteristic curves that combine the biomarker’s result with established clinical nomograms, and are based on a reference standard (prostate biopsy) that lacks central pathology review. Common deficiencies relative to the STARD reporting guidelines include failure to provide a published study protocol, prospective registration in a study registry, a flow diagram, justification for the sample size, a discussion of adverse events associated with testing, and information on how missing or indeterminate test results should be managed.
Key validation studies that support many commercially available urine and blood-based biomarkers for prostate cancer have deficiencies in transparency based on STARD reporting guidelines, and their methodological limitations must be considered when deciding how these tests should be applied in clinical practice.
Early detection of prostate cancer aims to identify high-risk, clinically localized disease that can be successfully treated, while minimizing the complications associated with advanced or metastatic presentations. The lifetime risk of being diagnosed with prostate cancer in the US is estimated to be approximately 11%, and it is the second most frequently diagnosed malignancy in men worldwide [1, 2]. Prostate-specific antigen (PSA) testing has led to significant stage migration, but the lack of specificity of PSA for detecting high-risk prostate cancer has led to both overdiagnosis and overtreatment of some patients with indolent disease.
The ideal biomarker for prostate cancer would be derived in a non-invasive manner, be specific for clinically significant prostate cancer, and be able to clearly differentiate high-risk from indolent disease. The biomarker would not be expressed or altered by other conditions or malignancies, and it would be inexpensive and easily accessible at the point of care. Biomarkers should be validated in high-quality and transparently reported diagnostic test accuracy studies, a type of study that provides evidence as to how well a test correctly identifies or rules out disease, while assisting clinicians and their patients in making subsequent decisions. Over the last decade, numerous novel biomarkers have emerged commercially in an effort to improve the detection of clinically significant prostate cancer.
Although a number of published reviews explore these tests in detail, with varying recommendations for how to use them in clinical practice [4,5,6,7,8], to date there has been only a limited exploration of the methodology behind the validation studies that underpin these tests. In this paper, a selection of the most commonly encountered clinically available urine and blood-based biomarker tests for the diagnosis of prostate cancer is described, and the underlying key validation studies for each test are critically appraised using the Standards for Reporting Diagnostic Accuracy (STARD) statement. The STARD 2015 statement includes a list of 30 key items that should be included in every report of a diagnostic test accuracy study (Table 1). Similar to reporting guidelines developed for randomized controlled trials (such as CONSORT) [10, 11] and systematic reviews (such as AMSTAR) [12, 13], adherence to the STARD statement improves the completeness of reporting and can highlight key deficiencies or weaknesses in study design. Given that commercially available prostate cancer biomarkers can cost upwards of several thousand dollars for a single test, transparent reporting of validation studies is critical for practicing clinicians to be able to discern which tests, if any, would benefit their patients (Table 2).
ExoDx Prostate IntelliScore
The ExoDx Prostate IntelliScore from Exosome Diagnostics (Waltham, MA, USA) is a urinary biomarker designed to discriminate between patients with Gleason 7 or higher prostate cancer and those with Gleason 6 or benign biopsy. Unlike the other urinary biomarkers described, the ExoDx biomarker is assessed from a first-catch urine sample of between 25 and 50 mL, without a prior digital rectal examination (DRE) required. The test detects both PCA3 and TMPRSS2:ERG RNA in urinary exosomes. Exosomes are small membrane vesicles that are secreted by several cell types, including immune and tumor cells. The key validation study for this test was published by McKiernan et al., and examined the ability of the ExoDx test, when combined with PSA level, age, race, and family history (“standard of care”), to discriminate between Gleason score 7 or higher and Gleason score 6 or benign disease on initial biopsy, compared to standard of care alone. The study comprised a training cohort involving an initial 255 patients, after which a separate validation cohort of 519 patients was used to test the gene expression signature’s ability to identify Gleason 7 or higher disease. The authors report that the ExoDx gene expression signature combined with PSA, age, race, and family history provided an AUC of 0.73 (95% CI 0.68–0.77), which was more predictive of diagnosing Gleason 7 or higher prostate cancer than PSA, age, race, and family history alone (0.63, 95% CI 0.58–0.68).
Areas of reporting deficiency based on the STARD 2015 checklist included Item 13a, which specifies that authors should report whether the clinical information and reference standard (in this case, prostate biopsy result) were visible to the performers or readers of the ExoDx test. It is not clear in the manuscript whether those performing the ExoDx assay were blinded to the results of the prostate biopsy. Items 15 and 16 of the STARD criteria, which require authors to report how indeterminate or missing test results are handled, were also not addressed. In the manuscript, the authors note that although 499 patients were included in the training cohort, data from 110 patients were excluded either because the provided urine volume was too high or because insufficient RNA was detected in the provided samples. As this represents nearly 21% of the training cohort, and a comparably high proportion (26%) of the validation cohort was excluded for similar reasons, the omission of any discussion of how clinicians should manage similar indeterminate results in real-world practice is notable. No flow diagram is provided (Item 19). Flow diagrams help readers appreciate the potential for bias by illustrating the basic structure of the study, clearly identifying where and how patients were ultimately analyzed, and showing how or whether patients in the training and validation groups were excluded. Flow diagrams can also provide true-positive, true-negative, false-positive, and false-negative test numbers, permitting the reader to independently determine the sensitivity and specificity of the test in question. There is also no discussion of adverse events, if any, of the ExoDx test (Item 25). Given that prostate biopsy is being used as the reference standard to validate the test, another weakness of the paper is the lack of a central pathology review.
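The 2×2 counts that a flow diagram yields map directly onto the standard accuracy measures. A minimal sketch, using hypothetical counts rather than figures from the ExoDx study:

```python
def accuracy_measures(tp, fp, fn, tn):
    """Derive standard diagnostic accuracy measures from a 2x2 table."""
    sensitivity = tp / (tp + fn)   # proportion of diseased correctly detected
    specificity = tn / (tn + fp)   # proportion of non-diseased correctly excluded
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical example: 100 biopsy-positive and 300 biopsy-negative men
sens, spec, ppv, npv = accuracy_measures(tp=85, fp=120, fn=15, tn=180)
```

Without these four counts, or a diagram from which they can be reconstructed, a reader cannot verify the reported accuracy estimates independently.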
The STARD guidelines also suggest that diagnostic accuracy studies register their protocols prospectively in a trial registry; no evidence of such registration is provided (Item 28). The study was funded by Exosome Diagnostics. Strengths of the paper include provision of the full study protocol for review, as well as a relatively high percentage of non-Caucasian patients in the validation cohorts, including patients of African descent, who are historically underrepresented in these studies.
SelectMDx
The SelectMDx test (MDxHealth, Irvine, CA, USA) is a urine-based biomarker that screens for HOXC6 and DLX1 mRNA levels in urine obtained post-DRE. The key validation study for this test was published by Van Neste et al. Urine was collected from two cohorts of patients who were scheduled to undergo either an initial or repeat prostate biopsy based on an elevated PSA level (≥ 3 ng/mL), abnormal DRE, or a family history of prostate cancer. Providers performed a standardized DRE consisting of three strokes per lobe, after which the first voided urine sample was collected and analyzed. The study explored the RNA levels of several genes, including HOXC4, HOXC6, TDRD1, DLX1, KLK3, and PCA3; following an exploratory analysis, the authors found that HOXC6 and DLX1 together provided the highest AUC, 0.76 (95% CI 0.71–0.81), for identifying high-grade prostate cancer (defined as Gleason 7 or higher). This signature was then combined with clinical variables (age, PSA, PSA density, family history of prostate cancer, DRE, and history of prostate biopsy) and tested in a separate validation cohort of 386 patients. Of note, racial demographics were not recorded as part of the study, but were assumed to be > 95% Caucasian based on general hospital records in The Netherlands, where the study was conducted. With the addition of clinical variables, the AUC for detection of Gleason 7 or higher prostate cancer increased to 0.86 (95% CI 0.80–0.92). Separately, the authors reanalyzed the model without the DRE as a clinical variable and demonstrated an even higher AUC of 0.90 (95% CI 0.85–0.95).
The SelectMDx validation study highlights several features common to many prostate cancer biomarker studies that are worth considering in detail. First, the population studied underrepresents patients of African descent, a cohort known to be at higher risk for advanced prostate cancer. Although this is inherent to the study being conducted in a Dutch population, it raises the question of whether biomarkers should be validated in other populations before being used more broadly. Second, the authors perform an exploratory analysis on several candidate genes before settling on an “ideal” combination upon which validation is then performed. Because the candidate genes and their positivity cut-offs were determined post hoc, there is an increased risk that the selected gene signature is overly optimistic in identifying high-risk disease. This has been previously demonstrated in settings where the cut-off value for a test’s positivity is chosen post hoc [20,21,22]. Particularly with gene expression-based tests, it is not clear whether the “ideal” gene combination identified is specific to this training cohort alone or would be reproducible in the population at large. Indeed, when the authors proceed to validate their chosen signature, they do so only after adding clinical variables, the latter representing another feature common to many prostate cancer biomarker validation studies. Though the authors report an AUC of 0.90 with the HOXC6/DLX1 gene signature combined with clinical variables in the validation cohort, it is worth noting that clinical variables alone had an AUC of 0.87, although this difference was reported to be statistically significant (p = 0.018).
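The optimism introduced by post hoc cut-off selection can be illustrated with a toy simulation: even a marker with no true discriminative ability appears useful once the "best" threshold is chosen on the same dataset. All values below are simulated and not drawn from any study:

```python
# Toy simulation: a marker that is pure noise still yields an apparently
# informative cut-off when the threshold is selected post hoc.
import random

random.seed(42)
labels = [random.random() < 0.3 for _ in range(400)]   # ~30% "high-grade"
scores = [random.random() for _ in labels]             # marker carries no signal

def youden_j(cutoff):
    """Sensitivity + specificity - 1 at a given cut-off (true value here: 0)."""
    tp = sum(1 for s, y in zip(scores, labels) if y and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if not y and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if not y and s >= cutoff)
    return tp / (tp + fn) + tn / (tn + fp) - 1

# Scan every observed score as a candidate cut-off and keep the "best" one
best_cutoff, best_j = max(((c, youden_j(c)) for c in scores), key=lambda t: t[1])
```

Because the maximum is taken over many noisy estimates, `best_j` comes out above its true value of zero, which is precisely the bias described by Leeflang et al. and Ewald [20, 22].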
There are also several reporting deficiencies by the STARD 2015 criteria, including Item 4 (pre-specified hypothesis), Items 15 and 16 (how indeterminate or missing results were handled), and Item 18 (how the sample size of the cohorts was determined). No study protocol was provided for review, nor was the study protocol registered in advance. Both the first and senior authors were employees or consultants of MDxHealth, with appropriate disclosures made within the manuscript.
Progensa PCA3
The PCA3 assay (Progensa PCA3, Hologic Inc, Marlborough, MA, USA) measures the concentration of prostate cancer gene 3 (PCA3) and PSA RNA in post-DRE first-catch urine specimens. The ratio of PCA3 RNA to PSA RNA is calculated and reported as a “PCA3 score”, with a score below 25 considered “negative” and associated with a lower likelihood of positive biopsy. The key validation study for this test was reported by Marks et al. Urine was collected from 233 consecutive men with PSA levels of 2.5 ng/mL or higher and at least one prior negative prostate biopsy; 95% of the patient population was Caucasian and the mean age was 64 years. Of the 233 men included in the study, 226 had sufficient RNA for analysis (97% yield rate) and 60 (26.5%) had a repeat biopsy positive for prostate cancer. Among those with a positive biopsy, 39 patients (17%) had Gleason 6 disease and only 21 (9%) had Gleason 7. Unlike the previous two urinary biomarkers discussed, the PCA3 validation study was intended to address whether the test could distinguish a positive biopsy from a negative one, not to differentiate high-risk from low-risk disease. There were no statistically significant differences in PCA3 scores for Gleason 6 versus Gleason 7 cases. PCA3 was found to have an AUC of 0.678 (95% CI 0.597–0.759) for identifying a positive biopsy. An exploratory analysis of various PCA3 score cut-off values was then performed in a post hoc fashion, with a value of 35 reported to provide a specificity of 72% and sensitivity of 58% in detecting a positive prostate biopsy. STARD reporting deficiencies included a lack of clarity as to how indeterminate or missing results were handled and how the sample size of the cohorts was determined, no study protocol or study registration, absence of a flow diagram (which would have more clearly drawn attention to the fact that only 9% of the cohort had Gleason 7 disease), no discussion of adverse events, and no clear discussion of study limitations.
Study authors included paid consultants and investigators for Gen-Probe, DiagnoCure, and Beckman-Coulter, Inc.
Additional validation of PCA3 has come from patients in the placebo arm of the REDUCE trial, who provided post-DRE urine samples before the per-protocol 2- and 4-year prostate biopsies. There were no statistically significant differences between the AUCs for high- and low-grade disease. FDA approval of this test is limited to use in the repeat biopsy setting. PCA3, like many of the other biomarkers discussed, performs best when used as part of a model that includes other clinical factors (such as PSA, percent free PSA, prostate volume, age, and family history).
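The score construction described above can be sketched as follows. The ×1000 scaling matches common descriptions of the Progensa assay, but the transcript quantities used here are invented for illustration:

```python
# Sketch of the reported PCA3 score construction; the x1000 scaling and the
# example transcript quantities are illustrative, not assay specifications.
def pca3_score(pca3_mrna, psa_mrna):
    """Ratio of PCA3 to PSA mRNA, scaled to a convenient integer range."""
    return (pca3_mrna / psa_mrna) * 1000

score = pca3_score(pca3_mrna=120, psa_mrna=6000)
# The commercial assay reports scores below 25 as "negative"; the Marks
# et al. exploratory analysis examined alternative cut-offs including 35.
elevated = score >= 25
```

Normalizing to PSA mRNA controls for the amount of prostate-derived material captured in the post-DRE urine sample, which is why the ratio, rather than the raw PCA3 level, is reported.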
Mi-Prostate Score (MiPS)
MiPS (University of Michigan MLabs, Ann Arbor, MI, USA) combines the measurement of TMPRSS2:ERG and PCA3 in post-DRE urine with clinical information from the Prostate Cancer Prevention Trial risk calculator (PCPTrc) to provide a score that quantifies the risk that a biopsy will identify prostate cancer. In the validation study by Tomlins et al., multivariable logistic regression models were developed from a 733-specimen training cohort, of which 711 samples had analyzable RNA data. An exploratory analysis was performed to determine how TMPRSS2:ERG and PCA3, either alone or in combination with PSA or the PCPTrc, would perform in predicting the presence of prostate cancer on biopsy. Two-thirds of the validation cohort were Caucasian, with the remaining patients specified as non-white. It is not clear how the sample size was chosen (Item 18 of the STARD criteria), nor are adverse events, if any, discussed (Item 25). Compared with PSA/PCPTrc alone, or either combined with TMPRSS2:ERG or PCA3 alone, the MiPS model combining PSA/PCPTrc with both TMPRSS2:ERG and PCA3 expression levels provided a greater AUC for predicting both any cancer on biopsy (0.751/0.762) and high-grade cancer (0.772/0.779). Confidence intervals are not provided for these findings (Item 24, estimates of diagnostic accuracy and their precision). Several authors of the study are employees of Hologic/Gen-Probe, which commercially provides the PCA3 test.
4K Score
The 4K Score (OPKO Health, Miami, FL, USA) is a blood-based test that measures the levels of a four-kallikrein panel, specifically total PSA, free PSA, intact (single chain) PSA, and human kallikrein 2. This information is combined with DRE findings and a patient’s history of prior prostate biopsy in an algorithm that generates a probability score between 0 and 100%, predicting the likelihood that a patient will have high-grade pathology on biopsy (defined as Gleason 7 or higher). Validation comes from several studies, including modeling by Vickers et al. performed in samples from the Rotterdam section of the European Randomized Study of Screening for Prostate Cancer (ERSPC) [28, 29], as well as the Prostate Testing for Cancer and Treatment (ProtecT) study. Parekh et al. provided the first prospective validation study in the US, in which 1012 patients from 26 centers were assessed with the test. Patients were referred for prostate biopsy and underwent blood draw prior to biopsy. A total of 1370 men were enrolled in the study; 58 patients were excluded due to delays in shipping of the blood samples or inclusion/exclusion criteria violations, and the first 300 patients were used to “calibrate” the test to the US population, although ultimately no modifications to the algorithm were made. Compared to version 2.0 of the Prostate Cancer Prevention Trial risk calculator (PCPTrc 2.0), which incorporates age, race, DRE, PSA, and prior biopsy to predict the probability of high-grade (Gleason 7 or higher) prostate cancer, the 4K Score provided an AUC of 0.82 versus 0.74 for the PCPTrc 2.0 alone. The authors also performed a post hoc comparison of the 4K Score among African American versus Caucasian men and found no difference in test performance.
Similar to other biomarker validation studies, there was no central pathology review of the prostate biopsies (the reference standard). The authors performed a post hoc exploratory analysis of various 4K Score cut-offs to identify values that would detect a high number of Gleason 7 or higher cancers while minimizing the number of missed high-grade tumors. Overall, however, the Parekh study had the best compliance with the STARD 2015 reporting criteria, with deficiencies only in Item 4 (no hypothesis provided), Item 15 (how indeterminate data were handled), Item 25 (no reporting on adverse events), and Items 28 and 29 (trial registration and access to the full study protocol). Funding was provided by OPKO Diagnostics, and the company was involved in study design, conduct, and data collection.
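The general shape of a kallikrein-based risk algorithm is a logistic model over the panel plus clinical covariates. The sketch below uses entirely invented coefficients (the commercial 4K Score algorithm's actual weights are not published) purely to show the structure:

```python
# Generic logistic risk model of the kind used by kallikrein panels.
# ALL coefficients below are invented for illustration only.
import math

def risk_percent(total_psa, free_psa, intact_psa, hk2, dre_abnormal, prior_biopsy):
    linear = (-3.0
              + 0.25 * total_psa
              - 0.80 * free_psa
              + 0.50 * intact_psa
              + 0.60 * hk2
              + 0.90 * (1 if dre_abnormal else 0)
              - 0.70 * (1 if prior_biopsy else 0))
    prob = 1 / (1 + math.exp(-linear))   # logistic link -> probability
    return 100 * prob                    # reported on a 0-100% scale

risk = risk_percent(total_psa=6.2, free_psa=1.1, intact_psa=0.4,
                    hk2=0.08, dre_abnormal=True, prior_biopsy=False)
```

Because the output is a calibrated probability rather than a raw marker level, any cut-off applied to it (as in the authors' post hoc analysis) is a trade-off between detected and missed high-grade cancers rather than a property of the assay itself.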
Prostate Health Index (PHI)
The Prostate Health Index (PHI) (Beckman-Coulter, Inc., Brea, CA, USA) combines total PSA, free PSA, and p2PSA via a formula to predict the likelihood of finding prostate cancer on a subsequent biopsy. The free PSA precursor p2PSA has been shown to represent up to 95% of the free PSA fraction in some men with prostate cancer, compared with no more than 19% in biopsy-negative men. Catalona et al. published the key validation study for PHI in 2011, in which 892 men were included. Although the study describes itself as a prospective, multi-institutional trial of men with no history of prostate cancer, the 892 patients actually comprised only 121 (13.6%) patients who were prospectively enrolled using the study’s protocol. The remaining patients included 743 patients who were “enrolled under separate protocols” and 28 “retrospective samples”. The paper does not clearly explain what is meant by “separate protocols”, where these protocols can be found (Item 29 of the STARD criteria), or how and where the retrospective samples were obtained. Additionally, 27 patients with an unknown prior biopsy history were included in the study, despite the requirement that patients have no prior history of prostate cancer.
Of the biomarker validation studies considered thus far in this review, this paper was the only one that clearly specified a hypothesis within the manuscript (Item 4 of the STARD criteria), noting specifically that the study was designed to assess a primary null hypothesis that PHI had no greater specificity than percent free PSA at 95% sensitivity. A sample size justification (Item 18) was also provided. Absent, however, were details on how indeterminate or missing data were handled (Items 15 and 16), a flow diagram (Item 19), any report on adverse events (Item 25), and a study registration number (Item 28).
The authors report that the AUC for prostate cancer detection using the PHI was 0.703, compared with 0.525 for PSA alone. Risk ratios were estimated using PHI to determine the probability of detecting Gleason score 7 or higher prostate cancer; the authors note that at a cut-off PHI range between 25 and 34.9, the relative risk of Gleason 7 or higher disease is a modest 1.08 (95% CI 0.61–19.2).
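For reference, the PHI is computed from its three inputs as (p2PSA / free PSA) × √(total PSA). The values below are illustrative only, and the mixed-unit convention (p2PSA commonly reported in pg/mL, the PSA fractions in ng/mL) should be confirmed against the assay's own documentation:

```python
# The published PHI formula: (p2PSA / free PSA) x sqrt(total PSA).
# Unit convention assumed here: p2PSA in pg/mL, free and total PSA in ng/mL;
# verify against the Beckman-Coulter assay documentation before use.
import math

def phi(p2psa_pg_ml, free_psa_ng_ml, total_psa_ng_ml):
    return (p2psa_pg_ml / free_psa_ng_ml) * math.sqrt(total_psa_ng_ml)

# Illustrative inputs only (not values from the Catalona cohort)
value = phi(p2psa_pg_ml=12.0, free_psa_ng_ml=0.8, total_psa_ng_ml=5.5)
```

The √(total PSA) term dampens the influence of total PSA, so the index is driven primarily by the p2PSA-to-free-PSA ratio that the validation study found enriched in cancer.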
Comment and conclusions
With the economic burden of prostate cancer estimated to be nearly $12 billion annually in the US, clinicians and their patients must make important decisions when selecting diagnostic tests, particularly those intended to decrease overdiagnosis of indolent disease. There are several key findings from this review. First, adherence to the Standards for Reporting Diagnostic Accuracy guidelines is limited, with notable deficiencies in providing published protocols (Item 29) or evidence of advance study registration (Item 28). Easy access to study protocols allows for independent verification of the purported findings and provides insight into the testing process that can highlight limitations that may exist when attempting to use a diagnostic test in an individual patient. Further, none of the published validation studies discuss performing a central review of biopsy pathology, an arguably significant limitation given that prostate biopsy is used as the reference standard for all of the index tests investigated. Given that many of the tests seek to differentiate Gleason 6 (“low-risk”) from Gleason 7 and higher (“high-risk”) disease, varying practice patterns of pathologic classification can introduce uncertainty into the validation process. This is particularly true when validation is performed retrospectively using historical biopsy cohorts, as there has been a well-documented upgrading of cancers to Gleason ≥ 7 in contemporary pathologic assessments.
The absence of a flow diagram in many of the studies (Item 19) was another frequent finding, one that makes it harder for readers to see the breakdown of the patient population within the validation cohorts, as well as how many patients had indeterminate or missing test results and how these cases were handled (Items 15–17). Sample size calculations or justifications (Item 18) were also only rarely mentioned; providing this information is important in understanding whether a diagnostic accuracy test is valid for use in general practice. Most diagnostic accuracy studies are small, and their test results may therefore be imprecise [14, 37].
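For context on Item 18, one common way to justify sample size in a diagnostic accuracy study is a normal-approximation (Buderer-style) calculation: size the cohort so the confidence interval around an anticipated sensitivity has a chosen half-width, then inflate for the expected disease prevalence. The inputs below are hypothetical:

```python
# Normal-approximation sample size for estimating sensitivity to within a
# chosen margin; all inputs are hypothetical planning assumptions.
import math

def n_for_sensitivity(sens, margin, prevalence, z=1.96):
    cases = math.ceil(z**2 * sens * (1 - sens) / margin**2)  # diseased subjects needed
    total = math.ceil(cases / prevalence)                    # whole biopsy cohort
    return cases, total

# e.g. anticipate 90% sensitivity, want a +/-5% margin, and expect 30%
# of biopsies to show Gleason >= 7 disease
cases, total = n_for_sensitivity(sens=0.90, margin=0.05, prevalence=0.30)
```

A study reporting such a calculation lets readers judge whether an observed AUC or sensitivity estimate is precise enough to act on, which is exactly what Item 18 is meant to ensure.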
The reporting of adverse events was another consistent omission across the reviewed studies. Although the risk of adverse events is presumed to be low for blood and urine-based biomarker tests, the key role that genomic signatures play in several of the tests raises the potential for unanticipated consequences that should be explored. In addition, the low rate of participants of African descent in most of the validation studies may limit the generalizability of these tests in this patient population and potentially increase the false-negative rate.
It is worth noting that the key validation studies for the PCA3, 4K Score, and PHI tests were performed prior to the publication of the STARD 2015 reporting criteria used in this paper to evaluate them; however, the STARD guidelines themselves were originally published in 2003, with the 2015 update primarily incorporating features to make the criteria easier to use, along with a few additions intended to harmonize the guidelines with others, including CONSORT.
There is currently no published consensus on when or how these biomarkers should be used, including within the most recent American Urological Association guideline for the early detection of prostate cancer. This fact, along with the limitations noted herein, underscores the importance of additional investigation, including more rigorous adherence to reporting guidelines, before these novel diagnostic tests can be routinely incorporated into standard clinical practice.
Grossman DC, Curry SJ, Owens DK et al (2018) Screening for prostate cancer. JAMA 319:1901. https://doi.org/10.1001/jama.2018.3710
Siegel RL, Miller KD, Jemal A (2016) Cancer statistics, 2016. CA Cancer J Clin 66:7–30. https://doi.org/10.3322/caac.21332
Mallett S, Halligan S, Thompson M et al (2012) Interpreting diagnostic accuracy studies for patient care. BMJ 345:e3999–e3999. https://doi.org/10.1136/bmj.e3999
Matulay JT, Wenske S (2018) Genetic signatures on prostate biopsy: clinical implications. Transl Cancer Res 7:S640. https://doi.org/10.21037/tcr.2018.03.26
Schmid M, Trinh Q-D, Graefen M et al (2014) The role of biomarkers in the assessment of prostate cancer risk prior to prostate biopsy: which markers matter and how should they be used? World J Urol 32:871–880. https://doi.org/10.1007/s00345-014-1317-2
Narayan VM, Konety BR, Warlick C (2017) Novel biomarkers for prostate cancer: an evidence-based review for use in clinical practice. Int J Urol 24:352–360. https://doi.org/10.1111/iju.13326
Zapała P, Dybowski B, Poletajew S, Radziszewski P (2018) What can be expected from prostate cancer biomarkers a clinical perspective. Urol Int 100:1–12. https://doi.org/10.1159/000479982
McGrath S, Christidis D, Perera M et al (2016) Prostate cancer biomarkers: are we hitting the mark? Prostate Int 4:130–135. https://doi.org/10.1016/j.prnil.2016.07.002
Bossuyt PM, Reitsma JB, Bruns DE et al (2015) STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ 351:h5527. https://doi.org/10.1136/bmj.h5527
Moher D, Hopewell S, Schulz KF et al (2010) CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 340:c869–c869. https://doi.org/10.1136/bmj.c869
Narayan VM, Cone EB, Smith D et al (2016) Improved reporting of randomized controlled trials in the urologic literature. Eur Urol 70:1044–1049. https://doi.org/10.1016/j.eururo.2016.07.042
Han JL, Gandhi S, Bockoven CG et al (2017) The landscape of systematic reviews in urology (1998 to 2015): an assessment of methodological quality. BJU Int 119:638–649. https://doi.org/10.1111/bju.13653
Shea BJ, Reeves BC, Wells G et al (2017) AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 358:j4008. https://doi.org/10.1136/bmj.j4008
Cohen JF, Korevaar DA, Altman DG et al (2016) STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration. BMJ Open 6:e012799. https://doi.org/10.1136/bmjopen-2016-012799
Sathianathen NJ, Kuntz KM, Alarid-Escudero F et al (2018) Incorporating biomarkers into the primary prostate biopsy setting: a cost-effectiveness analysis. J Urol 200:1215. https://doi.org/10.1016/j.juro.2018.06.016
Denzer K, Kleijmeer MJ, Heijnen HF et al (2000) Exosome: from internal vesicle of the multivesicular body to intercellular signaling device. J Cell Sci 113(Pt 19):3365–3374
McKiernan J, Donovan MJ, O’Neill V et al (2016) A novel urine exosome gene expression assay to predict high-grade prostate cancer at initial biopsy. JAMA Oncol 2:882. https://doi.org/10.1001/jamaoncol.2016.0097
Van Neste L, Hendriks RJ, Dijkstra S et al (2016) Detection of high-grade prostate cancer using a urinary molecular biomarker-based risk score. Eur Urol 70:740–748. https://doi.org/10.1016/j.eururo.2016.04.012
DeSantis CE, Siegel RL, Sauer AG et al (2016) Cancer statistics for African Americans, 2016: progress and opportunities in reducing racial disparities. CA Cancer J Clin 66:290–308. https://doi.org/10.3322/caac.21340
Leeflang MMG, Moons KGM, Reitsma JB, Zwinderman AH (2008) Bias in sensitivity and specificity caused by data-driven selection of optimal cutoff values: mechanisms, magnitude, and solutions. Clin Chem 54:729–737. https://doi.org/10.1373/clinchem.2007.096032
Harrell FE, Lee KL, Mark DB (1996) Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med 15:361–387. https://doi.org/10.1002/(SICI)1097-0258(19960229)15:4%3c361:AID-SIM168%3e3.0.CO;2-4
Ewald B (2006) Post hoc choice of cut points introduced bias to diagnostic research. J Clin Epidemiol 59:798–801. https://doi.org/10.1016/j.jclinepi.2005.11.025
Marks LS, Fradet Y, Deras IL et al (2007) PCA3 molecular urine assay for prostate cancer in men undergoing repeat biopsy. Urology 69:532–535. https://doi.org/10.1016/j.urology.2006.12.014
Aubin SMJ, Reid J, Sarno MJ et al (2010) PCA3 molecular urine test for predicting repeat prostate biopsy outcome in populations at risk: validation in the placebo arm of the dutasteride REDUCE trial. J Urol 184:1947–1952. https://doi.org/10.1016/j.juro.2010.06.098
de la Taille A, Irani J, Graefen M et al (2011) Clinical evaluation of the PCA3 assay in guiding initial biopsy decisions. J Urol 185:2119–2125. https://doi.org/10.1016/j.juro.2011.01.075
Cornu J-N, Cancel-Tassin G, Egrot C et al (2013) Urine TMPRSS2:ERG fusion transcript integrated with PCA3 score, genotyping, and biological features are correlated to the results of prostatic biopsies in men at risk of prostate cancer. Prostate 73:242–249. https://doi.org/10.1002/pros.22563
Tomlins SA, Day JR, Lonigro RJ et al (2016) Urine TMPRSS2:ERG plus PCA3 for individualized prostate cancer risk assessment. Eur Urol 70:45–53. https://doi.org/10.1016/j.eururo.2015.04.039
Vickers AJ, Cronin AM, Aus G et al (2008) A panel of kallikrein markers can reduce unnecessary biopsy for prostate cancer: data from the European Randomized Study of Prostate Cancer Screening in Göteborg, Sweden. BMC Med 6:19. https://doi.org/10.1186/1741-7015-6-19
Vickers AJ, Cronin AM, Roobol MJ et al (2010) A four-kallikrein panel predicts prostate cancer in men with recent screening: data from the European Randomized Study of Screening for Prostate Cancer, Rotterdam. Clin Cancer Res 16:3232–3239. https://doi.org/10.1158/1078-0432.CCR-10-0122
Donovan J, Hamdy F, Neal D et al (2003) Prostate testing for cancer and treatment (ProtecT) feasibility study. Health Technol Assess 7:1–88
Parekh DJ, Punnen S, Sjoberg DD et al (2015) A multi-institutional prospective trial in the USA confirms that the 4K score accurately identifies men with high-grade prostate cancer. Eur Urol 68:464–470. https://doi.org/10.1016/j.eururo.2014.10.021
Ankerst DP, Hoefler J, Bock S et al (2014) Prostate cancer prevention trial risk calculator 2.0 for the prediction of low- vs high-grade prostate cancer. Urology 83:1362–1367. https://doi.org/10.1016/j.urology.2014.02.035
Mikolajczyk SD, Marker KM, Millar LS et al (2001) A truncated precursor form of prostate-specific antigen is a more specific serum marker of prostate cancer. Cancer Res 61:6958–6963
Catalona WJ, Partin AW, Sanda MG et al (2011) A multicenter study of [-2]pro-prostate specific antigen combined with prostate specific antigen and free prostate specific antigen for prostate cancer detection in the 2.0 to 10.0 ng/ml prostate specific antigen range. J Urol 185:1650–1655. https://doi.org/10.1016/j.juro.2010.12.032
Yabroff KR, Lund J, Kepka D, Mariotto A (2011) Economic burden of cancer in the United States: estimates, projections, and future research. Cancer Epidemiol Biomark Prev 20:2006–2014. https://doi.org/10.1158/1055-9965.EPI-11-0650
Brimo F, Montironi R, Egevad L et al (2013) Contemporary grading for prostate cancer: implications for patient care. Eur Urol 63:892–901. https://doi.org/10.1016/j.eururo.2012.10.015
Bachmann LM, Puhan MA, ter Riet G, Bossuyt PM (2006) Sample sizes of studies on diagnostic accuracy: literature survey. BMJ 332:1127–1129. https://doi.org/10.1136/bmj.38793.637789.2F
Foster MW, Royal CDM, Sharp RR (2006) The routinisation of genomics and genetics: implications for ethical practices. J Med Ethics 32:635–638. https://doi.org/10.1136/jme.2005.013532
Bossuyt PM, Reitsma JB, Bruns DE et al (2003) The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Ann Intern Med 138:W1–12
Carter HB, Albertsen PC, Barry MJ et al (2013) Early detection of prostate cancer: AUA Guideline. J Urol 190:419–426. https://doi.org/10.1016/j.juro.2013.04.119
Conflict of interest
Vikram Narayan has no conflicts of interest to disclose.
Narayan, V.M. A critical appraisal of biomarkers in prostate cancer. World J Urol 38, 547–554 (2020). https://doi.org/10.1007/s00345-019-02759-x
Keywords: Prostate cancer · Genomic tests · Diagnostic test accuracy · Critical appraisal