
Machine Learning Techniques in Cancer Prognostic Modeling and Performance Assessment

  • Yiyi Chen
  • Jess A. Millar
Chapter

Abstract

Prognostic models for disease occurrence, tumor progression, and survival are abundant for most types of cancer. Physicians and cancer patients use these models to make informed treatment decisions and to plan care accordingly. However, not all cancer prognostic models are built and validated rigorously; some are more useful and reliable than others. In this chapter, we briefly introduce popular machine learning methods for constructing cancer prognostic models and discuss the pros and cons of each. We also introduce commonly used discrimination and calibration metrics for assessing predictive performance and validating prognostic models. Finally, we outline several challenges of using prognostic models to support clinical decision-making in the real world and offer suggestions for addressing them.
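As a minimal sketch of the kind of discrimination metric the chapter refers to (not taken from the chapter itself), the Python snippet below computes Harrell's concordance index for a right-censored survival outcome. The function name and the toy follow-up times, event indicators, and risk scores are illustrative assumptions, not data from the chapter.

```python
import numpy as np

def concordance_index(time, event, risk_score):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable if the subject with the shorter observed
    time experienced the event. The pair is concordant if that subject was
    also assigned the higher predicted risk; ties in risk count as half.
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)
    risk_score = np.asarray(risk_score, dtype=float)

    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if time[j] > time[i]:  # subject j outlived subject i
                comparable += 1
                if risk_score[i] > risk_score[j]:
                    concordant += 1.0
                elif risk_score[i] == risk_score[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical toy data: higher predicted risk paired with shorter survival.
times = [5, 10, 12, 20, 30]          # follow-up time (months)
events = [1, 1, 0, 1, 0]             # 1 = death observed, 0 = censored
scores = [0.9, 0.7, 0.6, 0.4, 0.2]   # model-predicted risk
print(concordance_index(times, events, scores))  # 1.0: all comparable pairs ranked correctly
```

A C-index of 0.5 corresponds to random ranking, while 1.0 indicates that the model orders every comparable pair of patients correctly; values in between quantify the model's discriminatory power.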

Keywords

Machine learning · Prognostic model · Cancer prediction · Validation


Copyright information

© Springer Nature Singapore Pte Ltd. 2017

Authors and Affiliations

  1. OHSU-PSU School of Public Health, Knight Cancer Institute, Oregon Health & Science University, Portland, USA
  2. Fariborz Maseeh Department of Mathematics and Statistics, Portland State University, Portland, USA
