
Decision Making with Machine Learning in Our Modern, Data-Rich Health-Care Industry

Chapter in: Decision Making in a World of Comparative Effectiveness Research

Abstract

Recent innovation in the health-care industry has given us an abundance of data with which we can compare the efficacy of alternative treatments, drugs, and other health interventions. Machine learning has proven to be particularly adept at finding intricate relationships within large datasets. In this chapter we emphasize the potential for machine learning to help us digest and use health-care data effectively. We first provide an introduction to machine learning algorithms, particularly neural network and ensemble algorithms. We then discuss machine learning applications in three areas of the health-care industry. Learning algorithms have been used within the lab as a method of automation to complement problem solving and decision making in the workplace. They have been used to compare the effectiveness of alternative interventions, such as drugs taken together. Given the rise in genomic data, they have been used to develop new treatments and drugs. Taken together, these trends suggest there is vast potential for the expanded application of these algorithms in health care.


Notes

  1. In the future, biometric and wearable patient identification devices will potentially automate patient identification and data entry [3]. In addition, wearable devices are being developed that will provide information on patient and consumer vital signs, weight, glucose levels, and respiratory function [4].

  2. At the outset, the weights are usually initialized with random values drawn from a probability distribution [8].
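
     As a minimal sketch of this initialization step (assuming NumPy and a single fully connected layer; the layer sizes and the Gaussian scale are illustrative choices, not values from the chapter):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def init_layer_weights(n_inputs, n_outputs, scale=0.01):
    """Draw initial weights and biases for one fully connected layer.

    Weights come from a zero-mean Gaussian; biases start at zero.
    The layer sizes and the scale are illustrative choices only.
    """
    weights = rng.normal(loc=0.0, scale=scale, size=(n_inputs, n_outputs))
    biases = np.zeros(n_outputs)
    return weights, biases

# Example: a hypothetical hidden layer mapping 64 inputs to 32 hidden neurons.
W, b = init_layer_weights(64, 32)
print(W.shape, b.shape)  # (64, 32) (32,)
```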

  3. Deep neural networks were not particularly popular until the mid-2000s, when deep belief networks were introduced and related deep learning algorithms were proposed in 2006 [9, 10, 11].

  4. Modern applications of image recognition are far more impressive than the number recognition example here. For example, Chen et al. use a deep learning algorithm for automated glaucoma classification, and Gao et al. use a convolutional-recursive network to grade cataracts [14, 15].

  5. Hidden neurons sharing the same weights are collectively called a feature map. The repeated application of the same set of weights across the input image is, mathematically speaking, a convolution. This gives these networks their name.
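
     The following sketch (a hypothetical NumPy example, not code from the chapter) makes the weight sharing concrete: one 3×3 set of weights is applied at every position of a grayscale image, and the collected outputs form a single feature map.

```python
import numpy as np

def feature_map(image, kernel):
    """Apply one shared set of weights (kernel) across the whole image.

    Every output value is produced by the same weights, so the output as a
    whole is a single feature map: a 2-D convolution in the machine learning
    convention (no kernel flipping, no padding, stride 1).
    """
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Example: a 28x28 image (e.g., a handwritten digit) and a random 3x3 kernel.
img = np.random.rand(28, 28)
kernel = np.random.rand(3, 3)
print(feature_map(img, kernel).shape)  # (26, 26)
```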

  6. Feedback is present in almost all parts of the nervous system (Freeman 1975). The number of feedback connections between different areas in the brain is at least as large as the number of feedforward connections [21]. For example, the primary visual cortex (V1) receives (feedforward) signals from the retina through the lateral geniculate nucleus (LGN). The number of signals in the opposite direction, from V1 to the LGN, is approximately ten times as large [17]. Visual cortex area V2 also sends signals back to V1 and may even play a role during immediate recognition [12].

  7. “Bagging,” for example, involves constructing k different datasets, each the size of the original dataset. Each dataset is constructed by sampling with replacement from the original dataset, and model i is then trained on dataset i. Differences in which examples end up in each dataset produce differences between the trained models [23].
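
     A minimal sketch of bagging along these lines, assuming scikit-learn decision trees as the base models and a toy binary classification dataset (both are illustrative choices, not the chapter's):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(seed=0)

def bagged_models(X, y, k=10):
    """Train k models, each on a bootstrap sample the size of the original data."""
    models = []
    n = len(X)
    for _ in range(k):
        idx = rng.integers(0, n, size=n)  # sample with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagged_predict(models, X):
    """Combine the k models by majority vote over their predictions (0/1 labels)."""
    votes = np.stack([m.predict(X) for m in models])
    return np.round(votes.mean(axis=0)).astype(int)

# Example with a toy binary classification dataset.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
ensemble = bagged_models(X, y, k=25)
print(bagged_predict(ensemble, X[:5]))
```

     The ensemble prediction here is a simple majority vote; averaging predicted probabilities is a common alternative.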

  8. For example, note that the US Food and Drug Administration's enabling law (the FDC Act, as amended in 1962) does not require an assessment of comparative effectiveness to support its decisions [31].

  9. If the two examples above did not provide sufficient context for the size of genomic data, consider another example. The entire printed collection of the Library of Congress has been estimated at approximately 10 terabytes (1 terabyte = 10^12 bytes); the raw genomic data corresponding to a single cohort of one million patients would require approximately 5700 terabytes [32, 33].

  10. The older term “personalized medicine” is sometimes used interchangeably with precision medicine. Although some authors distinguish between the two terms, we do not do so here.

  11. Many modern machine learning algorithms can be trained in parallel, i.e., across multiple processors simultaneously. The first major application of deep belief networks, in speech recognition, was possible because fast, easy-to-program GPUs allowed researchers to train the networks up to 20 times faster. Similarly, the recent success of ConvNets can partly be attributed to the efficient use of GPUs. Whereas training deep ConvNet architectures with 10–20 layers would have taken weeks two years ago, advances in hardware, software, and parallelization have reduced this time to a few hours [39].
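
     As an illustration only (the chapter does not tie itself to any particular framework), the sketch below uses PyTorch's data-parallel wrapper, which replicates a model across the available GPUs and splits each training batch among them; the network, batch, and learning rate are hypothetical.

```python
import torch
import torch.nn as nn

# A small hypothetical feedforward network; the layer sizes are illustrative.
model = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 2))

# If several GPUs are available, nn.DataParallel replicates the model on each
# device and splits every input batch across them, so the forward and backward
# passes run in parallel.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random data.
x = torch.randn(512, 100)
target = torch.randint(0, 2, (512,))
if torch.cuda.is_available():
    x, target = x.cuda(), target.cuda()

optimizer.zero_grad()
loss = loss_fn(model(x), target)
loss.backward()
optimizer.step()
```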

  12. White et al. also examined adverse effects due to drug pairing, but used data drawn from queries entered into search engines, e.g., Google [41]. Their analysis of this large quantity of data revealed prescription drug side effects before they were found by the US FDA’s warning system. Although White et al. did not use them, machine learning algorithms could be used in this application as they were in Tatonetti et al. [40].

References

  1. Morris I (2016) Apple watch saves man’s life. Forbes. Available from: http://www.forbes.com/sites/ianmorris/2016/03/28/apple-watch-saves-mans-life/#7eda2e275783. Accessed 18 May 2016

  2. Snowdon W (2016) Apple watch saved Alberta man’s life, makes international headlines. CBC News. Available from: http://www.cbc.ca/news/canada/edmonton/apple-watch-saved-alberta-man-s-life-makes-international-headlines-1.3495397. Accessed 18 May 2016

  3. Thrall JH (2012) Look ahead: the future of medical imaging. RSNA News 25(8):4–6

  4. Berger ML, Doban V (2014) Big data, advanced analytics and the future of comparative effectiveness research. J Comp Eff Res 3(2):167–176

  5. Baker M (2010) Next-generation sequencing: adjusting to data overload. Nat Methods 7(7):495–499

  6. Stephens ZD, Lee SY, Faghri F, Campbell RH, Zhai C, Efron MJ et al (2015) Big data: astronomical or genomical? PLoS Biol 13(7):e1002195

  7. Mitchell TM (1997) Machine learning. McGraw-Hill, New York

  8. Günther F, Fritsch S (2010) Neuralnet: training of neural networks. R J 2(1):30–38

  9. Hinton GE, Osindero S, Teh Y-W (2006) A fast learning algorithm for deep belief nets. Neural Comput 18(7):1527–1554

  10. Bengio Y, Lamblin P, Popovici D, Larochelle H (2007) Greedy layer-wise training of deep networks. In: Schölkopf B, Platt J, Hoffman T (eds) Advances in Neural Information Processing Systems 19 (NIPS’06), pp 153–160. Available from: http://www.iro.umontreal.ca/~lisa/pointeurs/BengioNips2006All.pdf

  11. Ranzato MA, Poultney C, Chopra S, LeCun Y (2006) Efficient learning of sparse representations with an energy-based model. NIPS 1:1137–1144

  12. Serre T, Kreiman G, Kouh M, Cadieu C, Knoblich U, Poggio T (2007) A quantitative theory of immediate visual recognition. Prog Brain Res 165:33–56

  13. Nielsen MA (2015) Neural networks and deep learning. Determination Press

  14. Chen X, Xu Y, Wong DWK, Wong TY, Liu J (2015) Glaucoma detection based on deep convolutional neural network. Conf Proc IEEE Eng Med Biol Soc 2015:715–718

  15. Gao X, Lin S, Wong TY (2015) Automatic feature learning to grade nuclear cataracts based on deep learning. IEEE Trans Biomed Eng 62(11):2693–2701

  16. Hubel DH, Wiesel TN (1962) Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J Physiol 160:106–154

  17. Haykin SS (2009) Neural networks and learning machines. Pearson Education, Upper Saddle River

  18. Bishop CM (2006) Pattern recognition and machine learning (information science and statistics). Springer-Verlag New York, Inc., Secaucus

  19. Phillips J, Gully SM (2013) Organizational behavior: tools for success, 2nd edn. South-Western Cengage Learning, Mason

  20. Serre T (2015) Hierarchical models of the visual system. Encycl Comput Neurosci, pp 1309–1318

  21. Churchland PS, Sejnowski TJ (1992) The computational brain. MIT Press, Cambridge, MA

  22. Encyclopædia Britannica Inc (2016) Machine learning | Artificial intelligence. In: Britannica.com. Available from: http://www.britannica.com/technology/machine-learning. Accessed 19 May 2016

  23. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. Book in preparation for MIT Press. Available from: http://www.deeplearningbook.org/. Accessed 19 May 2016

  24. Liaw A, Wiener M (2002) Classification and regression by randomForest. R News 2(3):18–22

  25. Cheng J-Z, Chou Y-H, Huang C-S, Chang Y-C, Tiu C-M, Chen K-W et al (2010) Computer-aided US diagnosis of breast lesions by using cell-based contour grouping. Radiology 255(3):746–754

  26. Hua K-L, Hsu C-H, Hidayati SC, Cheng W-H, Chen Y-J (2015) Computer-aided classification of lung nodules on computed tomography images via deep learning technique. Onco Targets Ther 8:2015–2022

  27. Ciompi F, de Hoop B, van Riel SJ, Chung K, Scholten ET, Oudkerk M et al (2015) Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2D views and a convolutional neural network out-of-the-box. Med Image Anal 26(1):195–202

  28. Gwynne P (2013) Next-generation scans: seeing into the future. Nature 502(7473):S96–S97

  29. American Recovery and Reinvestment Act of 2009 (2009) Available from: https://www.gpo.gov/fdsys/pkg/PLAW-111publ5/html/PLAW-111publ5.htm. Accessed 19 May 2016

  30. Pear R (2009) U.S. to study effectiveness of treatments. The New York Times. A1. Available from: http://www.nytimes.com/2009/02/16/health/policy/16health.html?_r=0. Accessed 19 May 2016

  31. IJzerman M, Manca A, Keizer J, Ramsey S (2015) Implementation of comparative effectiveness research in personalized medicine applications in oncology: current and future perspectives. Comp Eff Res 5:65

  32. Huser V, Cimino JJ (2015) Impending challenges for the use of big data. Int J Radiat Oncol Biol Phys. doi:10.1016/j.ijrobp.2015.10.060

  33. Bunn J (2012) How big is a petabyte, exabyte, zettabyte, or a yottabyte? In: High scalability. Todd Hoff. Available from: http://highscalability.com/blog/2012/9/11/how-big-is-a-petabyte-exabyte-zettabyte-or-a-yottabyte.html. Accessed 19 May 2016

  34. National Research Council (US) Committee on A Framework for Developing a New Taxonomy of Disease (2011) Toward precision medicine: building a knowledge network for biomedical research and a new taxonomy of disease. National Academies Press (US), Washington, DC

  35. Farhat MR, Sultana R, Iartchouk O, Bozeman S, Galagan J, Sisk P, Stolte C, Nebenzahl-Guimaraes H, Jacobson K, Sloutsky A, Kaur D, Posey J, Kreiswirth BN, Kurepina N, Rigouts L, Streicher EM, Victor TC, Warren RM, van Soolingen D, Murray M (2016) Genetic determinants of drug resistance in Mycobacterium tuberculosis and their diagnostic value. Am J Respir Crit Care Med 194(5):621–630. doi:10.1164/rccm.201510-2091OC

  36. The White House (2015) Precision medicine initiative. The White House. Available from: https://www.whitehouse.gov/precision-medicine. Accessed 19 May 2016

  37. National Institutes of Health (NIH) (2015) Precision Medicine Initiative. National Institutes of Health. U.S. Department of Health and Human Services. Available from: https://www.nih.gov/precision-medicine-initiative-cohort-program. Accessed 19 May 2016

  38. Chen RC, Gabriel PE, Kavanagh BD, McNutt TR (2015) How will big data impact clinical decision making and precision medicine in radiation therapy? Int J Radiat Oncol Biol Phys. doi:10.1016/j.ijrobp.2015.10.052

  39. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436–444

  40. Tatonetti NP, Fernald GH, Altman RB (2012) A novel signal detection algorithm for identifying hidden drug-drug interactions in adverse event reports. J Am Med Inform Assoc 19(1):79–85

  41. White RW, Tatonetti NP, Shah NH, Altman RB, Horvitz E (2013) Web-scale pharmacovigilance: listening to signals from the crowd. J Am Med Inform Assoc 20(3):404–408

  42. Liu S, Tang B, Chen Q, Wang X (2016) Drug-drug interaction extraction via convolutional neural networks. Comput Math Methods Med 2016:1–8

  43. Leung MKK, Delong A, Alipanahi B, Frey BJ (2016) Machine learning in genomic medicine: a review of computational problems and data sets. Proc IEEE 104(1):176–197

  44. Jamali AA, Ferdousi R, Razzaghi S, Li J, Safdari R, Ebrahimie E (2016) DrugMiner: comparative analysis of machine learning algorithms for prediction of potential druggable proteins. Drug Discov Today. doi:10.1016/j.drudis.2016.01.007

  45. Prachayasittikul V, Worachartcheewan A, Shoombuatong W, Prachayasittikul V, Nantasenamat C (2015) Classification of P-glycoprotein-interacting compounds using machine learning methods. EXCLI J 14:958–970

Author information

Correspondence to Lisa Pinheiro.

Copyright information

© 2017 Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Dadson, N., Pinheiro, L., Royer, J. (2017). Decision Making with Machine Learning in Our Modern, Data-Rich Health-Care Industry. In: Birnbaum, H., Greenberg, P. (eds) Decision Making in a World of Comparative Effectiveness Research. Adis, Singapore. https://doi.org/10.1007/978-981-10-3262-2_21

  • DOI: https://doi.org/10.1007/978-981-10-3262-2_21

  • Publisher Name: Adis, Singapore

  • Print ISBN: 978-981-10-3261-5

  • Online ISBN: 978-981-10-3262-2

  • eBook Packages: Medicine, Medicine (R0)
