Multi-label text mining to identify reasons for appointments to drive population health analytics at a primary care setting

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

While much research has been conducted on population health analytics (PHA), there is limited research on text mining in that area. In this research, a novel multi-label text mining model is developed to analyze and categorize the reasons for medical appointments at a primary medical center serving a rural population. The model converts an unstructured, unsupervised text corpus into a structured, supervised multi-label corpus using look-up wordlists defined through expert domain knowledge (EDK). The text dataset contains the reasons patients made appointments in 2019, and the appointment reasons were grouped into 27 categories. Each appointment reason is tagged with its associated groups (labels) using the corresponding look-up wordlists. The tagged corpus is then used to develop a multi-label text classification model with machine learning algorithms. Two resampling models (balanced classifiers and SMOTE) are considered to adjust for the imbalance among the created labels. The classifiers and models are tested in three steps using validation, testing, and implementation datasets. Both models performed well, but the SMOTE model is more generalizable, reliable, and consistent than the balanced classifiers. The label-set performance measures are at least 77.9% for the balanced classifiers and greater than 80% for the SMOTE model, and the label-based testing performance measures for both models and all classifiers are generally greater than 90% for all labels. Finally, the PHA showed that follow-up and well-check (WC) physical patients are the largest populations (32% and 16.54%, respectively). In addition, the populations differ on factors such as age, insurance, show rate, punctuality rate, and scheduling type, while they are very similar in terms of ethnicity and gender.
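As a rough illustration of the pipeline described in the abstract, the sketch below shows wordlist-based multi-label tagging followed by binary-relevance classification with per-label SMOTE resampling, using scikit-learn and imbalanced-learn. The label names, wordlists, and toy appointment reasons are invented for illustration and do not reproduce the paper's 27 labels or its EDK wordlists; applying SMOTE within each label's binary problem is one plausible reading of the resampling step, not necessarily the authors' exact configuration.

# Minimal sketch (assumptions noted above): wordlist tagging + binary-relevance
# classifiers with SMOTE applied to each label's binary problem.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE

# Expert-defined look-up wordlists (illustrative, not the paper's actual lists)
# mapping free-text appointment reasons to label groups.
wordlists = {
    "follow_up":  {"follow", "fu", "recheck"},
    "well_check": {"well", "physical", "annual"},
    "covid":      {"covid", "coronavirus"},
}

def tag_reason(text):
    """Return a binary label vector: 1 if any wordlist term appears in the text."""
    tokens = set(text.lower().split())
    return [int(len(tokens & words) > 0) for words in wordlists.values()]

# Toy appointment-reason texts (hypothetical data).
reasons = [
    "follow up on labs", "annual physical well check",
    "covid test", "recheck blood pressure", "well child physical",
    "follow up covid symptoms", "med refill", "knee pain",
]
Y = np.array([tag_reason(r) for r in reasons])   # multi-label indicator matrix
X = TfidfVectorizer().fit_transform(reasons)     # bag-of-words/TF-IDF features

# Binary relevance: one classifier per label, with SMOTE oversampling the
# minority class of each label before fitting.
models = []
for j in range(Y.shape[1]):
    X_res, y_res = SMOTE(k_neighbors=1, random_state=0).fit_resample(X, Y[:, j])
    models.append(LogisticRegression(max_iter=1000).fit(X_res, y_res))

# Predicted label matrix for the same texts (for demonstration only).
pred = np.column_stack([m.predict(X) for m in models])
print(pred)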

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Corresponding author

Correspondence to Yong Wang.

Ethics declarations

Conflict of interest

The authors declare that there is no competing or conflict of interest associated with this study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Abu Lekham, L., Wang, Y., Hey, E. et al. Multi-label text mining to identify reasons for appointments to drive population health analytics at a primary care setting. Neural Comput & Applic 34, 14971–15005 (2022). https://doi.org/10.1007/s00521-022-07306-1
