
Prediction of Future Chronic Opioid Use Among Hospitalized Patients



Background

Opioids are commonly prescribed in the hospital; yet, little is known about which patients will progress to chronic opioid therapy (COT) following discharge. We defined COT as receipt of a ≥ 90-day supply of opioids with a < 30-day gap in supply over a 180-day period, or receipt of ≥ 10 opioid prescriptions over 1 year. Predictive tools to identify hospitalized patients at risk for future chronic opioid use could have clinical utility to improve pain management strategies and patient education during hospitalization and discharge.


Objective

The objective of this study was to identify a parsimonious statistical model for predicting future COT among hospitalized patients not on COT before hospitalization.


Design

Retrospective analysis of electronic health record (EHR) data from 2008 to 2014 using logistic regression.


Participants

Hospitalized patients at an urban, safety net hospital.

Main Measurements

Independent variables included medical and mental health diagnoses, substance and tobacco use disorder, chronic or acute pain, surgical intervention during hospitalization, past year receipt of opioid or non-opioid analgesics or benzodiazepines, opioid receipt at hospital discharge, milligrams of morphine equivalents prescribed per hospital day, and others.

Key Results

Model prediction performance was estimated using area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. A model with 13 covariates was chosen using stepwise logistic regression on a randomly down-sampled subset of the data. Sensitivity and specificity were optimized using Youden’s index. This model correctly predicted COT in 79% of patients and correctly predicted no COT in 78% of patients.


Conclusions

Our model used EHR data to correctly identify 79% of hospitalized patients who progressed to future COT. Application of such a predictive model within the EHR could identify patients at high risk for future chronic opioid use, allowing clinicians to provide early patient education about pain management strategies and, when able, to wean opioids prior to discharge while incorporating alternative therapies for pain into discharge planning.


Introduction

The USA is facing an unprecedented opioid epidemic. According to data from the 2015 National Survey of Drug Use and Health, over two million people had a prescription opioid use disorder.1 People who were uninsured, unemployed, or had lower family incomes reported higher rates of opioid use, misuse, or opioid use disorder.2 Opioid prescribing for chronic pain can be challenging to clinicians who have little training in addiction or managing patients who misuse their opioid medications.3 Risk factors for opioid misuse among people on chronic opioid therapy (COT) include a history of substance use disorder, younger age, increased healthcare utilization, and depression or anxiety.4, 5 Predictive tools exist to identify patients at risk of aberrant drug-related behaviors6,7,8 and to assist with diagnosing addiction in patients on COT.9 Tools to identify patients at risk of becoming chronic opioid users for both acute and chronic pain are lacking. This is particularly important in the hospital, where opioids are commonly prescribed for pain.10 Opioid receipt at hospital discharge has been shown to be associated with an increased risk of chronic opioid use.11 Predictive tools to identify hospitalized patients at risk for future COT may have clinical utility to improve hospital-based pain management with a focus on limiting opioid prescribing when non-opioid analgesics, or other non-pharmaceutical options, may be effective for pain control.

There are several approaches to develop a predictive tool. Traditional models, such as logistic regression, have been used to identify risk factors for COT.5, 12,13,14 Modern methods for prediction include non-parametric and tree-based methods that can handle large amounts of data available in the electronic health record (EHR).15, 16 These methods are comparable to parametric approaches with respect to their prediction performance.16,17,18 This study aimed to identify a parsimonious predictive model of future COT among hospitalized patients not on COT in the 1 year preceding their hospitalization. We used EHR data from an urban, safety net hospital to develop and compare various algorithms with respect to their accuracy, sensitivity, and specificity to predict progression to COT one year following hospital discharge.


Methods

Study Design and Setting

This was a retrospective cohort study of all hospital discharges from Denver Health Medical Center, an integrated safety net health system in Denver, Colorado, between 2008 and 2014. Patients accessed care at a 477-bed hospital, an emergency department, an urgent care center, community health centers, subspecialty clinics, and a public health department.19 The majority of patients had incomes < 185% of the Federal Poverty Level and 70% were ethnic minorities.20 This study was approved by the Colorado Multiple Institutional Review Board and adhered to the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) statement on reporting predictive models.21

Data Source and Participants

All data were queried from the Denver Health data warehouse, which pools demographic, pharmacy, and laboratory data and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis and procedural codes obtained during patient care. Pharmacy data included inpatient opioid prescribing (dosing and type) and outpatient pharmacy data (Table 1). The first hospital discharge for each patient over the study period was categorized as the “index discharge.” From these patients, we excluded patients on COT (defined below) or opioid agonist therapy (methadone or buprenorphine/naloxone) in the 1 year preceding their index discharge. We also excluded patients who were < 15 or > 85 years old; were in prison, jail, or police custody; died following their index discharge; had < 2 healthcare visits to Denver Health in the three years preceding their index discharge; were undocumented persons receiving emergent hemodialysis; or were obstetric patients. We excluded patients with < 2 healthcare visits to Denver Health, because they were less likely to receive follow-up health care in this system. We excluded incarcerated patients, because ongoing health care and medication dispensing occurs within the correctional system. We excluded obstetric patients, because they tended to be younger and healthier than the medical and surgical patients in the study sample and were not reflective of the overall patient population. Lastly, we excluded subsequent hospital discharges to ensure our dataset only included each patient’s first hospitalization during the study period. We did not exclude patients with malignancy diagnoses, because, increasingly, individuals with cancer are surviving to remission. These patients may develop chronic pain due to cancer burden, exposure to cancer treatment, or other medical comorbidities.
While opioid therapy is widely accepted to manage malignancy-related pain, the use of opioids to manage chronic pain in patients who are cancer free following treatment is not grounded in broad consensus.22, 23

Table 1 Patient Demographic Characteristics and Distribution of Potential Predictors of Chronic Opioid Use


The study outcome was COT one year following the index discharge. We defined COT as “a 90-day or greater supply of oral opioids with less than a 30-day gap in supply within a 180-day period or receipt of ≥10 opioid prescriptions over one year following the index discharge.”24, 25
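To make this two-pronged outcome definition concrete, the windowing logic can be sketched in code. This is an illustrative Python sketch, not the authors' implementation: the `(fill_date, days_supply)` record layout and the exact way 180-day windows are anchored to each fill are assumptions.

```python
from datetime import date, timedelta

def is_cot(fills, index_discharge):
    """Classify chronic opioid therapy (COT) per the study definition:
    (a) >= 90-day supply with < 30-day gap in supply within a 180-day
    period, or (b) >= 10 opioid prescriptions in the year after the
    index discharge. `fills` is a list of (fill_date, days_supply)
    tuples; field names and windowing details are illustrative."""
    year_end = index_discharge + timedelta(days=365)
    in_year = sorted(f for f in fills if index_discharge <= f[0] <= year_end)

    # Criterion (b): >= 10 prescriptions in the follow-up year.
    if len(in_year) >= 10:
        return True

    # Criterion (a): scan a 180-day window anchored at each fill.
    for i, (start, _) in enumerate(in_year):
        window_end = start + timedelta(days=180)
        supply = 0
        covered_until = start
        ok = True
        for fill_date, days in in_year[i:]:
            if fill_date > window_end:
                break
            if (fill_date - covered_until).days >= 30:
                ok = False  # gap of >= 30 days in supply
                break
            supply += days
            covered_until = max(covered_until, fill_date + timedelta(days=days))
        if ok and supply >= 90:
            return True
    return False
```

In practice the supply accumulation would be computed from dispensing records in the data warehouse; the sketch only shows how the two criteria combine.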

Predictor Selection

Predictors were selected based on clinical experience and were informed by the literature.6,7,8,9, 11, 14, 26 We were interested in identifying modifiable factors associated with future COT related to the index discharge that were available in the EHR. Data were identified from previous encounters prior to the index discharge. We identified gender, race/ethnicity, age, and insurance status from registration data collected at the index hospitalization. Insurance status was classified as discount payment plan (Child Health Plan Plus; Colorado Indigent Care Program; Denver Health Financial Assistance Program), Medicare, Medicaid, commercial, or unknown/other/self-pay.

We obtained medical and mental health diagnoses and substance use disorders (alcohol, stimulant, and tobacco) by querying patient encounters in the three years preceding the index discharge using ICD-9-CM codes (Table 1, Online Appendix 1).27 From these diagnoses, we calculated a Charlson Comorbidity Index.28 Discharge diagnoses of acute pain, chronic pain, and neoplasm were reported (Table 1). Surgery during the index hospitalization was determined by ICD-9-CM procedural codes (Online Appendix 1). Cannabis use was not examined, because medical marijuana was legalized in Colorado in 2000 and was inconsistently documented in the EHR.29

We identified opioid analgesics, neuropathic agents, non-steroidal anti-inflammatory drugs (NSAIDs), other analgesics (topical capsaicin, lidocaine), tricyclic antidepressants, and benzodiazepines filled at Denver Health pharmacies in the one year preceding the index discharge (Online Appendix 2). Other data captured included opioid receipt within three days of hospital discharge, milligrams of morphine equivalents (MMEs) administered daily during the hospitalization, length of hospital stay, the number of healthcare encounters one year preceding the index discharge, and the number of healthcare encounters in the one year post index discharge.
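For context, MME standardization multiplies each administered dose by an opioid-specific conversion factor and sums over the stay. A minimal sketch using commonly published conversion factors; the paper does not list the factors it used, so these values and the data layout are assumptions.

```python
# Commonly published morphine-equivalent conversion factors (assumed
# here for illustration; the study's own conversion table may differ).
MME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "codeine": 0.15,
    "tramadol": 0.1,
}

def mme_per_day(administrations, length_of_stay_days):
    """Milligrams of morphine equivalents administered per hospital day.
    `administrations` is a list of (opioid_name, milligrams) pairs."""
    total = sum(mg * MME_FACTORS[drug] for drug, mg in administrations)
    return total / length_of_stay_days
```

For example, 20 mg of oxycodone plus 10 mg of morphine over a 2-day stay yields 20 MME per day under these factors.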

Statistical Analysis and Model Development

Patients with and without chronic opioid use were compared with respect to their demographic characteristics using t tests for continuous variables, chi-squared or Fisher’s exact tests for categorical variables, and Cochran-Mantel-Haenszel tests for ordinal variables (Table 1). We examined the relationship between continuous predictors and the probability of COT using a locally weighted scatterplot smoother (lowess).30 Variables examined included age, MMEs, Charlson Comorbidity Index, hospital length of stay (LOS), past year opioid and/or non-opioid analgesic receipt (NSAIDs, neuropathic agents, topical capsaicin, lidocaine), the number of healthcare encounters one year preceding the index discharge, and the number of healthcare encounters in the one year post index discharge. Age, MMEs, LOS, and healthcare encounters were non-linearly related to the probability of COT. The relationship between age and COT was quadratic. Hospital length of stay was log-linearly related to COT. Daily MMEs in the hospital and the number of past year opioid prescriptions filled preceding the index hospitalization were categorized into clinically significant groups to meaningfully interpret their relationship to COT.31, 32 The Charlson Comorbidity Index was linearly related to COT.

We compared the prediction performance of various binary classification algorithms to determine which method best predicted COT in this population. These algorithms included random forests, the least absolute shrinkage and selection operator (lasso), and stepwise logistic regression. We used a temporal split of the data, where the models were trained on years 2008–2011 (65%) and tested on years 2012–2014 (35%). This method aligns with the TRIPOD prediction model Type 2b.16, 21 Due to the extreme imbalance in the outcome variable, COT (only 5% prevalence), we down-sampled the majority class (no COT) in the training data to create an equal number of COT versus no COT patients. To down-sample, we took a random sample of the majority class equal in size to the minority class. This resulted in 1,061 patients with COT and 1,061 with no COT. This method has been shown to improve prediction performance in imbalanced data.33 We compared the three algorithms (random forests, lasso, and stepwise logistic regression) with respect to accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve (AUC) by fitting them to the down-sampled training data and testing them on the hold-out set. All three performed similarly, with slightly higher prediction performance achieved by logistic regression (data not shown). We proceeded with logistic regression, because it performed well, it can be easily implemented in the EHR, and it provides interpretable associations between the explanatory variables and COT.
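The down-sampling step described above amounts to randomly sampling the no-COT patients down to the size of the COT group. A minimal Python sketch (the study used R); the dict layout and `cot` label field are illustrative assumptions.

```python
import random

def downsample_majority(rows, label_key="cot", seed=0):
    """Balance a binary outcome by randomly down-sampling the majority
    class to the size of the minority class, mirroring the balancing
    step described in the text. `rows` is a list of dicts with a
    boolean outcome field; the field name is an assumption."""
    rng = random.Random(seed)
    minority = [r for r in rows if r[label_key]]
    majority = [r for r in rows if not r[label_key]]
    if len(majority) < len(minority):
        minority, majority = majority, minority
    balanced = minority + rng.sample(majority, len(minority))
    rng.shuffle(balanced)  # avoid ordering the classes back to back
    return balanced
```

With a 5% outcome prevalence, this step discards most of the no-COT training rows, which is why the selected model was refit and compared against a model trained on the full data (see Results).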

The multiple logistic regression model was used to estimate associations between the explanatory factors (Table 2) and COT and to predict COT in the test set. The best subset of factors was determined using stepwise selection with a cutoff of p < 0.05. Although p < 0.05 would ordinarily be a lenient cutoff for a sample this large, down-sampling the majority class substantially reduced the training sample, so we considered p < 0.05 appropriate. An optimal cutoff for classifying the fitted probabilities from the logistic regression model was estimated using Youden’s index.34 All analyses were performed in the R programming language.35
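Choosing the classification cutoff with Youden's index amounts to scanning candidate thresholds and maximizing J = sensitivity + specificity - 1. A small sketch (not the authors' R code; the threshold grid is an assumption):

```python
def youden_cutoff(probs, labels):
    """Return the probability cutoff maximizing Youden's J statistic
    (sensitivity + specificity - 1) along with the achieved J.
    `probs` are fitted probabilities from the logistic model and
    `labels` the true binary outcomes."""
    best_t, best_j = 0.5, -1.0
    for t in sorted(set(probs)):  # candidate thresholds at observed probs
        tp = sum(1 for p, y in zip(probs, labels) if p >= t and y)
        fn = sum(1 for p, y in zip(probs, labels) if p < t and y)
        tn = sum(1 for p, y in zip(probs, labels) if p < t and not y)
        fp = sum(1 for p, y in zip(probs, labels) if p >= t and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

On the balanced training data this procedure yielded the 0.45 cut point reported in the Results.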

Table 2 Logistic Regression Parameter Estimates from the Model Selected Using Stepwise Regression on the Down-Sampled Training Set


Results

From January 1, 2008 to December 31, 2014, there were 159,574 hospital admissions. After applying our exclusion criteria, 27,705 (17.4%) patients remained (Fig. 1). Of these patients, 1457 (5.3%) were on COT one year following their index discharge. Table 1 lists patient demographic and clinical characteristics at the index hospitalization and compares demographic and clinical characteristics of patients with and without chronic opioid use one year following discharge. Patients who progressed to COT were most frequently between the ages of 45 and 54 years (p < 0.0001). Future COT was associated with a history of tobacco use (p < 0.0001), a history of acute or chronic pain (p < 0.0001), a three-year history of a mental health diagnosis (p < 0.0001), and a discharge diagnosis of chronic pain (p < 0.0001). Patients who progressed to COT one year post discharge had a higher Charlson Comorbidity Index on hospital admission compared to patients who did not progress to COT (mean 2.4 [standard deviation (SD) 2.5] versus 1.9 [SD 2.2]; p < 0.0001). Receipt of opioids, benzodiazepines, NSAIDs, or neuropathic prescriptions one year preceding hospitalization was more common among patients with future COT (p < 0.0001), as was receipt of an opioid at discharge (p < 0.0001). Mean MMEs per hospital day was greater among patients who progressed to COT (64.4 mg [SD 76.7 mg]) versus those who did not (36.2 mg [SD 64.4 mg]) (p < 0.0001). Increasing length of hospital stay was also associated with future COT (p = 0.0003).

Fig. 1 Flow diagram for study participants.

In the logistic regression model, 13 variables were selected using stepwise selection (Table 2). Patients prescribed opioids at discharge had more than twice the odds of developing COT compared with those who were not prescribed opioids at discharge (adjusted odds ratio (AOR) 2.33, 95% confidence interval (CI) [1.78, 3.04], Table 2). Increasing MMEs prescribed per day during the index hospitalization was associated with increased odds of developing COT one year post hospital discharge (Table 2). Past year receipt of non-opioid analgesics (AOR 1.92, 95% CI [1.49, 2.48]) and past year receipt of a benzodiazepine (AOR 1.89, 95% CI [1.26, 2.82]) were associated with COT one year post discharge. Prior history of chronic pain (AOR 1.79, 95% CI [1.41, 2.26]) and the number of subsequent hospitalizations post index discharge (AOR 1.51, 95% CI [1.39, 1.64]) were also associated with future COT.
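Adjusted odds ratios such as those in Table 2 are obtained by exponentiating logistic regression coefficients; under a Wald approximation, exp(beta ± 1.96·SE) gives the 95% confidence interval. A brief sketch of that back-transformation (illustrative, not the authors' code):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient (log-odds scale) and
    its standard error into an odds ratio with a Wald 95% confidence
    interval: exp(beta), exp(beta - z*se), exp(beta + z*se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)
```

For instance, a coefficient of ln(2.33) on the discharge-opioid indicator corresponds to the AOR of 2.33 reported above.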

The multiple logistic regression model correctly predicted 79% of the COT patients and 78% of the no COT patients. The ideal cut point for the fitted probability, chosen to maximize Youden’s index, was 0.45. The accuracy of this model was 78%, and the AUC was 0.86. For comparison, we also fit a stepwise multiple logistic regression model to the full training dataset (before down-sampling the majority class) and used Youden’s index to determine the ideal cut point, which was estimated to be 0.07. This produced slightly higher specificity (83%), slightly lower sensitivity (74%), and slightly higher accuracy (83%) than the model fit to the down-sampled data; however, the final model contained 20 variables (compared with 13 in the down-sampled model). We therefore chose the model fit to the down-sampled data, because it achieved higher sensitivity with fewer variables.
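The reported sensitivity, specificity, and accuracy follow directly from the test-set confusion matrix at the chosen cut point. A small helper showing how the three figures derive from the four cell counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix
    counts: tp = correctly predicted COT, tn = correctly predicted
    no COT, fp/fn = the corresponding misclassifications."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

Note that with a rare outcome, a classifier can report high sensitivity and specificity while still flagging many false positives in absolute terms, which is why positive and negative predictive values were also compared across algorithms.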


Discussion

In this study we developed a statistical model for prediction of COT among hospitalized patients at an urban, safety net hospital. This model differs from other opioid risk assessment tools,6, 8, 36 because it accessed data available in the EHR and did not require additional data gathering and documentation from clinicians or other healthcare providers. A predictive model created from EHR data, when incorporated into a clinical workflow, has the potential to rapidly identify high-risk patients and provide real-time alerts to clinicians for decision-making when prescribing opioids during hospitalization and discharge.

In the logistic regression model, noteworthy risk factors for COT included more than 10 mg of morphine equivalents prescribed per day during hospitalization, two or more opioid prescriptions filled in the year preceding the index hospitalization, past year receipt of non-opioid analgesics, and past year receipt of benzodiazepines (Table 2). The association between past year opioid and non-opioid analgesic receipt with future COT has also been shown in patients undergoing bariatric surgery37 and total hip arthroplasty.38 Past year benzodiazepine receipt was also associated with increased odds of COT. This is concerning, because coadministration of these medications produces a marked increase in rates of adverse events, overdose, and death.39 In our findings, Charlson Comorbidity Index was associated with COT. Patients with functional limitations and greater disease burden often have diagnoses known to be associated with chronic pain, including osteoarthritis,40 fibromyalgia,41 and low back pain.42 Surgery at the index hospitalization was not associated with COT, likely because acute surgical pain often resolves and opioids are not needed long term. Finally, opioid receipt at discharge was predictive of COT, an association which has been previously reported.11 This variable is modifiable, and clinicians should consider this relationship when prescribing opioids to high-risk patients at discharge.

This study demonstrates the benefit of using a common generalized linear model technique combined with sampling to access data available in the EHR to create a clinically relevant prediction model. When compared to other prediction models developed using large datasets and machine learning techniques to predict the development of diabetes,43 pancreatitis severity,44 heart failure readmissions,44, 45 and sepsis18 (reported sensitivities of 0.74 to 0.92 and AUCs of 0.78 to 0.86), results of our model are encouraging and could benefit clinical practice. While no prediction model has been published to identify hospitalized patients at high risk of future COT, prediction tools to assess a patient’s risk of opioid misuse have been developed and validated. Such tools include the Screener and Opioid Assessment for Patients with Pain (SOAPP-R; sensitivity 0.81; AUC 0.81),36 the Current Opioid Misuse Measure (COMM; sensitivity 0.77; AUC 0.81),6 and the Opioid Risk Tool (ORT; c = 0.82).8 These tools have not been validated in the hospital setting, and their administration and scoring can be time-consuming; thus, their feasibility of use in a busy hospital-based practice is limited.

This model addresses a critically important area of medicine: the long-term effect of opioid prescribing among hospitalized patients. Accessing electronically available data to develop and integrate prediction models into an EHR offers a promising, time-saving method to address the risk of future chronic opioid use in a fast-paced hospital practice. A predictive tool integrated into an EHR allows for real-time screening to identify high-risk patients. Other EHR-based tools have reduced morbidity and mortality from thromboembolic disease, sepsis, and infections46,47,48,49,50,51 and illustrate the benefit of implementing predictive tools linked to the EHR to inform clinical practice.


Limitations

Our model was limited to the data available in our EHR. There are inherent limitations to using administrative data, including variability in data collection (where, when, how, and by whom) and any hospital policy changes which may affect data collection. These limitations would contribute to underascertainment bias. However, given the intent of creating a real-time predictive model, we believe using data available in the EHR was appropriate. In our model development, we included variables we felt were most likely to be associated with COT, and we may have inadvertently left out other predictive variables. We were unable to capture patients who filled prescriptions at non-affiliated pharmacies or patients who used opioids without a prescription, which may cause selection bias for cohort categorization and underascertainment bias for COT after one year. Our dataset comes from an urban, safety net healthcare system where the majority of patients are ethnic minorities and are insured by Medicaid; thus, these study results may not be generalizable to the general population. Finally, healthcare systems across the USA use a variety of EHRs with a range of capabilities and varied practices in generating problem lists, accessing prescription data, and/or accessing health records from other healthcare institutions. This study accessed EHR data from one healthcare institution and may not be generalizable to institutions that utilize different EHRs.


Conclusions

We demonstrated that a generalized linear model that corrects for imbalanced data can be used to predict future COT among hospitalized patients. Our model identified 79% of the future COT patients, which is much better than chance. The model could be easily integrated into the clinical workflow to alert physicians when a patient is at high risk for COT. Early identification allows for targeted patient education and clinician prompts to modify pain management strategies and opioid prescribing when appropriate.


References

  1. Center for Behavioral Health Statistics and Quality, Results from the 2015 National Survey on Drug Use and Health. 2016, Substance Abuse and Mental Health Services Administration: Rockville, MD. Accessed 12/11/2017.

  2. Han B., et al., Prescription opioid use, misuse, and use disorders in US adults: 2015 National Survey on Drug Use and Health. Ann Intern Med, 2017. 167(5): p. 293–301.

  3. Wasan A.D., J. Wootton, and R. N. Jamison, Dealing with difficult patients in your pain practice. Reg Anesth Pain Med, 2005. 30(2): p. 184–92.

  4. Edlund M. J., et al., Risk factors for clinically recognized opioid abuse and dependence among veterans using opioids for chronic non-cancer pain. Pain, 2007. 129(3): p. 355–62.

  5. Sullivan M. D., et al., Association between mental health disorders, problem drug use, and regular prescription opioid use. Arch Intern Med, 2006. 166(19): p. 2087–93.

  6. Butler S. F., et al., Development and validation of the Current Opioid Misuse Measure. Pain, 2007. 130(1–2): p. 144–56.

  7. Akbik H., et al., Validation and clinical application of the Screener and Opioid Assessment for Patients with Pain (SOAPP). J Pain Symptom Manage, 2006. 32(3): p. 287–93.

  8. Webster L.R. and R.M. Webster, Predicting aberrant behaviors in opioid-treated patients: preliminary validation of the Opioid Risk Tool. Pain Med, 2005. 6(6): p. 432–42.

  9. Compton P., J. Darakjian, and K. Miotto, Screening for addiction in patients with chronic pain and “problematic” substance use: evaluation of a pilot assessment tool. J Pain Symptom Manage, 1998. 16(6): p. 355–63.

  10. Herzig S.J., et al., Opioid utilization and opioid-related adverse events in nonsurgical patients in US hospitals. J Hosp Med, 2014. 9(2): p. 73–81.

  11. Calcaterra S.L., et al., Opioid prescribing at hospital discharge contributes to chronic opioid use. J Gen Intern Med, 2016. 31(5): p. 478–85.

  12. Sun E.C., et al., Incidence of and risk factors for chronic opioid use among opioid-naive patients in the postoperative period. JAMA Intern Med, 2016. 176(9): p. 1286–93.

  13. Bateman B.T., et al., Persistent opioid use following cesarean delivery: patterns and predictors among opioid-naive women. Am J Obstet Gynecol, 2016. 215(3): p. 353.e1–353.e18.

  14. Thielke S.M., et al., A prospective study of predictors of long-term opioid use among patients with chronic noncancer pain. Clin J Pain, 2017. 33(3): p. 198–204.

  15. Iwashyna T.J. and V. Liu, What’s so different about big data? A primer for clinicians trained to think epidemiologically. Ann Am Thorac Soc, 2014. 11(7): p. 1130–5.

  16. Chekroud A. M., et al., Cross-trial prediction of treatment outcome in depression: a machine learning approach. Lancet Psychiatry, 2016. 3(3): p. 243–50.

  17. Churpek M.M., et al., Multicenter comparison of machine learning methods and conventional regression for predicting clinical deterioration on the wards. Crit Care Med, 2016. 44(2): p. 368–74.

  18. Taylor R.A., et al., Prediction of in-hospital mortality in emergency department patients with sepsis: a local big data-driven, machine learning approach. Acad Emerg Med, 2016. 23(3): p. 269–78.

  19. Gabow P., S. Eisert, and R. Wright, Denver Health: a model for the integration of a public hospital and community health centers. Ann Intern Med, 2003. 138(2): p. 143–9.

  20. Gabow P. A. and P. S. Mehler, A broad and structured approach to improving patient safety and quality: lessons from Denver Health. Health Aff (Millwood), 2011. 30(4): p. 612–8.

  21. Collins G. S., et al., Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. Ann Intern Med, 2015. 162(1): p. 55–63.

  22. Paice J. A., et al., Management of chronic pain in survivors of adult cancers: American Society of Clinical Oncology Clinical Practice Guideline. J Clin Oncol, 2016. 34(27): p. 3325–3345.

  23. Chow R., et al., Needs assessment of primary care physicians in the management of chronic pain in cancer survivors. Support Care Cancer, 2017. 25(11): p. 3505–3514.

  24. Vanderlip E. R., et al., National study of discontinuation of long-term opioid therapy among veterans. Pain, 2014. 155(12): p. 2673–2679.

  25. Von Korff M., et al., De facto long-term opioid therapy for noncancer pain. Clin J Pain, 2008. 24(6): p. 521–7.

  26. Amit Y. and D. Geman, Shape quantization and recognition with randomized trees. Neural Comput, 1997. 9(7): p. 1545–1588.

  27. CDC, Classification of Diseases, Functioning, and Disability. 2013. Accessed 12/11/2017.

  28. Charlson M. E., et al., A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis, 1987. 40(5): p. 373–83.

  29. State of Colorado, Colorado Amendment 20. Legalization of Medicinal Marijuana. 2000. Accessed 12/11/2017.

  30. Cleveland W. S. and S. J. Devlin, Locally weighted regression: an approach to regression analysis by local fitting. J Am Stat Assoc, 1988. 83(403): p. 596–610.

  31. Von Korff M., et al., De facto long-term opioid therapy for non-cancer pain. Clin J Pain, 2008. 24(6): p. 521–527.

  32. Herzig S. J., et al., Opioid utilization and opioid-related adverse events in non-surgical patients in US hospitals. J Hosp Med, 2014. 9(2): p. 73–81.

  33. Lin W.-J. and J. J. Chen, Class-imbalanced classifiers for high-dimensional data. Brief Bioinform, 2013. 14(1): p. 13–26.

  34. Ruopp M. D., et al., Youden Index and optimal cut-point estimated from observations affected by a lower limit of detection. Biom J, 2008. 50(3): p. 419–30.

  35. The R Foundation, What is R? Accessed 12/11/2017.

  36. Butler S. F., et al., Validation of the revised Screener and Opioid Assessment for Patients with Pain (SOAPP-R). J Pain, 2008. 9(4): p. 360–72.

  37. Raebel M. A., et al., Chronic opioid use emerging after bariatric surgery. Pharmacoepidemiol Drug Saf, 2014. 23(12): p. 1247–57.

  38. Inacio M. C., et al., Risk factors for persistent and new chronic opioid use in patients undergoing total hip arthroplasty: a retrospective cohort study. BMJ Open, 2016. 6(4): p. e010664.

  39. Gudin J. A., et al., Risks, management, and monitoring of combination opioid, benzodiazepines, and/or alcohol use. Postgrad Med, 2013. 125(4): p. 115–130.

  40. Guccione A. A., et al., The effects of specific medical conditions on the functional limitations of elders in the Framingham Study. Am J Public Health, 1994. 84(3): p. 351–8.

  41. Schmaling K. B. and K. L. Betterton, Neurocognitive complaints and functional status among patients with chronic fatigue syndrome and fibromyalgia. Qual Life Res, 2016. 25(5): p. 1257–63.

  42. Murray C. J., et al., The state of US health, 1990-2010: burden of diseases, injuries, and risk factors. JAMA, 2013. 310(6): p. 591–608.

  43. Casanova R., et al., Prediction of incident diabetes in the Jackson Heart Study using high-dimensional machine learning. PLoS One, 2016. 11(10): p. e0163942.

  44. Pearce C. B., et al., Machine learning can improve prediction of severity in acute pancreatitis using admission values of APACHE II score and C-reactive protein. Pancreatology, 2006. 6(1–2): p. 123–31.

  45. Shameer K., et al., Predictive modeling of hospital readmission rates using electronic medical record-wide machine learning: a case-study using Mount Sinai Heart Failure Cohort. Pac Symp Biocomput, 2016. 22: p. 276–287.

  46. Alsolamy S., et al., Diagnostic accuracy of a screening electronic alert tool for severe sepsis and septic shock in the emergency department. BMC Med Inform Decis Mak, 2014. 14: p. 105.

  47. Kucher N., et al., Electronic alerts to prevent venous thromboembolism among hospitalized patients. N Engl J Med, 2005. 352(10): p. 969–977.

  48. Ng C. K., et al., Clinical and economic impact of an antibiotics stewardship programme in a regional hospital in Hong Kong. Qual Saf Health Care, 2008. 17(5): p. 387–92.

  49. McNeil V., M. Cruickshank, and M. Duguid, Safer use of antimicrobials in hospitals: the value of antimicrobial usage data. Med J Aust, 2010. 193(8 Suppl): p. S114–7.

  50. MacDougall C. and R. E. Polk, Antimicrobial stewardship programs in health care systems. Clin Microbiol Rev, 2005. 18(4): p. 638–56.

  51. Fishman N., Antimicrobial stewardship. Am J Med, 2006. 119(6 Suppl 1): p. S53–61; discussion S62-70.



Funders: The authors would like to acknowledge the University of Colorado, Department of Medicine, Division of General Internal Medicine Small Grants Program for their generous grant which funded this project. Dr. Binswanger was supported by the National Institute on Drug Abuse of the National Institutes of Health under Award Numbers R34DA035952 and R01DA042059. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Prior Presentations: This work was presented at the National Society of General Internal Medicine Conference on April 21, 2017.


Ethics declarations

Compliance with Ethical Standards

This study was approved by the Colorado Multiple Institutional Review Board and adhered to the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) statement on reporting predictive models.21

Conflict of Interest

The authors declare that they do not have a conflict of interest.

Electronic Supplementary Material

Appendix 1

(DOCX 14 kb)

Appendix 2

(DOCX 13 kb)


About this article


Cite this article

Calcaterra, S.L., Scarbro, S., Hull, M.L. et al. Prediction of Future Chronic Opioid Use Among Hospitalized Patients. J GEN INTERN MED 33, 898–905 (2018).



  • hospital medicine
  • statistical modeling
  • prediction rules