Introduction

The individual needs of patients are central to decision making in hospital care [1]. Nevertheless, reducing the complexity of individual episodes by identifying common patterns of needs facilitates an efficient organisation of care [2].

The identification of common patterns of needs and the prediction of relevant aspects of patient care were found to be more complex in hospital psychiatry than in other medical disciplines [3,4,5]. Reasons put forward for this were less distinct diagnostic concepts [6,7,8], less standardisation of care [9] and a broader spectrum of acceptable therapeutic regimes [10].

Machine learning is a potent approach to identify and quantify multidimensional patterns in patient and hospital service data [11]. It has gained increasing attention in health care by achieving impressive results, for instance, in early prediction and diagnosis of breast cancer [12], acute kidney injury [13], skin cancer [14], prostate cancer [15], diabetic retinopathy [16] and depression [17]. Other studies applied machine learning to aspects relevant to the organisation of hospital care, such as predicting patient volume in emergency departments [18,19,20], the management of acute sepsis [21,22,23] and the daily costs per psychiatric inpatient [24].

The actual use of machine learning applications in routine clinical care often lags behind prominent achievements in research projects. Most published clinical prediction models are never used in clinical practice [25]. A common obstacle is the limited availability of useful data at the point of decision making [26, 27].

Previous research has often included a broad set of medical, psychometric and sociodemographic variables, many of which are usually not available at patient admission in many hospitals [3, 28]. A high administrative workload for clinical staff and general time constraints are prevalent in many health care systems [29]. Therefore, the required feature variables should be routinely available at the point of decision making without further curation.

There is currently a lack of evidence on the performance and usefulness of machine learning applications based on routine data [30]. Our study addresses this gap by restricting predictive modelling to a set of routinely available feature variables.

The aim of the present study was to use routine data readily available at admission to predict aspects relevant to the organisation of psychiatric hospital care. A further aim was to compare the results of a machine learning approach with those obtained through a traditional method and with those obtained through a naive baseline classifier.

Methods

The present study included all inpatient episodes of patients admitted to one of nine psychiatric hospitals in Hesse, Germany, and discharged between 1 January 2017 and 31 December 2018. An inpatient episode was defined as a patient’s stay at the hospital between admission and formal discharge. We excluded patients who were not in the billing class of adult psychiatry of the German lump-sum payment system for psychiatric hospital care, for example patients in child and adolescent psychiatry and patients with mainly psychosomatic ailments. Missing data in the outcome variables were addressed with listwise deletion. The study was approved by the ethics commission of the Medical Council Hesse, record number FF116/2017.

Three different modelling approaches were compared. The machine learning approach was a stochastic gradient boosting algorithm, implemented via the caret package in R and based on the gradient boosting machine (GBM) by Friedman [31,32,33]. The traditional method was logistic regression with the full set of feature variables used in the machine learning approach. The naive baseline classifier was a logistic regression using only basic diagnostic groups. The basic diagnostic groups were F0/G3 Organic mental disorders, F1 Substance-related mental disorders, F2 Schizophrenia, schizotypal and delusional disorders, F3 Affective Disorders and Others.
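
As an illustration, the three approaches could be specified in R roughly as follows. This is a minimal sketch assuming hypothetical column names (gaf_admission, diagnostic_group, etc.) and a data frame train_data; it is not the actual study code or database.

```r
# Minimal sketch of the three modelling approaches; column names and the
# data frame 'train_data' are hypothetical placeholders.
library(caret)

# Full feature set: GAF score, age, gender, mode and time of admission,
# basic diagnostic group; outcome coded as a two-level factor ("no"/"yes").
full_formula  <- outcome ~ gaf_admission + age + gender +
  admission_mode + admission_time + diagnostic_group
naive_formula <- outcome ~ diagnostic_group

# 1. Machine learning approach: stochastic gradient boosting via caret/gbm
gbm_model <- train(full_formula, data = train_data, method = "gbm",
                   verbose = FALSE)

# 2. Traditional method: logistic regression with the full feature set
logit_model <- glm(full_formula, data = train_data, family = binomial)

# 3. Naive baseline: logistic regression with basic diagnostic groups only
naive_model <- glm(naive_formula, data = train_data, family = binomial)
```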

The required data were obtained from routinely documented information in the electronic medical records and patient administration databases. A restricted set of feature variables was used that should be available in most hospitals at admission of patients. These were (1) the score on the one-dimensional Global Assessment of Functioning (GAF) scale [34], (2) age, (3) gender, (4) mode and time of admission and (5) a basic diagnostic grouping (F0/G3 Organic mental disorders, F1 Substance-related mental disorders, F2 Schizophrenia, schizotypal and delusional disorders, F3 Affective Disorders and Others).

We used these features to predict the probability of (1) non-response to therapy, defined as failing to reach the next ten-point interval of the GAF scale at discharge (e.g. an improvement from 21 to 28 was considered non-response, whereas from 28 to 31 was considered response), (2) the need for coercive treatment, (3) the need for 1:1 observation, (4) the need for crisis intervention, (5) long length of stay (LOS) above the 85th percentile and (6) short LOS below the 15th percentile.
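
These outcome definitions can be operationalised directly from admission and discharge data. The following R sketch uses hypothetical column names (gaf_admission, gaf_discharge, los) in a data frame episodes and is not taken from the study code.

```r
# Ten-point GAF interval: 1-10 -> 1, 11-20 -> 2, 21-30 -> 3, ...
gaf_interval <- function(gaf) ceiling(gaf / 10)

# Non-response: the discharge GAF does not reach a higher interval
episodes$non_response <- gaf_interval(episodes$gaf_discharge) <=
  gaf_interval(episodes$gaf_admission)

# Examples from the text: 21 -> 28 stays in the same interval (non-response),
# 28 -> 31 reaches the next interval (response)
gaf_interval(28) <= gaf_interval(21)   # TRUE  (non-response)
gaf_interval(31) <= gaf_interval(28)   # FALSE (response)

# Long and short length of stay relative to the 85th and 15th percentiles
episodes$long_los  <- episodes$los > quantile(episodes$los, 0.85)
episodes$short_los <- episodes$los < quantile(episodes$los, 0.15)
```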

We divided the data into a training set (patients discharged in the first three quarters of 2017), a validation set (patients discharged in the last quarter of 2017) and a test set (patients discharged in 2018). We engineered features and tuned hyperparameters on the basis of the trained models’ performance in the validation set. The continuous features, i.e. the GAF score at admission and patients’ age at admission, were standardised to a mean of zero and a standard deviation of one by subtracting the mean of the respective variable from each value and dividing the result by the respective standard deviation. Hyperparameter tuning was carried out using the built-in tuning process of the caret package, modifying each of the four tuning parameters, i.e. boosting iterations, maximum depth of trees, shrinkage and minimal terminal node size, until maximum performance was reached in the validation sample. The performance of the final models was assessed in the held-out test data (patients discharged in 2018) to estimate performance in future episodes. In addition, we trained nine models, each with one study site held out, and used each of these models to predict the outcomes of patients from the respective held-out site, thereby restricting the assessment of performance to hospitals not involved in the training process.
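
A minimal R sketch of the preprocessing and tuning steps could look as follows. The grid values are purely illustrative, the data frames train_data and valid_data correspond to the splits above, and full_formula is the hypothetical formula from the earlier sketch; the original analysis may have differed in detail.

```r
library(caret)
library(pROC)

# Standardise continuous features with means/SDs estimated on the training set
num_cols <- c("gaf_admission", "age")
mu  <- sapply(train_data[num_cols], mean)
sds <- sapply(train_data[num_cols], sd)
train_data[num_cols] <- scale(train_data[num_cols], center = mu, scale = sds)
valid_data[num_cols] <- scale(valid_data[num_cols], center = mu, scale = sds)

# Candidate values for the four GBM tuning parameters (illustrative only)
tune_grid <- expand.grid(
  n.trees           = c(100, 500, 1000),  # boosting iterations
  interaction.depth = c(2, 4, 6),         # maximum depth of trees
  shrinkage         = c(0.01, 0.1),       # learning rate
  n.minobsinnode    = c(10, 20)           # minimal terminal node size
)

# Fit one model per grid row and keep the one with the best validation AUC
best_auc <- -Inf
for (i in seq_len(nrow(tune_grid))) {
  fit <- train(full_formula, data = train_data, method = "gbm",
               tuneGrid = tune_grid[i, , drop = FALSE],
               trControl = trainControl(method = "none", classProbs = TRUE),
               verbose = FALSE)
  p <- predict(fit, newdata = valid_data, type = "prob")[, "yes"]
  auc_i <- auc(roc(valid_data$outcome, p, levels = c("no", "yes"),
                   direction = "<"))
  if (auc_i > best_auc) { best_auc <- auc_i; best_model <- fit }
}
```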

We used the area under the Receiver Operating Characteristic curve (ROC) and precision-recall plots (PR plots) to compare predictive performance. We calculated 95% DeLong confidence intervals for the area under the ROC [35]. Furthermore, we defined different cut-off values for the operationalisation of the models that maximised sensitivity at a minimum precision of 0.2, 0.25 and 0.33, corresponding to 4, 3 and 2 false positives for each true positive prediction, respectively. We chose a minimum precision of 0.2 as the threshold for a clinically meaningful application, based on previous work by Tomašev et al. [13]. Furthermore, we defined a sensitivity of 0.2 as the minimum threshold for a clinically meaningful application.
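
The evaluation on held-out data can be sketched in R as follows. Here p_test and y_test are hypothetical vectors of predicted probabilities and observed outcomes (coded "no"/"yes"), and the cut-off search is a simple re-implementation of the rule described above, not the original analysis code.

```r
library(pROC)

roc_obj <- roc(y_test, p_test, levels = c("no", "yes"), direction = "<")
auc(roc_obj)                        # area under the ROC curve
ci.auc(roc_obj, method = "delong")  # 95% DeLong confidence interval

# Cut-off that maximises sensitivity subject to a minimum precision
pick_cutoff <- function(p, y, min_precision = 0.2) {
  best_cut <- NA; best_sens <- -Inf
  for (cut in sort(unique(p))) {
    tp <- sum(p >= cut & y == "yes")
    fp <- sum(p >= cut & y == "no")
    if (tp + fp == 0) next
    precision <- tp / (tp + fp)
    sens      <- tp / sum(y == "yes")
    if (precision >= min_precision && sens > best_sens) {
      best_cut <- cut; best_sens <- sens
    }
  }
  c(cutoff = best_cut, sensitivity = best_sens)
}

pick_cutoff(p_test, y_test, 0.20)  # at most 4 false alerts per true alert
pick_cutoff(p_test, y_test, 0.25)  # at most 3
pick_cutoff(p_test, y_test, 1/3)   # at most 2
```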

Results

The study included 45,388 inpatient episodes. After addressing missing data in the outcome variables with listwise deletion, 40,614 episodes (89.5%) were included in further analyses. There were no missing data in the feature variables after this step. Table 1 shows the characteristics of the included episodes.

Table 1 Characteristics of inpatient episodes

Figure 1 compares the possible combinations of sensitivity, i.e. the proportion of correctly predicted actual positives, and specificity, i.e. the proportion of correctly predicted actual negatives, that were reached by the different classifications. The area under the curve is provided for each outcome and classification, together with estimated 95% confidence intervals. Furthermore, Fig. 1 shows the operational points on the curves that maximise sensitivity at a minimum precision of 0.2, 0.25 and 0.33, respectively. Measured by the area under the curve, the models for coercive treatment, 1:1 observation, long LOS and crisis intervention achieved relatively good performance, with values between 0.74 and 0.83.

Fig. 1

Receiver Operating Characteristic Curves. A = Precision at least 33%, B = Precision at least 25%, C = Precision at least 20%, CI = 95% Confidence Interval. Crossed circles show cut-off values that maximise sensitivity at different minimum thresholds of precision. Grey areas are not clinically meaningful because of a sensitivity of less than 0.2. Cut-off points in grey areas are not shown

Figure 2 compares the possible combinations of recall, a synonym for sensitivity, and precision, i.e. the proportion of actual positives among all positive predictions. Despite a relatively high area under the curve in Fig. 1, the models for the outcomes 1:1 observation and crisis intervention showed poor performance in the precision-recall comparison, with no clinically meaningful combinations. Table 2 provides additional measures of classification performance for the remaining outcomes based on the potentially meaningful operational points.

Fig. 2

Precision and Recall Plot. A = Precision at least 33%, B = Precision at least 25%, C = Precision at least 20%. The dashed horizontal line shows the prevalence of the outcome. Crossed circles show cut-off values that maximise sensitivity at different minimum thresholds of precision. Grey areas are not clinically meaningful because of a precision or recall of less than 0.2. Cut-off points in grey areas are not shown. Actual precision may exceed the minimum precision

Table 2 Performance Measures

As mentioned above, we trained each final model on patients discharged in 2017, leaving out one study site in each training round, and evaluated each model’s predictive performance only in patients discharged in 2018 from the study site not included in the training. Figure 3 shows the differences in predictive performance, measured by the area under the curve, between the study sites. The models for coercive treatment, long LOS, short LOS and non-response to treatment showed relatively low variance in predictive performance. The models for crisis intervention and 1:1 observation performed very well at some study sites but close to, or worse than, pure random classification at others.

Fig. 3

Performance in different study sites. One point represents one study site. The diamond represents the mean using the sites as units. ROC = Receiver operating characteristic. AUC = Area under the curve. LOS = Length of stay

Figure 4 shows the top ten feature variables ordered by their importance in predicting the outcome variables in the GBM models. Variable importance is a dimensionless measure that represents the influence of each feature on predictive performance relative to the other variables (the method is described in detail in [31]). GAF at admission, age at admission and the basic diagnostic grouping at admission were important variables for most outcomes.
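
As a sketch, importance values of this kind can be obtained from a fitted caret GBM model (gbm_model from the earlier hypothetical sketches); by default caret scales the values to a range of 0 to 100.

```r
library(caret)

vi <- varImp(gbm_model)   # relative influence of each feature
vi                        # print the importance table
plot(vi, top = 10)        # top ten features, analogous to Fig. 4
```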

Fig. 4

Importance of variables in predictions. F0/G3 = Organic mental disorders, F1 = Substance-related mental disorders, F2 = Schizophrenia, schizotypal and delusional disorders, F3 = Affective Disorders. GAF = Global Assessment of Functioning, Adm. = Admission, GP = General Practitioner

Discussion

Key findings

A common problem in the application of machine learning is the availability of data at the point of decision making. The present study aimed to use routine data readily available at admission to predict aspects relevant to the organisation of psychiatric hospital care. A further aim was to compare the results of a machine learning approach with those obtained using a traditional method and those obtained using a naive baseline classifier.

The models’ performance, as measured by the area under the ROC, varied strongly between the predicted outcomes, with relatively high performance in the prediction of coercive treatment and 1:1 observations and relatively poor performance in the prediction of short LOS and non-response to treatment. The GBM performed slightly better than logistic regression. Both approaches were substantially better than a naive prediction based solely on basic diagnostic grouping.

The present results confirm previous studies suggesting inadequacy of the area under the ROC as a measure for predictive performance in unbalanced data, in our case data with many more negatives than positives [36]. The area under the ROC gave a misleadingly positive impression of the models for 1:1 observation and crisis intervention, while the precision and recall plots revealed a lack of sufficient precision for a clinically meaningful application.

Furthermore, we found relatively large differences in the areas under the ROC between the hospital sites (Fig. 3). As described in the methods section, we trained each model in eight hospitals and used it to predict outcomes of patients from the left-out ninth hospital. Therefore, if the ninth hospital had very different patients or provided care in a different way, the model would perform worse at this hospital than at the other hospitals, which were more similar to each other. This was probably the reason for the very low performance of the naive baseline classifier in predicting 1:1 observations at two study sites with areas under the curve below 0.5 (0.4 and 0.1, respectively), where the incidences of this event were very low (0.3% and 0.07%, respectively).

It is still unclear which level of predictive performance is sufficient for a beneficial application in routine clinical practice; this question was beyond the scope of the present study [37, 38]. Furthermore, different clinical applications might require their own trade-offs between reducing false alerts and increasing coverage of actual positives. We chose different configurations for the comparison of model performance, borrowing from Tomašev et al. and their prediction of acute kidney injury [13]. For instance, our GBM model for the prediction of coercive treatment, operationalised at a precision of at least 0.2 (see Table 2), gave a warning for 26% of all episodes at admission. Of these, four false alerts were raised for each true alert, and warnings were given in advance for 73% of all actual positive cases. The same model could be operationalised at a precision of 0.25, which gave a warning for 13% of all episodes, resulting in three false alerts for each true alert and a warning for 48% of all actual positive cases.
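
As a simple check of the trade-off described above, the minimum precision directly determines the maximum number of false alerts per true alert; the short R snippet below is a worked arithmetic illustration, not study code.

```r
# A precision of p implies at most (1 - p) / p false alerts per true alert
false_alerts_per_true <- function(p) (1 - p) / p
false_alerts_per_true(c(0.20, 0.25, 1/3))   # 4, 3, 2
```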

Just because we can predict future events does not mean we should [39]. Very few clinical prediction models have undergone formal impact analysis, i.e. a study of the impact of using the predictions on patient outcomes [40]. Patients’ benefit depends on how predictions are translated into effective decision making [41]. Predictions must be sensibly integrated into clinical processes to create an actual benefit from better informed decisions. This requires a range of steps at the level of the individual hospital, such as integration into existing IT systems, human resources and financial investment.

Even a perfectly integrated model must be used responsibly in clinical practice, and the exact framework for such application is currently under broad discussion [42,43,44]. For instance, caregivers have to be trained in using the provided results, patients’ access to care has to remain equitable, real-world performance must be constantly scrutinised and responsibilities in case of errors have to be clear. Furthermore, predictions must not become self-fulfilling. For instance, a warning at admission for coercive treatment could instead be used to intensify non-invasive care with the aim of avoiding coercive approaches.

Our study in comparison to previous research

Less distinct diagnostic concepts [6,7,8], less standardisation of care [9] and a broader spectrum of acceptable therapeutic regimes [10] make the prediction of outcomes in psychiatry more complex than in other medical disciplines [3,4,5]. An infamous example of these difficulties was the failure of the Medicare DRG system for psychiatry due to the inability to predict length of stay and associated hospital costs [45, 46]. Recent studies have often used a broad range of feature variables but were restricted to specific settings and patient groups. Leighton et al. [47] predicted remission after 12 months in 79 patients with a first episode of psychosis using a wide range of demographic, socioeconomic and psychometric feature variables and reached an area under the ROC of 0.65. Koutsouleris et al. [48] also investigated remission in first-episode psychosis and reached a sensitivity of 71%, a specificity of 72% and a precision of 93% in 108 unseen patients with their top ten demographic, socioeconomic and psychometric predictor variables. Lin et al. [49] tried to distinguish treatment responders from non-responders prior to antidepressant therapy in 455 patients with major depression. They used single nucleotide polymorphisms from genetic analyses and other clinical data and reached an area under the ROC of 0.82. Common traits of these studies were the restriction to specific patient groups and the relatively small sample sizes. Furthermore, they mainly used data that might not be available during routine patient admission.

Strengths and weaknesses of our study

A strength of this study was the large sample size, covering two distinct years and nine study sites. This allowed us to include a broad range of the spectrum of psychiatric inpatients and to develop models that should be applicable in most hospitals. Furthermore, we were able to test our models in patients who were treated in a different calendar year and at a different hospital, thereby reducing information leakage. A further strength of the present study was the restriction to feature variables that should be available at admission in most hospitals. Therefore, it should be possible to implement the present models in many hospitals without additional documentation effort.

A potential weakness of our study was the retrospective use of administrative routine data, which entails potential validity concerns. The validity of routine hospital data for health services research is a frequently discussed topic [50, 51], and studies have found both low [52] and high [53] validity of such data. However, the development of models for application in routine clinical practice necessitated the use of routinely generated data, including its inherent caveats. A further limitation was the lack of time stamps for the diagnostic groupings. Patients were assigned to one of five basic diagnostic groups at admission, and these groupings remained stable during an episode. However, we could not entirely rule out that staff changed these groupings during the stay in rare cases. A further limitation was the restriction to hospitals from one large provider of inpatient psychiatric services in the region of Hesse, Germany, which raises the question of whether the predictive performance of our models would remain stable if applied in psychiatric hospitals under different circumstances.

Conclusion

The present study has shown that administrative routine data can be used to predict aspects relevant to the organisation of psychiatric hospital care. Such predictions could be applied to efficiently support hospital staff in their decision making and thereby increase the quality of care. Future research should investigate the predictive performance that is necessary for a tool to be accepted by caregivers and to provide effective assistance in the care process for the benefit of both staff and patients.