Introduction

Clinical records are an important type of electronic health record (EHR) data and often contain detailed and valuable patient information as well as the clinical experience of doctors. As a basic task of natural language processing, text classification plays a critical role in the retrieval and organization of clinical records, and it can also support clinical decision making and cohort identification [1, 2].

Existing clinical text classification studies often use various forms of knowledge sources or rules for feature engineering [3-7]. However, most of these studies cannot learn effective features automatically, while deep learning methods have recently shown powerful feature learning capabilities in the general domain [8].

In this study, we propose a new method that combines rule-based feature engineering and knowledge-guided deep learning techniques for disease classification. We first identify trigger phrases using rules, then use these trigger phrases to predict classes with very few examples, and finally train a convolutional neural network (CNN) on the trigger phrases with word embeddings and on Unified Medical Language System (UMLS) [9] Concept Unique Identifiers (CUIs) with entity embeddings. We evaluated our method on the 2008 Informatics for Integrating Biology and the Bedside (i2b2) obesity challenge [12], a multi-label classification task focused on obesity and its 15 most common comorbidities (diseases). The experimental results show that our method outperforms state-of-the-art methods for the challenge.

Related Work

Clinical text classification

A systematic literature review of clinical coding and classification systems has been conducted by Stanfill et al. [11]. Some challenge tasks in biomedical text mining also focus on clinical text classification; e.g., Informatics for Integrating Biology and the Bedside (i2b2) hosted text classification tasks on determining smoking status [10] and on predicting obesity and its comorbidities [12]. In this work, we focus on the obesity challenge [12]. Among the top ten systems of the obesity challenge, most are rule-based systems, and the top four systems are purely rule-based.

Many approaches for clinical text classification rely on biomedical knowledge sources [3]. A common approach is to first map narrative text to concepts from knowledge sources such as the Unified Medical Language System (UMLS), then train classifiers on document representations that include UMLS Concept Unique Identifiers (CUIs) as features [6]. More knowledge-intensive approaches enrich the feature set with related concepts [4] or apply semantic kernels that project documents containing related concepts closer together in feature space [7]. Similarly, Yao et al. [13] proposed to improve distributed document representations with medical concept descriptions for traditional Chinese medicine clinical records classification.

On the other hand, some clinical text classification studies use various types of information instead of knowledge sources. For instance, effective classifiers have been designed based on regular expression discovery [14] and semi-supervised learning [15, 16]. Active learning [17] has also been applied in the clinical domain, leveraging unlabeled corpora to improve the classification of clinical text.

Although these methods use rules, knowledge sources, or other types of information in many ways, they seldom employ effective feature learning methods. In contrast, deep learning methods have recently been widely used for text classification and have shown powerful feature learning capabilities.

Deep learning for clinical data mining

Recently, deep learning methods have been successfully applied to clinical data mining. Two representative deep models are convolutional neural networks (CNN) [18, 19] and recurrent neural networks (RNN) [20, 21], which achieve state-of-the-art performance on a number of clinical data mining tasks. Beaulieu-Jones et al. [22] designed a neural network approach to construct phenotypes for classifying patient disease status. The model performed better than decision trees, random forests, and Support Vector Machines (SVM), and also successfully learned the structure of high-dimensional EHR data for phenotype stratification. Gehrmann et al. [23] compared CNN to traditional rule-based entity extraction using cTAKES and to Logistic Regression (LR) with n-gram features on ten phenotyping tasks over discharge summaries. CNN outperformed the other phenotyping algorithms on the prediction of all ten phenotypes, and they concluded that deep learning-based NLP methods improve patient phenotyping performance compared to other methods. Luo et al. applied CNN, RNN, and Graph Convolutional Networks (GCN) to classify the semantic relations between medical concepts in discharge summaries from the i2b2-VA challenge dataset [24] and showed that CNN, RNN, and GCN with only word embedding features can obtain performance similar to or better than the state-of-the-art systems built by challenge participants with heavy feature engineering [25-27]. Wu et al. [28] applied CNN with pre-trained embeddings to named entity recognition in clinical text and showed that their models outperformed the conditional random fields (CRF) baseline. Geraci et al. [29] applied deep learning models to identify youth depression in unstructured text notes, obtaining a sensitivity of 93.5% and a specificity of 68%. Jagannatha et al. [30, 31] experimented with RNN, long short-term memory (LSTM), gated recurrent units (GRU), bidirectional LSTM, and combinations of LSTM with CRF to extract clinical concepts from text, and demonstrated that all RNN variants outperformed the CRF baseline. Lipton et al. [32] evaluated LSTM for phenotype prediction from multivariate time series of clinical measurements; their model outperformed a multi-layer perceptron (MLP) and LR, and combining MLP and LSTM led to the best performance. Che et al. [33] also applied deep neural networks to model time series in ICU data. They introduced a Laplacian regularization process on the sigmoid layer based on medical knowledge bases and other structured knowledge, designed an incremental training procedure that iteratively adds neurons to the hidden layer, and used causal inference to analyze and interpret the hidden layer representations. They showed that their method improved the performance of phenotype identification, converges faster, and is more interpretable.

Although deep learning techniques have been well studied in clinical data mining, most of these works neither focus on long clinical text classification (e.g., an entire clinical note) nor utilize knowledge sources. In contrast, we propose a novel knowledge-guided deep learning method for clinical text classification.

Obesity challenge

The objective of the i2b2 2008 obesity challenge [12] is to assess text classification methods for determining patient disease status with respect to obesity and 15 of its comorbidities: Diabetes mellitus (DM), Hypercholesterolemia, Hypertriglyceridemia, Hypertension, Atherosclerotic cardiovascular disease (CAD), Heart failure (CHF), Peripheral vascular disease (PVD), Venous insufficiency, Osteoarthritis (OA), Obstructive sleep apnea (OSA), Asthma, Gastroesophageal reflux disease (GERD), Gallstones, Depression, and Gout. The goal is to label each document as Present (Y), Absent (N), Questionable (Q), or Unmentioned (U) for each disease. Macro F1 is the primary metric for evaluating and ranking classification methods.
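For reference, Macro F1 averages per-class F1 scores so that rare classes weigh as much as frequent ones, while Micro F1 pools all individual decisions. Both can be computed with scikit-learn; the labels below are hypothetical, not challenge data:

```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted labels for one disease
# (Y = present, N = absent, Q = questionable, U = unmentioned).
y_true = ["Y", "Y", "N", "U", "Q", "Y", "U"]
y_pred = ["Y", "N", "N", "U", "Q", "Y", "Y"]

# Macro F1 weights every class equally, so rare Q/N classes count as
# much as the frequent Y class; Micro F1 pools all decisions.
print(f1_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="micro"))
```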

The challenge consists of two tasks, a textual task and an intuitive task. The textual task is to identify explicit evidence of the diseases, while the intuitive task focuses on predicting the disease status when the evidence is not explicitly mentioned; the Unmentioned (U) label was therefore excluded from the intuitive task. The classes are distributed very unevenly: there are only a few N and Q examples in the textual task dataset and only a few Q examples in the intuitive task dataset, as shown in Table 1. Some classes even have no training examples at all; for instance, there are no training examples with the Q or N label for Depression in the textual task, and none with the Q label for Gallstones in the intuitive task. The details of the datasets can be found in [12].

Table 1 The class distribution in the obesity challenge datasets

Method

Our method contains three steps: (1) identifying trigger phrases; (2) predicting classes with very few examples using trigger phrases; (3) learning a knowledge-guided CNN for the more populated classes. Our implementation is available at https://github.com/yao8839836/obesity. We use Solt's system [5] to recognize trigger phrases and predict classes with very few examples. Solt's system is a very powerful rule-based system: it ranked first in the intuitive task, second in the textual task, and first overall in the obesity challenge. It can identify very informative trigger phrases with different contexts (positive, negative, or uncertain). We use the Perl implementation of Solt's system provided by the authors: https://github.com/yao8839836/obesity/tree/master/perl_classifier.

Trigger phrases identification

We recognize trigger phrases following Solt's system [5]. We first conduct the same preprocessing steps, such as abbreviation resolution and family history removal. We then use the disease names (class names), their directly associated terms, and negative/uncertain words to recognize trigger phrases. The trigger phrases are disease names (e.g., Gallstones) and their alternative names (e.g., Cholelithiasis), with or without negative or uncertain words. A simplified sketch of this kind of rule is shown below.
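The following is a minimal Python sketch of such a rule, with simplified, hypothetical phrase lists rather than Solt's actual Perl regular expressions:

```python
import re

# Hypothetical, simplified phrase lists; Solt's system [5] uses much
# richer regular expressions for each disease and its alternative names.
DISEASE_NAMES = {"gallstones": ["gallstones", "cholelithiasis"]}
NEGATIVE_WORDS = ["no", "denies", "negative for", "without"]
UNCERTAIN_WORDS = ["possible", "probable", "rule out", "r/o"]

def find_trigger_phrases(text, disease):
    """Return (phrase, context) pairs, where context is
    'positive', 'negative', or 'uncertain'."""
    triggers = []
    lowered = text.lower()
    for name in DISEASE_NAMES[disease]:
        for m in re.finditer(r"\b" + re.escape(name) + r"\b", lowered):
            # Look at a small window before the mention for cue words.
            window = lowered[max(0, m.start() - 30):m.start()]
            if any(w in window for w in NEGATIVE_WORDS):
                triggers.append((m.group(), "negative"))
            elif any(w in window for w in UNCERTAIN_WORDS):
                triggers.append((m.group(), "uncertain"))
            else:
                triggers.append((m.group(), "positive"))
    return triggers
```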

Predicting classes with very few examples using trigger phrases

As the classes in the obesity challenge are very unbalanced, and some classes have no training examples at all, we cannot make predictions for these classes using machine learning methods and instead resort to the rules defined in Solt's system [5]. We exclude classes with very few examples from the training set of each disease; specifically, we remove examples with the Q label in the intuitive task and examples with the Q or N label in the textual task. For examples in the test set, we then use trigger phrases to predict their labels. Following Solt's system [5], we assume that positive trigger phrases (disease names and alternatives without uncertain or negative words) take precedence over negative trigger phrases, and that negative trigger phrases take precedence over uncertain trigger phrases. Therefore, if a clinical record contains uncertain trigger phrases and doesn't contain positive or negative trigger phrases, we label it Q. Similarly, if a clinical record contains negative trigger phrases and doesn't contain positive trigger phrases, we label it N.
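This precedence logic can be sketched as follows, reusing the (phrase, context) pairs from the trigger-phrase sketch above; it mirrors the described rules rather than Solt's exact code:

```python
def rule_label(triggers):
    """Apply the precedence positive > negative > uncertain.
    `triggers` is a list of (phrase, context) pairs; returns a class
    label, or None when the record is deferred to the CNN."""
    contexts = {context for _, context in triggers}
    if "positive" in contexts:
        return None          # positive evidence: handled by the CNN
    if "negative" in contexts:
        return "N"           # negative triggers, no positive ones
    if "uncertain" in contexts:
        return "Q"           # only uncertain triggers
    return None              # no trigger phrases at all: defer to CNN
```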

Knowledge-guided convolutional neural networks

After excluding classes with very few examples, only two classes remain in the training set of each disease (Y and N for the intuitive task, Y and U for the textual task). We learn a CNN on the positive trigger phrases and UMLS CUIs in the training records, then classify test examples using the trained CNN model. CNN is a powerful deep learning model for text classification, and it performed better than recurrent neural networks in our preliminary experiments. The test phase of our method is given in Fig. 1: if a record in the test set is labeled Q or N by Solt's system, we trust Solt's system; otherwise, we use the CNN to predict the label of the record. This dispatch is sketched after Fig. 1.

Fig. 1

The test phase of our method
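Combining the two rule sketches above with the trained model, the test phase can be expressed as a simple dispatch; the record and model interfaces here are hypothetical:

```python
def predict(record_text, disease, cnn_model):
    """Test-phase dispatch: trust the rule-based Q/N decision,
    otherwise fall back to the trained CNN (hypothetical interfaces)."""
    triggers = find_trigger_phrases(record_text, disease)
    label = rule_label(triggers)
    if label in ("Q", "N"):
        return label                    # rule-based decision
    # Y vs. N (intuitive task) or Y vs. U (textual task)
    return cnn_model.predict(record_text)
```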

For each disease, we feed its positive trigger phrases with word2vec [34] word embeddings into the CNN. We employ 200-dimensional pre-trained word embeddings learned from MIMIC-III [35] clinical notes. We experimented with 100-, 200-, 300-, 400-, 500-, and 600-dimensional word embeddings and found that the 200-dimensional embeddings achieve the best performance. A loading sketch is shown below.
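For illustration, such pre-trained vectors could be loaded with gensim as follows; the file name is hypothetical, and out-of-vocabulary words fall back to zero vectors:

```python
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical path; the paper uses 200-dimensional word2vec vectors
# trained on MIMIC-III clinical notes.
wv = KeyedVectors.load_word2vec_format("mimic3_word2vec_200d.bin", binary=True)

def phrase_matrix(phrase, dim=200):
    """Stack embeddings for the words of a positive trigger phrase;
    out-of-vocabulary words get zero vectors."""
    rows = [wv[w] if w in wv else np.zeros(dim) for w in phrase.lower().split()]
    return np.stack(rows)
```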

We also utilize a medical knowledge base to enrich the CNN input. We link the full clinical text to CUIs in the UMLS [9] via MetaMap [36], so that each clinical record is represented as a bag of CUIs after entity linking. We feed 13 types of CUIs that are closely connected to diseases as the input entities of the CNN: Body Part, Organ, or Organ Component (T023); Finding (T033); Laboratory or Test Result (T034); Disease or Syndrome (T047); Mental or Behavioral Dysfunction (T048); Cell or Molecular Dysfunction (T049); Laboratory Procedure (T059); Diagnostic Procedure (T060); Therapeutic or Preventive Procedure (T061); Pharmacologic Substance (T121); Biomedical or Dental Material (T122); Biologically Active Substance (T123); and Sign or Symptom (T184). We list these CUI types with their type unique identifiers (TUIs) in Table 2. We found that using this subset of CUIs achieves better performance than using all CUIs. We employ the pre-trained CUI embeddings from [37] as the input entity representations of the CNN. A sketch of this semantic-type filtering follows Table 2.

Table 2 The types of CUIs we used
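A minimal sketch of the semantic-type filtering, assuming the MetaMap output has already been parsed into (CUI, TUI) pairs:

```python
# The 13 semantic types (TUIs) from Table 2.
KEPT_TUIS = {"T023", "T033", "T034", "T047", "T048", "T049", "T059",
             "T060", "T061", "T121", "T122", "T123", "T184"}

def filter_cuis(concepts):
    """Keep only CUIs whose semantic type is in the selected set.
    `concepts` is a list of (cui, tui) pairs parsed from MetaMap output."""
    return [cui for cui, tui in concepts if tui in KEPT_TUIS]

# Example: a record linked to three concepts, two of which are kept.
record_concepts = [("C0011849", "T047"),   # Diabetes mellitus
                   ("C0027361", "T098"),   # Persons (filtered out)
                   ("C0020538", "T047")]   # Hypertensive disease
print(filter_cuis(record_concepts))        # ['C0011849', 'C0020538']
```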

Our CNN architecture is given in Fig. 2. The input layer looks up the word embeddings of positive trigger phrases and the entity embeddings of selected CUIs in each clinical record, where w0, w1, w2, ..., wn are the words in the positive trigger phrases and e0, e1, e2, ..., en are the CUIs in a record. A one-dimensional convolution layer is built on the word embeddings and entity embeddings. We use max pooling to select the most prominent feature (the highest value) in each convolutional feature map, then concatenate the max pooling results of the word embeddings and entity embeddings. The concatenated hidden representation is fed into a fully connected layer, followed by a dropout layer and a ReLU activation layer. Finally, another fully connected layer feeds into a softmax layer, whose output is the multinomial distribution over labels.

Fig. 2

Our knowledge-guided convolutional neural network architecture

We implement our knowledge-guided CNN model using TensorFlow [38], a popular deep learning framework. We use the following parameters for our CNN model: convolution kernel size 5, number of convolution filters 256, dimension of the fully connected hidden layer 128, dropout keep probability 0.8, number of learning epochs 30, batch size 64, and learning rate 0.001. We also experimented with other parameter settings but did not find much difference. We use the softmax cross entropy loss and the Adam optimizer [39]. A sketch of this configuration is given below.
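The following is a minimal sketch of the two-branch architecture using the Keras API in TensorFlow, with the hyperparameters listed above; it is not the authors' original implementation, and the vocabulary sizes and sequence lengths are illustrative:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Illustrative sizes; the paper uses 200-dim MIMIC-III word2vec vectors
# and pre-trained CUI embeddings. Vocabulary sizes are hypothetical.
MAX_WORDS, MAX_CUIS = 100, 500
WORD_DIM = ENT_DIM = 200

word_in = layers.Input(shape=(MAX_WORDS,), name="trigger_phrase_words")
cui_in = layers.Input(shape=(MAX_CUIS,), name="record_cuis")

# Embedding layers; in practice these would be initialized with the
# pre-trained word and CUI vectors.
w = layers.Embedding(50000, WORD_DIM)(word_in)
e = layers.Embedding(100000, ENT_DIM)(cui_in)

# One-dimensional convolution and max pooling over each input channel.
w = layers.Conv1D(filters=256, kernel_size=5, activation="relu")(w)
w = layers.GlobalMaxPooling1D()(w)
e = layers.Conv1D(filters=256, kernel_size=5, activation="relu")(e)
e = layers.GlobalMaxPooling1D()(e)

# Concatenate both branches, then fully connected -> dropout -> ReLU.
h = layers.Concatenate()([w, e])
h = layers.Dense(128)(h)                  # hidden layer of dimension 128
h = layers.Dropout(0.2)(h)                # keep probability 0.8
h = layers.ReLU()(h)
out = layers.Dense(2, activation="softmax")(h)  # two remaining classes

model = Model([word_in, cui_in], out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy")
```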

Results

Tables 3 and 4 show the Macro F1 and Micro F1 scores of our method and of Solt's system. We report results from both Solt's paper [5] and the Perl implementation, because we base our method on the Perl implementation and found some differences between the two sets of results. This is likely due to further feature engineering that is not reflected in the output Solt et al. submitted to the challenge. For completeness, we show the performance of both Solt's paper and code. We also report the results of our method when using only word embeddings as CNN input.

Table 3 Macro F1 scores and Micro F1 scores of Solt’s system [5] (paper) and our method with word and entity embeddings
Table 4 Macro F1 scores and Micro F1 scores of Solt’s system [5] (code) and our method with word embeddings only

From the two tables, we note that the Perl implementation performs slightly better than the paper; the authors might not have submitted their best results to the obesity challenge. We can also see that the CNN model with word embeddings only performs better than the Perl implementation on the intuitive task, which suggests that a deep learning model can learn effective features for better classification. The input trigger phrases for the CNN are the same as the trigger phrases used for Y/U (textual task) or Y/N (intuitive task) labeling in the Perl code. The results on the textual task are not improved when using word embeddings only: the textual task requires explicit evidence to label the records, and the positive trigger phrases already contain enough information, so a CNN with word embeddings only may not be particularly helpful. Nevertheless, after adding CUI embeddings as additional input, more scores for different diseases are improved, and the overall F1 scores are higher than Solt's system on both tasks. This is likely because the disambiguated CUIs are closely connected to diseases and their embeddings carry more semantic information, which is beneficial for disease classification. To the best of our knowledge, we have achieved the highest overall F1 scores on the intuitive task so far.

Note that the F1 scores of Solt's paper and the Perl implementation are fixed, while our model produces slightly different F1 scores in different runs. We ran our model 10 times and observed that the overall Macro F1 and Micro F1 scores are significantly higher than those of Solt's paper and implementation (p < 0.05 based on Student's t-test). We examined the cases our method failed to predict correctly and found that most errors are caused by using Solt's positive trigger phrases: for many error cases, our method predicted N or U when no positive trigger phrases were identified, but the real labels are Y; for some other cases, our method predicted Y when positive trigger phrases were identified, but the real labels are N or U. For some diseases, our proposed method and Solt's system achieved a very high Micro F1 but a low Macro F1. This is because there are only a few Q or N records for these diseases (i.e., an imbalanced class ratio), and we could not identify effective negative/uncertain trigger phrases using Solt's rules. The regular expressions in Solt's system could be further enriched so that trigger phrases are identified more accurately.
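The paper does not specify the exact test variant; since the baseline scores are deterministic and the CNN is run 10 times, one plausible reading is a one-sample Student's t-test against the fixed baseline score. All numbers below are illustrative, not the reported results:

```python
from scipy import stats

# Hypothetical macro F1 scores from 10 runs of the CNN model and the
# (deterministic) score of Solt's system; numbers are illustrative only.
cnn_runs = [0.672, 0.675, 0.669, 0.674, 0.671,
            0.673, 0.670, 0.676, 0.672, 0.674]
solt_score = 0.661

# One-sample t-test: are the CNN runs significantly above the baseline?
t_stat, p_value = stats.ttest_1samp(cnn_runs, popmean=solt_score)
print(t_stat, p_value)
```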

We also compared our method with two commonly used classifiers: Logistic Regression and a linear kernel Support Vector Machine (SVM). We use the LogisticRegression and LinearSVC classes in scikit-learn as the implementations. For a fair comparison, we use the same training set as the knowledge-guided CNN. We represent a record as a binary vector in which each dimension indicates whether a unique word appears in its positive trigger phrases. For test examples, we again use Solt's system to predict Q and N; if a test example is not labeled Q or N by Solt's system, we use Logistic Regression or SVM to predict the label. Table 5 shows the results. The results are similar to those of our method with word embeddings only, which suggests that the positive trigger phrases themselves are informative enough and that word embeddings alone do not improve the performance. Nevertheless, over 10 runs of our model, the overall Macro F1 and Micro F1 scores are significantly higher than those of SVM and Logistic Regression (p < 0.05 based on Student's t-test), which again verifies the effectiveness of the CUI embeddings. A sketch of this baseline setup follows Table 5.

Table 5 Macro F1 scores and Micro F1 scores of Logistic Regression and SVM
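A minimal sketch of the baseline setup with scikit-learn, using hypothetical placeholder records in place of the challenge data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Binary bag-of-words over positive trigger phrases (one string per
# record); the training records and labels here are hypothetical.
train_phrases = ["obesity morbid obesity", "obese", "obesity"]
train_labels = ["Y", "Y", "N"]

vectorizer = CountVectorizer(binary=True)   # presence/absence features
X = vectorizer.fit_transform(train_phrases)

lr = LogisticRegression().fit(X, train_labels)
svm = LinearSVC().fit(X, train_labels)

# Records not labeled Q or N by Solt's system are classified here.
X_test = vectorizer.transform(["morbid obesity"])
print(lr.predict(X_test), svm.predict(X_test))
```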

Discussion

We note that the knowledge feature part does not improve the results much. In fact, MetaMap does introduce some noisy and unrelated CUIs, as previous studies have also shown. To remedy this, following Weng et al. [40], we kept only CUIs from selected semantic types that are considered most relevant to clinical tasks. We found that filtering CUIs based on semantic types led to a moderate performance improvement over using all CUIs. In another related computational phenotyping study [41], we found that a manually curated CUI set resulted in a significant performance improvement. We believe that improving entity recognition and integrating word/entity sense disambiguation will improve the performance further, and we plan to explore such directions in future work.

Conclusion

In this work, we propose a novel clinical text classification method that combines rule-based feature engineering and knowledge-guided deep learning. Specifically, we use rules to identify trigger phrases, which contain disease names, their alternative names, and negative or uncertain words; we then use these trigger phrases to predict classes with very limited examples; and we finally train a knowledge-guided CNN model with word embeddings and UMLS CUI entity embeddings. The evaluation results on the obesity challenge demonstrate that our method outperforms state-of-the-art methods for the challenge. We showed that the CNN model is powerful for learning effective hidden features and that CUI embeddings are helpful for building clinical text representations, which indicates that integrating domain knowledge into CNN models is promising. In future work, we plan to design more principled methods and to evaluate our methods on more clinical text datasets.