Automated multi-model deep neural network for sleep stage scoring with unfiltered clinical data
To develop an automated framework for sleep stage scoring from PSG via a deep neural network.
An automated deep neural network was proposed using a multi-model integration strategy with multiple signal channels as input. All of the data were collected from a single medical center from July 2017 to April 2019. Model performance was evaluated by overall classification accuracy, precision, recall, weighted F1 score, and Cohen’s Kappa.
Two hundred ninety-four sleep studies were included in this study; 122 composed the training dataset, 20 the validation dataset, and 152 the testing dataset. The network achieved human-level annotation performance with an average accuracy of 0.8181, a weighted F1 score of 0.8150, and a Cohen’s Kappa of 0.7276. Top-2 accuracy (the proportion of test samples for which the true label is among the two most probable labels given by the model) was significantly higher than the overall classification accuracy, averaging 0.9602. The number of arousals affected the model’s performance.
This research provides a robust and reliable model with the inter-rater agreement nearing that of human experts. Determining the most appropriate evaluation parameters for sleep staging is a direction for future research.
Keywords: Polysomnography (PSG); Obstructive sleep apnea (OSA); Sleep staging; Deep learning
Obstructive sleep apnea (OSA) is a disease characterized by recurrent partial or complete upper airway obstruction during sleep, which causes repeated apnea and hypopnea, often accompanied by hypoxemia, sleep disturbance, hypertension, coronary heart disease, and diabetes. OSA is a source of various cardiovascular, cerebrovascular, endocrine, and throat diseases. Epidemiological studies reveal that 936 million people worldwide suffer from moderate to severe OSA, with China among the countries with the highest number of people affected, causing a substantial social and economic burden. Furthermore, studies suggest that 80%–90% of cases remain undiagnosed. Therefore, it is crucial to improve the efficiency of OSA diagnosis.
The diagnosis of OSA relies on overnight polysomnography (PSG) and manual data analysis in sleep laboratories. Sleep stage scoring criteria are standardized and follow the latest updates from the American Academy of Sleep Medicine (AASM). However, sleep stage scoring still relies on manual interpretation by skilled technicians, which makes traditional PSG scoring time consuming; an automated sleep staging system would therefore assist sleep experts and provide great clinical utility.
Deep learning, as a field of machine learning research, has undergone an expansion of its application space in recent years, promoting rapid analysis of complex image data; assisting in the screening, diagnosis, and follow-up of related diseases; and significantly shortening diagnostic time under limited medical resources. Electroencephalography (EEG) is a nonstationary signal with a low signal-to-noise ratio (SNR), and new methods are needed to improve EEG processing toward better generalization and more flexible application. Recently, deep learning (DL) has shown great promise in identifying EEG signals owing to its capacity to learn good feature representations from raw data. Most studies tackling this problem adopt convolutional neural networks (CNNs), recurrent neural networks (RNNs), or a CNN + RNN combination as the neural network architecture for sleep staging, and accuracy rates greater than 87% have been reached.
In clinical settings, sleep stage scoring is complicated because PSG processing can face challenging conditions such as electrode shedding, signal artifacts, and noise. In this study, we use unfiltered clinical data and deep learning to develop and validate automated analysis algorithms and to explore their scope of application in clinical practice.
Materials and methods
This retrospective study was approved by the institutional review board of Beijing Tongren Hospital (TRECKY2017–032).
Demographics and characteristics of datasets
Reported per dataset: number of participants/epochs; sex (male:female); age (median, range); BMI (kg/m2) (median, range); TST (min) (median, range); AHI (median, range); sleep stage (n, %); minimum SpO2 (%); number of arousals.
Overnight PSG was performed on all participants with the Philips Respironics G3 sleep diagnostic system, including 2-channel electroencephalography (EEG) (C3/A2, C4/A1), 2-channel electrooculography (EOG), anterior tibial electromyography (EMG), electrocardiography (ECG), 2-channel airflow measurement with nasal cannula pressure, recording of respiratory (thoracic and abdominal) movements, and pulse oximetry for oxygen saturation (SpO2). All EEG and EOG channels were captured at a 200 Hz sampling frequency and displayed with a 0.3–35 Hz band-pass filter. The anterior tibial EMG had a sampling rate of 200 Hz and a 10–100 Hz band-pass filter.
Two highly trained PSG technologists, each with more than 10 years of experience, scored sleep stages and respiratory events in 30-s epochs in accordance with the American Academy of Sleep Medicine (AASM 2012) guidelines. The apnea–hypopnea index (AHI) was defined as the number of apnea and hypopnea events per hour of sleep and was used to indicate the severity of sleep apnea (normal: AHI < 5; mild OSA: 5 ≤ AHI < 15; moderate OSA: 15 ≤ AHI < 30; severe OSA: AHI ≥ 30).
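The AHI severity cut-offs above translate directly into code; a minimal sketch:

```python
def osa_severity(ahi):
    """Classify OSA severity from the apnea-hypopnea index (events per hour
    of sleep), using the cut-offs stated above (AASM convention)."""
    if ahi < 5:
        return "normal"
    elif ahi < 15:
        return "mild"
    elif ahi < 30:
        return "moderate"
    return "severe"

print(osa_severity(4.9), osa_severity(12), osa_severity(29.9), osa_severity(30))
# normal mild moderate severe
```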
According to the AASM standard, the clinically relevant EEG band lies below 35 Hz, while the sampling rate was 200 Hz; the excess bandwidth carries mostly high-frequency noise rather than additional information. Therefore, we first low-pass filtered the signal and then downsampled it to 66 Hz (one-third of the original sampling frequency) to remove the influence of high-frequency noise while ensuring that no spectral aliasing occurs and to reduce the amount of data.
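The filter-then-decimate step can be sketched as below. The windowed-sinc design, tap count, and cutoff placement (at the Nyquist frequency of the target rate) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def downsample_by_3(x, fs=200.0):
    """Anti-alias low-pass filter, then keep every 3rd sample (200 Hz -> ~66.7 Hz)."""
    cutoff = fs / 6.0   # ~33 Hz: the Nyquist frequency of the target rate
    numtaps = 101
    t = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * cutoff / fs * t) * np.hamming(numtaps)  # windowed-sinc low-pass
    h /= h.sum()                                            # unit DC gain
    filtered = np.convolve(x, h, mode="same")
    return filtered[::3]

fs = 200.0
n = 30 * int(fs)                        # one 30-s epoch at 200 Hz
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10 * t)          # 10 Hz EEG-band component (kept)
x += 0.5 * np.sin(2 * np.pi * 80 * t)   # 80 Hz noise (attenuated by the filter)
y = downsample_by_3(x, fs)
print(len(y))  # 2000
```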
(The details are in the supplementary materials)
The REM stage is exceptional in EEG staging: although it has specific characteristics, rapid eye movements do not occur in every 30-s epoch, and it is difficult for the model to decide whether such epochs belong to the REM stage because that decision relies on prior knowledge of the current stage. Therefore, we checked the eight epochs following each epoch: if any of them was a REM epoch, we forcibly converted the current epoch to the REM stage, thus ensuring the continuity of the REM period.
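One plausible reading of this expert rule can be sketched as follows. The trigger condition (only epochs whose predecessor is REM are converted, so that gaps inside a REM run are filled) is an assumption; the paper states only that the next eight epochs are checked for a REM epoch:

```python
def enforce_rem_continuity(stages, lookahead=8):
    """Hedged sketch of the REM continuity rule: if an epoch sits inside a
    REM run (its predecessor is REM and a REM epoch appears within the next
    `lookahead` epochs of the original predictions), force it to REM."""
    out = list(stages)
    for i in range(1, len(out)):
        if out[i] != "REM" and out[i - 1] == "REM":
            if "REM" in stages[i + 1 : i + 1 + lookahead]:
                out[i] = "REM"
    return out

# A short non-REM gap between two REM epochs is filled in:
print(enforce_rem_continuity(["REM", "N2", "N2", "REM", "W"]))
# ['REM', 'REM', 'REM', 'REM', 'W']
```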
Model evaluation and statistical analysis
The performance of sleep stage prediction was measured by overall classification accuracy, precision, recall, weighted F1 score, and Cohen’s Kappa. Top-2 accuracy was also applied: an epoch was considered correctly scored if the true stage was among the model’s two most probable predictions.
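Top-2 accuracy is straightforward to compute from the model's per-epoch class probabilities; a minimal sketch over a hypothetical 3-class example:

```python
import numpy as np

def top2_accuracy(probs, labels):
    """Fraction of epochs whose true label is among the two classes with the
    highest predicted probability."""
    top2 = np.argsort(probs, axis=1)[:, -2:]   # indices of the two largest probabilities
    hits = [label in row for row, label in zip(top2, labels)]
    return float(np.mean(hits))

probs = np.array([
    [0.6, 0.3, 0.1],   # true label 0 -> top-1 hit
    [0.2, 0.5, 0.3],   # true label 2 -> second most probable, top-2 hit
    [0.7, 0.2, 0.1],   # true label 2 -> miss
])
print(top2_accuracy(probs, [0, 2, 2]))  # 0.6666666666666666
```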
The confusion matrix was applied to the visualization of the performance of algorithms.
Statistical analysis was performed using SPSS 25 software (SPSS Inc., Chicago, IL). The Shapiro–Wilk test was used to verify normal value distribution. Differences in variables were analyzed by Student’s t-test or Mann–Whitney U test. All of the P values were 2-sided, and P values less than 0.05 were considered to be significant.
Cross dataset experiments
To further evaluate the performance of our method, we evaluated it on a public dataset named Sleep-EDF. To compare our method with others, we used the 2013 version, which contains two sets of subjects from two studies: age effects in healthy subjects (SC) and Temazepam effects on sleep (ST). Two PSGs of about 20 h each were recorded during two consecutive day–night periods at the subjects’ homes. Well-trained technicians manually scored the corresponding hypnograms (sleep patterns) according to the Rechtschaffen and Kales manual. As the AASM recommends, N3 and N4 of the Sleep-EDF dataset were merged in this study. The in-bed portions of the recordings from twenty SC subjects (age 28.7 ± 2.9) were used. Each PSG recording contained 2 scalp-EEG signals (Fpz-Cz and Pz-Oz), 1 EOG (horizontal), 1 EMG, and 1 oral–nasal respiration signal. All EEG and EOG signals had the same sampling rate of 100 Hz. The SC dataset was divided into five folds for training and independent validation.
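A five-fold split of the 20 SC subjects might look like the sketch below. Splitting by subject rather than by epoch (so that no subject's epochs leak between training and validation) is an assumption; the paper states only that the SC dataset was divided into five folds:

```python
import numpy as np

def subject_wise_folds(subject_ids, k=5, seed=0):
    """Shuffle the unique subject IDs and deal them into k disjoint folds."""
    rng = np.random.default_rng(seed)
    ids = np.array(sorted(set(subject_ids)))
    rng.shuffle(ids)
    return [ids[i::k].tolist() for i in range(k)]

folds = subject_wise_folds(range(20), k=5)
print([len(f) for f in folds])  # [4, 4, 4, 4, 4]
```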
The numbers of PSG subjects in the training, validation, and testing datasets were 122, 20, and 152, respectively. Males accounted for the vast majority of all three datasets. No significant differences were detected in sex, BMI, total sleep time, AHI, sleep stage distribution, minimum SpO2, or number of arousals, suggesting that the samples in the three datasets were homogeneous; the only significant difference was in age.
Comparative study to choose the best algorithm
Model performance (weighted F1 score) with different training configurations: without the 3-epoch splice, without noise detection, without expert rules, and the proposed model.
Model performance on testing dataset according to AHI
Model performance on sleep staging of the testing dataset, by number of epochs per stage.
Top-2 accuracy on sleep stage scoring: model performance on the testing dataset, average increase rate over top-1 accuracy, and average performance.
Distribution of sleep stages by the two largest predicted probabilities for each epoch (without expert rules): number of epochs (%), the maximum probability of prediction (model output), and the second-largest probability of prediction.
Evaluation of cross dataset experiments
Comparison of other methods with the proposed method: Supratak A et al. (two entries), Tsinalis O et al., Sun Y et al., and the proposed model.
In this study, the model performed robustly across AHI levels and slightly better in the healthy population than in patients with severe OSA. As the AHI increased, accuracy and F1 values gradually decreased; the lowest values, seen in patients with severe OSA, are likely related to fragmented sleep and relatively complicated EEG. Cohen’s Kappa was used to evaluate the inter-rater variability between the model and the technicians’ scoring. The literature suggests that there is inter-rater variability between human technicians, with agreement for both N1 and N3 relatively low, ranging from 20% to 70% [9, 10, 11, 12]. The average Cohen’s Kappa of this study was 0.7276, indicating substantial agreement with human technicians. Similar to previous reports, the model displayed low consistency in the N1 and N3 stages. For N1, the low-amplitude waveform characteristics are not prominent, and the model may confuse N1 with N2 during scoring, much as a technician may confuse N1 and N2 when the EEG is atypical. The weak agreement for N3, by contrast, is likely due to the high proportion of OSA patients in the training dataset, which left the number of N3 epochs inadequate, accounting for only 2% of the total. In clinical practice, the distribution of sleep stages in clinical data is imbalanced: compared with healthy people, the fragmented sleep of OSA patients contains more W and N1 stages and fewer N3 stages. Because the unfiltered data in this study were closer to the clinical situation, the imbalanced classes leave too few examples from which to extract the underlying pattern, or lead to over-fitting on the limited samples. In the test on the public dataset, the metrics were significantly improved for the N3 stage.
To determine the final model architecture, this study conducted a comparative study on the same testing dataset. In clinical PSG, signal quality may decrease because of sweating, intolerance of the environment, limb movement, and so forth; the model design of this study therefore considers the possibility of abnormal signal acquisition during overnight PSG. In addition, since sleep staging follows transitional rules, Markov models, CNNs, and RNNs have been applied to the recognition of sleep EEG in recent years [13, 14, 15, 16, 17]. This research innovatively applied three-epoch splicing to simulate how a technician recognizes the EEG: when an epoch is atypical or severely interfered with, technicians refer to the preceding and following epochs. Another innovation in this study is the addition of expert rules. In clinical practice, identification of the REM stage relies mainly on rapid eye movements, low chin EMG tone, sawtooth waves, and transient myoelectric activity. Tonic REM sleep, however, shows no apparent ocular activity, which can mislead the model; expert rules can substantially avoid such erroneous judgments.
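The three-epoch splicing idea can be sketched as below: each 30-s epoch is concatenated with its neighbors so the network sees the same context a technician would. Padding edge epochs with themselves is an assumption, as the paper does not state its boundary handling:

```python
import numpy as np

def splice_three_epochs(epochs):
    """Concatenate each epoch with its previous and next epochs.
    `epochs` is an array of shape (n_epochs, samples_per_epoch);
    the result has shape (n_epochs, 3 * samples_per_epoch)."""
    n = len(epochs)
    spliced = []
    for i in range(n):
        prev_e = epochs[max(i - 1, 0)]       # first epoch reuses itself as "previous"
        next_e = epochs[min(i + 1, n - 1)]   # last epoch reuses itself as "next"
        spliced.append(np.concatenate([prev_e, epochs[i], next_e]))
    return np.stack(spliced)

fs = 66                                   # approximate down-sampled rate
epochs = np.zeros((10, 30 * fs))          # 10 single-channel 30-s epochs
print(splice_three_epochs(epochs).shape)  # (10, 5940)
```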
To explore the analysis process of the model, this study innovatively introduced the concept of top-2 accuracy, under which the overall accuracy improved dramatically. Analysis of the second-largest predicted probability shows that the model exhibits some confusion when distinguishing W from N1, N1 from N2, and N2 from N3; this is consistent with the most common disagreements in sleep scoring by human experts. A previous study pointed out that the definition of the K complex lacks specificity and is related to the identification of spindle waves. Given the lack of a clear “absolute ground truth” for sleep stage scoring, the substantial increase in top-2 accuracy indicates that the model output is reasonable. This study also found that arousals affect the accuracy of sleep staging, possibly because the number of arousals is positively correlated with the number of N1 epochs; moreover, model performance on N1 is lower than on the other sleep stages. Analyzing the top two predicted probabilities for sleep staging thus suggests a future direction for evaluating deep learning algorithms. For sleep staging, which relies on manual scoring and must consider inter-rater variability, it is worth studying which parameters should be chosen to evaluate model performance. Three classes (wake, NREM, and REM) or four classes (wake, light sleep (N1 + N2), deep sleep (N3), and REM) also make sense in clinical practice.
There are some limitations to this study. First, the clinical data are imbalanced, and the number of N3 epochs is small; compared with other studies, the recognition of N3 is lower. Second, the clinical dataset was derived from retrospective data of a single center, and its homogeneity with the published Sleep-EDF dataset was not analyzed. Additionally, the study applied independent and homogeneous training and testing sets without cross-validation, so the assessment of the model’s generalization capability may be deficient.
In conclusion, this research provides a robust and reliable model in which the inter-rater agreement nears that of human experts. In future research, it is essential to address the abovementioned limitations, explore the evaluation criteria for neural network models, and develop a lightweight version of the model to make it work in wearable devices and smart devices. Eventually, this work can have a positive impact on population health and healthcare expenditures.
This research was supported by the National Key Research & Development Program of China (2017YFC0112500), Beijing Municipal Administration of Hospitals’ Mission Plan (SML20150201), and Beijing Municipal Administration of Hospitals Incubating Program (PX 2019005). Xiaoqing Zhang and Mingkai Xu contributed equally to this work. The first draft of the manuscript was written by Xiaoqing Zhang and Mingkai Xu. Demin Han and Xingjun Wang are co-corresponding authors of this paper.
All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent was obtained from all individual participants’ guardians included in the study.
- 1. Benjafield AV, Ayas NT, Eastwood PR, Heinzer R, Ip MSM, Morrell MJ, Nunez CM, Patel SR, Penzel T, Pepin JL, Peppard PE, Sinha S, Tufik S, Valentine K, Malhotra A (2019) Estimation of the global prevalence and burden of obstructive sleep apnoea: a literature-based analysis. Lancet Respir Med 7(8):687–698. https://doi.org/10.1016/S2213-2600(19)30198-5
- 5. Berry RB, Budhiraja R, Gottlieb DJ, Gozal D, Iber C, Kapur VK, Marcus CL, Mehra R, Parthasarathy S, Quan SF, Redline S, Strohl KP, Davidson Ward SL, Tangredi MM, American Academy of Sleep Medicine (2012) Rules for scoring respiratory events in sleep: update of the 2007 AASM manual for the scoring of sleep and associated events. Deliberations of the sleep apnea definitions task force of the American Academy of Sleep Medicine. J Clin Sleep Med 8(5):597–619. https://doi.org/10.5664/jcsm.2172
- 6. Tsinalis O, Matthews PM, Guo Y, Zafeiriou S (2016) Automatic sleep stage scoring with single-channel EEG using convolutional neural networks. arXiv preprint arXiv:1610.01683
- 8. Sun Y, Wang B, Jin J, Wang X (2018) Deep convolutional network method for automatic sleep stage classification based on neurophysiological signals. In: 2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI). IEEE, pp 1–5
- 13. Patanaik A, Ong JL, Gooley JJ, Ancoli-Israel S, Chee MWL (2018) An end-to-end framework for real-time automatic sleep stage classification. Sleep 41(5). https://doi.org/10.1093/sleep/zsy041
- 15. Zhang L, Fabbri D, Upender R, Kent D (2019) Automated sleep stage scoring of the sleep heart health study using deep neural networks. Sleep. https://doi.org/10.1093/sleep/zsz159
- 16. Allocca G, Ma S, Martelli D, Cerri M, Del Vecchio F, Bastianini S, Zoccoli G, Amici R, Morairty SR, Aulsebrook AE, Blackburn S, Lesku JA, Rattenborg NC, Vyssotski AL, Wams E, Porcheret K, Wulff K, Foster R, Chan JKM, Nicholas CL, Freestone DR, Johnston LA, Gundlach AL (2019) Validation of 'Somnivore', a machine learning algorithm for automated scoring and analysis of polysomnography data. Front Neurosci 13:207. https://doi.org/10.3389/fnins.2019.00207
- 19. Warby SC, Wendt SL, Welinder P, Munk EG, Carrillo O, Sorensen HB, Jennum P, Peppard PE, Perona P, Mignot E (2014) Sleep-spindle detection: crowdsourcing and evaluating performance of experts, non-experts and automated methods. Nat Methods 11(4):385–392. https://doi.org/10.1038/nmeth.2855
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.