ISSN: 2455-5479
Archives of Community Medicine and Public Health
Research Article       Open Access      Peer-Reviewed

Classification of Sleep EEG Data in the COVID-19 Pandemic

Bin Zhao1* and Jinming Cao2

1School of Science, Hubei University of Technology, Wuhan, Hubei, China
2School of Information and Mathematics, Yangtze University, Jingzhou, Hubei, China
*Corresponding author: Dr. Bin Zhao, School of Science, Hubei University of Technology, Wuhan, Hubei, China, Tel/Fax: +86 130 2851 7572; E-mail: zhaobin835@nwsuaf.edu.cn
Received: 04 February, 2021 | Accepted: 20 March, 2021 | Published: 22 March, 2021
Keywords: Sleep EEG; Deep learning; Softmax function; Adam algorithm; Multiple classifications problem

Cite this as

Zhao B, Cao J (2021) Classification of Sleep EEG Data in the COVID-19 Pandemic. Arch Community Med Public Health 7(1): 051-054. DOI: 10.17352/2455-5479.000134

Sleep is an important part of the body's recuperation and energy accumulation, and the quality of sleep has a significant impact on people's physical and mental state in the COVID-19 Pandemic. How to improve the quality of sleep and reduce the impact of sleep-related diseases on health has therefore attracted increasing attention during the COVID-19 Pandemic.

The EEG (electroencephalogram) signals collected during sleep are spontaneous EEG signals. Spontaneous sleep EEG signals can reflect changes in the body, which is also a basis for the diagnosis and treatment of related diseases.

Therefore, an effective model for classifying sleep EEG signals is an important auxiliary tool for evaluating sleep quality and for diagnosing and treating sleep-related diseases.

In this paper, outliers in each kind of original data were detected and deleted using the 3-sigma principle together with a k-means clustering + Euclidean distance detection method. Then, a Softmax multi-classification BP neural network model was constructed using the Adam algorithm with an adaptive learning rate, and relatively high accuracy and AUC (Area Under Curve) values were finally obtained in the COVID-19 Pandemic.

Introduction

The sleep process is a complex process of dynamic changes. According to R&K, the international standard for the interpretation of sleep stages, there are different states during sleep.

In addition to the awake period, the sleep cycle consists of two alternating sleep states, namely REM (Rapid Eye Movement) sleep and non-REM sleep.

In non-REM sleep, according to the gradual change of the sleep state from shallow to deep, sleep is further divided into stages I, II, III, and IV. Sleep stages III and IV can be combined into a single deep sleep stage.

Figure 1 shows the time series of Electroencephalogram (EEG) signals corresponding to different sleep stages, from top to bottom, namely, wakefulness, sleep I, sleep II, deep sleep and REM.

As Figure 1 shows, the characteristics of the EEG signals vary across the different sleep stages.

Automatic staging based on EEG signals can reduce the manual burden on expert physicians and is a useful auxiliary tool for assessing sleep quality and for diagnosing and treating sleep-related diseases. In this paper, Python is used to build a neural network and design a sleep-staging prediction model that obtains relatively high prediction accuracy from as few training samples as possible.

Overview of BP Neural Network

Artificial neural networks are widely used in areas including pattern recognition, function approximation, data compression, data classification, and data prediction [1-6]. The BP neural network is one such ANN algorithm. Figure 2 shows the basic structure of the BP neural network.

Introduction to the activation functions and algorithm:

ReLU function: ReLU(x) = max(0, x)

Softmax function: S_i = e^(z_i) / Σ_j e^(z_j), where z_j are the inputs to the output layer
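The two functions above can be sketched in a few lines of NumPy (an illustrative implementation, not code from the paper):

```python
import numpy as np

def relu(x):
    # ReLU(x) = max(0, x), applied element-wise
    return np.maximum(0.0, x)

def softmax(z):
    # Shift by the maximum for numerical stability, then normalise
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, -1.0])
probs = softmax(scores)  # a valid probability distribution over 3 classes
```

The max-shift inside `softmax` does not change the result but prevents overflow when the output-layer inputs are large.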

Figure 3 shows the operating principle.

Adam algorithm

m_0 ← 0, v_0 ← 0, t ← 0 (initialize the 1st and 2nd moment vectors and the time step)

while not converged do: t ← t + 1

Compute the gradient: g_t ← ∇_θ f_t(θ_{t−1})

Update biased first moment estimate: m_t ← β_1 · m_{t−1} + (1 − β_1) · g_t

Update biased second moment estimate: v_t ← β_2 · v_{t−1} + (1 − β_2) · g_t²

Compute bias-corrected first moment estimate: m̂_t ← m_t / (1 − β_1^t)

Compute bias-corrected second moment estimate: v̂_t ← v_t / (1 − β_2^t)

Update parameters: θ_t ← θ_{t−1} − α · m̂_t / (√v̂_t + ε)

where α is the step size, β_1, β_2 ∈ [0, 1) are the exponential decay rates of the moment estimates, and f(θ) is the stochastic objective function with parameters θ.
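A minimal NumPy sketch of the Adam update, applied to minimising the toy objective f(θ) = θ²; the default hyperparameter values follow the ones suggested in [7], and everything else here is illustrative:

```python
import numpy as np

def adam_step(theta, grad, m, v, t,
              alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Biased first and second moment estimates
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias-corrected estimates
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(theta) = theta^2, whose gradient is 2*theta
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, alpha=0.05)
```

After enough steps, `theta` settles near the minimiser 0; the bias correction matters mainly in the first few iterations, when `m` and `v` are still close to their zero initialisation.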

In this paper, the training process of the improved BP neural network model is as follows:

Step 1: Parameter initialization. Determine the number of nodes in the input, hidden, and output layers; initialize the weights and biases between layers and the learning rate.

Step 2: Calculate the output of the hidden layer. The hidden layer output is computed from the input vector, the connection weights and biases, and the ReLU activation function.

Step 3: Calculate the output of the output layer. The predicted output is computed from the hidden layer output, the connection weights and biases, and the Softmax activation function.

Step 4: Calculate the Softmax cross-entropy cost function from the predicted output and the true label.

Step 5: Back-propagate the error, using the adaptive-learning-rate Adam algorithm [7] to update the weights and biases.

Step 6: Check whether the cost has reached the error tolerance or the iteration limit; if not, return to Step 2.
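Steps 1-6 can be sketched end-to-end in NumPy. This is a toy example on synthetic data: the 4-16-5 layout mirrors the four band features and five sleep stages, but the data, hidden-layer size, learning rate, and the use of plain gradient descent in place of Adam (for brevity) are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: initialise a 4-16-5 network (4 band features, 5 sleep stages)
n_in, n_hid, n_out = 4, 16, 5
W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)

def forward(X):
    # Step 2: hidden layer with ReLU; Step 3: output layer with Softmax
    H = np.maximum(0.0, X @ W1 + b1)
    Z = H @ W2 + b2
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return H, E / E.sum(axis=1, keepdims=True)

# Toy data: the label is the index of the largest feature
X = rng.random((200, n_in))
y = X.argmax(axis=1)
Y = np.eye(n_out)[y]

lr = 0.3
for epoch in range(500):                      # Step 6: fixed iteration budget
    H, P = forward(X)
    # Step 4: softmax cross-entropy cost
    loss = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))
    # Step 5: back-propagation (plain gradient descent for brevity;
    # the paper uses the Adam update instead)
    dZ = (P - Y) / len(X)
    dW2, db2 = H.T @ dZ, dZ.sum(axis=0)
    dH = dZ @ W2.T
    dH[H <= 0] = 0.0                          # ReLU derivative
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

acc = (forward(X)[1].argmax(axis=1) == y).mean()
```

The gradient `dZ = P - Y` (per sample) is the standard result of differentiating the softmax cross-entropy with respect to the output-layer inputs, which is what makes this pairing of cost and activation convenient.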

Data description and preprocessing

The data consist of 3000 sleep EEG samples and their labels, collected from different healthy adults during overnight sleep. The first column is the "known label", which encodes the sleep stage in digital form: wake (6), rapid eye movement (5), sleep I (4), sleep II (3), and deep sleep (2). The second to fifth columns are characteristic parameters calculated from the original time series, namely "Alpha", "Beta", "Theta", and "Delta", which correspond to the energy proportion of the EEG signal in the frequency ranges 8-13 Hz, 14-25 Hz, 4-7 Hz, and 0.5-4 Hz, respectively. The characteristic parameters are expressed as percentages.
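For illustration, band-energy percentages of this kind can be computed from a raw epoch with a plain FFT periodogram. This is a sketch only: the paper does not specify its feature-extraction pipeline, and the sampling rate, epoch length, and synthetic signal below are assumptions:

```python
import numpy as np

def band_power_ratios(x, fs):
    # Simple one-sided periodogram of the epoch
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    bands = {"Delta": (0.5, 4), "Theta": (4, 7),
             "Alpha": (8, 13), "Beta": (14, 25)}
    # Normalise by the total power over the 0.5-25 Hz range
    total = psd[(freqs >= 0.5) & (freqs <= 25)].sum()
    return {name: 100.0 * psd[(freqs >= lo) & (freqs <= hi)].sum() / total
            for name, (lo, hi) in bands.items()}

# Synthetic 30 s epoch at 100 Hz dominated by a 10 Hz (alpha-band) rhythm
fs = 100
t = np.arange(0, 30, 1.0 / fs)
sig = np.sin(2 * np.pi * 10 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)
ratios = band_power_ratios(sig, fs)
```

On this synthetic epoch the "Alpha" share dominates, as expected for a 10 Hz rhythm; real EEG epochs would show a mixture of bands that shifts with the sleep stage.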

The raw data thus give the energy proportions of the four EEG frequency bands for each of the five sleep stages: wake (6), REM (5), sleep I (4), sleep II (3), and deep sleep (2). However, raw data generally contain abnormal outliers or missing values, so we drew a boxplot of each index for the five groups of data; the results are shown in Figure 4.

As Figure 4 shows, each of the five sleep stages contains some outliers, that is, abnormal points in the original data. This paper first applies the 3-sigma principle [8] to delete such points from each table, then merges the five processed tables, and finally applies k-means clustering with Euclidean-distance outlier detection [9] to find and remove the remaining outliers, as shown in Figure 5. A total of 2883 samples remain after preprocessing.
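The two-stage outlier-removal procedure can be sketched as follows. The synthetic data, the cluster count k = 2, the deterministic centre initialisation, and the 95% distance quantile are illustrative choices, not the paper's settings:

```python
import numpy as np

def three_sigma_filter(X):
    # Keep rows whose every column lies within mean ± 3*std of that column
    mu, sd = X.mean(axis=0), X.std(axis=0)
    mask = np.all(np.abs(X - mu) <= 3 * sd, axis=1)
    return X[mask]

def kmeans_outliers(X, k=2, iters=20, q=0.95):
    # Deterministic init: centres spread along the first feature
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    # Flag points whose Euclidean distance to their centroid exceeds
    # the q-quantile of all such distances
    dist = np.linalg.norm(X - centers[labels], axis=1)
    return X[dist <= np.quantile(dist, q)]

rng = np.random.default_rng(1)
# Two Gaussian clusters in 4-D plus one gross outlier
X = np.vstack([rng.normal(0, 1, (100, 4)),
               rng.normal(8, 1, (100, 4)),
               [[50.0, 50.0, 50.0, 50.0]]])
clean = kmeans_outliers(three_sigma_filter(X))
```

The 3-sigma pass removes gross single-column outliers, while the k-means pass catches points that are unusual relative to their own cluster even when each column value looks plausible in isolation.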

Model training and prediction

We divided the data into a training set and a test set in a ratio of 2:8. We trained and tested the traditional Decision Tree (DT) model [10] and the Support Vector Machine (SVM) model, and compared their classification performance using accuracy and the AUC value as evaluation indexes. The results are as follows:
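A comparison along these lines can be sketched with scikit-learn on synthetic stand-in data. The 2:8 train/test ratio is taken from the paper; the data, labels, and model hyperparameters below are illustrative, since the real EEG features are not public:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the 4 band-power features; the label is the
# index of the largest feature
X = rng.random((500, 4))
y = X.argmax(axis=1)

# 2:8 train:test split, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.2, random_state=0)

results = {}
for name, model in [("DT", DecisionTreeClassifier(random_state=0)),
                    ("SVM", SVC(random_state=0))]:
    model.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, model.predict(X_te))
```

For the multi-class AUC values reported in the paper, `sklearn.metrics.roc_auc_score` with predicted class probabilities and `multi_class="ovr"` would be one conventional choice.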

As can be seen from Table 1, the accuracy of Adam-BPNNet is relatively high compared with the traditional methods. Figure 6 shows the ROC curve of each classification method.

Table 2 shows that, with the Adam-BPNNet model, a good classification effect is still obtained with fewer training samples.

The reported prediction result is the best classification effect obtained over many experiments. In the early stage of the experiments, the classification accuracy was low; after repeated tuning of the number of hidden layers and nodes, the best AUC value obtained was 0.83.

Conclusion

This study is mainly based on theoretical research and combines theory with practice. This paper uses a BP neural network based on the adaptive-learning-rate Adam algorithm for data classification, and selects Softmax as the activation function of the output layer, giving the model good self-learning and self-adaptive ability. Most importantly, the network has good generalization ability: when designing a classifier, one should consider not only whether the network can correctly classify the objects it is trained on, but also whether, after training, it can correctly classify unseen or noise-polluted patterns. The classification AUC value of this study is 0.83, which is scientifically meaningful to a certain extent, so the model can serve as an auxiliary tool for evaluating sleep quality and for diagnosing and treating sleep-related diseases.

This work was supported by the Philosophical and Social Sciences Research Project of Hubei Education Department (19Y049) and the Starting Research Foundation for the Ph.D. of Hubei University of Technology (BSQD2019054), Hubei Province, China.

  1. Qi Y (2018) Experimental Study on NDVI Inversion Using GPS-R Remote Sensing Based on BP Neural Network. Xuzhou: China University of Mining and Technology.
  2. Wang J (2016) Study of Data Acquisition System for Electric Stair-Climbing Wheelchair Seat Position Regulating Mechanism. Tianjin: Hebei University of Technology.
  3. Cao Y, Zhao Y (2017) Research on Computer Intelligent Image Recognition Technology based on GA-BP Neural Network. Applied Laser 37: 139-143.
  4. Marzabadi FR, Masdari M, Soltani MR (2020) Application of Artificial Neural Network in Aerodynamic Coefficient Prediction of Subducted Airfoil. Journal of Research in Science and Engineering 2: 13-17.
  5. Wang X, Chen R, Qiao B (2020) Application of BP Neural Network in Tea Disease Classification and Recognition. Guizhou Science 38: 93-96.
  6. Behmanesh R, Rahimi I (2020) The Optimized Regression Neural Network Combined with Experimental Design and Regression for Control Chart Prediction. 2: 8-12.
  7. Kingma DP, Ba J (2015) Adam: A Method for Stochastic Optimization. 3rd International Conference on Learning Representations. San Diego.
  8. Lou L (2018) Design and Implementation of Anomaly Detection System of Web User Behaviors. Zhejiang University.
  9. Jiang H, Ji F, Wang H, Wang X, Luo Y (2018) Improved K-means Algorithm for Ocean Data Anomaly Detection. Computer Engineering and Design 39: 3132-3136.
© 2021 Zhao B, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.