
The Electroencephalogram as a Biomarker Based on Signal Processing Using Nonlinear Techniques to Detect Dementia

  • Luis A. Guerra
  • Laura C. Lanzarini
  • Luis E. Sánchez
Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 94)

Abstract

Dementia is a syndrome caused by a chronic or progressive brain disease in which intellectual abilities, learning and expression are irreversibly lost, including memory, thinking, orientation, understanding and adequate communication, along with the capacity to organize daily life and to lead an autonomous family, work and social life; it ultimately leads to a state of total dependence. Its early detection and classification are therefore of vital importance in order to serve as clinical support for physicians in the personalization of treatment programs. The electroencephalogram serves as a tool for obtaining information on changes in brain activity. This article reviews the types of dementia across the cognitive spectrum, biomarkers for the detection of dementia, the analysis of mental states based on electromagnetic oscillations, the processing of the signals given by the electroencephalogram and the associated processing techniques; it then presents the results obtained, in which a mathematical model based on neural networks is proposed, followed by the discussion and, finally, the conclusions.

Keywords

Biomarker · Dementia · Electroencephalogram · Signal processing · Neural network

1 Introduction

Understanding how the brain works when symptoms of progressive pathological deterioration appear, during physiological aging and in brain disease, will allow the development of new therapeutic and rehabilitative approaches. It is therefore necessary to focus research in this field on the identification of quantitative and specific biomarkers [1], which will make it possible to understand the development of cognitive processes and thus the link between structural and functional changes and brain dysfunction [2]. Dementia is a degenerative disease of the central nervous system that can be described clinically as a syndrome of progressive pathological deterioration causing a decline in the cognitive domains of attention, memory, executive function, visuospatial ability and language [3].

Currently, research supported by the electroencephalogram has allowed the detection of cortical anomalies associated with cognitive deterioration and dementia [6, 7]. An electroencephalogram marker provides signals that can be processed and analyzed with nonlinear techniques [3] such as support vector machines and neural networks.

1.1 Types of Dementia and Cognitive Spectrum

Dementia occurs when the brain has been affected by a specific disease or condition that causes cognitive impairment [8]. According to its cause, there are different types of dementia: Alzheimer’s disease, dementia with Lewy bodies, frontotemporal dementia, Parkinson’s disease [4, 9] and vascular dementia [10, 11]. Figure 1 illustrates the advance of cognitive deterioration toward the dementia spectrum, which can be seen as a sequence in the cognitive domain that starts with mild cognitive impairment and ends with severe dementia, preceded by the period in which the brain is at risk of cognitive impairment, no dementia [3].
Fig. 1.

Spectrum of dementia

In reference to Fig. 1, mild cognitive impairment is clinically the transition stage between early normal cognition and late severe dementia. It is considered heterogeneous because some patients with mild cognitive impairment develop dementia, while others remain mildly cognitively impaired for many years [12].

1.2 Biomarkers for the Detection of Dementia

A biomarker is an objective measure related to molecules, biological fluids, or anatomical or physiological variables concentrated in the brain; biomarkers help diagnose and evaluate the progression of the disease, as well as the response to therapies. They probe the pathogenesis of dementia and predict or evaluate the risk of disease, supporting clinical diagnosis [13]. For the early detection of dementia, biomarker studies are divided into four categories: biochemistry [4, 15, 16, 17], genetics [4, 5, 9, 18], neuroimaging [19] and neurophysiology [4, 5, 9, 14].

1.3 Analysis of Mental States in Function of Electromagnetic Oscillations

Alpha oscillations appear in healthy adults who are awake and relaxed with their eyes closed. They fluctuate in a frequency range of 8 Hz to 13 Hz with an amplitude of 30 μV to 50 μV, and they decrease when the eyes are opened, on hearing unfamiliar sounds, and with anxiety or mental concentration [22, 23]. The alpha rhythm is composed of sub-spectral units [24, 25] and is defined over the occipital regions (O1, O2) [26] and the frontal cortex.

The frequency of the beta waves oscillates between 13 Hz and 30 Hz, with an amplitude of 5 μV to 30 μV. They appear with excitation of the central nervous system, increasing when the patient pays attention and exerts control; they are associated with thought and active attention in the solution of concrete problems, reach frequencies close to 50 Hz, and replace the alpha wave during cognitive decline [22]. They are delimited in the parietal (P7, P8) and frontal (F7, F8) regions [26].

The frequency of the theta waves oscillates between 4 Hz and 7 Hz. They predominate during sleep, emotional stress, states of creative inspiration in children and adults, and deep meditation. They are recorded in the temporal (T7, T8) and parietal (P7, P8) regions with an amplitude of 5 μV to 20 μV [22]. In adults, two types of theta activity are distinguished: the first is associated with decreased alertness, drowsiness, deterioration and dementia; the second is linked to mental effort, attention and stimulation [26].

The frequency of the gamma wave varies from 30 Hz to 100 Hz [22]. It is recorded over the somatosensory cortex and reflects mechanisms of consciousness and short-term memory states involved in recognizing objects, sounds and tactile sensations, particularly when it is coupled with the theta wave [26].

The frequency of the delta waves oscillates between 0.5 Hz and 4 Hz, with an amplitude of 20 μV to 200 μV. They are generated during deep sleep, in the waking state and with serious brain diseases; their amplitude never reaches zero, because that would mean brain death. This wave tends to be confused with artifact signals produced by the muscles of the neck and jaw, because those muscles are close to the surface of the skin whereas the signal of interest originates deep inside the brain. Before entering sleep, the brain waves pass successively from beta to alpha, theta and finally delta [22].
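As a concrete illustration, the band limits quoted above can be collected into a small lookup. This is a minimal sketch under our own naming; the handling of the 7–8 Hz gap between theta and alpha is our choice, not part of the cited sources:

```python
# Band limits in Hz, as quoted in the text above. Frequencies in the
# 7-8 Hz gap between theta and alpha, or outside 0.5-100 Hz, are
# reported as "unknown".
EEG_BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 7.0),
    ("alpha", 8.0, 13.0),
    ("beta", 13.0, 30.0),
    ("gamma", 30.0, 100.0),
]

def classify_band(freq_hz):
    """Return the name of the EEG band containing freq_hz.

    A frequency on a shared boundary (e.g. 4 Hz) is assigned to the
    first matching band in the list above.
    """
    for name, low, high in EEG_BANDS:
        if low <= freq_hz <= high:
            return name
    return "unknown"
```

For example, `classify_band(10.0)` returns `"alpha"` and `classify_band(50.0)` returns `"gamma"`.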

2 Materials and Method

Figure 2 shows the stages of processing the signals given by the electroencephalogram: acquisition of the biological signals, artifact reduction, feature extraction, classification and presentation of the signal.
Fig. 2.

Stages of signal processing

As Fig. 2 indicates, the electroencephalogram recording must pass through successive stages of biological signal processing to extract significant markers from patients with dementia, so that these markers reflect the pathological changes in the brain.

The signal acquisition stage captures the electrical activity of neurons in the brain [29]. The referential montage of the electroencephalogram is based on the international 10/20 system [30, 31, 32, 33, 34, 35].

The artifact reduction stage addresses the noise factors that affect the recorded signal. Artifacts are superimposed on the frequencies of the electroencephalogram signals and are divided into physiological artifacts: muscle activity, pulse and ocular flicker [37, 38, 39]; and non-physiological artifacts: interference from the power line, sweat [38, 40] and background noise. Processing techniques allow this problem to be overcome and relevant information to be extracted from the recorded signal. The methods used for artifact removal include independent component analysis [41, 42], the wavelet transform [43, 44, 45], and the combination of component analysis with the wavelet transform [46, 47].

The feature extraction and selection stage separates the useful information by means of linear and nonlinear techniques. Linear techniques based on coherence [52] and spectral calculations [36, 48, 49, 50, 51] facilitate finding the anomalies revealed by the electroencephalogram [28]. Nonlinear techniques analyze the signal’s dynamic information [53, 54, 55, 56]. The nonlinear methods used are: the correlation dimension and the Lyapunov exponents [57, 58], which quantify the number of independent variables [59]; the fractal dimension, which measures the structure of the signal in terms of its waveform [20, 60]; Lempel-Ziv-Welch complexity [61], a metric that evaluates the complexity of the signal and its recurrence rate; and entropy methods [20, 21, 56, 62], which analyze the ability of the system to create information.
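To make one of these nonlinear measures concrete, the sketch below binarizes a signal at its median and counts the distinct phrases in an LZ78-style left-to-right parsing. This is an illustrative simplification of the Lempel-Ziv family of complexity measures cited above, with our own function names, not the authors’ implementation:

```python
def binarize(signal):
    """Threshold a signal at its median, giving a string of '0'/'1' symbols."""
    ordered = sorted(signal)
    median = ordered[len(ordered) // 2]
    return "".join("1" if x > median else "0" for x in signal)

def lz_phrase_count(symbols):
    """Count distinct phrases in an LZ78-style left-to-right parsing.

    Regular (e.g. periodic) sequences parse into few phrases; irregular
    sequences parse into many, so the count grows with signal complexity.
    """
    phrases = set()
    phrase = ""
    for ch in symbols:
        phrase += ch
        if phrase not in phrases:   # phrase is new: record it, start over
            phrases.add(phrase)
            phrase = ""
    if phrase:                      # trailing partial phrase
        phrases.add(phrase)
    return len(phrases)
```

For instance, the constant string `"0000000000"` parses into 4 phrases, while the alternating `"0101010101"` parses into 5; longer irregular sequences yield progressively higher counts.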

The techniques for classifying dementia are scenarios that predict the qualitative properties of the mental state. In Fig. 3, feature vectors extracted in the previous stage are classified into three categories: cognitive impairment, no dementia; mild cognitive impairment; and dementia. Vectors with similar characteristics are analyzed before being passed to the classifier, to avoid overloading it, reduce computational time and increase classification accuracy. These vectors are processed with dimensionality reduction techniques based on principal and independent component analysis [63, 64, 65].
Fig. 3.

Scenario about classification processing
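The principal component analysis used for dimensionality reduction can be sketched with a power iteration that extracts the leading direction of variance. This is a minimal illustration under our own naming and assumptions, not the implementation used in the cited studies:

```python
import math

def principal_component(data, iterations=200):
    """Leading eigenvector of the sample covariance matrix, by power iteration.

    data is a list of equal-length feature vectors; the returned unit vector
    is the direction along which the centered data vary the most.
    """
    n, dim = len(data), len(data[0])
    means = [sum(row[i] for row in data) / n for i in range(dim)]
    centered = [[row[i] - means[i] for i in range(dim)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1)
            for j in range(dim)] for i in range(dim)]
    v = [1.0] * dim                      # arbitrary non-zero start vector
    for _ in range(iterations):
        w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

Feature vectors are then projected onto the leading components, shrinking the dimensionality seen by the classifier while preserving most of the variance.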

In Fig. 3, the efficiency of the classification depends on the extracted features, the classifiers and the dimensionality reduction. Among classifiers, linear discriminant analysis and support vector machines are efficient methods for classifying brain disorders such as dementia and epilepsy [63, 64]. Linear discriminant analysis [66] creates a new variable that combines the original predictors by finding a hyperplane that separates the data points of the different classes.

Support vector machines, in turn, integrate feature vectors with many components [3] and pose classification as a convex optimization problem. They are based on an algorithm that establishes a hyperplane optimally separating the points of one class from the other after projecting them into a space of higher dimensionality [67], and they can build the model with a subset of the training data. Support vector machines maximize the discriminant margin, formulating the solution by Lagrange methods; the output of the classifier has the following expression.
$$ y = \operatorname{sgn} \left( {\sum\limits_{i = 1}^{N} {\alpha_{i} y_{i} } k(x_{i} ,x_{j} ) + b} \right) $$
(1)
Where \( y \) represents the output; sgn is the signum function; \( N \) is the number of training patterns; \( (x_{i} ,y_{i} ) \) are the training samples, with input vectors \( x_{i} \) and class labels \( y_{i} \in \{ - 1,1\} \); \( \alpha_{i} \ge 0 \) are the Lagrange multipliers; \( b \) is the bias; and \( k(x_{i} ,x_{j} ) \) is the kernel function. The two classes are mapped with kernel methods into a new space of higher dimensionality through nonlinear measures [67].
$$ k(x_{i} ,x_{j} ) = \exp \left( {\frac{{ - \left\| {x_{i} - x_{j} } \right\|^{2} }}{{2\sigma^{2} }}} \right) $$
(2)

Where \( k(x_{i} ,x_{j} ) \) is the Gaussian kernel function; exp is the exponential function; \( \left\| {x_{i} - x_{j} } \right\|^{2} \) is the squared Euclidean distance; and \( \sigma \) is the kernel width parameter, which expresses a measure of similarity between vectors.

The support vector machine technique is based on the principle of structural risk minimization: the kernel classification function separates the data so as to minimize the classification error while maximizing the margin of separation. This technique is used for signal classification because of its high precision, which makes it insensitive to overtraining and to dimensionality [64, 68].
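Equations (1) and (2) translate directly into code. The following sketch assumes the multipliers \( \alpha_i \) and the bias have already been obtained by training; the function names and the example values below are illustrative, not taken from the paper:

```python
import math

def gaussian_kernel(xi, xj, sigma=1.0):
    """Eq. (2): k(xi, xj) = exp(-||xi - xj||^2 / (2 * sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def svm_decision(x, support_vectors, labels, alphas, bias, sigma=1.0):
    """Eq. (1): y = sgn(sum_i alpha_i * y_i * k(x_i, x) + b).

    labels are the +1/-1 class labels y_i of the support vectors;
    alphas are their Lagrange multipliers.
    """
    total = bias + sum(a * y * gaussian_kernel(sv, x, sigma)
                       for sv, y, a in zip(support_vectors, labels, alphas))
    return 1 if total >= 0 else -1
```

With two support vectors at (0, 0) with label −1 and (2, 2) with label +1 (both with α = 1 and b = 0), points near (2, 2) are assigned class +1 and points near the origin class −1.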

The neural network technique is used to develop nonlinear classification boundaries and for information processing, classification problems, modeling, association, mapping, interpretation of spectra, calibration and pattern recognition. Figure 4 presents the artificial neural network.
Fig. 4.

Neural network of multilayer type

Figure 4 is based on a multilayer model: the first layer is the input unit (E1, E2, E3, E4, E5), which performs no mathematical operations and simply distributes the information to the first hidden layer; the next is the hidden layer (O1, O2, O3, O4, O5); and the last is the output layer (S). The mathematical technique of multilayer modeling of the neural network is explained in the results section.

To evaluate the capacity of neural networks, it is reasonable to use architectures corresponding to a multilayer model with a single hidden layer. First, different transfer functions should be tried at the hidden layer nodes: two sigmoid functions, a logarithmic sigmoid function and the linear function. Second, different numbers of neurons (from one to three) should be analyzed.

Figure 5 shows the architecture of the neural network technique; the performance of the network depends, among other variables, on the choice of the processing elements (neurons), the architecture and the learning algorithm.
Fig. 5.

Architecture of the neural network technique

In relation to the architecture of Fig. 5, the objective of the neural network technique is to provide a tool for selecting the optimal design, which is obtained by optimizing the neural network itself through the backpropagation algorithm applied to the available training data.

3 Results

As future work for the electroencephalogram project based on biological signal processing, we will focus the experimental phase on the development of a backpropagation neural network model; the frequency data of the alpha, beta, theta and delta biological signals provided by the electroencephalogram will act as training inputs for the neural network.

The mathematical technique of multilayer modeling of the neural network of Fig. 5 depends on the input patterns provided by the electroencephalogram, which enter each of the neurons of the input unit; this unit distributes the information to the next, hidden layer, and the last layer is the output layer. From the mathematical modeling study, we distinguish the following steps of the training algorithm: (1) initialization of the network weights with small random values; (2) reading of an input pattern \( x_{p} :(x_{p1} ,x_{p2} , \ldots ,x_{pN} ) \) and specification of the desired output \( d:(d_{1} ,d_{2} , \ldots ,d_{M} ) \) that the network must generate for that input; (3) calculation of the current output of the network for the presented input: the inputs are applied to the network and the output of each layer is computed up to the output layer, which gives the output of the network. In this step, the net input to a hidden neuron \( O_{j} \) from the input units is calculated with the following formula.
$$ net_{pj}^{h} = \sum\limits_{i = 0}^{N} {w_{ji}^{h} } x_{pi} + \theta_{j}^{h} $$
(3)
Where \( net_{pj}^{h} \) is the net input that is transformed, through the sigmoid transfer function, into the output signal of the neuron; \( h \) denotes magnitudes of the hidden layer; \( p \) is the p-th training vector and \( j \) the j-th hidden neuron; \( N \) is the number of input units; \( w_{ji}^{h} \) is the synaptic weight of the connection between \( E_{i} \) and \( O_{j} \); \( x_{pi} \) is the i-th component of the p-th input pattern; and \( \theta_{j}^{h} \) is the minimum threshold that the neuron must reach for its activation, acting as an additional input. Based on these net inputs, the outputs \( y \) of the hidden neurons are calculated using the sigmoid activation function \( f \):
$$ y_{pj}^{h} = f_{j}^{h} (net_{pj}^{h} ) $$
(4)

Where \( y_{pj}^{h} \) is the output of the j-th hidden neuron for the p-th training vector; \( h \) denotes magnitudes of the hidden layer; \( f_{j}^{h} \) is the sigmoid activation function; and \( net_{pj}^{h} \) is the net input computed in (3).

The activation function used is the following:
$$ f(net_{jk} ) = \frac{1}{{1 + e^{{ - net_{jk} }} }} $$
(5)

Equation (5) is the sigmoid function, which bounds the neuron outputs between 0 and 1; \( net_{jk} \) is the net input of the neuron, where \( j \) is the j-th hidden neuron and \( k \) the neuron of the output layer; \( e \) is the mathematical constant approximately equal to 2.71828.

To obtain the net input of each neuron in the output layer, the same is done:
$$ net_{pk}^{o} = \sum\limits_{j = 1}^{L} {w_{kj}^{o} } y_{pj} + \theta_{k}^{o} $$
(6)

Where the net input \( net_{pk}^{o} \) of output neuron \( k \) for the p-th training vector is calculated by summing, over the hidden neurons, the product of the weight \( w_{kj}^{o} \) between output neuron \( k \) and the j-th hidden neuron and the hidden output \( y_{pj} \), and adding the minimum threshold \( \theta_{k}^{o} \) that the output neuron must reach for its activation; \( L \) is the number of processing units of the hidden layer.

Based on these net inputs, the outputs of the neurons of the output layer are calculated using the sigmoid activation function \( f \):
$$ y_{pk}^{o} = f_{k}^{o} (net_{pk}^{o} ) $$
(7)

In which the output \( y_{pk}^{o} \) of output neuron \( k \) for the p-th training vector is obtained by applying the sigmoid activation function \( f_{k}^{o} \) to the net input \( net_{pk}^{o} \).

Once all the neurons have an activation value for a given input pattern, (4) the algorithm continues calculating the error for each neuron, except those of the input layer. For the neuron \( k \) in the output layer, if the answer is \( (y_{1} ,y_{2} , \ldots ,y_{M} ) \), the error \( \delta \) is expressed as:
$$ \delta_{pk}^{o} = (d_{pk} - y_{pk} )f_{k}^{{o^{{\prime }} }} (net_{pk}^{o} ) $$
(8)

Where \( \delta_{pk}^{o} \) is the error at the output layer; \( (d_{pk} - y_{pk} ) \) is the difference between the desired and the actual output; \( f_{k}^{{o^{{\prime }} }} \) is the derivative of the activation function; and \( net_{pk}^{o} \) is the net input of the output neuron.

For neuron \( k \) in the output layer, with the sigmoid function in particular this becomes:
$$ \delta_{pk}^{o} = (d_{pk} - y_{pk} )y_{pk} (1 - y_{pk} ) $$
(9)

Where \( y_{pk} (1 - y_{pk} ) \) is the result of the derivative \( f_{k}^{{o^{{\prime }} }} \) of the sigmoid function.

If neuron \( j \) is not an output neuron, the partial derivative of the error cannot be calculated directly; the result is instead obtained from known values and quantities that can be evaluated. The resulting formula is:
$$ \delta_{pj}^{h} = y_{pj} (1 - y_{pj} )\sum\limits_{k} {\delta_{pk}^{o} } w_{kj}^{o} $$
(10)

Where the error \( \delta_{pj}^{h} \) of the j-th hidden neuron for the p-th training vector is obtained by multiplying \( y_{pj} (1 - y_{pj} ) \), the derivative of the sigmoid function evaluated at the hidden output, by the sum over the output neurons of the product of the output-layer error \( \delta_{pk}^{o} \) and the weight \( w_{kj}^{o} \) between output neuron \( k \) and the j-th hidden neuron.

Moreover, the error in the hidden layers depends on all the error terms of the output layer, so the error must be propagated from the output layer toward the input layer, which is what the backpropagation algorithm does.

The error of a hidden neuron is proportional to the sum of the known errors of the neurons connected to its output, each multiplied by the weight of the connection. The internal thresholds of the neurons are adapted in a similar way, considering them as weights connected to auxiliary inputs of constant value.

(5) To update the weights, a recursive algorithm is used, starting with the output neurons and working backward until the input layer is reached. The weights of the neurons of the output layer are adjusted with:
$$ w_{kj}^{o} (t + 1) = w_{kj}^{o} (t) + \Delta w_{kj}^{o} (t + 1) $$
(11)
Where \( w_{kj}^{o} \) is the weight from the j-th hidden neuron to output neuron \( k \); \( (t + 1) \) denotes the updated value and \( (t) \) the current one; and \( \Delta w_{kj}^{o} \) is the weight variation, which is calculated as follows:
$$ \Delta w_{kj}^{o} (t + 1) = \alpha \delta_{pk}^{o} y_{pj} $$
(12)

Where the weight variation \( \Delta w_{kj}^{o} \) is calculated by multiplying the learning rate \( \alpha \) by the output-layer error \( \delta_{pk}^{o} \) and by the hidden output \( y_{pj} \).

The weights of the neurons of the hidden layer are calculated with:
$$ w_{ji}^{h} (t + 1) = w_{ji}^{h} (t) + \Delta w_{ji}^{h} (t + 1) $$
(13)
Where \( w_{ji}^{h} \) is the weight from input unit \( i \) to the j-th hidden neuron; \( (t + 1) \) denotes the updated value and \( (t) \) the current one; and \( \Delta w_{ji}^{h} \) is the weight variation, which is calculated as follows:
$$ \Delta w_{ji}^{h} (t + 1) = \alpha \delta_{pj}^{h} x_{pi} $$
(14)

Where the weight variation \( \Delta w_{ji}^{h} \) is calculated by multiplying the learning rate \( \alpha \) by the hidden-layer error \( \delta_{pj}^{h} \) and by the input \( x_{pi} \).

In both cases, to accelerate the learning process, a learning rate \( \alpha \) equal to 1 is included, and to correct the direction of the error the following momentum term is used for an output neuron:
$$ \gamma (w_{kj} (t) - w_{kj} (t - 1)) $$
(15)
And to correct the direction of the error in a hidden layer, with a rate \( \gamma \) less than 1, the following term is used:
$$ \gamma (w_{ji} (t) - w_{ji} (t - 1)) $$
(16)
(6) This process is repeated until the error term \( E_{p} \) is acceptably small for each of the learning patterns:
$$ E_{p} = \frac{1}{2}\sum\limits_{k = 1}^{M} {\delta_{pk}^{2} } $$
(17)

Where \( M \) is the number of neurons of the output layer; \( p \) is the p-th training vector; and \( k \) indexes the neurons of the output layer.
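Steps (3)–(6) above can be condensed into a single training step. The sketch below implements the forward pass of Eqs. (3)–(7), the error terms of Eqs. (9)–(10) and the updates of Eqs. (11)–(14) for one pattern, with α = 1 and no momentum term; the network size and the initial weights used in the usage note are illustrative choices, not values from the paper:

```python
import math

def sigmoid(net):
    """Eq. (5): f(net) = 1 / (1 + e^-net)."""
    return 1.0 / (1.0 + math.exp(-net))

def forward(x, w_h, th_h, w_o, th_o):
    """Eqs. (3)-(4) and (6)-(7): hidden outputs y_h and network outputs y_o."""
    y_h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + th)
           for row, th in zip(w_h, th_h)]
    y_o = [sigmoid(sum(w * yh for w, yh in zip(row, y_h)) + th)
           for row, th in zip(w_o, th_o)]
    return y_h, y_o

def backprop_step(x, d, w_h, th_h, w_o, th_o, alpha=1.0):
    """One backpropagation update; returns the pattern error of Eq. (17),
    measured before the update, with delta_pk taken as (d_pk - y_pk)."""
    y_h, y_o = forward(x, w_h, th_h, w_o, th_o)
    # Eq. (9): output-layer error terms (sigmoid derivative y * (1 - y))
    delta_o = [(dk - yk) * yk * (1.0 - yk) for dk, yk in zip(d, y_o)]
    # Eq. (10): hidden-layer error terms, propagated back through w_o
    delta_h = [y_h[j] * (1.0 - y_h[j]) *
               sum(delta_o[k] * w_o[k][j] for k in range(len(w_o)))
               for j in range(len(y_h))]
    # Eqs. (11)-(12): output-layer weights and thresholds
    for k in range(len(w_o)):
        for j in range(len(y_h)):
            w_o[k][j] += alpha * delta_o[k] * y_h[j]
        th_o[k] += alpha * delta_o[k]
    # Eqs. (13)-(14): hidden-layer weights and thresholds
    for j in range(len(w_h)):
        for i in range(len(x)):
            w_h[j][i] += alpha * delta_h[j] * x[i]
        th_h[j] += alpha * delta_h[j]
    # Eq. (17)
    return 0.5 * sum((dk - yk) ** 2 for dk, yk in zip(d, y_o))
```

Calling `backprop_step` repeatedly on the same pattern, for example a 2-2-1 network with small initial weights, shows the error of Eq. (17) decreasing between calls.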

4 Discussion

From the analysis of the biological signals recorded by the electroencephalogram with respect to mental states, it is concluded that up to adulthood the frequencies of the delta and theta waves decrease, while those of the alpha and beta waves increase linearly [27].

We have investigated functional connectivity measured with the electroencephalogram and brain networks in patients with neurological diseases in general, determining that certain patients show characteristic patterns of functional connectivity and network alterations. More specifically, in the future we will focus on applying an algorithm based on the massive parallelization of neural networks to model and process the data obtained with the electroencephalogram, so that the information extracted from the signal can be compared to classify it into two or more classes, which makes it possible to determine patterns that facilitate the identification of signs of the dementia known as Alzheimer’s disease.

From the results obtained, it is evident that the machine learning method of support vector machines finds a hyperplane that separates multidimensional information into two classes. However, because the data are generally not linearly separable, this learning method introduces the notion of a kernel-induced feature space, which transforms the information into a higher-dimensional space where the data are separable. The key to support vector machines is that this higher-dimensional space does not need to be treated directly, while the resulting error is minimized.

On the other hand, the development of backpropagation neural network models is based on a structure of neurons joined by connections that transmit information to other neurons, which produce a result by means of mathematical functions. The neural network learns from existing information through training, a process by which its weights are adjusted in order to provide an output close to the desired one. In addition, the backpropagation algorithm uses gradient descent: it adjusts the weights starting with the output layer, according to the error committed, and propagates the error to the previous layers, from back to front, until the input layer is reached. This gives it the ability to organize the knowledge of the hidden layer so that any correspondence between the input units and the output layer can be achieved.

5 Conclusions

To understand or design a learning process in either case, support vector machines or neural networks, it is necessary to have: (1) a learning paradigm supported by the information provided by the electroencephalogram, (2) learning rules that govern the process of weight modification, and (3) a learning algorithm.

Support vector machines are constructed from a convex function, so a global optimum is obtained and a dual formulation can be built. Another great advantage is the ability to model nonlinear phenomena by transforming the original space into a larger one. On the other hand, the technique has limitations: when working with numerical methods, nominal attributes must first be transformed to a numerical format, and no kernel function is better than all others. Different kernel functions can yield different solutions, so the kernel must be chosen for each particular problem.

As a result of the analysis of the readings on neural networks, as future work the backpropagation algorithm will be applied with a layer of five input units, a hidden layer with five neurons and an output layer with one neuron. For the tasks in the hidden and output layers, the classifier of the neural network will use bipolar and unipolar sigmoid functions, respectively, as decision functions; in addition, we will normalize the weights and the inputs. We will determine the most effective feature set as well as the optimal length of the vector for high-precision classification. By minimizing the error, we will optimize the number of neurons in the hidden layer to five. The weights and the slope of the sigmoid function will be trained, and appropriate epochs and an appropriate learning rate will be chosen in order to reach optimal classification accuracy within the frequency ranges of the alpha, beta, theta, delta and gamma biological signals.

References

  1. Griffa, A.: Structural connectomics in brain diseases. NeuroImage 80, 515–526 (2013)
  2. Sporns, O., Tononi, G., Kötter, R.: The human connectome: a structural description of the human brain. PLoS Comput. Biol. 1(4), e42 (2005)
  3. Al-Qazzaz, N.K.: Role of EEG as biomarker in the early detection and classification of dementia. Sci. World J. 2014, 16 (2014)
  4. Cedazo-Minguez, A., Winblad, B.: Biomarkers for Alzheimer’s disease and other forms of dementia: clinical needs, limitations and future aspects. Exp. Gerontol. 45(1), 5–14 (2010)
  5. Hampel, H.: Biomarkers for Alzheimer’s disease: academic, industry and regulatory perspectives. Nat. Rev. Drug Discov. 9(7), 560–574 (2010)
  6. Vialatte, F.B.: Improving the specificity of EEG for diagnosing Alzheimer’s disease. Int. J. Alzheimer’s Dis. 2011, 7 (2011)
  7. Hampel, H.: Perspective on future role of biological markers in clinical therapy trials of Alzheimer’s disease: a long-range point of view beyond 2020. Biochem. Pharmacol. 88(4), 426–449 (2014)
  8. Borson, S.: Improving dementia care: the role of screening and detection of cognitive impairment. Alzheimer’s Dement. 9(2), 151–159 (2013)
  9. DeKosky, S.T., Marek, K.: Looking backward to move forward: early detection of neurodegenerative disorders. Science 302(5646), 830–834 (2003)
  10. Román, G.C.: Vascular dementia may be the most common form of dementia in the elderly. J. Neurol. Sci. 203, 7–10 (2002)
  11. Thal, D.R., Grinberg, L.T., Attems, J.: Vascular dementia: different forms of vessel disorders contribute to the development of dementia in the elderly brain. Exp. Gerontol. 47(11), 816–824 (2012)
  12. Petersen, R.C.: Mild cognitive impairment as a diagnostic entity. J. Intern. Med. 256(3), 183–194 (2004)
  13. Dorval, V., Nelson, P.T., Hébert, S.S.: Circulating microRNAs in Alzheimer’s disease: the search for novel biomarkers. Front. Mol. Neurosci. 6, 24 (2013)
  14. Poil, S.S.: Integrative EEG biomarkers predict progression to Alzheimer’s disease at the MCI stage. Front. Aging Neurosci. 5, 58 (2013)
  15. Mattsson, N.: CSF biomarkers and incipient Alzheimer disease in patients with mild cognitive impairment. JAMA 302(4), 385–393 (2009)
  16. Paraskevas, G.: CSF biomarker profile and diagnostic value in vascular dementia. Eur. J. Neurol. 16(2), 205–211 (2009)
  17. Frankfort, S.V.: Amyloid beta protein and tau in cerebrospinal fluid and plasma as biomarkers for dementia: a review of recent literature. Curr. Clin. Pharmacol. 3(2), 123–131 (2008)
  18. Folin, M.: Apolipoprotein E as vascular risk factor in neurodegenerative dementia. Int. J. Mol. Med. 14, 609–614 (2004)
  19. Schneider, A.L., Jordan, K.G.: Regional attenuation without delta (RAWOD): a distinctive EEG pattern that can aid in the diagnosis and management of severe acute ischemic stroke. Am. J. Electroneurodiagn. Technol. 45(2), 102–117 (2005)
  20. Henderson, G.: Development and assessment of methods for detecting dementia using the human electroencephalogram. IEEE Trans. Biomed. Eng. 53(8), 1557–1568 (2006)
  21. Zhao, P., Ifeachor, E.: EEG assessment of Alzheimer’s disease using universal compression algorithm. In: Proceedings of the 3rd International Conference on Computational Intelligence in Medicine and Healthcare (CIMED2007), Plymouth, UK, 25 July 2007
  22. Ochoa, J.B.: EEG signal classification for brain computer interface applications. Ec. Polytech. Federale de Lausanne 7, 1–72 (2002)
  23. Guérit, J.: EEG and evoked potentials in the intensive care unit. Neurophysiol. Clin. 29(4), 301–317 (1999)
  24. Moretti, D.: Quantitative EEG markers in mild cognitive impairment: degenerative versus vascular brain impairment. Int. J. Alzheimer’s Dis. 2012, 12 (2012)
  25. Moretti, D.: Vascular damage and EEG markers in subjects with mild cognitive impairment. Clin. Neurophysiol. 118(8), 1866–1876 (2007)
  26. Pizzagalli, D.A.: Electroencephalography and high-density electrophysiological source localization. In: Handbook of Psychophysiology, vol. 3, pp. 56–84 (2007)
  27. John, E.: Developmental equations for the electroencephalogram. Science 210(4475), 1255–1258 (1980)
  28. Jeong, J.: EEG dynamics in patients with Alzheimer’s disease. Clin. Neurophysiol. 115(7), 1490–1505 (2004)
  29. Taywade, S., Raut, R.: A review: EEG signal analysis with different methodologies. In: Proceedings of the National Conference on Innovative Paradigms in Engineering and Technology (NCIPET 2012) (2014)
  30. Husain, A., Tatum, W., Kaplan, P.: Handbook of EEG Interpretation. Demos Medical, New York (2008)
  31. Punapung, A., Tretriluxana, S., Chitsakul, K.: A design of configurable ECG recorder module. In: Biomedical Engineering International Conference (BMEiCON). IEEE (2012)
  32. Klem, G.H.: The ten-twenty electrode system of the International Federation
  33. Anderson, C.W., Sijercic, Z.: Classification of EEG signals from four subjects during five mental tasks. In: Solving Engineering Problems with Neural Networks: Proceedings of the Conference on Engineering Applications in Neural Networks (EANN 1996), Turkey (1996)
  34. 34.
    Müller, T.: Selecting relevant electrode positions for classification tasks based on the electro-encephalogram. Med. Biol. Eng. Compu. 38(1), 62–67 (2000)CrossRefGoogle Scholar
  35. 35.
    Sanei, S., Chambers, J.A.: EEG Signal Processing. Wiley, Chichester (2013)Google Scholar
  36. 36.
    Moretti, D.V.: Individual analysis of EEG frequency and band power in mild Alzheimer’s disease. Clin. Neurophysiol. 115(2), 299–308 (2004)CrossRefGoogle Scholar
  37. 37.
    Jung, T.P.: Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clin. Neurophysiol. 111(10), 1745–1758 (2000)CrossRefGoogle Scholar
  38. 38.
    Núñez, I.M.B.: EEG Artifact Detection (2011)Google Scholar
  39. 39.
    Guerrero-Mosquera, C., Trigueros, A.M., Navia-Vazquez, A.: EEG Signal Processing for Epilepsy, in Epilepsy-Histological, Electroencephalographic and Psychological Aspects, InTech (2012)Google Scholar
  40. 40.
    Molina, G.N.G.: Direct brain-computer communication through scalp recorded EEG signals. École Polytechnique Fedérale de Lausanne (2004)Google Scholar
  41. 41.
    Naït-Ali, A.: Advanced Biosignal Processing. Springer Science & Business Media, Berlin (2009)CrossRefGoogle Scholar
  42. 42.
    McKeown, M.: A new method for detecting state changes in the EEG: exploratory application to sleep data. J. Sleep Res. 7(S1), 48–56 (1998)CrossRefGoogle Scholar
  43. 43.
    Zikov, T.: A wavelet based denoising technique for ocular artifact correction of the electroencephalogram. In: Proceedings of the Second Joint Engineering in Medicine and Biology, 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society EMBS/BMES Conference. IEEE (2002)Google Scholar
  44. 44.
    Krishnaveni, V.: Removal of ocular artifacts from EEG using adaptive thresholding of wavelet coefficients. J. Neural Eng. 3(4), 338 (2006)CrossRefGoogle Scholar
  45. 45.
    Jahankhani, P., Kodogiannis, V., Revett, K.: EEG signal classification using wavelet feature extraction and neural networks. In: IEEE John Vincent Atanasoff 2006 International Symposium on Modern Computing, JVA 2006. IEEE (2006)Google Scholar
  46. 46.
    Akhtar, M.T., James, C.J.: Focal artifact removal from ongoing EEG–a hybrid approach based on spatially-constrained ICA and wavelet denoising. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2009. IEEE (2009)Google Scholar
  47. 47.
    Inuso, G.: Wavelet-ICA methodology for efficient artifact removal from electroencephalographic recordings. In: International Joint Conference on Neural Networks, IJCNN 2007. IEEE (2007)Google Scholar
  48. 48.
    Jelles, B.: Global dynamical analysis of the EEG in Alzheimer’s disease: frequency-specific changes of functional interactions. Clin. Neurophysiol. 119(4), 837–841 (2008)CrossRefGoogle Scholar
  49. 49.
    Escudero, J.: Blind source separation to enhance spectral and non-linear features of magnetoencephalogram recordings: application to Alzheimer’s disease. Med. Eng. Phys. 31(7), 872–879 (2009)CrossRefGoogle Scholar
  50. 50.
    Hornero, R.: Spectral and nonlinear analyses of MEG background activity in patients with Alzheimer’s disease. IEEE Trans. Biomed. Eng. 55(6), 1658–1665 (2008)CrossRefGoogle Scholar
  51. 51.
    Markand, O.N.: Organic brain syndromes and dementias. Curr. Pract. Clin. Electroencephalogr. 3, 378–404 (1990)Google Scholar
  52. 52.
    Dauwels, J., Vialatte, F., Cichocki, A.: Diagnosis of Alzheimer’s disease from EEG signals: where are we standing? Curr. Alzheimer Res. 7(6), 487–505 (2010)CrossRefGoogle Scholar
  53. 53.
    Jeong, J.: Nonlinear dynamics of EEG in Alzheimer’s disease. Drug Dev. Res. 56(2), 57–66 (2002)MathSciNetCrossRefGoogle Scholar
  54. 54.
    Subha, D.P.: EEG signal analysis: a survey. J. Med. Syst. 34(2), 195–212 (2010)CrossRefGoogle Scholar
  55. 55.
    Abásolo, D.: Analysis of EEG background activity in Alzheimer’s disease patients with lempel-ziv complexity and central tendency measure. Med. Eng. Phys. 28(4), 315–322 (2006)CrossRefGoogle Scholar
  56. 56.
    Escudero, J.: Analysis of electroencephalograms in Alzheimer’s disease patients with multiscale entropy. Physiol. Meas. 27(11), 1091 (2006)CrossRefGoogle Scholar
  57. 57.
    Grassberger, P., Procaccia, I.: Measuring the strangeness of strange attractors. Phys. D 9(1–2), 189–208 (1983)MathSciNetCrossRefzbMATHGoogle Scholar
  58. 58.
    Wolf, A.: Determining lyapunov exponents from a time series. Phys. D 16(3), 285–317 (1985)MathSciNetCrossRefzbMATHGoogle Scholar
  59. 59.
    Hamadicharef, B.: Performance evaluation and fusion of methods for early detection of Alzheimer disease. In: International Conference on BioMedical Engineering and Informatics, BMEI 2008. IEEE (2008)Google Scholar
  60. 60.
    Henderson, G.T.: Early Detection of Dementia Using The Human Electroencephalogram (2004)Google Scholar
  61. 61.
    Ferenets, R.: Comparison of entropy and complexity measures for the assessment of depth of sedation. IEEE Trans. Biomed. Eng. 53(6), 1067–1077 (2006)CrossRefGoogle Scholar
  62. 62.
    Costa, M., Goldberger, A.L., Peng, C.-K.: Multiscale entropy analysis of biological signals. Phys. Rev. E 71(2), 021906 (2005)MathSciNetCrossRefGoogle Scholar
  63. 63.
    Subasi, A., Gursoy, M.I.: EEG signal classification using PCA, ICA, LDA and support vector machines. Expert Syst. Appl. 37(12), 8659–8666 (2010)CrossRefGoogle Scholar
  64. 64.
    KavitaMahajan, M., Rajput, M.S.M.: A comparative study of ANN and SVM for EEG classification. Int. J. Eng. Res. Technol. IJERT 1, 1–6 (2012)CrossRefGoogle Scholar
  65. 65.
    Vialatte, F.: Blind source separation and sparse bump modelling of time frequency representation of eeg signals: new tools for early detection of Alzheimer’s disease. In: IEEE Workshop on Machine Learning for Signal Processing. IEEE (2005)Google Scholar
  66. 66.
    Besserve, M.: Classification methods for ongoing EEG and MEG signals. Biol. Res. 40(4), 415–437 (2007)CrossRefGoogle Scholar
  67. 67.
    Garrett, D.: Comparison of linear, nonlinear, and feature selection methods for EEG signal classification. IEEE Trans. Neural Syst. Rehabil. Eng. 11(2), 141–144 (2003)CrossRefGoogle Scholar
  68. 68.
    Lehmann, C.: Application and comparison of classification algorithms for recognition of Alzheimer’s disease in electrical brain activity (EEG). J. Neurosci. Methods 161(2), 342–350 (2007)CrossRefGoogle Scholar

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Luis A. Guerra (1)
  • Laura C. Lanzarini (2)
  • Luis E. Sánchez (3)
  1. Universidad de las Fuerzas Armadas-ESPE, Sangolqui, Ecuador
  2. Universidad Nacional de la Plata, La Plata, Argentina
  3. Universidad de la Mancha, La Mancha, Spain