Abstract
The reliability and safety of lithium-ion batteries (LIBs) are key issues in battery applications. Accurate prediction of the state-of-health (SOH) of LIBs can reduce or even avoid battery-related accidents. In this paper, a new back-propagation neural network (BPNN) is proposed to predict the SOH of LIBs. The BPNN uses the LIB voltage, current and temperature as input, as well as the charging time, since the latter is strongly correlated with the SOH. The number of hidden layer nodes is set adaptively based on the training data in order to improve the generalization capability of the BPNN. The effectiveness and robustness of the proposed scheme are verified using four distinct battery datasets and different training data. Experimental results show that the new BPNN accurately predicts the SOH of LIBs and outperforms alternative schemes.
1 Introduction
The rapid development of LIBs has led to an increasing electrification of transportation systems, namely electric vehicles (EVs). Many countries have enacted policies to stimulate the development of EVs in order to reduce greenhouse gas emissions and save non-renewable energy [1].
The battery energy [2], charging [3] and thermal [4] management are important components of the battery management system (BMS). In order to ensure that a LIB operates safely and reliably, the SOH is estimated to assess its working condition [5,6,7]. According to the IEEE 1188-1996 standard, the SOH of a LIB can be defined as:

$$\mathrm{SOH} = \frac{C_\mathrm{{now}}}{C_\mathrm{{new}}} \times 100\%$$

where \(C_\mathrm{{now}}\) and \(C_\mathrm{{new}}\) are the current and the nominal capacity of the LIB, respectively.
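As a minimal illustration of this definition (the capacity values are hypothetical, not taken from the datasets used later):

```python
def soh(c_now: float, c_new: float) -> float:
    """State-of-health: ratio of current to nominal capacity, in percent."""
    return c_now / c_new * 100.0

# Hypothetical example: a nominally 2 Ah cell whose measured
# capacity has faded to 1.6 Ah.
print(soh(1.6, 2.0))  # 80.0
```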
To predict the SOH of a LIB, the internal resistance and capacity are often used as aging characteristics [8,9,10], except when dealing with the solid-phase diffusion time of lithium-ions in the positive electrode [11], or when monitoring aspects like the cyclable lithium-ions [12]. By using capacity, DC pulse or electrochemical impedance spectroscopy (EIS) [13, 14] tests, among others, the LIB health parameters, such as voltage, current, temperature and charging time, are obtained and can be used directly in distinct methods to predict the SOH. For example, the capacity can be obtained by measuring the charge transferred through the battery during the charging or discharging phases [15], and the internal resistance can be determined by calculating the instantaneous voltage drop during a pulse test [16]. However, existing direct methods have limitations in practical applications. Indeed, they require very accurate sensor measurements, the battery must be put out of service for testing (e.g., in capacity or EIS tests), and specific methods are restricted to specific test systems (e.g., DC pulses with high currents are not allowed by the BMS, as they are seen as abnormal operating conditions).
In the last few years, several methods have been proposed and applied to SOH estimation, yielding real-time assessment of LIBs [17]. These approaches can be generally divided into model-based and data-driven methods. The most common model-based approaches rely on electrochemical models or equivalent circuit models (ECMs) [18,19,20,21,22]. Electrochemical models can accurately describe the LIBs dynamics, but the modeling process is complicated and the required computational cost is high, which hinders their use in practical applications. Conversely, ECMs are easier to obtain and involve a smaller computational burden; however, their accuracy strongly depends on the values of the models’ parameters, which are difficult to estimate. Due to the complexity and limitations of model-based approaches, data-based techniques have gained interest in SOH estimation. Indeed, a variety of machine learning (ML) algorithms have been proposed, namely artificial neural networks (ANN) [23, 24], support vector machines (SVM) [25, 26] and relevance vector machines (RVM) [27, 28]. In terms of structure, the most recent versions rely on deep learning techniques [29]. Among all ML methods, ANNs have excellent accuracy, adaptability, generalization capability and robustness [30]. For example, a nonlinear autoregressive scheme with exogenous input was proposed in [31], while an extreme learning machine (ELM) was used in [32] for SOH estimation.
Choosing the input data is the first step to consider before training a ML method. Therefore, the dataset to be used as input to the BPNN is worth exploring. The battery data related to aging are either external or internal. The external data include, for example, temperature, charging and discharging rates, and depth of discharge [17]. The internal data refer mainly to physical and chemical properties of the LIB, such as growth of the solid electrolyte interphase (SEI) layer, self-discharge, and decomposition of the anode [33]. In real-world applications, the BMS sensors can collect data, namely the battery terminal voltage, current, surface temperature and time of charge and discharge. Using multiple types of data helps improve the accuracy of the SOH estimation. Current, voltage and temperature have often been used as input to ML algorithms [31, 34]. However, correlation analysis reveals that the charging time of the LIB is even more closely related to the SOH than those variables. Therefore, using the charging time as input to the BPNN is crucial for high-accuracy estimation of the SOH.
The BPNN is one of the most convenient types of ANN for estimation purposes due to its capabilities of nonlinear mapping, self-learning, adaptability, generalization and fault tolerance. Several methods have been applied to improve the performance of existing BPNNs [35,36,37,38], with promising results. However, in previous research, the BPNN structure has not been fully explored, especially regarding the number of hidden layer units. Indeed, most BPNNs in the references above use a fixed number of hidden layer units or set that number to a value in a given interval [39, 40]. A BPNN with a fixed number of hidden layer units may yield excellent results for a given training set, but behave poorly when the training data changes. For example, it was shown in [41] that, for an optimal fractional order of 7/9 and a learning rate of 0.5, the prediction accuracy of a BPNN with 500 hidden layer units was lower than that obtained with 300 and 200 hidden layer units, whereas for a learning rate of 1 the number of hidden layer units should be set to 500 to obtain the best prediction. Therefore, it is crucial to have a method that finds the optimal number of hidden layer units, so that the BPNN exhibits the best performance as the training data changes.
In this paper, a SOH prediction method based on a BPNN with adaptive hidden layer is proposed. This method takes the mean absolute error (MAE) of the prediction for a training dataset and chooses the number of hidden layer units that minimizes the MAE. Compared with other methods, the proposed scheme is capable of determining the optimal number of hidden layer units during the training phase, instead of fixing its value after experimentation, thus reducing the time required for setting up the network and improving the accuracy of SOH prediction. The main contributions of the paper are:
-
1)
The proposed method determines the optimal BPNN structure adaptively during the training process, just relying on the results obtained for a training dataset. This is different from other neural network algorithms, which adjust the hyperparameters after each experiment;
-
2)
The charging time of the LIB is employed, along with the voltage, current and temperature, as input to the BPNN;
-
3)
The new BPNN is used with four distinct LIBs and different training datasets, and its prediction results are compared with those obtained with other two BPNNs. It is shown that the proposed scheme, by adaptively changing the structure of the network according to the dataset, leads to superior SOH prediction.
-
4)
The proposed BPNN is shown to require only 50% of the data for training, yielding a mean absolute percentage error (MAPE) below 0.8%.
The remainder of the paper is organized as follows. Section 2 introduces the LIBs datasets and the parameters used for SOH estimation. Section 3 presents the new BPNN algorithm developed. Section 4 shows the experimental results obtained with the proposed BPNN. Section 5 draws the conclusions.
2 Datasets and features for SOH estimation
Four battery aging datasets from the NASA Ames Prognostics Center of Excellence (PCoE) repository are used in what follows [42]. The NASA batteries are \(LiNi_{0.8}Co_{0.15}Al_{0.05}O_{2}\) cells with a capacity of 2 Ah. Each LIB {B0005, B0006, B0007, B0018} has distinct aging trends, as shown in Fig. 1.
The datasets include charge and discharge parameters, namely voltage, current, temperature and time of cycle. Every cycle follows the constant current (CC)-constant voltage (CV) protocol for charging, and the CC protocol for discharging, as depicted in Fig. 2. The charge and discharge processes in one cycle have the following phases: (i) the current is set at 1.5 A until the voltage reaches 4.2 V; (ii) the charging current decreases while the voltage is kept at 4.2 V; (iii) the charging current drops to 20 mA, signaling that the charging process is over; (iv) a constant current of 2 A is set to discharge the LIB until the voltage drops to the cut-off voltage.
Figures 3a, 4a and 5a represent the current, voltage and temperature versus time of battery B0005, while Figs. 3b, 4b and 5b depict the evolution of those parameters during the 50th cycle of the four batteries {B0005, B0006, B0007, B0018}. From Figs. 3 and 4 we verify that, as the number of charging and discharging cycles increases, the voltage of a given battery rises faster and reaches the CV charging step earlier. Correspondingly, the current leaves the CC step sooner. On the other hand, in the same cycle of different batteries, although the changes of voltage and current differ slightly, their trends and trajectories are roughly similar. Figure 5 shows that the temperature of the LIB increases with the number of charging and discharging cycles: the peak is reached faster, and the average temperature increases. Moreover, the temperature variation of the different batteries is similar, with the exception of B0018, which exhibits a sudden change in temperature at the beginning of the charging and discharging phases.
Based on the above discussion, one can see that the charging time is related to the aging state of the LIB. To explore this relationship more closely, the grey relational analysis (GRA) method is applied here. This method is able to calculate the correlation between parameters using very little information [43]. The GRA results are shown in Fig. 6. We verify that the charging time of the LIB is more closely related to the SOH than the voltage, current and temperature. Therefore, in what follows, the charging time is adopted as input to the BPNN, while the LIB capacity related to the charge–discharge cycle is selected as the output.
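A generic sketch of the grade computation is given below (a Deng-style GRA with distinguishing coefficient 0.5; the sequences are synthetic illustrations, not the NASA measurements, and the paper's exact GRA variant may differ):

```python
import numpy as np

def grey_relational_grades(reference, candidates, rho=0.5):
    """Deng-style grey relational grade of each candidate feature sequence
    (e.g. voltage, current, temperature, charging time per cycle) with
    respect to a reference sequence (e.g. capacity / SOH)."""
    def norm(z):
        # Min-max normalize so sequences with different units are comparable.
        z = np.asarray(z, dtype=float)
        return (z - z.min()) / (z.max() - z.min())

    ref = norm(reference)
    deltas = np.abs(np.stack([norm(c) for c in candidates]) - ref)
    d_min, d_max = deltas.min(), deltas.max()
    coeffs = (d_min + rho * d_max) / (deltas + rho * d_max)  # relational coefficients
    return coeffs.mean(axis=1)                               # one grade per candidate

# Illustrative sequences: a fading capacity, a well-correlated charging-time
# feature, and a poorly correlated dummy feature (synthetic numbers).
capacity = [2.0, 1.9, 1.8, 1.7, 1.6]
charge_time = [3600, 3400, 3250, 3050, 2900]
dummy = [1, 5, 2, 4, 3]
grades = grey_relational_grades(capacity, [charge_time, dummy])
```

Here the well-correlated charging-time sequence obtains a markedly higher grade than the dummy feature, which is the kind of ranking the GRA in Fig. 6 produces for the real data.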
3 Methods
3.1 The BPNN
The structure of the BPNN is shown in Fig. 7. Let us consider that the training dataset is given by \(D=\left\{ (x_{1}, y_{1}), (x_{2}, y_{2}),..., (x_{i}, y_{i}),..., (x_{n}, y_{n}) \right\} , x\in {\mathbb {R}} ^{a}, y\in {\mathbb {R}} ^{b}\), where n is the number of samples in the dataset, \(x_{i}\) and \(y_{i}\) are the input and output of the i-th sample, \(i=1, 2,..., n\), and a and b represent the dimensions of the input and output vectors, respectively.
We normalize all parameters by:

$$z_{i}^{'} = \frac{z_{i}-z_\mathrm{{min}}}{z_\mathrm{{max}}-z_\mathrm{{min}}}$$

where \(z_{i} ^{'}\) and \(z_{i}\) represent the normalized and non-normalized values, and \(z_\mathrm{{max}}\) and \(z_\mathrm{{min}}\) denote the maximum and minimum values of the data, respectively.
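This normalization can be written as a one-line NumPy sketch (the voltage values are illustrative):

```python
import numpy as np

def minmax_normalize(z):
    """Scale a sequence to [0, 1]: z' = (z - z_min) / (z_max - z_min)."""
    z = np.asarray(z, dtype=float)
    return (z - z.min()) / (z.max() - z.min())

scaled = minmax_normalize([3.0, 3.6, 4.2])  # e.g. a voltage trace
```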
The output of the hidden layer is:

$$h_{j} = g\left( \sum_{i=1}^{a} w_{ij} x_{i} - d_{j} \right)$$

where g denotes the activation function of the hidden layer, \(h_{j}\) is the j-th output node of the hidden layer, \(w_{ij}\) represents the weight connecting the input and hidden layers, and \(d_{j}\) is the threshold of the j-th hidden node.
The output of the output layer is given by:

$$\widehat{y}_{k} = f\left( \sum_{j} w_{jk} h_{j} - d_{k} \right)$$

where \({\widehat{y}}_{k}\) represents the prediction of the k-th node, f is the activation function of the output layer, \(w_{jk}\) is the weight connecting the hidden and output layers, and \(d_{k}\) is the threshold of the k-th output node.
According to the model of the BPNN, the mean square error (MSE) of the sample \(\left( x_{l},y_{l} \right)\) is calculated from the actual and predicted values as:

$$E_{l} = \frac{1}{2} \sum_{k=1}^{b} \left( \widehat{y}_{k}^{\,l} - y_{k}^{\,l} \right) ^{2}$$
The BPNN is an iterative learning algorithm, where the updated estimate of any parameter v is given by:

$$v \leftarrow v + \Delta v$$
After the training target error is given, the BPNN uses the gradient descent method to adjust the weights of the network. Given the learning rate \(\eta\), the update formula for the weights is:

$$w_{ij}' = w_{ij} - \eta \frac{\partial E_{l}}{\partial w_{ij}}, \qquad w_{jk}' = w_{jk} - \eta \frac{\partial E_{l}}{\partial w_{jk}}$$

where \(w_{ij}'\) and \(w_{jk}'\) are the weights after the iteration.
The purpose of the BPNN is to minimize the accumulated error on the training set D, which is expressed as:

$$E = \frac{1}{n} \sum_{l=1}^{n} E_{l}$$
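The forward pass and gradient updates described above can be sketched as a single-sample training step in NumPy (a toy illustration with sigmoid activations and a learning rate chosen to make the toy run visible; not the authors' MATLAB implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy sizes: a = 4 inputs (V, I, T, charging time), 6 hidden nodes, b = 1 output.
a, h, b = 4, 6, 1
W1 = rng.uniform(-1, 1, (h, a)); d1 = rng.uniform(-1, 1, h)  # input -> hidden
W2 = rng.uniform(-1, 1, (b, h)); d2 = rng.uniform(-1, 1, b)  # hidden -> output
eta = 0.1  # learning rate (larger than the paper's 0.001, purely for the demo)

def forward(x):
    hid = sigmoid(W1 @ x - d1)            # h_j = g(sum_i w_ij x_i - d_j)
    return hid, sigmoid(W2 @ hid - d2)    # y_hat_k = f(sum_j w_jk h_j - d_k)

def step(x, y):
    """One gradient-descent update on the squared error of a single sample."""
    global W1, d1, W2, d2
    hid, y_hat = forward(x)
    g_out = (y_hat - y) * y_hat * (1 - y_hat)   # output-layer delta
    g_hid = (W2.T @ g_out) * hid * (1 - hid)    # back-propagated hidden delta
    W2 -= eta * np.outer(g_out, hid); d2 += eta * g_out
    W1 -= eta * np.outer(g_hid, x);   d1 += eta * g_hid
    return 0.5 * np.sum((y_hat - y) ** 2)
```

Repeating `step` on a sample drives its error \(E_l\) down, which is exactly the per-sample mechanism that, accumulated over D, minimizes E.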
3.2 The BPNN with adaptive hidden layer
In most BPNNs, the number of hidden layer nodes is fixed [39, 44]. Although the network can perform well for a given number of nodes and training dataset, its performance may degrade severely as the training dataset changes, in which case the BPNN model needs to be re-designed. Indeed, when the network has too many hidden layer nodes, over-fitting may occur if the training data carries insufficient information, or the training times may become excessively long otherwise. Conversely, when the network has too few neurons in the hidden layer, under-fitting will result.
The number of hidden layer nodes is chosen by means of the empirical formula:

$$h_\mathrm{{max}} = \sqrt{r+s} + \varrho$$

where \(h_\mathrm{{max}}\) is the maximum number of hidden layer nodes, r and s denote the numbers of input and output nodes, respectively, and \(\varrho\) is a constant less than 10 [45].
Herein, the mean absolute error (MAE) is used to quantitatively evaluate the accuracy on the training set, that is,

$$\mathrm{{MAE}} = \frac{1}{n} \sum_{i=1}^{n} \left| y_{i} - \hat{y_{i}} \right|$$

where \(y_{i}\) and \(\hat{y_{i}}\) denote the real and predicted values of the dataset. Usually, there exists a positive correlation between the accuracy on the training set and on the test set [46], meaning that high (low) accuracy on the training set often leads to high (low) accuracy on the test set. Therefore, we vary the number of hidden layer nodes of the BPNN in the interval \([1, h_\mathrm{{max}}]\) and compute the training set error. We denote the error of the last calculation as \(\mathrm{{MAE}}_\mathrm{{new}}\), the corresponding number of hidden layer nodes as \(h_\mathrm{{new}}\), the minimum error in all previous calculations as \(\mathrm{{MAE}}_\mathrm{{previous}}\), and the corresponding number of hidden layer nodes as \(h_\mathrm{{previous}}\). The proposed optimal number of hidden layer nodes is given by:

$$h_\mathrm{{best}} = \begin{cases} h_\mathrm{{new}}, & \mathrm{{MAE}}_\mathrm{{new}} < \mathrm{{MAE}}_\mathrm{{previous}} \\ h_\mathrm{{previous}}, & \text{otherwise} \end{cases}$$
The proposed BPNN with adaptive number of hidden layer nodes is depicted in Fig. 8. The algorithm is given as follows:
-
1.
Initialization. Set the parameters, namely the numbers of nodes of the input and output layers, the training epochs and the learning rate;
-
2.
Calculate the maximum number of hidden layer nodes \(h_\mathrm{{max}}\) by means of the empirical formula (10), and set the number of hidden layer nodes between 1 and \(h_\mathrm{{max}}\);
-
3.
Train the BPNN with different number of hidden layer nodes and calculate the MAE corresponding to each number of hidden layer nodes;
-
4.
Choose the number of hidden layer nodes that corresponds to the minimum MAE as the best hidden layer size, \(h_{best}\);
-
5.
Use training samples to train the BPNN with the optimal number of hidden layer nodes, thus creating the best-fitting network;
-
6.
Use the network created to forecast the test sample.
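Steps 2–4 above can be sketched as a scan over candidate hidden layer sizes. Here `train_and_score` is a placeholder for training a BPNN with h hidden nodes and returning its training-set MAE, and the value 5 for the empirical constant is an arbitrary choice below 10:

```python
import math

def adaptive_hidden_size(train_and_score, r, s, rho=5):
    """Scan h = 1..h_max and keep the hidden layer size with minimum
    training-set MAE, where h_max = sqrt(r + s) + rho (rho < 10) and
    r, s are the numbers of input and output nodes."""
    h_max = int(math.sqrt(r + s)) + rho
    best_h, best_mae = None, math.inf
    for h in range(1, h_max + 1):
        mae = train_and_score(h)  # train a BPNN with h hidden nodes
        if mae < best_mae:
            best_h, best_mae = h, mae
    return best_h, best_mae

# Illustration with a mock scorer whose minimum sits at h = 4;
# a real run would train the network once per candidate h.
best_h, best_mae = adaptive_hidden_size(lambda h_: abs(h_ - 4) + 0.1, r=4, s=1)
```

The scan is cheap relative to final training, since each candidate network is small and only the training-set MAE is needed.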
4 Results
In this Section, the effectiveness and generalization capability of the proposed BPNN are analyzed and discussed.
The health parameters mentioned in Sect. 2 (or some subset) are used as input to the BPNNs tested, and the capacity of the batteries is taken as the output. The proposed adaptive (AD) hidden layer four-dimensional (4D) network (AD4DBPNN), with voltage, current, temperature and charging time as input parameters, is compared with an AD hidden layer three-dimensional (3D) network (AD3DBPNN), with voltage, current and temperature as inputs, and with a fixed hidden layer 4D network (4DBPNN), having the same inputs as the AD4DBPNN.
It should be noted that the method proposed in this paper aims to adaptively change the number of hidden layer units in order to obtain the optimal neural network structure every time the dataset changes. Indeed, the optimal network structure for two distinct training datasets, let us say A and B, is not necessarily the same. Therefore, if the network structure is set based on the training dataset A, then this network may not be suitable for using with dataset B. Thus, the network may be suboptimal and poor prediction results may be obtained.
In what follows, we use the first part of a single dataset for training and the latter part for testing. The number of hidden layer nodes of the 4DBPNN is set equal to the optimal number determined for the AD4DBPNN when trained with 70% of the data of each battery and is then kept fixed.
The AD4DBPNN, AD3DBPNN and 4DBPNN are implemented using the software MATLAB 2021. With the exception of the number of hidden layer nodes and the dimension of the input, all BPNNs use the same structure and features:
-
Maximum training epochs equal to 1000;
-
Learning rate set to 0.001;
-
Goal error of training (MAE) equal to 0.0001;
-
Range of the connection weights and thresholds in [− 1,1].
The root mean squared error (RMSE) [47] and MAPE [48] are used as evaluation functions:

$$\mathrm{{RMSE}} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( {\hat{y}}_{i} - y_{i} \right) ^{2}}, \qquad \mathrm{{MAPE}} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_{i} - {\hat{y}}_{i}}{y_{i}} \right|$$

where \({\hat{y}}_{i}\) and \({y}_{i}\) represent the estimated and true values of the SOH, respectively.
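Both evaluation functions translate directly into a short NumPy sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true = np.asarray(y_true, float); y_pred = np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (assumes no true value is 0)."""
    y_true = np.asarray(y_true, float); y_pred = np.asarray(y_pred, float)
    return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))
```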
First, 70% of the data of the four batteries are used for training, and the remaining 30% for testing the effectiveness of the BPNNs. Figure 9 depicts the training results of the AD4DBPNN, AD3DBPNN and 4DBPNN, from which we verify that the predictions are relatively close to the real values. When including the charging time as input (AD4DBPNN and 4DBPNN), the networks yield smaller errors than the AD3DBPNN. Figures 10 and 11 show the RMSEs and MAPEs, respectively. One can see that, after adding the charging time as an input, the RMSE of battery B0005 decreases from 1.87% to 0.41%. For the other three LIBs, the AD4DBPNN still has the lowest RMSE, or values close to those of the 4DBPNN. Moreover, with the charging time as input, the MAPE decreased by 0.4%, 0.52% and 0.35% for the batteries B0006, B0007 and B0018, respectively.
Then, we reduce the proportion of the training set from 70% to 60% of the data. Figure 12c and d highlights that the predictions of the AD4DBPNN have higher accuracy than those of the AD3DBPNN. This means that, without the charging time as input to the neural network, the error of the SOH prediction becomes larger. From Fig. 12a–d, one can see that the predictions of the AD4DBPNN are closer to the true values than those of the 4DBPNN, whose hidden layer remains unchanged. As shown in Fig. 13, the RMSE of the predictions of the AD4DBPNN is 30.3% lower than that of the 4DBPNN. As shown in Fig. 14, for the batteries B0006, B0007 and B0018, the MAPE of the AD4DBPNN decreased by 36.3%, 23.5% and 23.4%, respectively, compared with the values of the 4DBPNN. This shows that, for the same dataset, decreasing the proportion of training data while keeping the number of hidden layer units of the BPNN constant leads to a local optimum: the error is small on the training set (the first 60%) but large on the testing set (the last 40%), as can be seen in Fig. 12 when the capacity drops from 1.5 to 1.4 Ah.
Finally, the training set was further reduced to 50% of the data. Figure 15 shows the performance of the three BPNNs. It can be seen that the AD4DBPNN exhibits the best generalization and accuracy, while the results of the AD3DBPNN and 4DBPNN show different degrees of fluctuation. Figure 16 compares the RMSE of the predictions. We verify that the AD4DBPNN still maintains good accuracy, namely 1.45% for the worst performing battery (B0006). In contrast, both the AD3DBPNN and 4DBPNN have RMSEs exceeding 1.9%, showing poor predictive capability. Figure 17 shows that the proposed method achieves 0.65% MAPE on the worst performing dataset, while the other two algorithms achieve 0.95% and 1.18%, respectively. Table 1 summarizes the best number of hidden layer nodes of the BPNNs as the training set changes. It can be seen that the number of hidden layer nodes yielding the best results differs across training sets.
In general, the AD4DBPNN is more capable of predicting the SOH of LIBs when the training dataset changes. When the charging time is used as input for training, the prediction accuracy is effectively improved. In addition, when the proportion of training data drops to 50% of the data, the maximum MAPE of the AD4DBPNN is only 0.65%, which proves its effectiveness. From Figs. 9, 10, 11, 12, 13, 14, 15, 16 and 17, the AD4DBPNN still maintains high accuracy when the battery dataset used to train the neural network changes. When the dataset changes from B0007 to B0018, the total number of samples decreases from 168 to 132; even then, the AD4DBPNN still shows very high accuracy in predicting the SOH. The above shows that the AD4DBPNN has good generalization capability.
5 Conclusion
This paper proposed a three-layer, four-dimensional, adaptive hidden layer network (AD4DBPNN), with the voltage, current, temperature and charging time as input parameters, to predict the SOH of LIBs. The method determines the optimal number of hidden layer units based on the MAE of the prediction for a training dataset. The accuracy of the SOH prediction is improved by including the charging time as input to the BPNN, since it is highly correlated with the SOH of the LIB. Even when the ratio of training data is reduced to 50% of the full dataset, the proposed method shows good accuracy, with only 1.45% RMSE and 0.65% MAPE on the worst performing dataset. Although the proposed method uses an adaptive scheme to determine the optimal structure of the BPNN, it still relies on the standard gradient descent method. Future work will explore suitable algorithms to optimize the iterative updating of the network weights and thresholds.
Data availability
Data are publicly available at https://blog.csdn.net/weixin_38451800/article/details/98060473.
References
Zhao F, Liu F, Liu Z, Hao H (2019) The correlated impacts of fuel consumption improvements and vehicle electrification on vehicle greenhouse gas emissions in China. J Clean Prod 207:702–716
Ouyang Q, Wang Z, Liu K, Xu G, Li Y (2019) Optimal charging control for lithium-ion battery packs: a distributed average tracking approach. IEEE Trans Industr Inf 16(5):3430–3438
Zou C, Manzie C, Nešić D (2018) Model predictive control for lithium-ion battery optimal charging. IEEE/ASME Trans Mechatron 23(2):947–957
Shang Y, Liu K, Cui N, Wang N, Li K, Zhang C (2019) A compact resonant switched-capacitor heater for lithium-ion battery self-heating at low temperatures. IEEE Trans Power Electron 35(7):7134–7144
Xu Z, Guo Y, Saleh JH (2022) A physics-informed dynamic deep autoencoder for accurate state-of-health prediction of lithium-ion battery. Neural Comput Appl 34:1–21
Ossai CI, Egwutuoha IP (2021) Real-time state-of-health monitoring of lithium-ion battery with anomaly detection, Levenberg-Marquardt algorithm, and multiphase exponential regression model. Neural Comput Appl 33(4):1193–1206
Kara A (2021) A data-driven approach based on deep neural networks for lithium-ion battery prognostics. Neural Comput Appl 33(20):13525–13538
Farmann A, Waag W, Marongiu A, Sauer DU (2015) Critical review of on-board capacity estimation techniques for lithium-ion batteries in electric and hybrid electric vehicles. J Power Sources 281:114–130
Lu L, Han X, Li J, Hua J, Ouyang M (2013) A review on the key issues for lithium-ion battery management in electric vehicles. J Power Sources 226:272–288
Li Y, Liu K, Foley AM, Zülke A, Berecibar M, Nanini-Maury E, Van Mierlo J, Hoster HE (2019) Data-driven health estimation and lifetime prediction of lithium-ion batteries: a review. Renew Sustain Energy Rev 113:109254
Prasad GK, Rahn CD (2013) Model based identification of aging parameters in lithium ion batteries. J Power Sources 232:79–85
Zhou X, Stein JL, Ersal T (2017) Battery state of health monitoring by estimation of the number of cyclable Li-ions. Control Eng Pract 66:51–63
Jiang B, Zhu J, Wang X, Wei X, Shang W, Dai H (2022) A comparative study of different features extracted from electrochemical impedance spectroscopy in state of health estimation for lithium-ion batteries. Appl Energy 322:119502
Stroe D-I, Świerczyński M, Stan A-I, Teodorescu R, Andreasen SJ (2014) Accelerated lifetime testing methodology for lifetime estimation of lithium-ion batteries used in augmented wind power plants. IEEE Trans Ind Appl 50(6):4006–4017
Stroe D-I, Swierczynski M, Kær SK, Teodorescu R (2017) Degradation behavior of lithium-ion batteries during calendar ageing-The case of the internal resistance increase. IEEE Trans Ind Appl 54(1):517–525
Stroe D-I, Swierczynski M, Stroe A-I, Kaer SK, Teodorescu R (2017) Lithium-ion battery power degradation modelling by electrochemical impedance spectroscopy. IET Renew Power Gener 11(9):1136–1141
Tian H, Qin P, Li K, Zhao Z (2020) A review of the state of health for lithium-ion batteries: research status and suggestions. J Clean Prod 261:120813
Rossi C, Falcomer C, Biondani L, Pontara D (2022) Genetically optimized extended Kalman filter for state of health estimation based on Li-ion batteries parameters. Energies 15(9):3404
Wang C, Wang S, Zhou J, Qiao J, Yang X, Xie Y (2023) A novel back propagation neural network-dual extended Kalman filter method for state-of-charge and state-of-health co-estimation of lithium-ion batteries based on limited memory least square algorithm. J Energy Storage 59:106563
Liu S, Dong X, Yu X, Ren X, Zhang J, Zhu R (2022) A method for state of charge and state of health estimation of lithium-ion battery based on adaptive unscented Kalman filter. Energy Rep 8:426–436
Bartlett A, Marcicki J, Onori S, Rizzoni G, Yang XG, Miller T (2015) Electrochemical model-based state of charge and capacity estimation for a composite electrode lithium-ion battery. IEEE Trans Control Syst Technol 24(2):384–399
Zhang X, Wu R-C (2020) Modified projective synchronization of fractional-order chaotic systems with different dimensions. Acta Math Appl Sin Engl Ser 36(2):527–538
Driscoll L, de la Torre S, Gomez-Ruiz JA (2022) Feature-based lithium-ion battery state of health estimation with artificial neural networks. J Energy Storage 50:104584
Xia Z, Qahouq JAA (2019) Adaptive and fast state of health estimation method for lithium-ion batteries using online complex impedance and artificial neural network. In: 2019 IEEE applied power electronics conference and exposition (APEC), pp. 3361–336. IEEE
Manoharan A, Begam K, Aparow VR, Sooriamoorthy D (2022) Artificial neural networks, gradient boosting and support vector machines for electric vehicle battery state estimation: a review. J Energy Storage 55:105384
Xiong W, Mo Y, Yan C (2020) Online state-of-health estimation for second-use lithium-ion batteries based on weighted least squares support vector machine. IEEE Access 9:1870–1881
Lyu Z, Wang G, Gao R (2022) Synchronous state of health estimation and remaining useful lifetime prediction of Li-Ion battery through optimized relevance vector machine framework. Energy 251:123852
Widodo A, Shim M-C, Caesarendra W, Yang B-S (2011) Intelligent prognostics for battery health monitoring based on sample entropy. Expert Syst Appl 38(9):11763–11769
Liu X, Xie L, Wang Y, Zou J, Xiong J, Ying Z, Vasilakos AV (2020) Privacy and security issues in deep learning: a survey. IEEE Access 9:4566–4593
De Benedetti M, Leonardi F, Messina F, Santoro C, Vasilakos A (2018) Anomaly detection and predictive maintenance for photovoltaic systems. Neurocomputing 310:59–68
Khaleghi S, Karimi D, Beheshti SH, Hosen MS, Behi H, Berecibar M, Van Mierlo J (2021) Online health diagnosis of lithium-ion batteries based on nonlinear autoregressive neural network. Appl Energy 282:116159
Pan H, Lü Z, Wang H, Wei H, Chen L (2018) Novel battery state-of-health online estimation method using multiple health indicators and an extreme learning machine. Energy 160:466–477
Kim J-H, Woo SC, Park M-S, Kim KJ, Yim T, Kim J-S, Kim Y-J (2013) Capacity fading mechanism of LiFePO4-based lithium secondary batteries for stationary energy storage. J Power Sources 229:190–197
Fan Y, Xiao F, Li C, Yang G, Tang X (2020) A novel deep learning framework for state of health estimation of lithium-ion battery. Journal of Energy Storage 32:101741
Gao Z, Chin CS, Woo WL, Jia J, Da Toh W (2015) Genetic algorithm based back-propagation neural network approach for fault diagnosis in lithium-ion battery system. In: 2015 6th international conference on power electronics systems and applications (PESA), pp. 1–6. IEEE
Wang B, Qin F, Zhao X, Ni X, Xuan D (2020) Equalization of series connected lithium-ion batteries based on back propagation neural network and fuzzy logic control. Int J Energy Res 44(6):4812–4826
Wang Y, Liao X, Lin D, Yang X, Chen Y (2020) Fractional order BPNN for estimating state of charge of lithium-ion battery under temperature influence. IFAC-PapersOnLine 53(2):3707–3712
Ahmad T, Chen H (2019) Deep learning for multi-scale smart energy forecasting. Energy 175:98–112
Liu Z, Sun X, Wang S, Pan M, Zhang Y, Ji Z (2019) Midterm power load forecasting model based on kernel principal component analysis and back propagation neural network with particle swarm optimization. Big Data 7(2):130–138
Wang L, Zeng Y, Chen T (2015) Back propagation neural network with adaptive differential evolution algorithm for time series forecasting. Expert Syst Appl 42(2):855–863
Wang J, Wen Y, Gou Y, Ye Z, Chen H (2017) Fractional-order gradient descent learning of bp neural networks with Caputo derivative. Neural Netw 89:19–30
Bole B, Kulkarni C, Daigle M (2014) Randomized battery usage data set. NASA AMES Prognostics Data Repository 70
Rajesh R, Ravi V (2015) Supplier selection in resilient supply chains: a grey relational analysis approach. J Clean Prod 86:343–359
Zeng Y-R, Zeng Y, Choi B, Wang L (2017) Multifactor-influenced energy consumption forecasting using enhanced back-propagation neural network. Energy 127:381–396
Ye Z, Kim MK (2018) Predicting electricity consumption in a building using an optimized back-propagation and Levenberg-Marquardt back-propagation neural network: case study of a shopping mall in China. Sustain Cities Soc 42:176–183
Bao C, Pu Y, Zhang Y (2018) Fractional-order deep backpropagation neural network. Comput Intell Neurosci 2018:7361628
Naser M, Alavi AH (2021) Error metrics and performance fitness indicators for artificial intelligence and machine learning in engineering and sciences. Archit Struct Constr. https://doi.org/10.1007/s44150-021-00015-8
Botchkarev A (2019) A new typology design of performance metrics to measure errors in machine learning regression algorithms. Interdiscip J Inf Knowl Manag 14:045–076
Acknowledgements
This work was supported by the National Natural Science Funds of China (No. 61403115, No. 11971032).
Funding
Open access funding provided by FCT|FCCN (b-on).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Chen, L., Xu, C., Bao, X. et al. State-of-health estimation of Lithium-ion battery based on back-propagation neural network with adaptive hidden layer. Neural Comput & Applic 35, 14169–14182 (2023). https://doi.org/10.1007/s00521-023-08471-7
Received:
Accepted:
Published:
Issue Date:
DOI: https://doi.org/10.1007/s00521-023-08471-7