Online performance and proactive maintenance assessment of data driven prediction models

Journal of Intelligent Manufacturing

Abstract

Many data-driven decisions in manufacturing require accurate and reliable predictions. Because of the high complexity and variability of working conditions, a prediction model may deteriorate over time after deployment. Traditional performance evaluation indexes mainly assess a prediction model from a static perspective, which makes it difficult to meet the practical needs of model selection and proactive maintenance and leads to unstable online prediction performance. For regression-based prediction models, this paper designs online prediction performance evaluation indexes (OPPEI) that evaluate a prediction model in terms of its accuracy, degradation speed, and stability. For proactive maintenance, this paper proposes a model maintenance evaluation method based on Principal Component Analysis (PCA). We use PCA to transform the various performance indexes and extract the first principal component as a model maintenance evaluation index, which reduces the over-sensitivity or insensitivity of any single indicator. The effectiveness of the online prediction performance evaluation indexes and the PCA-based proactive maintenance evaluation method is verified by simulation and several real-world load forecasting experiments.
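To make the PCA step described above concrete, the following is a minimal sketch, not the authors' implementation, of extracting the first principal component from a set of online performance indexes and using it as a single maintenance evaluation signal. The function name, the standardization step, and the placeholder data are illustrative assumptions.

```python
import numpy as np

def pca_maintenance_index(index_matrix):
    """Combine several online performance indexes into one signal by
    projecting them onto their first principal component.

    index_matrix: shape (n_time_steps, n_indexes), e.g. columns for
    accuracy, degradation-speed, and stability indexes.
    """
    # Standardize each index so that no single scale dominates the PCA.
    X = (index_matrix - index_matrix.mean(axis=0)) / index_matrix.std(axis=0)

    # Eigen-decompose the covariance matrix of the standardized indexes.
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

    # The eigenvector of the largest eigenvalue is the first principal component.
    first_pc = eigvecs[:, -1]

    # Project every time step onto the first principal component.
    return X @ first_pc

# Hypothetical usage with placeholder data: 200 monitoring steps, 3 indexes.
rng = np.random.default_rng(0)
maintenance_scores = pca_maintenance_index(rng.normal(size=(200, 3)))
```

Monitoring the resulting scalar over time, rather than any single index, is what the paper argues reduces over-sensitive or insensitive behavior.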

Data availability

The datasets generated and analysed during the current study are not publicly available because they constitute an excerpt of research in progress, but they are available from the corresponding author on reasonable request.

Funding

The research leading to these results received funding from Nanjing University under Grant Agreement No 2022300018.

Author information

Corresponding author

Correspondence to Zhe Song.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Ethical approval

We confirm that our manuscript complies with the ethical rules of this journal. The submitted work is original and has not been published elsewhere in any form or language.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

See Tables 8–20 and Figs. 17–24.

Table 8 The training and testing scores of Exp 1-1-1 and Exp 1-2-1
Table 9 The training and testing scores of Exp 1-1-2 and Exp 1-2-2
Table 10 The training and testing scores of Exp 1-1-3 and Exp 1-2-3
Table 11 The training and testing scores of Exp 1-3-1
Table 12 The training and testing scores of Exp 1-3-2
Table 13 The training and testing scores of Exp 1-3-3
Table 14 The training and testing scores of Exp 2-1-1
Table 15 The training and testing scores of Exp 2-1-2
Table 16 The training and testing scores of Exp 2-1-3
Table 17 The training and testing scores of Exp 2-2-1
Table 18 The training and testing scores of Exp 2-2-2
Table 19 The training and testing scores of Exp 2-2-3
Table 20 The alarm time based on different indexes
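For context on how alarm times such as those in Table 20 and the figures below could be produced from a monitored index series, the sketch below shows one simple thresholding rule: raise an alarm the first time an index crosses a fixed limit. The threshold value and the exact triggering rule used in the paper are not reproduced here; this is an illustrative assumption only.

```python
import numpy as np

def first_alarm_time(index_values, timestamps, threshold):
    """Return the first timestamp at which the monitored index exceeds
    the threshold, or None if it never does within the window."""
    exceeded = np.asarray(index_values) > threshold
    if not exceeded.any():
        return None
    return timestamps[int(np.argmax(exceeded))]  # argmax gives the first True

# Hypothetical usage: an index drifting upward over a monitoring window.
times = [f"day-{i:02d}" for i in range(30)]
index = np.linspace(0.02, 0.20, 30)   # placeholder index values
print(first_alarm_time(index, times, threshold=0.15))
```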
Fig. 17

The daily load forecasting model’s training and deployed performance from 01-02-2016 to 30-06-2016 with the input of feature combination 2. The early deterioration time is 06-05-2016, and the obvious deterioration time is 17-06-2016

Fig. 18

The performance of the daily load forecasting model from 01-04-2016 to 30-06-2016 with the input of feature combination 2. The early deterioration time is 06-05-2016, and the obvious deterioration time is 17-06-2016. The alarm times based on different evaluation metrics are as follows: P (06-05-2016), APE (21-05-2016), AE (18-05-2016), WMAPE (11-05-2016), WMAE (08-05-2016), OMASE (15-04-2016), and MDS (27-05-2016)

Fig. 19

The daily load forecasting model’s training and deployed performance from 01-02-2016 to 30-06-2016 with the input of feature combination 3. The early deterioration time is 06-05-2016, and the obvious deterioration time is 17-06-2016

Fig. 20

The performance of the daily load forecasting model from 01-04-2016 to 30-06-2016 with the input of feature combination 3. The early deterioration time is 06-05-2016, and the obvious deterioration time is 17-06-2016. The alarm times based on different evaluation metrics are as follows: P (12-05-2016), APE (21-05-2016), AE (14-05-2016), WMAPE (12-05-2016), WMAE (08-05-2016), OMASE (15-04-2016), and MDS (27-05-2016)

Fig. 21

The hourly load forecasting model’s training and deployed performance from 12-04-2017 00:00 to 12-05-2017 23:00 with the input of feature combination 2. The early deterioration time is 29-04-2017 11:00, and the obvious deterioration time is 09-05-2017 16:00

Fig. 22

The performance of the hourly load forecasting model from 22-04-2017 00:00 to 12-05-2017 23:00 with the input of feature combination 2. The early deterioration time is 29-04-2017 11:00, and the obvious deterioration time is 09-05-2017 16:00. The alarm times based on different evaluation metrics are as follows: P (30-04-2017 06:00), APE (30-04-2017 23:00), AE (29-04-2017 11:00), WMAPE (30-04-2017 12:00), WMAE (30-04-2017 11:00), OMASE (23-04-2017 05:00), and MDS (03-05-2017 11:00)

Fig. 23

The hourly load forecasting model’s training and deployed performance from 12-04-2017 00:00 to 12-05-2017 23:00 with the input of feature combination 3. The early deterioration time is 29-04-2017 11:00, and the obvious deterioration time is 09-05-2017 16:00

Fig. 24

The performance of the hourly load forecasting model from 22-04-2017 00:00 to 12-05-2017 23:00 with the input of feature combination 3. The early deterioration time is 29-04-2017 11:00, and the obvious deterioration time is 09-05-2017 16:00. The alarm times based on different evaluation metrics are as follows: P (30-04-2017 16:00), APE (01-05-2017 12:00), AE (30-04-2017 16:00), WMAPE (30-04-2017 17:00), WMAE (30-04-2017 12:00), OMASE (24-04-2017 11:00), and MDS (05-05-2017 11:00)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shen, Y., Wang, T. & Song, Z. Online performance and proactive maintenance assessment of data driven prediction models. J Intell Manuf (2024). https://doi.org/10.1007/s10845-024-02357-8


  • DOI: https://doi.org/10.1007/s10845-024-02357-8
