
Mathematical methods for maintenance and operation cost prediction based on transfer learning in State Grid

Applied Mathematics - A Journal of Chinese Universities

Abstract

The electric power enterprise is an important basic energy industry for national development and a primary foundation of the national economy. With the continuous expansion of State Grid, increasingly complex operating conditions, and the growing scope and frequency of data collection, making reasonable use of electrical big data to improve utilization and provide a theoretical basis for the reliable operation of State Grid has become a new research hotspot. Because electrical data are characterized by large volume, diverse types, low value density, and high processing-speed requirements, it is challenging to mine and analyze them in depth, extract valuable information efficiently, and apply the results to practical problems. In view of these characteristics, this paper uses artificial intelligence methods such as time-series modeling and support vector regression to establish, through transfer learning, a data mining network model for standard cost prediction. Experimental results show that the proposed model achieves better prediction accuracy on a small-sample data set, which verifies the feasibility of the deep transfer model. Compared with activity-based costing and a traditional prediction method, the mean absolute error of the proposed method is reduced by 10%, demonstrating its effectiveness and superiority.
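The abstract only outlines the approach (time-series features, support vector regression, and transfer from a data-rich source domain to a small target sample, evaluated by mean absolute error). As a rough illustrative sketch of that kind of pipeline, and not the authors' implementation, the Python snippet below pre-trains an SVR on synthetic source-domain cost data, transfers it to a small target sample by feature augmentation, and reports mean absolute error; the data, the feature-augmentation transfer scheme, and all parameter choices are assumptions made here for illustration.

```python
# Hypothetical sketch of transfer-style SVR for small-sample cost prediction.
# Data, features, and the transfer scheme are illustrative assumptions,
# not the paper's actual pipeline.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Source domain: plentiful historical maintenance-cost records (synthetic here).
X_src = rng.normal(size=(500, 4))
y_src = X_src @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=500)

# Target domain: small sample with a related but shifted cost structure.
X_tgt = rng.normal(size=(40, 4))
y_tgt = X_tgt @ np.array([2.2, -0.8, 0.6, 2.7]) + rng.normal(scale=0.1, size=40)

# Step 1: fit a base SVR on the abundant source-domain data.
base = SVR(kernel="rbf", C=10.0).fit(X_src, y_src)

# Step 2: transfer by augmenting target features with the base model's
# prediction, then fit a second SVR on the small target sample.
X_tgt_aug = np.column_stack([X_tgt, base.predict(X_tgt)])
transfer = SVR(kernel="rbf", C=10.0).fit(X_tgt_aug[:30], y_tgt[:30])

# Evaluate on held-out target points with mean absolute error,
# the metric quoted in the abstract.
mae = mean_absolute_error(y_tgt[30:], transfer.predict(X_tgt_aug[30:]))
print(f"target-domain MAE: {mae:.3f}")
```

In the paper the source domain would correspond to historical State Grid cost records and the target domain to the small standard-cost sample; the specific transfer mechanism used by the authors may differ from the feature-augmentation scheme sketched above.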



Author information


Corresponding author

Correspondence to Dong-fa Wang.

Additional information

Supported by the science and technology program of State Grid Zhejiang Electric Power Co., Ltd., "Research and application project of standard cost activity based on machine learning" (5211JH1900LZ).


About this article


Cite this article

Guo, Yp., Zheng, Y., Wang, Df. et al. Mathematical methods for maintenance and operation cost prediction based on transfer learning in State Grid. Appl. Math. J. Chin. Univ. 37, 598–614 (2022). https://doi.org/10.1007/s11766-022-4319-7

