
Accurate Identification of Electrical Equipment from Power Load Profiles

  • Ziyi Wang
  • Chun Li
  • Lin Shang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11440)

Abstract

It is essential for the power industry to identify running electrical equipment automatically. In power monitoring, load profiles vary with the type of equipment. Starting from the fundamental features of load time series, we propose a method that accurately identifies electrical equipment from power load profiles. To improve the classification accuracy and generalization performance of a convolutional neural network (CNN), we combine the training process of generative adversarial networks (GANs) with the CNN and use the generated samples to enhance classification accuracy. In our approach, the CNN and the discriminator share the first convolution layer in order to extract richer features. We evaluate our method on the UCR data sets against 12 existing methods. Furthermore, we compare our model with LSTM, GRU, and CNN models on electrical equipment load data collected from industrial sites in a certain area. The results show that our model achieves higher equipment identification accuracy than the other deep learning models.
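The abstract's central architectural idea, a CNN classifier and a GAN discriminator that share their first convolution layer over 1-D load profiles, can be sketched as follows. This is an illustrative outline only, not the authors' released code: the PyTorch framework, layer widths, kernel sizes, and class names are all assumptions made for the sketch.

```python
# Minimal sketch (assumed PyTorch, assumed layer sizes): a classifier and a GAN
# discriminator for 1-D load profiles that share the first convolution layer,
# as described in the abstract.
import torch
import torch.nn as nn

class SharedFirstConv(nn.Module):
    """First convolution layer shared by the classifier and the discriminator."""
    def __init__(self, channels=32, kernel_size=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
        )

    def forward(self, x):          # x: (batch, 1, series_length)
        return self.conv(x)

class Classifier(nn.Module):
    """CNN head that predicts the equipment class from the shared features."""
    def __init__(self, shared, n_classes, channels=32):
        super().__init__()
        self.shared = shared
        self.head = nn.Sequential(
            nn.Conv1d(channels, channels, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, n_classes),
        )

    def forward(self, x):
        return self.head(self.shared(x))

class Discriminator(nn.Module):
    """GAN head that scores real vs. generated load profiles."""
    def __init__(self, shared, channels=32):
        super().__init__()
        self.shared = shared
        self.head = nn.Sequential(
            nn.Conv1d(channels, channels, 5, padding=2), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, 1),
        )

    def forward(self, x):
        return self.head(self.shared(x))

class Generator(nn.Module):
    """Maps a noise vector to a synthetic load profile of fixed length."""
    def __init__(self, noise_dim=64, series_length=96):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 128), nn.ReLU(),
            nn.Linear(128, series_length), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).unsqueeze(1)   # (batch, 1, series_length)

# Because the first conv layer is shared, both the supervised classification
# loss and the adversarial real/fake loss update its weights, which is how the
# generated samples can help the classifier learn richer features.
shared = SharedFirstConv()
clf = Classifier(shared, n_classes=10)
disc = Discriminator(shared)
gen = Generator()
```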

Keywords

Power load profiles · CNN · GAN · Time series

Notes

Acknowledgment

We would like to thank Keith for his help and suggestions in writing this paper. This work is supported by the National Natural Science Foundation of China (No. 61672276) and Natural Science Foundation of Jiangsu, China (BK20161406).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
