# DNN Based Approach

## Abstract

The recent success of deep neural networks (DNNs) in several application domains has driven the scientific community to adopt this paradigm for NILM as well. Kelly and Knottenbelt compared three alternative DNN architectures: the first employs a convolutional layer followed by long short-term memory (LSTM) layers to estimate the disaggregated signal from the aggregate one; the second is a denoising autoencoder, composed of convolutional and fully connected layers, trained to recover a clean appliance signal from the aggregate one; the third estimates the start time, the end time, and the mean power demand of each appliance activation. The algorithms were evaluated on the UK-DALE dataset and outperformed the combinatorial optimization and factorial hidden Markov model (FHMM) algorithms implemented in the Non-Intrusive Load Monitoring Toolkit (NILMTK).
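The denoising-autoencoder variant described above can be illustrated with a minimal forward-pass sketch: a 1-D convolutional layer over a window of aggregate power, a dense bottleneck, and a dense layer mapping back to the window length. This is an untrained toy with random weights and hypothetical layer sizes (`n_filters`, `code_dim`, window length `T` are illustrative choices, not the dimensions of the published network):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    # "same"-length 1-D convolution of one sequence with each filter kernel
    return np.stack([np.convolve(x, k, mode="same") for k in kernels])

def relu(z):
    return np.maximum(z, 0.0)

def dae_forward(agg_window, params):
    """Toy denoising-autoencoder forward pass:
    conv layer -> flatten -> dense bottleneck -> dense output window."""
    h = relu(conv1d(agg_window, params["conv_kernels"]))  # (n_filters, T)
    flat = h.reshape(-1)                                  # flatten feature maps
    code = relu(params["W_enc"] @ flat)                   # bottleneck code
    return params["W_dec"] @ code                         # estimated appliance window

T, n_filters, k, code_dim = 128, 8, 5, 32  # hypothetical sizes
params = {
    "conv_kernels": rng.normal(size=(n_filters, k)) * 0.1,
    "W_enc": rng.normal(size=(code_dim, n_filters * T)) * 0.01,
    "W_dec": rng.normal(size=(T, code_dim)) * 0.1,
}

agg = rng.random(T)                 # one window of aggregate power readings
appliance_est = dae_forward(agg, params)
print(appliance_est.shape)          # prints (128,): one estimated appliance window
```

In the actual approach one such network is trained per target appliance, with the aggregate window as the "noisy" input and the appliance's sub-metered window as the clean target; the sketch only shows how the window shapes flow through the layers.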

## Keywords

Deep neural network · Denoising autoencoder · Footprint · Active power · Reactive power

## References

- 15. G.W. Hart, Nonintrusive appliance load monitoring. Proc. IEEE **80**(12), 1870–1891 (1992)
- 21. J.Z. Kolter, T. Jaakkola, Approximate inference in additive factorial HMMs with application to energy disaggregation. J. Mach. Learn. Res. **22**, 1472–1482 (2012)
- 29. J.Z. Kolter, M.J. Johnson, REDD: a public data set for energy disaggregation research, in *Proceedings of the SustKDD Workshop on Data Mining Applications in Sustainability*, San Diego (2011), pp. 1–6
- 30. F.C.C. Garcia, C.M.C. Creayla, E.Q.B. Macabebe, Development of an intelligent system for smart home energy disaggregation using stacked denoising autoencoders, in *Proceedings of the International Symposium on Robotics and Intelligent Sensors (IRIS)*, Tokyo, 17–20 Dec. 2016, pp. 248–255
- 31. J. Kelly, W. Knottenbelt, Neural NILM: deep neural networks applied to energy disaggregation, in *Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments*, BuildSys '15 (ACM, New York, 2015), pp. 55–64
- 47. M. Berges, E. Goldman, H.S. Matthews, L. Soibelman, K. Anderson, User-centered nonintrusive electricity load monitoring for residential buildings. J. Comput. Civ. Eng. **25**(6), 471–480 (2011)
- 50. H.-H. Chang, P.W. Wiratha, N. Chen, A non-intrusive load monitoring system using an embedded system for applications to unbalanced residential distribution systems. Energy Procedia **61**, 146–150 (2014)
- 58. S. Makonin, F. Popowich, L. Bartram, B. Gill, I.V. Bajic, AMPds: a public dataset for load disaggregation and eco-feedback research, in *Proceedings of the IEEE Electrical Power and Energy Conference (EPEC)*, Halifax (2013)
- 60. S. Hochreiter, J. Schmidhuber, Long short-term memory. Neural Comput. **9**, 1735–1780 (1997)
- 61. J. Kelly, W. Knottenbelt, The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes. Scientific Data **2**, 150007 (2015)
- 73. J. Kelly, N. Batra, O. Parson, H. Dutta, W. Knottenbelt, A. Rogers, A. Singh, M. Srivastava, NILMTK v0.2: a non-intrusive load monitoring toolkit for large scale data sets: demo abstract, in *Proceedings of the 1st ACM Conference on Embedded Systems for Energy-Efficient Buildings*, BuildSys '14 (ACM, New York, 2014), pp. 182–183
- 97. P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P.-A. Manzagol, Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. **11**(3), 3371–3408 (2010)
- 98. S. Araki, T. Hayashi, M. Delcroix, M. Fujimoto, K. Takeda, T. Nakatani, Exploring multi-channel features for denoising-autoencoder-based speech enhancement, in *Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing*, Brisbane, 19–24 Apr. 2015, pp. 116–120
- 99. X. Lu, Y. Tsao, S. Matsuda, C. Hori, Speech enhancement based on deep denoising autoencoder, in *Proceedings of Interspeech*, Lyon, 25–29 Aug. 2013, pp. 436–440
- 100. V. Nair, G.E. Hinton, Rectified linear units improve restricted Boltzmann machines, in *Proceedings of the 27th International Conference on Machine Learning (ICML)*, Haifa, 21–24 Jun. 2010, pp. 807–814
- 101. A. Gabaldon, R. Molina, A. Marín-Parra, S. Valero-Verdu, C. Alvarez, Residential end-uses disaggregation and demand response evaluation using integral transforms. J. Mod. Power Syst. Clean Energy **5**(1), 91–104 (2017)
- 102. M. Zhong, N. Goddard, C. Sutton, Latent Bayesian melding for integrating individual and population models, in *Proceedings of Advances in Neural Information Processing Systems*, Montréal, 7–12 Dec. 2015, pp. 3618–3626
- 103. I. Sutskever, J. Martens, G. Dahl, G. Hinton, On the importance of initialization and momentum in deep learning, in *Proceedings of the 30th International Conference on Machine Learning (ICML)*, Atlanta, 16–21 Jun. 2013, pp. 2176–2184
- 104. Theano Development Team, Theano: a Python framework for fast computation of mathematical expressions, *arXiv e-prints*, vol. abs/1605.02688, May 2016
- 105. F. Liebgott, B. Yang, Active learning with cross-dataset validation in event-based non-intrusive load monitoring, in *2017 25th European Signal Processing Conference (EUSIPCO)* (IEEE, Piscataway, 2017), pp. 296–300
- 106. J. Alcalá, J. Ureña, Á. Hernández, D. Gualda, Event-based energy disaggregation algorithm for activity monitoring from a single-point sensor. IEEE Trans. Instrum. Meas. **66**(10), 2615–2626 (2017)
- 107. M. Azaza, F. Wallin, Finite state machine household's appliances models for non-intrusive energy estimation. Energy Procedia **105**, 2157–2162 (2017)