Revisit Lmser from a Deep Learning Perspective

  • Wenjin Huang
  • Shikui Tu
  • Lei Xu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11936)

Abstract

Proposed in 1991, the Least Mean Square Error Reconstruction self-organizing network, Lmser for short, extended the traditional auto-encoder (AE) by folding the architecture about the central coding layer. This folding yields Duality in Connection Weights (DCW) and Duality in Paired Neurons (DPN), as well as jointly supervised and unsupervised learning, called Duality in Supervision Paradigm (DSP). However, its advantages were demonstrated only in a one-hidden-layer implementation, owing to the limited computing resources and data available at that time. In this paper, we revisit Lmser from the perspective of deep learning, develop an Lmser network built on multiple fully-connected layers, and confirm several Lmser functions through experiments on image recognition, reconstruction, associative recall, and related tasks. The experiments demonstrate that Lmser indeed works as indicated in the original paper and shows promising performance across these applications.
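The folding described above can be made concrete in a few lines: under DCW, the top-down (decoding) pass reuses the transpose of each bottom-up (encoding) weight matrix rather than learning a separate decoder. The sketch below illustrates this tying in a multi-layer fully-connected setting; the layer sizes, sigmoid nonlinearity, and omission of DPN's recurrent feedback between paired neurons are simplifying assumptions for illustration, not details from the paper.

```python
import numpy as np

# Minimal sketch of a multi-layer Lmser forward pass.
# DCW (Duality in Connection Weights): the decoder reuses W_l.T for each
# encoder weight W_l, so encoder and decoder share one set of parameters.

rng = np.random.default_rng(0)
sizes = [784, 256, 64]  # input -> hidden -> code (assumed sizes)
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(sizes[:-1], sizes[1:])]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lmser_forward(x, weights):
    """Encode bottom-up with W_l, decode top-down with W_l.T (tied weights)."""
    h = x
    for W in weights:            # bottom-up encoding pass
        h = sigmoid(h @ W)
    code = h
    for W in reversed(weights):  # top-down decoding pass with transposed weights
        h = sigmoid(h @ W.T)
    return code, h               # central code and reconstruction

x = rng.random((1, sizes[0]))
code, recon = lmser_forward(x, weights)
print(code.shape, recon.shape)   # (1, 64) (1, 784)
```

Because the same matrices serve both directions, training the reconstruction loss updates encoder and decoder jointly, which is the mechanism the DCW feature refers to.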

Keywords

Autoencoder · Lmser · Bidirectional deep learning

Acknowledgement

This work was supported by the Zhi-Yuan Chair Professorship Start-up Grant (WF220103010) and the Startup Fund for Youngman Research (WF220403029) from Shanghai Jiao Tong University.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science and Engineering, Centre for Cognitive Machines and Computational Health (CMaCH), SEIEE School, Shanghai Jiao Tong University, Shanghai, China