On a Fitting of a Heaviside Function by Deep ReLU Neural Networks

  • Katsuyuki Hagiwara
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11301)

Abstract

A recent research interest in deep neural networks is to understand why deep networks are preferred over shallow networks. In this article, we considered an advantage of a deep structure in realizing a Heaviside function through training. This is significant not only for simple classification problems but also as a basis for constructing general non-smooth functions. A Heaviside function can be well approximated by a difference of ReLUs if extremely large weight values can be set; however, such values are not easy to attain in training. We showed that a Heaviside function can be well represented without large weight values if we employ a deep structure. We also showed that the update terms of the input-side weights necessarily become large when a network is trained to realize a Heaviside function; an apparent acceleration of training is therefore brought about by setting a small learning rate. As a result, by employing a deep structure, a good fit to a Heaviside function can be obtained within a reasonable training time under a moderately small learning rate. Our results suggest that a deep structure is effective in practical training that requires a discontinuous output.
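The two constructions behind these claims can be sketched numerically. The snippet below is an illustrative sketch, not the paper's own code: `shallow_step` writes the step as a scaled difference of two ReLUs, whose sharpness requires a single extremely large weight a, while `deep_step` composes moderate-weight ReLU ramps so that sharpness grows exponentially with depth. The function names and the specific composition are assumptions, in the spirit of known deep-ReLU approximation results.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def shallow_step(x, a):
    # H(x) ~ a * (ReLU(x) - ReLU(x - 1/a)): exact outside the ramp [0, 1/a],
    # so a sharp step forces the single weight a to be extremely large.
    return a * (relu(x) - relu(x - 1.0 / a))

def deep_step(x, k):
    # Start from a width-1 ramp, then apply g(t) = ReLU(2t) - ReLU(2t - 1),
    # i.e. t -> min(2t, 1) on [0, 1]; k compositions shrink the ramp to
    # width 2**-k while every weight stays bounded by 2 (illustrative
    # construction, not taken from the paper).
    t = relu(x) - relu(x - 1.0)
    for _ in range(k):
        t = relu(2.0 * t) - relu(2.0 * t - 1.0)
    return t

x = np.array([-0.5, 0.0, 1e-5, 0.5])
print(shallow_step(x, a=1e6))  # ~ [0, 0, 1, 1] apart from a 1e-6-wide ramp
print(deep_step(x, k=20))      # same sharpness, every weight magnitude <= 2
```

With k = 20 compositions, the deep variant reaches roughly the same ramp width (2^-20 ≈ 10^-6) as the shallow one with a = 10^6, yet every weight magnitude stays at most 2, which is the kind of depth advantage the abstract describes.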

Keywords

Deep neural networks · ReLU · Heaviside function


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Faculty of Education, Mie University, Tsu, Japan
