Abstract
This chapter discusses the neural network approximation of noncontinuous functions. This is currently a problematic area for neural networks because network training is based on calculating partial derivatives of the error function (using the gradient descent algorithm), and calculating them for noncontinuous functions at the points where the function value suddenly jumps or drops leads to questionable results. You will dig deeper into this issue in this chapter. The chapter also includes a method I developed that solves this issue.
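To make the problem concrete, here is a minimal Java sketch (my own illustration, not code from the chapter) of a noncontinuous target function. The class name, the function, and the jump location at x = 1.0 are all hypothetical; the point is only that nearby inputs on opposite sides of the jump map to very different target values, which is what makes gradient-based approximation difficult there.

// A minimal sketch (illustration only, not code from the chapter): a
// noncontinuous target function whose value jumps at x = 1.0. Gradient-based
// training tends to smooth over such a jump, so the approximation error is
// largest near the discontinuity.
public class NoncontinuousFunctionExample {

    // Hypothetical target: f(x) = x^2 for x < 1, and x^2 + 50 otherwise.
    static double target(double x) {
        return (x < 1.0) ? x * x : x * x + 50.0;
    }

    public static void main(String[] args) {
        // Print samples on both sides of the jump at x = 1.0.
        for (int i = 8; i <= 12; i++) {
            double x = i / 10.0;
            System.out.printf("x = %.1f  f(x) = %.2f%n", x, target(x));
        }
        // Inputs 0.9 and 1.0 are close together but map to very different
        // targets (0.81 versus 51.00), illustrating the discontinuity.
    }
}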
Cite this chapter
Livshin, I. (2019). Approximating Noncontinuous Functions. In: Artificial Neural Networks with Java. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-4421-0_8