Abstract
A neural network is a black-box model: by itself it reveals little about the identified system. Opening up this box to support the model-building procedure is a challenging task. Once the encoded information has been extracted, however, it can be used for model reduction and visualization of the base model. The key idea is that a neural network can be transformed into a fuzzy rule base whose rules can be analyzed, visualized, interpreted, and even reduced.
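The transformation can be illustrated with a minimal sketch (not the authors' exact method, and all weights below are illustrative values only): in a one-hidden-layer network with sigmoid activations, each hidden neuron's sigmoid can be read as a fuzzy membership function over the projection w·x + b, and the output weights become rule consequents, so the network corresponds to a small fuzzy rule base.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


class SigmoidNetAsRules:
    """A one-hidden-layer sigmoid network viewed as a fuzzy rule base."""

    def __init__(self, W, b, v, v0):
        # W: (n_hidden, n_inputs) hidden weights, b: hidden biases,
        # v: output weights (rule consequents), v0: output bias.
        self.W, self.b, self.v, self.v0 = W, b, v, v0

    def predict(self, x):
        # Standard network forward pass.
        return self.v @ sigmoid(self.W @ x + self.b) + self.v0

    def rules(self):
        # Each hidden neuron yields one rule of the form:
        #   IF (w.x + b) is LARGE (sigmoid membership) THEN contribute v_i
        # so the network's behavior can be inspected rule by rule.
        return [
            f"IF {np.round(w, 2)}.x + {b:.2f} is LARGE THEN contribute {v:.2f}"
            for w, b, v in zip(self.W, self.b, self.v)
        ]


# Toy two-input network with hand-picked (hypothetical) weights.
net = SigmoidNetAsRules(
    W=np.array([[2.0, -1.0], [0.5, 1.5]]),
    b=np.array([-1.0, 0.5]),
    v=np.array([1.0, -2.0]),
    v0=0.1,
)
for rule in net.rules():
    print(rule)
```

Because the rule list and the network compute the same function, redundant or near-duplicate rules found by inspection correspond directly to hidden neurons that can be pruned, which is the basis for the model-reduction step mentioned above.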
Copyright information
© 2015 The Author(s)
Cite this chapter
Kenesei, T., Abonyi, J. (2015). Interpretability of Neural Networks. In: Interpretability of Computational Intelligence-Based Regression Models. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-21942-4_3
Print ISBN: 978-3-319-21941-7
Online ISBN: 978-3-319-21942-4