
Interpretability of Neural Networks

  • Tamás Kenesei
  • János Abonyi
Chapter
Part of the SpringerBriefs in Computer Science book series (BRIEFSCOMPUTER)

Abstract

A neural network is a black-box model: it reveals little about the system it identifies. Opening up this box to support the model-building procedure is a challenging task; however, once such information has been extracted, the base model can be reduced and visualized. The key idea is that a neural network can be transformed into a fuzzy rule base whose rules can be analyzed, visualized, interpreted, and even reduced.
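
To make the transformation concrete, here is a minimal sketch (not the chapter's exact algorithm) of one common reading, assuming a single-hidden-layer network with sigmoid hidden units: each hidden neuron is interpreted as one fuzzy rule, its activation plays the role of the rule's firing strength, and the output weights act as rule consequents. All names below (W1, b1, W2, b2, describe_rules) are illustrative assumptions, not the authors' notation.

```python
import numpy as np

# Illustrative weights for a tiny trained network (random stand-ins here).
rng = np.random.default_rng(0)
n_inputs, n_hidden = 2, 3

W1 = rng.normal(size=(n_hidden, n_inputs))  # input-to-hidden weights (rule antecedents)
b1 = rng.normal(size=n_hidden)              # hidden biases
W2 = rng.normal(size=n_hidden)              # hidden-to-output weights (rule consequents)
b2 = 0.1                                    # output bias

def firing_strengths(x):
    """Sigmoid hidden activations, read as rule firing strengths in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))

def network_output(x):
    """The network output is a firing-strength-weighted sum of consequents."""
    return W2 @ firing_strengths(x) + b2

def describe_rules():
    """Print each hidden neuron as an approximate linguistic rule (hypothetical labels)."""
    for j in range(n_hidden):
        antecedents = " AND ".join(
            f"x{i} is {'HIGH' if W1[j, i] > 0 else 'LOW'} (weight {W1[j, i]:+.2f})"
            for i in range(n_inputs)
        )
        print(f"Rule {j}: IF {antecedents} THEN y += {W2[j]:+.2f} * firing strength")

x = np.array([0.5, -1.0])
print("firing strengths:", firing_strengths(x))
print("output:", network_output(x))
describe_rules()
```

Under this reading, model reduction can be sketched as pruning rules whose consequent weights are negligible in magnitude, and visualization as plotting each rule's firing strength over the input domain; the chapter's actual procedure may differ in detail.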

Keywords

Hidden Layer · Fuzzy Rule · Hidden Neuron · Firing Strength · Fuzzy Logic Operator

Copyright information

© The Author(s) 2015

Authors and Affiliations

  1. Department of Process Engineering, University of Pannonia, Veszprém, Hungary
