Interpretability of Neural Networks
A neural network is a black-box model: it reveals little about the structure of the identified system. Opening up this box to support model-building is a challenging task; however, once the relevant information has been extracted, the base model can be reduced and visualized. The key idea is that a neural network can be transformed into an equivalent fuzzy rule base, whose rules can then be analyzed, visualized, interpreted, and even reduced.
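As a concrete illustration of this idea, the sketch below exploits the well-known functional equivalence between a Gaussian radial basis function (RBF) network and a zero-order Takagi-Sugeno fuzzy system: each hidden unit can be read out as one fuzzy rule. All parameter values here are hypothetical, and weighted-sum aggregation is assumed (rather than the more common normalized weighted average) so that the extracted rule base reproduces the network output exactly.

```python
import numpy as np

# Hypothetical parameters of a trained RBF network with three hidden units.
# Each hidden unit will be interpreted as one fuzzy rule.
centers = np.array([[-1.0], [0.0], [1.0]])   # Gaussian centers (one per unit)
widths  = np.array([0.5, 0.5, 0.5])          # Gaussian widths
weights = np.array([0.2, 1.0, -0.5])         # output weights (rule consequents)

def rbf_network(x):
    """Forward pass of the RBF network: the black-box view."""
    act = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * widths ** 2))
    return act @ weights

def extract_rules():
    """Read each hidden unit out as a human-readable fuzzy rule."""
    return [
        f"IF x is Gaussian(center={c[0]:+.2f}, width={s:.2f}) THEN y = {w:+.2f}"
        for c, s, w in zip(centers, widths, weights)
    ]

def fuzzy_inference(x):
    """Zero-order Takagi-Sugeno inference over the extracted rules.

    The firing strength of each rule is its Gaussian membership degree,
    and the consequents are aggregated by a weighted sum, which matches
    the RBF network output exactly.
    """
    firing = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * widths ** 2))
    return np.sum(firing * weights)

x = np.array([0.3])
print("network output :", rbf_network(x))     # black-box prediction
print("rule-base output:", fuzzy_inference(x))  # identical, but interpretable
for rule in extract_rules():
    print(rule)
```

Because the two representations are numerically identical, any analysis performed on the rule base (e.g., merging rules with similar antecedents, or pruning rules with negligible consequents) is a faithful reduction of the original network, in the spirit of the transformation described above.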