Abstract
When it comes to interpretability, machine learning (ML) models, particularly deep learning (DL) models, are frequently regarded as black boxes because of their complexity and lack of transparency. It is fairly simple to train a network for a specific task: a DL model learns to classify objects, recognize text, or generate digital images. In doing so, it efficiently encapsulates feature learning in the network's hidden layers, but explainability decreases accordingly.
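As a minimal illustration of this point (a sketch, not taken from the chapter), the PyTorch snippet below captures hidden-layer activations with forward hooks; the toy architecture and the names `save_activation`, `hidden1`, and `hidden2` are assumptions chosen for demonstration. The recorded feature vectors are the knowledge the network encodes, yet on their own they are opaque, which is the interpretability gap the chapter discusses.

```python
import torch
import torch.nn as nn

# A small feed-forward classifier; its hidden layers learn feature
# representations that are not directly human-interpretable.
model = nn.Sequential(
    nn.Linear(784, 128),  # hidden layer 1
    nn.ReLU(),
    nn.Linear(128, 64),   # hidden layer 2
    nn.ReLU(),
    nn.Linear(64, 10),    # output logits for 10 classes
)

activations = {}

def save_activation(name):
    # Forward hook: record a layer's output during the forward pass.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach hooks to the hidden activations so we can inspect what they encode.
model[1].register_forward_hook(save_activation("hidden1"))
model[3].register_forward_hook(save_activation("hidden2"))

x = torch.randn(1, 784)  # dummy input (e.g., a flattened 28x28 image)
logits = model(x)

for name, act in activations.items():
    # The encoded features: available, but opaque without further analysis.
    print(name, act.shape)
```

Printing the activations confirms that the features are readily extracted, but assigning them human-meaningful semantics requires the interpretation techniques covered later.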
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Somani, A., Horsch, A., Prasad, D.K. (2023). Knowledge Encoding and Interpretation. In: Interpretability in Deep Learning. Springer, Cham. https://doi.org/10.1007/978-3-031-20639-9_3
DOI: https://doi.org/10.1007/978-3-031-20639-9_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20638-2
Online ISBN: 978-3-031-20639-9
eBook Packages: Computer Science (R0)