Analysing the Trade-Off Between Comprehensibility and Accuracy in Mimetic Models
One of the main drawbacks of many machine learning techniques, such as neural networks or ensemble methods, is the incomprehensibility of the model produced. One possible solution to this problem is to treat the learned model as an oracle and generate a new model that “mimics” the semantics of the oracle by expressing it in the form of rules. In this paper we experimentally analyse the influence of pruning, the size of the invented dataset and the confidence of the examples, with the aim of obtaining shorter rule sets without sacrificing too much of the model's accuracy. The experiments show that the analysed factors affect the mimetic model in different ways. We also show that, by combining these factors appropriately, the quality of the mimetic model improves significantly with respect to previous reports on the mimetic method.
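The mimetic method described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a random forest as the incomprehensible oracle, a uniform random sample as the invented dataset, and a depth-limited decision tree (its `max_depth` standing in for pruning) as the comprehensible mimetic model; all dataset sizes and parameters are placeholders.

```python
# Hypothetical sketch of the mimetic method: an ensemble serves as an oracle
# that labels a randomly generated ("invented") dataset, and a comprehensible
# model (a depth-limited decision tree) is trained to mimic its semantics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the original training data.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# 1. Train the incomprehensible oracle model.
oracle = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Invent a random dataset covering the input space
#    (its size is one of the factors analysed in the paper).
X_invented = rng.uniform(X.min(axis=0), X.max(axis=0), size=(2000, X.shape[1]))

# 3. Label the invented examples with the oracle.
y_invented = oracle.predict(X_invented)

# 4. Train the mimetic model on original + invented data;
#    max_depth plays the role of pruning here.
mimetic = DecisionTreeClassifier(max_depth=5, random_state=0)
mimetic.fit(np.vstack([X, X_invented]), np.concatenate([y, y_invented]))

# Fidelity: agreement between the mimetic model and the oracle.
fidelity = (mimetic.predict(X_invented) == y_invented).mean()
print(f"fidelity on invented data: {fidelity:.2f}")
```

The trade-off the paper studies appears directly here: a smaller `max_depth` (stronger pruning) yields shorter, more comprehensible rule sets but can lower fidelity to the oracle, while a larger invented dataset tends to raise it.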
Keywords: Ensemble Method, Repetition Factor, Confidence Threshold, Random Dataset, Labelled Dataset