Journal of Intelligent Information Systems, Volume 32, Issue 3, pp 267–295

Gaining insight through case-based explanation

Authors

  • Conor Nugent
    • 4C, University College Cork
  • Dónal Doyle
    • Idiro Technologies, Dublin
  • P. Cunningham
    • Computer Science, University College Dublin
Article

DOI: 10.1007/s10844-008-0069-0

Cite this article as:
Nugent, C., Doyle, D. & Cunningham, P. J Intell Inf Syst (2009) 32: 267. doi:10.1007/s10844-008-0069-0

Abstract

Traditional explanation strategies in machine learning have been dominated by rule- and decision-tree-based approaches. Case-based explanations represent an alternative approach which has inherent advantages in terms of transparency and user acceptability. Case-based explanations rest on a strategy of presenting similar past examples in support of, and as justification for, the recommendations made. The traditional approach to such explanations, of simply supplying the nearest neighbour as an explanation, has been found to have shortcomings. Cases should instead be selected based on their utility in forming useful explanations. However, the relevance of the explanation case may not be clear to the end user, as it is retrieved using domain knowledge which they themselves may not have. In this paper the focus is on a knowledge-light approach to case-based explanation that works by selecting cases based on explanation utility and by offering insights into the effects of feature-value differences. We examine two such knowledge-light frameworks for case-based explanation: explanation-oriented retrieval (EOR), a strategy which explicitly models explanation utility, and the knowledge-light explanation framework (KLEF), which uses local logistic regression to support case-based explanation.
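The KLEF idea summarised above can be sketched in a few lines: fit a logistic regression localised around the query (by kernel-weighting the training cases), then use the magnitudes of the local coefficients to rank candidate explanation cases that share the predicted label. This NumPy-only sketch is illustrative only; the function names, the Gaussian locality kernel, and the simple gradient-descent fit are assumptions, not the implementation described in the paper.

```python
import numpy as np

def local_logistic_weights(X, y, query, bandwidth=1.0, lr=0.1, steps=500):
    """Fit a logistic regression whose loss is kernel-weighted toward
    cases near the query (the 'local' part of local logistic regression).
    Returns the coefficient vector w and intercept b."""
    # Gaussian kernel weights: closer cases count more in the fit.
    d = np.linalg.norm(X - query, axis=1)
    k = np.exp(-(d / bandwidth) ** 2)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = k * (p - y)                          # kernel-weighted log-loss gradient
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def explanation_case(X, y, query, pred_label, w):
    """Pick an explanation case: among cases with the predicted label,
    choose the one closest to the query along the locally important
    features (importance taken as |coefficient|)."""
    importance = np.abs(w)
    mask = y == pred_label
    scores = np.abs(X[mask] - query) @ importance
    return np.flatnonzero(mask)[np.argmin(scores)]

# Toy usage: two well-separated clusters, query near the positive class.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.],
              [2., 2.], [2., 3.], [3., 2.], [3., 3.]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
query = np.array([2.5, 2.5])
w, b = local_logistic_weights(X, y, query)
idx = explanation_case(X, y, query, pred_label=1, w=w)
```

The retrieved case at `idx` is a past positive example close to the query on the features the local model deems influential, which is the kind of case a user can inspect directly as a justification.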

Keyword

Case-based explanation

Copyright information

© Springer Science+Business Media, LLC 2008