Abstract
Past works on reasoning about inconsistency in AI have suffered from several flaws: (i) they apply to one logic at a time, and are often reinvented for one logic after another; (ii) they assume that the AI researcher will legislate how applications resolve inconsistency, even though the AI researcher may know nothing about a specific application, which may be built at a completely different time and place than the AI researcher's work. In the real world, users must live with the consequences of their decisions and often want to decide for themselves what to do with their data (including which data to consider, and which to ignore, when inconsistencies arise). An AI system for reasoning about inconsistent information must support users in their needs rather than impose choices on them. (iii) Most existing frameworks use some form of maximal consistent subsets.
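As a minimal illustration of the maximal-consistent-subset approach the abstract refers to (in the spirit of Rescher and Manor's classic treatment), the following sketch enumerates the maximal consistent subsets of a tiny propositional knowledge base. It is a toy: formulas are restricted to literals such as `p` and `~p`, and consistency simply means no complementary pair is present; the function names are illustrative, not from the book.

```python
from itertools import combinations

def consistent(literals):
    """A set of propositional literals is consistent iff it contains
    no complementary pair (e.g. both 'p' and '~p')."""
    return not any(('~' + l) in literals
                   for l in literals if not l.startswith('~'))

def maximal_consistent_subsets(kb):
    """Enumerate subsets of kb from largest to smallest, keeping each
    consistent subset not already contained in a kept one."""
    kb = list(kb)
    maximal = []
    for size in range(len(kb), 0, -1):
        for subset in combinations(kb, size):
            s = set(subset)
            if consistent(s) and not any(s <= m for m in maximal):
                maximal.append(s)
    return maximal

# An inconsistent knowledge base: {p, ~p, q}
print(maximal_consistent_subsets({'p', '~p', 'q'}))
# Two maximal consistent subsets: {p, q} and {~p, q}
```

Note that different maximal consistent subsets disagree (here, on whether `p` holds), which is precisely why a framework must decide, or let the user decide, how to draw conclusions from them.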
References
Rescher N, Manor R (1970) On inference from inconsistent premises. Theory Decis 1:179–219
Author information
Authors and Affiliations
Rights and permissions
Copyright information
© 2013 The Author(s)
Cite this chapter
Martinez, M.V., Molinaro, C., Subrahmanian, V.S., Amgoud, L. (2013). Conclusions. In: A General Framework for Reasoning On Inconsistency. SpringerBriefs in Computer Science. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-6750-2_6
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4614-6749-6
Online ISBN: 978-1-4614-6750-2
eBook Packages: Computer Science, Computer Science (R0)