Abstract
One aspect of legal reasoning is the act of working out another party’s mental states (their beliefs, intentions, etc.) and assessing how their reasoning proceeds under various conditions. This process of “mindreading” would ideally be achievable by means of a strict system of rules allowing us, in a neat and logical way, to determine what is going on, or will go on, in another party’s mind. We argue, however, that commonsense reasoning, and mindreading in particular, are not adequately described in this way: they involve uncertainty, defeasibility, vagueness, and even inconsistency, features uncharacteristic of such a formal system. We contend that mindreading is achieved, at least in part, through “mental simulation,” which itself involves nested levels of uncertainty and defeasibility. In this way, one party temporarily puts himself or herself in the other party’s shoes, without relying wholly on a neat and explicit system of rules.
© 2002 Physica-Verlag Heidelberg
Cite this chapter
Barnden, J.A., Peterson, D.M. (2002). Artificial Intelligence, Mindreading, and Reasoning in Law. In: MacCrimmon, M., Tillers, P. (eds) The Dynamics of Judicial Proof. Studies in Fuzziness and Soft Computing, vol 94. Physica, Heidelberg. https://doi.org/10.1007/978-3-7908-1792-8_2
Print ISBN: 978-3-662-00323-7
Online ISBN: 978-3-7908-1792-8