
Part of the book series: Law, Governance and Technology Series (LGTS, volume 5)

Abstract

This chapter is concerned with models of reasoning about the evidence. We consider, in turn, “metre models” of the shifts of opinion in the adjudicator’s mind as items of information come in, and a distributed belief revision model for such dynamics. We discuss the weight given to jury research in North America. Then we consider some seminal computer tools modelling the reasoning about a charge and explanations: Thagard’s ECHO (and its relation to Josephson’s PEIRCE-IGTT), and Nissan’s ALIBI, a planner seeking exoneration and producing explanations that minimise liability. A quick survey of Bayesian approaches in law is followed by a discussion of the controversy concerning applications of Bayesianism to modelling juridical decision-making. We quickly sample some probabilistic applications (Poole’s Independent Choice Logic and reasoning about accounts of a crime, then dynamic uncertain inference concerning criminal cases, and Snow and Belis’s recursive multidimensional scoring). Finally, we consider Shimony and Nissan’s application of the kappa calculus to grading evidential strength, and then argue for trying to model relative plausibility.


Notes

1.

    Coherence is crucial for how cogent legal narratives are. But coherence is also a crucial factor in argumentation, and coherence in this other sense has been formally modelled in Henry Prakken’s (2005) “Coherence and flexibility in dialogue games for argumentation”.

2.

Reasoning about that kind of situation is also modelled in a project described in Section 2.5.2 below.

3.

In distributed AI, which deals with societies of artificial agents, the topic of an agent’s reputation has tended to be under-researched, but it ought to be an important topic for distributed AI just as it is for social psychology. A book on the subject is Conte and Paolucci (2002). The same considerations apply to trust among agents, likewise an important subject for AI modelling, just as it is for social psychology; it is the subject of a book by Castelfranchi and Falcone (2010). See a survey in Sabater and Sierra (2005).

4.

In psychological research about detecting deception (e.g., Vrij, 2000; Porter & Yuille, 1995, 1996; Colwell, Hiscock-Anisman, Memon, Woods, & Yaeger, 2006) – a subject to which we are going to return (see fn 81 below) – “The Statement Validity Analysis (SVA), a memory-based approach, is the most widely used system of credibility assessment to date” (Colwell et al., 2006). It is traced back to Undeutsch’s (1982) statement reality analysis. The latter became known as the Undeutsch Hypothesis, “which posits that memory for an actual event will differ from fabrication in structure, content, and quality; and that these systematic differences will be measurable. Recall of a genuine memory is expected to demonstrate a richness of detail, logical structure, and spontaneity or ‘unstructured production’ ” (Colwell et al., 2006). “The cognitive approach to credibility/deception is based on the Information Manipulation Theory (IMT) [McCornack, 1992]. Maintaining deception in an interrogative atmosphere can be a cognitively demanding task. A deceiver must deal with his/her conflicting goals of disclosing enough information to please the interrogator while retaining sufficient control over the facts conveyed to avoid detection” (Colwell et al., 2006). According to IMT, which “highlights the complex and interactional nature of deception, emphasizing impression management and the deceiver’s control over the information” (Colwell et al., 2006): “Deceivers usually convey less information than truth-tellers (less quantity), reply in a more irrelevant manner (less relevance), provide incoherent information (lower quality), and rarely engage in sarcasm (different manner of responding [(Porter & Yuille, 1995)]). These specific mechanisms, as well as others, can provide a multitude of channels for a deceiver to utilize.” (Colwell et al., 2006).

5.

    “Dempster-Shafer theory [(Shafer, 1976)] has been developed to handle partially specified domains. It distinguishes between uncertainty and ignorance by creating belief functions. Belief functions allow the user to bound the assignment of probabilities to certain events, rather than give events specific probabilities. Belief functions satisfy axioms that are weaker than those for probability theory. When the probabilistic values of the beliefs that a certain event occurred are exact, then the belief value is exactly the probability that the event occurred. In this case, Dempster-Shafer theory and probability theory provide the same conclusions” (Stranieri & Zeleznikow, 2005a).
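As a minimal, hedged illustration of belief functions and of Dempster’s rule of combination (this sketch is not taken from Stranieri and Zeleznikow; the frame of discernment, the mass values, and the helper names `combine`, `belief` and `plausibility` are assumptions made only for the example):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function maps frozensets of hypotheses (focal elements)
    to masses summing to 1. Conflicting mass (empty intersections) is
    discarded and the remainder renormalised."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are irreconcilable")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief(m, hypothesis):
    """Bel(A): total mass committed to subsets of A (a lower bound)."""
    return sum(v for s, v in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(A): mass not committed against A (an upper bound)."""
    return sum(v for s, v in m.items() if s & hypothesis)

# Frame of discernment: the suspect is Guilty (G) or Not guilty (N).
G, N = frozenset({"G"}), frozenset({"N"})
theta = G | N
# One item of evidence supports guilt to 0.6; the rest stays uncommitted.
m_witness = {G: 0.6, theta: 0.4}
# A second, weaker item supports innocence to 0.3.
m_alibi = {N: 0.3, theta: 0.7}
m = combine(m_witness, m_alibi)
print(belief(m, G), plausibility(m, G))  # Bel(G) < Pl(G): an interval, not a point probability
```

The point of the example is the one made in the quoted passage: the evidence bounds the probability of guilt within an interval rather than fixing it exactly.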

6.

    An ATMS is a mechanism that enables a problem solver to make inferences under different hypothetical conditions, by maintaining the assumptions on which each piece of information and each inference depends (de Kleer, 1986, 1988). An ATMS maintains how each piece of inferred information depends on presumed information and facts, and how inconsistencies arise. “A Truth maintenance system (TMS) may be employed to protect the logical integrity of the conclusions of an inferencing system” (Luger & Stubblefield, 1998, section 7.2.3, p. 275).

    “Jon Doyle (1979) created one of the earliest truth maintenance systems, called a justification based truth maintenance system or JTMS. Doyle was the first researcher to explicitly separate the truth maintenance system, a network of propositions and their justifications, from the reasoning system operating in some domain. The result of this split is that the JTMS communicates with the problem solver, perhaps an automated theorem prover, receiving information about new propositions and justifications and in turn supplying the problem solver with information about which propositions should be believed based on the current existing justifications. There are three main operations that are performed by the JTMS. First, the JTMS inspects the network of justifications. This inspection can be triggered by queries from the problem solver such as: Should I believe in proposition p? Why should I believe proposition p? What assumptions underlie proposition p? The second operation of the JTMS is to modify the dependency network, where modifications are driven by information supplied by the problem solver. Modifications include adding new propositions, adding or removing premises, adding contradictions, and justifying the belief in a proposition. The final operation of the JTMS is to update the network. This operation is executed whenever a change is made in the dependency network. The update operation recomputes the labels of all propositions in a manner that is consistent with existing justifications” (Luger & Stubblefield, 1998, section 7.2.3, pp. 276–277).

    “A second type [of] truth maintenance system is the assumption-based truth maintenance system (ATMS). The term assumption-based was first introduced by de Kleer (1984), although similar ideas may be found in Martins and Shapiro (1983). [Cf. Martins and Shapiro (1988), Martins (1990).] In these systems, the labels for nodes in the network are no longer IN and OUT but rather the sets of premises (assumptions) underlying their derivation. de Kleer also makes a distinction between premise nodes that hold universally and nodes that can be assumptions made by the problem solver and that may later be retracted. […] The communication between the ATMS and the problem solver is similar to that between JTMS and its problem solver with operators for inspection, modification, and updating. The only difference is that with ATMS there is no longer a single state of belief but rather subsets of potential supporting premises. The goal of computation with the ATMS is to find minimal sets of premises sufficient for the support of each node. This computation is done by propagating and combining labels, beginning with labels for the premises” (Luger & Stubblefield, 1998, section 7.2.3, pp. 278–279).
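The following Python sketch is not de Kleer’s code, nor Luger and Stubblefield’s; it merely illustrates, under simplifying assumptions (and ignoring the handling of contradictions and nogoods), the ATMS idea of labelling each node with the minimal sets of assumptions under which it holds. The function names, node names and assumption names are hypothetical.

```python
from itertools import product

def minimal(envs):
    """Keep only environments (assumption sets) not subsumed by a smaller one."""
    return {e for e in envs if not any(o < e for o in envs)}

def atms_labels(assumptions, justifications):
    """Propagate ATMS-style labels: for each node, the minimal sets of
    assumptions under which it holds. `justifications` maps a node to a
    list of antecedent tuples (an empty tuple would mark a premise)."""
    labels = {a: {frozenset({a})} for a in assumptions}
    changed = True
    while changed:
        changed = False
        for node, ante_lists in justifications.items():
            for antecedents in ante_lists:
                if not all(a in labels for a in antecedents):
                    continue
                # Every way of choosing one environment per antecedent
                # yields a candidate environment for the consequent.
                for combo in product(*(labels[a] for a in antecedents)):
                    env = frozenset().union(*combo)
                    new = minimal(labels.get(node, set()) | {env})
                    if new != labels.get(node, set()):
                        labels[node] = new
                        changed = True
    return labels

# Toy example: "suspect_present" follows from assumption A1, "motive"
# from A2, and "guilt_hypothesis" from both together.
labels = atms_labels(
    assumptions=["A1", "A2"],
    justifications={
        "suspect_present": [("A1",)],
        "motive": [("A2",)],
        "guilt_hypothesis": [("suspect_present", "motive")],
    },
)
print(labels["guilt_hypothesis"])  # minimal environment: {'A1', 'A2'}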

7.

Another example that befits updating, rather than revision: in the series of messages from the Red Brigades while they were holding prisoner the Italian politician Aldo Moro (who was abducted on 16 March 1978 from his car in Rome, the five men of his escort having been killed), there was a rather abnormal message which stated that Moro’s body had been dumped in Lago della Duchessa, a lake in the mountains of central Italy. The bottom of the lake was searched by law enforcement staff, who found instead the body of a shepherd (an unrelated crime). But then the series of messages from the Red Brigades started again, with no reference to their prank of sending law enforcement agents scuttling to Lago della Duchessa.

8.

    “Nonmonotonic reasoning, because conclusions must sometimes be reconsidered, is called defeasible; that is, new information may sometimes invalidate previous results. Representation and search procedures that keep track of the reasoning steps of a logic system are called truth maintenance systems or TMS. In defeasible reasoning, the TMS preserves the consistency of the knowledge base, keeping track of conclusions that might later need be questioned” (Luger & Stubblefield, 1998, p. 270).

9.

    In Latin simpliciter means “simply”, but here it has the following technical sense: “Justification simpliciter requires the degree of justification to pass a threshold, but the threshold is contextually determined and not fixed by logic alone.” (Pollock, 2010, p. 8).

10.

On confirmation bias as it occurs in police interrogation rooms, see e.g. Kassin, Goldstein, and Savitsky (2003), Meissner and Kassin (2002), and Hill, Memon, and McGeorge (2008).

11.

    This name for the concept was spread by a book by Leon Festinger (1919–1989), A Theory of Cognitive Dissonance (Festinger, 1957).

12.

    In psychology, studies of persuasion include, e.g., Chaiken (1987), Chaiken, Liberman, and Eagly (1989), Chaiken, Wood, and Eagly (1996), Clark and Delia (1976). In the given disciplinary context of psychology, “the study of persuasion concerns the variables and processes that govern the formation and change of attitudes” (Chaiken et al., 1996, p. 702). Message-based persuasion is one strand of such research. Other traditions of persuasion research include “the influence of individuals’ own behaviors and messages on their attitudes, social influence effects in group contexts and, to a lesser extent, the attitudinal effects of mere or repeated exposure to attitude objects and the selective effects of attitudes on information processing” (ibid.). Also see Cialdini (1993) about influence. Papageorgis and McGuire (1961) discussed immunity to persuasion produced by pre-exposure to weakened counterarguments. Persuasion is also the subject of, e.g., Stiff (1994), Sawyer (1981), Petty and Cacioppo (1986) and Petty, Wegener, and White (1998).

13.

For example, Iacoviello’s book (1997) discussed how the Court of Cassation in Italy checks the motivation (the stated reasons) of sentences given in criminal trials heard by the lower Italian courts. Iacoviello (2006) called for clearer rules about flaws in the motivation, and about the effect they have in terms of mistrial.

14.

    It is sometimes said that this is because the jury replaces the medieval ordeal, in which the adjudication was taken to be supernatural.

15.

    Cf. Bex and Walton’s (2010) “Burdens and Standards of Proof for Inference to the Best Explanation”. Also see Atkinson and Bench-Capon (2007a).

16.

The fallacies associated with subjective estimates of the likelihood of future events are seen most clearly in how, post factum, one reports one’s own earlier evaluation of that likelihood. See Fischhoff and Beyth’s (1975) ‘ “I Knew It Would Happen”: Remembered Probabilities of Once-Future Things’, as well as Merton’s (1948) ‘The Self-Fulfilling Prophecy’.

17.

An intuitive explanation of the term is that, e.g., bus arrivals are Poissonian.

18.

By “anchor”, in the algebraic sequential averaging model, the current opinion is meant.
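Purely as a hedged illustration of the terminology: a generic “anchor and adjust” sequential averaging update might look as follows. The weighting scheme, the 0-to-1 guilt scale and the function name `sequential_average` are my own assumptions, not the specific algebraic model discussed in the chapter.

```python
def sequential_average(anchor, evidence_values, weight=0.7):
    """Generic anchor-and-adjust averaging: the current opinion (the
    'anchor') is blended with each new item's scale value in turn.
    `weight` is the inertia given to the anchor (an assumed value)."""
    opinion = anchor
    trajectory = [opinion]
    for value in evidence_values:
        opinion = weight * opinion + (1.0 - weight) * value
        trajectory.append(opinion)
    return trajectory

# Opinion on a 0 (innocent) .. 1 (guilty) scale, starting neutral,
# updated by three incriminating items and one exonerating item.
print(sequential_average(0.5, [0.9, 0.8, 0.9, 0.1]))
```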

19.

    Pennington and Hastie showed people a movie of a trial. They found that in order to make sense of the wealth of detail, the participants constructed stories about what happened. In another experiment, they found that when evidence was given in an order which made the story easy to construct, the participants were more likely to construct the same story. When the evidence was in story order, 78% of participants found the defendant guilty. Yet when the evidence was out of order, only 31% voted for the guilty verdict.

    Emplotting items of information into an explanatory narrative is also a subject of debate concerning historical explanation. In the journal History and Theory, David Carr’s (2008) ‘Narrative Explanation and Its Malcontents’ gives this general example about how we figure out narrative explanations (ibid., pp. 19–20):

    Suppose that on a busy city street we see a young man carrying a large potted plant that almost obscures his view, running so fast that he risks colliding with other pedestrians, and shouting the name of a woman in a very loud voice. When someone like this attracts our attention, his action puzzles us. We want to know why he’s behaving in this strange way. We seek an explanation.

    We learn that he has returned home to find a note from his girlfriend with whom he shared his apartment, but with whom he had been quarreling; indeed she had decided to leave him and move out, and in fact had removed her belongings and is gone. The man was shaken and distraught. Then he noticed that she left behind her favorite plant, and learned from a neighbor that she had left only a few minutes ago and is walking in the direction of a friend’s apartment. Seizing on the plant as a pretext to find her and beg her to return, he picks it up and runs into the street, hoping to catch up with her.

    Most of us would be satisfied with this account as an explanation of the man’s action. We might ask for more details, but we don’t really need them. Our perplexity goes away; our question has been answered. We now know why he did what he did.

    What we have given is a typical narrative account. We have explained an action by telling a story about it. The narrative has all the standard elements of a good story: it has a central subject or protagonist. It has a beginning: we need not go any further back than his return to the empty apartment, though it helps to learn that the two had been quarreling before that. That sets the scene. The story has a middle, in which our hero reacts emotionally to the opening scene, assesses the situation with the help of some new information (that she had just left), and decides to take action. What he does then, running with the plant through the street and shouting his girlfriend’s name, is where we came in, as it were. There is an element of suspense here: will he succeed? And the story has an end, even though we don’t yet know exactly what it will be. He’ll catch up with her or he won’t. If he does, he’ll be successful in winning her back, or he won’t. But this range of alternatives, even though we don’t know which of them will occur, is determined by the story so far. They belong to the story.

    One thing to be noted about this explanation is that it is probably the same one that the man himself would give for his own action. Though we could have gotten this explanation from someone else, we could also have gotten it from him, if we had occasion to ask. This rather obvious fact suggests that the narrative mode is very close in form to the structure of action itself, from the agent’s point of view. […]

    However (Carr, 2008, p. 21):

    Of course, questions might arise about whether the man was telling the truth, especially if his story conflicted with another story — say, his girlfriend’s story — of the same events. Here we would indeed have a legitimate reason to question the agent’s narrative account of his own action. If it became important for some reason to settle the discrepancy, we might have to call in other witnesses and ask for their accounts of the same action.

    This could take us from the everyday into the world of legal or juridical institutions, where someone — a judge or jury — would have to decide which account of the action to believe. A journalist might have similar concerns, wanting to reconstruct “what really happened” out of the varying accounts of the original events. Historians, too, often see their task as reconstruction of the past along these lines. Here the value of hindsight is that from its perspective it can reveal elements that augment the original story.

20.

False memories of events that never actually happened in an individual’s lifetime, and are nevertheless retrieved by that individual, have been researched by various scholars (especially concerning the susceptibility of children to the development of false memories), but are especially associated with research conducted by Elizabeth Loftus (http://www.seweb.uci.edu/faculty/loftus/). See, e.g., Brainerd and Reyna (2004), Garven, Wood, Malpass, and Shaw (1998), Howe (2005), Johnson, Hashtroudi, and Lindsay (1993), Lane and Zaragoza (2007), Strange, Sutherland, and Garry (2006), Wade, Garry, Read, and Lindsay (2002), and Wade, Sharman, Garry, Memon, Merckelbach, and Loftus (2007). Since the late 2000s, Henry Otgaar has produced a steady flow of publications on false memories, especially in children (http://www.personeel.unimaas.nl/henry.otgaar/#Publications), e.g., Otgaar, Candel, and Merckelbach (2008), Otgaar, Candel, Merckelbach, and Wade (2009), Otgaar, Candel, Memon, and Almerigogna (2010), Otgaar et al. (2010), Otgaar, Candel, Scoboria, and Merckelbach (2010), Otgaar, Candel, Smeets, and Merckelbach (2010), Otgaar and Smeets (2010), Howe, Candel, Otgaar, Malone, and Wimmer (2010), and Otgaar (2009).

21.

    E.g., Loftus and Doyle (1997), Loftus (1979, 1981a, 1981b, 1987, 1997, 1998, 2002, 2003a, 2003b, 2005); cf. Loftus (1974, 1975, 1976, 1980, 1983, 1986a, 1986b, 1991, 1993a, 1993b); and see: Loftus and Greene (1980), Loftus and Ketcham (1994), Loftus and Pickrell (1995), Loftus and Rosenwald (1993), Loftus and Palmer (1974), Loftus and Hoffman (1989), Loftus and Loftus (1980), Loftus, Miller, and Burns (1978), Loftus, Weingardt, and Wagenaar (1985), Loftus, Loftus, and Messo (1987), Loftus, Donders, Hoffman, and Schooler (1989), Penrod, Loftus, and Winkler (1982), Garry, Manning, Loftus, and Sherman (1996), Mazzoni, Loftus, and Kirsch (2001), Wells and Loftus (1991), Schooler, Gerhard, and Loftus (1986), Bell and Loftus (1988, 1989), Deffenbacher and Loftus (1982), Monahan and Loftus (1982), Castella and Loftus (2001), Nourkova, Bernstein, and Loftus (2004), and Harley, Carlsen, and Loftus (2004).

Neimark (1996) has written about Loftus. After teaching for 29 years at the University of Washington at Seattle, she moved to the University of California at Irvine. However, her debunking of the myth of “repressed memories” (cf., e.g., McNally, 2003) in relation to the child abuse alleged in the Jane Doe case had unpleasant consequences for her (Tavris, 2002). She is much admired, while also controversial. Her results cannot safely be ignored.

22.

    A psychologist of law, Amina Memon, in her 2008 course handouts on Psychology, Law and Eyewitness Testimony at the University of Aberdeen in Scotland, stated: “The last 20 years has seen an explosion of research in the Psychology and Law field. The area that has grown more than any other is research on perceptions of credibility and accuracy of participants in the legal system. Psychologists have asked questions that have direct relevance in the legal arena: Are there reliable indicators of deception? Is it possible to persuade an innocent person that they may have committed a crime? Are juries biased? Can social pressure to remember result in false memory creation? […]”

23.

Code in the LISP programming language for ECHO is available at its originator’s website, that of Paul Thagard’s computational epistemology laboratory (http://cogsci.uwaterloo.ca).

24.

    The syntax (explains (H1 H2) E1) means that hypotheses H1 and H2 together explain evidence E1. Coding this in LISP is straightforward. “The relation explains is asymmetrical, but ECHO establishes a symmetrical link between a hypothesis and what it explains” (Thagard, 2004, p. 237).
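The sketch below is not Thagard’s LISP code; it is a minimal Python illustration, with invented parameters and names (`settle`, the weights, the decay rate), of how such explains statements can induce a small coherence network of the ECHO kind: symmetric excitatory links between hypotheses and the evidence they explain (and among co-hypotheses), inhibitory links between contradictory propositions, evidence driven by a clamped special unit, and repeated connectionist updating until the activations settle.

```python
def settle(explanations, contradictions, evidence,
           excit=0.04, inhib=-0.06, data_excit=0.05, decay=0.05,
           iterations=200):
    """A small ECHO-style coherence network (illustrative parameters).

    explanations: list of (hypotheses, evidence_item) pairs, mirroring
    (explains (H1 H2) E1): each hypothesis is linked excitatorily to the
    evidence it explains and to its co-hypotheses.
    contradictions: pairs of propositions linked inhibitorily.
    evidence: items linked to a clamped SPECIAL unit."""
    weights = {}
    def link(a, b, w):
        weights[frozenset((a, b))] = weights.get(frozenset((a, b)), 0.0) + w

    units = set(evidence)
    for hyps, e in explanations:
        units.update(hyps); units.add(e)
        w = excit / len(hyps)          # simplicity: weight divided among co-hypotheses
        for h in hyps:
            link(h, e, w)
        for i, h1 in enumerate(hyps):  # co-hypotheses cohere with each other
            for h2 in hyps[i + 1:]:
                link(h1, h2, w)
    for a, b in contradictions:
        units.update((a, b)); link(a, b, inhib)
    for e in evidence:
        link("SPECIAL", e, data_excit)

    act = {u: 0.01 for u in units}
    act["SPECIAL"] = 1.0               # clamped data unit
    for _ in range(iterations):
        new = {}
        for u in units:
            net = sum(w * act[v]
                      for pair, w in weights.items() if u in pair
                      for v in pair - {u})
            a = act[u] * (1 - decay)
            a += net * (1 - act[u]) if net > 0 else net * (act[u] + 1)
            new[u] = max(-1.0, min(1.0, a))
        act = new
        act["SPECIAL"] = 1.0
    return act

# Two rival hypotheses for one item of evidence; H1 also explains E2.
act = settle(
    explanations=[(["H1"], "E1"), (["H2"], "E1"), (["H1"], "E2")],
    contradictions=[("H1", "H2")],
    evidence=["E1", "E2"],
)
print(sorted(act.items()))  # H1 settles higher than H2: it explains more
```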

25.

    Neural networks are the subject of Section 6.1.14 in this book.

26.

This image (http://en.wikipedia.org/wiki/File:Artificial_neural_network.svg) was made by C. Burnett and is in the public domain under the terms of the GNU Free Documentation License.

27.

    Clearly the system PEIRCE was named after the philosopher Charles Sanders Peirce (1839–1914), with whom the theory of abductive reasoning is mainly associated. An architecture for abductive reasoning was also described by Poole (1989).

28.

    On abduction and logic programming, see Kakas, Kowalski, and Toni (1992, 1998), Toni and Kowalski (1995), Fung and Kowalski (1997), Eshghi and Kowalski (1989). On abductive logical models, also see sections 3.1 and 4 in Prakken and Renooij (2001), whereas Section 5 in that same paper is about argument-based reconstruction of a given case about a car accident. In MacCrimmon and Tillers (2002), Part Five comprises four articles on abductive inference as applied to fact investigation in law.

29.

    With reference to the RED-2 tool for abduction, the Josephsons’ book states: “Logically, the composite hypothesis is a conjunction of little hypotheses, so, if we remove one of the conjuncts, the resulting hypothesis is distinctly more likely to be true because it makes fewer commitments. Superfluous hypothesis parts make factual commitments, expose themselves to potential falsity, with no compensating gain in explanatory power. To put it more classically: if hypothesis parts are treated as logical conjuncts, then an additional part introduces an additional truth condition to satisfy. Thus the hypothesis that is simpler (in not including an unneeded part) is more likely to be true. Thus the sense of parsimony we use here is such that the more parsimonious hypothesis is more likely to be true.” (quoted from p. 84 in the same book: Josephson & Josephson, 1994).

30.

    Cf. in RED-2: “After parsimony criticism, a second process of criticism begins in which each hypothesis in the composite is examined to see if it is essential, that is, to see if part of what it explains can be explained in no other way. There are two ways to find essentials. The first is during the initial assembly process. If only one hypothesis offers to explain a finding on which attention is focused, that hypothesis is a discovered essential. The second way to discover essentials is that an attempt is made for each part of the composite hypothesis, not already known to be essential, to assemble a complete alternative hypothesis not including that part. If the attempt succeeds, it shows that there are other ways of explaining the same things, even though they may not be as good as the original. But if the attempt fails, it shows that there is something that has no other plausible explanation other than by using the hypothesis part in question […] Note the distinction between hypothesis parts that are nonsuperfluous relative to a particular composite, that is they cannot be removed without explanatory loss, and essentials without which no complete explanations can be found in the whole hypothesis space. An essential hypothesis is very probably correct, especially if it was rated as highly plausible by its specialist” (Josephson & Josephson, 1994, pp. 84–85).
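The sketch below is not RED-2 code; it is a minimal Python illustration, with hypothetical hypothesis names, coverage sets and helper functions (`remove_superfluous`, `essentials`), of the two criticisms just quoted: dropping superfluous parts of a composite hypothesis, and flagging as essential any part that explains a finding nothing else can explain.

```python
def remove_superfluous(composite, coverage):
    """Parsimony criticism: drop any part whose removal loses no
    explanatory coverage. `coverage` maps each hypothesis part to the
    set of findings it explains."""
    parts = list(composite)
    for part in list(parts):
        others = set().union(*(coverage[p] for p in parts if p != part))
        if coverage[part] <= others:   # explains nothing the rest do not
            parts.remove(part)
    return parts

def essentials(composite, coverage, all_hypotheses):
    """A part is essential if some finding it explains can be explained
    by no other available hypothesis at all."""
    result = []
    for part in composite:
        for finding in coverage[part]:
            rivals = [h for h in all_hypotheses
                      if h != part and finding in coverage[h]]
            if not rivals:
                result.append(part)
                break
    return result

coverage = {
    "intruder":  {"broken_window", "missing_jewels"},
    "insider":   {"missing_jewels"},
    "hailstorm": {"broken_window"},
}
composite = ["intruder", "insider"]
print(remove_superfluous(composite, coverage))       # ['intruder']: 'insider' adds nothing
print(essentials(["intruder"], coverage, coverage))  # []: every finding has a rival explainer
```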

31.

    John R. Josephson, in the same book on p. 238, begins a chapter which “develops the hypothesis that perception is abduction in layers and that understanding spoken language is a special case”; it “present[s] a layered-abduction computational model of perception that unifies bottom-up and top-down processing in a single logical and information-processing framework. In this model the processes of interpretation are broken down into discrete layers where at each layer a best-explanation composite hypothesis is formed of the data presented by the layer or layers below, with the help of information from above. The formation of such a hypothesis is an abductive inference process, similar to diagnosis or scientific theory formation.”

32.

    “In PEIRCE-IGTT, each pass through the loop leads to conclusions whose proper confidence is relative to the previous passes. A hypothesis judged to be Essential because a competing hypothesis was ruled out as a result of its being incompatible with a Clear-Best, is only an Essential hypothesis relative to Clear-Bests. An Essential from the first pass through the loop is more confidently an Essential than an Essential that is relative to Clear-Bests is. Similarly, any newly included hypothesis that is relative to guessing (that is, a hypothesis is included as a result of the effects of the inclusion of a guessed hypothesis) must be regarded as less confident than any hypothesis included before guessing began.” (Fox & Josephson, 1994, p. 217).

33.

    Lipton (2007), from the abstract.

34.

    Lipton (2004), from the publisher’s blurb.

35.

    Ben-Menahem (1990), from the abstract.

36.

A Bayesian network is a directed acyclic graph (i.e., a graph with nodes and arrows rather than direction-less edges, and without directed cycles), such that the nodes represent propositions or variables, the arcs represent the existence of direct causal influences between the linked propositions, and the strengths of these influences are quantified by conditional probabilities. Whereas in an inference network the arrow is from a node standing for evidence to a node standing for a hypothesis, in a Bayesian network the arrow is instead from the hypothesis to the evidence. In an inference network, an arrow represents a relation of support. In a Bayesian network, an arrow represents a causal influence, and the arrow is from a cause to its effect. “Bayesian Networks (BNs) are an efficient and comprehensible means of describing the joint probability distribution over many variables over their respective domains. The variables are created and assigned a meaning by the compositional modeller, and their probability distributions are calculated from the combined response of the influences that affect them.” (Keppens et al., 2005a, section 2.1). The classic book on Bayesian networks is by Judea Pearl (1988), the scholar who introduced and developed that formalism during the 1980s.
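By way of a minimal, hedged illustration (a two-node network with invented numbers, not an example from Pearl or from Keppens et al.), the following Python fragment shows a causal arrow running from a hypothesis to an item of evidence, with the query running against the arrow via Bayes’ theorem; the parameter names are assumptions for the example only.

```python
# Minimal hypothesis -> evidence Bayesian network, queried "against the
# arrow" with Bayes' theorem. All numbers are illustrative assumptions.

p_guilty = 0.01                 # prior P(H = guilty)
p_match_given_guilty = 0.98     # CPT entry: P(E = dna_match | guilty)
p_match_given_innocent = 0.001  # CPT entry: P(E = dna_match | innocent)

def posterior_guilty(match_observed: bool) -> float:
    """P(guilty | evidence) by enumeration over the two-node network."""
    like_g = p_match_given_guilty if match_observed else 1 - p_match_given_guilty
    like_i = p_match_given_innocent if match_observed else 1 - p_match_given_innocent
    joint_g = p_guilty * like_g
    joint_i = (1 - p_guilty) * like_i
    return joint_g / (joint_g + joint_i)

print(posterior_guilty(True))   # the prior of 0.01 rises to roughly 0.9 given a match
```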

37.

    See Section 2.2.1.6 above.

38.

    “Artificial neural networks are often referred to as connectionist networks, and the paradigm of neural networks is often referred to as ‘connectionism’. Some scientists are interested in artificial neural networks as a tool in helping to understand the neural networks in our own brain. ‘Connectionist’ is sometimes used to emphasize that neural nets are being used for the purpose of computing with no concern for biological realism” (Callan, 1999, p. 223).

39.

A tree is a graph in which any two nodes are connected by exactly one path.

40.

    Mens rea in relation to computerising criminal law was discussed by Bennun (1996).

41.

    Colwell et al. (2006) claimed: “In general, truth-tellers provide more detailed accounts than deceivers. They are also more likely to include the time and location, unique or unusual facts (e.g., the perpetrator limped), portions of the conversation that took place (e.g., I yelled “HELP”), and interactions between the perpetrator and themselves. Deceivers, on the other hand, may offer fewer details to reduce the chance of contradicting themselves when asked to repeat the story. They might also lack the knowledge or are unable to imagine/create plausible, complex descriptions. Deceivers rely on thoughts and logic to account for their actions within fabrications […]. Since interrogators can investigate the fact’s credibility, deceivers refrain from disclosing certain details that would reveal their lies. As mentioned earlier, truth-tellers customarily make spontaneous corrections to assure the accuracy of their story, while deceivers tend to give their accounts in chronological order, making their statements easier to detect.”

ALIBI does not take into account the need to check, e.g., relevant temporal information such as whether it was the hunting season, or whether the person had been called up to the armed forces reserve. In mid January 1991, on the eve of the deadline for the Allied strike against Iraq, and thus of Saddam’s threat to attack Israeli cities with missiles possibly carrying nonconventional weapons, people in Israeli cities were in a hurry, sealing rooms and even baby cots against chemicals. The sight of armed, uniformed personnel queuing at a bank teller was therefore interpreted by observers also queuing at the bank against the global situational backdrop, e.g., its being the eve of the first expected Scud missile strikes. I saw such a scene myself, at a bank in Beer-Sheva. Everybody was gloomy, and as the impending attack on the civilian population was quite focal in people’s minds (on the bus, a young mother was staring in a kind of disbelief or stupor at her toddler, as though this was to be the end of their lives), there was no suspicion that the uniformed, conspicuously armed personnel queuing at the bank would try to rob the bank. Which, in fact, they did not.

42.

Bear in mind that in New York City it did happen that policemen shot dead a man who was carrying a knife in the street: they misunderstood his motives for carrying it. He was a Jewish ritual slaughterer, and therefore had precise rules to follow about the standards of his knife; indeed, a slaughterer needs to have the knife regularly checked by a rabbi, who would verify compliance. There was also an episode in Britain in which a man who was carrying the leg of a piece of furniture was shot dead by the police, who misinterpreted his intentions.

43.

Why did the accused accept cash from the employee? An excuse could be that the accused misunderstood, and believed that he was entitled to receive the money. Or he may have been a sort of “distracted professor”, who for the very same reason did not realise that the employee was feeling under threat. But the circumstances are such that we would find it hard to believe this. By contrast, there was no reason to conjecture ulterior motives when I myself (then an undergraduate, in the late 1970s) and other students were leaving the classroom with our professor, and in the corridor he asked: “Where am I going?”, and one of us students replied: “To your office”.

44.

Bear in mind that whereas in ALIBI we did not develop a model of the emotions, in artificial intelligence the emotions have not infrequently been modelled, in systems from the 1990s and 2000s. Nissan (2009c) is a survey of such approaches. Cf. Nissan (2009d); Nissan, Cassinis, and Morelli (2008); and Cassinis, Morelli, and Nissan (2007).

45.

The subjectiveness of perception in eyewitness reports has been dealt with, for example, by Cesare Musatti (1931) in Italy. Elizabeth Loftus (already in Loftus, 1975, 1979) has observed that leading questions cause eyewitnesses to unwittingly complete their recollections by reconstruction. In BORIS, a question-answering program for the analysis of narratives, a phenomenon was noticed that its developer, Dyer, first considered to be a bug, but then recognised as valuable, as it simulates the Loftus effect on recollections: episodic memory modifications occur during question answering (Dyer, 1983a, in subsections 1.5, 5.5, 12.1, 12.2).

46.

    See Sections 5.2.8 and 5.2.9 in this book, and see e.g. Schank (1972), Schank & Riesbeck (1981), Dyer (1983a).

47.

Within studies of the pragmatics of communication, Nicoloff (1989) has discussed threats. By contrast, Joel D. Hamkins and Benedikt Löwe (2008) have introduced a modal logic of forcing. In a study in psychology, Kassin and McNall (1991) discussed promises and threats conveyed by pragmatic implication in police interrogations.

48.

    Incidentally, the evolution of commitment law in the nineteenth century has been discussed by Appelbaum and Kemp (1982). “The generally accepted interpretation of the evolution of commitment law in the nineteenth century is challenged by means of an historical investigation of the law’s development in a single state – Pennsylvania.” (ibid., from the abstract). That is precisely the U.S. state relevant for Cresson. The abstract of Appelbaum and Kemp (1982) points out: “Rather than an abrupt switch from relaxed commitment procedures to a system of stringent safeguards, which most historical accounts of the period describe, examination reveals that Pennsylvania law underwent a slow accretion of procedural protections, with the essential discretionary role of families, friends, and physicians left undisturbed. The implications for current policy of this challenge to the traditional account are discussed”.

49.

In his 1852 book The Key of David (which is now available online), Cresson pointed out in an appendix this argument, which the prosecution made, in order to show his lunacy, in the case brought by his wife. In Appendix F (now accessible at http://www.jewish-history.com/cresson/cresson42.html) Cresson also pointed out that his son’s testimony against him, concerning his joining the Shakers, was about events that took place when the son was only a few weeks old: “What a most remarkable Precocious Boy this, in his malignity to, and persecution of his own father.”

In Nissan (2010e), I discussed the phenomenon of cultural traits or ethnic or religious identities sometimes being medicalised, i.e., considered in medical terms. This was not infrequent in the 19th and early 20th century (cf. Nissan & Shemesh, 2010). But a disliked religion was only one context in which mental flaws were ascribed (Mark Twain summed it up nicely: “If the man doesn’t believe in what we do, we say he is a crank, and that settles it. I mean, it does nowadays, because now we can’t burn him”). Another context involved supposedly inferior races or cultures. For example, in the second half of the 19th century, just as both women and Jews were seeking emancipation, pseudo-scientific claims would be made about both groups’ inferiority and predisposition for madness: “Jews, like women, possessed a basic biological predisposition to specific forms of mental illness. Thus, like women, who were also making specific political demands on the privileged group at the same moment in history, Jews could be dismissed as unworthy of becoming part of the privileged group because of their aberration.” (Gilman, 1984, p. 157). It was also claimed that Black people who were freed or were seeking freedom, rather than accepting slavery, were mad (ibid.).

In one section of Nissan (2010e), I considered a trial for lunacy from Philadelphia. Warder Cresson (1798–1860) was born to a Quaker family, and grew up to become a farmer and preacher in the area of his native Philadelphia, where he eventually married and had six children (see on him Fox, 1971). While a young man, he experimented with millenarist groups. From the late 1820s, he published religious tracts. In 1844, he published a visionary tract about Jerusalem, and in that same year he went there on pilgrimage. While in Jerusalem, his views changed, and he converted to Judaism. Bear in mind that by the mid 19th century, Jerusalem already had a clear Jewish majority, and this population was religious and economically sustained by alms from abroad. The standard discourse of Jerusalemite Judaism, by which Cresson let himself be convinced, was complex and cogent by its inner logic. This was something not visible in Cresson’s original social environment in Philadelphia.

    It is understandable that to a religious Protestant family in 1848, seeing a husband and father come back a Jew, it seemed to be a good explanation that he had lost his reason – apart from their goals of controlling his assets. To his family, he forfeited the salvation of his soul, by adopting an identity that was utterly unappealing – and ostensibly that of a defeated, despised creed and nation. At the same time, there was widespread religious effervescence among Protestant denominations in the Early Republic, and it clearly was a sign of civil maturity for the United States of America that a court of law (albeit not the lower court) resisted the proposition that choosing Judaism of all denominational options was proof of lunacy.

50.

When Cresson returned to Philadelphia, in order to settle his affairs before going back to Jerusalem, he was involved in local Jewish life, regularly attended service at the Mikve Israel synagogue, and continued (as he was already doing from Jerusalem as early as 1844) to write for Isaac Leeser’s magazine The Occident. His wife, Elizabeth Townsend, and his son Jacob applied to the court and obtained a commission in lunacy. This decision of the lower court was reversed in a trial that became a cause célèbre, with eminent counsel retained by both parties, and much attention from the press. The hearing extended over six days in May 1851, and nearly one hundred witnesses were called. Upon his return to Jerusalem, Cresson married a Sephardic Jewish woman, Rachel Moleano; he used to dress as a Jerusalemite Sephardic Jew, and the community he joined honoured him so much that his funeral in 1860 was conducted with honours befitting a prominent rabbi. It must be said that prior to his conversion, Cresson was very close to two prominent rabbis in Jerusalem, but when he converted in 1848, this took place only once opposition from the beth din (rabbinic court) and the chief rabbi, Abraham Chai Gagin, was overcome. This was because a general reluctance to proselytise was (and is) accompanied by wariness lest, should a request to be converted be fulfilled, the convert not be up to his or her new duties.

51.

Confabulation in depositions occurs when a witness is inferring, rather than merely reporting. This may be an effect of witnesses having discussed their recollections, which modifies what they later think they remember. Witnesses must report what they perceived, not what they inferred. In particular, if two eyewitnesses saw the same event and discussed it, this may influence what they later claim to remember; this is sometimes referred to as memory conformity. Concerning the latter, see, e.g., Memon and Wright (1999), Gabbert, Memon, and Allan (2003), Gabbert, Memon, Allan, and Wright (2004), Luus and Wells (1994), Meade and Roediger (2002), Meudell, Hitch, and Boyle (1995), Principe and Ceci (2002), Skagerberg (2007).

52.

A distinction is to be made between jokes about self-exoneration from a crime that do indeed resemble some output from ALIBI, in that they try to explain away some ascertained facts from the charge either innocently or in a less damning manner, and jokes that invoke extenuating circumstances. A well-known joke of the latter kind is that of the son who kills his parents, and then expects mercy in court because he is an orphan. Another such joke, from the United States (Shebelsky, 1991), is about a defence lawyer who points out to the judge that his client was characterised as an incorrigible bank robber, without a single socially redeeming feature. The lawyer claims that he intends to disprove that. The judge asks how. The lawyer replies that it is by proving beyond a shadow of doubt that the note his client handed the teller was on recycled paper. What stands out in the latter joke is that the defendant is apparently pleading guilty, but is seeking extenuating circumstances. At any rate, the defendant admits he handed the teller a threatening note.

53.

    Subcontracting in a multiagent system is the subject of Grant, Kraus, and Perlis (2005). “We present a formalism for representing the formation of intentions by agents engaged in cooperative activity. We use a syntactic approach presenting a formal logical calculus that can be regarded as a meta-logic that describes the reasoning and activities of the agents. Our central focus is on the evolving intentions of agents over time, and the conditions under which an agent can adopt and maintain an intention. In particular, the reasoning time and the time taken to subcontract are modeled explicitly in the logic” (ibid., p. 163).

54.

    Multiagent systems are the subject of Section 6.1.6 below.

55.

    Speech acts are the subject of Searle (1969), the classic about this subject.

56.

A rather rudimentary, application-specific device for formally representing concerted action between agents in a juridical setting was described in Nissan (1995b). The formal framework was SEPPHORIS, which combined a representation of events, stipulations, and legal prescriptions with a kind of graph-rewriting grammar (i.e., a device for transforming parts of graphs, step by step). It actually was a hypergraph grammar; a hypergraph can be conceived of as a set of sets. A much more flexible formalism I developed for representing narratives is episodic formulae, to which we shall come back elsewhere in this book. See Section 5.3.

57.

E.g., Poulin, Mackaay [sic], Bratley, and Frémont (1992), Vila and Yoshino (2005), Knight, Ma, and Nissan (1998), Zarri (1998), Farook and Nissan (1998), Valette and Pradin-Chézalviel (1998). Nissan (2011a) applies Petri nets to textual interpretation, other than in law. Spatial reasoning – which in particular may be in the legal domain (Nissan, 1997a, 1997b) – is amenable to commonsense modeling (Asher & Sablayrolles, 1995), possibly exhibiting similarities with the modeling of temporal relations (Cohn, Gotts, Cui, Randell, & Bennett, 1994; Bennett, 1994; Randell & Cohn, 1992).

    Adderley and Musgrove (2003a), who applied neural networks to the modus operandi modelling of group offending (in particular, to burglaries carried out by gangs) remarked about temporal analysis: “Certain offenders have a propensity to offend within certain hours of the day and on particular days of the week. The detected crimes were compared against the crimes attributed to the Primary Network to ascertain whether there were similarities or differences between times and days. Temporal analysis presents problems within the field of crime-pattern analysis due to the difficulty of ascertaining the exact time at which the offense occurred. There are generally two times that are relevant: the time that the building was secured and the time that the burglary was discovered, the from time/date and the to time/date. […]” (ibid., p. 187).

58.

Moral luck is a broader issue. It is your moral luck that you were not in a given kind of situation, and therefore did not have the opportunity to perpetrate something reprehensible that would fit that kind of situation.

59.

    Agents’ beliefs about their own or others’ obligations were discussed, for example, in Nissan and Shimony (1997).

60.

    Some probability or other likelihood factor could perhaps be associated with the PRECONDITIONS or EFFECTS. Such factors could be used to interpret events when some facts are missing, to understand clearly what happened.

61.

In the early 1970s, the late Maria Nowakowska developed a motivational calculus (Nowakowska, 1973b, 1984, Vol. 1, chapter 6), and a formal theory of actions (Nowakowska, 1973a, 1973b, 1976a, 1978; cf. Nowakowski [sic] 1980), whose definitive treatment was in Nowakowska (1984, Vol. 2, chapter 9). She also developed a formal theory of dialogues (Nowakowska, 1976b, 1984, Vol. 2, chapter 7), and a theory of multimedia units for verbal and nonverbal communication (Nowakowska, 1986, chapter 3). In ‘Theories of Dialogues’, chapter 7 in her Theories of Research, Nowakowska (1984) devoted Section 5 to a mathematical model of the “Emotional dynamics of a dialogue”, and Section 5.3 to a formalization of a “Provocation threshold”. This is relevant, and potentially useful, for present-day conversational models involving emotions, in the design of computer interfaces, or of software supporting the interaction among a group of human users.

In natural-language processing (NLP), Kenneth Mark Colby’s PARRY program (Colby, 1975, 1981) embodies in its response mechanism a model of symptoms of paranoia. PARRY used to run in conversational mode, taking as input sentences from a human interviewer. PARRY impersonates a person experiencing negative emotions or emotional states, the latter being represented by numerical variables for ‘anger’, ‘fear’, and ‘mistrust’. Colby (1981) “describes a computer simulation model embodying a theory that attempts to explain the paranoid mode of behavior in terms of strategies for minimizing and forestalling shame induced distress. The model consists of two parts, a parsing module and an interpretation-action module. To bring the model into contact with the conditions of a psychiatric diagnostic interview, the parsing module attempts to understand the interview input of clinicians communicating in unrestricted natural language. The meaning of the input is passed to an interpretation-action module made up of data structures and production rules that size up the current state of the interview and decide which (linguistic) actions to perform in order to fulfill the model’s intentions. This module consists of an object system which deals with interview situations and a metasystem which evaluates how well the object system is performing to attain its ends” (ibid., from the abstract).

    A psychiatrist from Harvard Medical School, Theo Manschreck (1983) – while conceding that “Colby’s approach represents an ambitious undertaking, and in some respects it has been successful” (ibid., p. 340) – proposed a narrower interpretation of the results claimed by Colby for PARRY. For example: “Colby’s theory sheds virtually no light on the pathogenesis of delusions, the reasons delusions remain fixed and unarguable, the rarity of some delusions and commonness of others, the transient nature of some delusions, the reasons delusions arise suddenly before associated features are present, or at times only when associated features have been persistently present, and so forth. Most clinicians and research psychopathologists would insist that these questions are central to the problem of paranoid behavior and even to the mode of paranoid thinking” (Manschreck, 1983, p. 341).

    Colby retorted (1983). He remarked: “The theory presented in the target article proposes that the paranoid system described approximates an instance of a theoretical purposive-cognitive-affective algorithmic system physically realized in the model. Both in the target article and in my Response to the first round of commentaries, I stressed that the theory proposed does not account for the initial origin of the paranoid mode. It is limited to explaining in part how the empirical system works now. It is not an ontogenetic explanation of what factors in the patient’s history resulted in the acquisition of his paranoid mode of processing information. It formulates proximate, not ultimate causes” (ibid., p. 342). It is important to realise that just as Manschreck was a psychiatrist, Colby, too, was affiliated with psychiatry: he was with the Neuropsychiatric Institute at the School of Medicine of the University of California, Los Angeles. The focus of Colby’s interests, too, was in psychiatry.

A much more sophisticated NLP program than PARRY in respect of artificial intelligence (rather than of psychological claims), namely BORIS, was developed by Michael Dyer (1983a), an author who afterwards made significant contributions to connectionist NLP, as well as to the emerging paradigm of “artificial life”. BORIS, which by now must be put into a historical context but still offers important insights, detects or conjectures characters’ affect when processing narrative textual accounts. Reasoning is carried out according to this kind of information. For example, a character is likely to be upset because of a plan failure (which to BORIS heightens arousal: possibly anger, though not specifically frustration). Characters’ plan failures within a plot are the central concept employed by BORIS for understanding the narrative. Such NLP processing requirements were the only criterion when designing the representation and treatment of affects in BORIS (Dyer, 1983a, p. 130). “BORIS is designed only to understand the conceptual significance of affective reactions on the part of narrative characters. To do so BORIS employs a representational system which shares AFFECTs to one another through decomposition and shared inferences” (ibid.). For example, somebody who has just been fired may go home and kick his dog; the former event explains the latter (ibid., p. 131). Admittedly, there was no intent to model emotions or emotional states as such. Also see Dyer (1983b, 1987) on affect in narratives, or on computer models of emotions.

    Apart from BORIS, the treatment of emotion in OSCAR deserves mention. This program, described by John Pollock in his book How to Build a Person (1989), embodies a partial model of human cognitive states and emotions. Besides, William S. Faught (1978) described “a model based on conversational action patterns to describe and predict speech acts in natural language dialogs and to specify appropriate actions to satisfy the system’s goals” (ibid., p. 383). Faught credits Izard’s (1971) differential emotion theory (cf. Izard, 1977, 1982) and his own (Faught, 1975) “extension of it into affect as motivation for other thought processing” (Faught, 1978, p. 387).

    A model of artificial emotions was proposed by Camurri and Ferrentino (1999). They argued for its inclusion in multimedial systems with multimodal adaptive user-interaction. Their applications are to dance and music. In its simplest form, to Camurri and Ferrentino, an artificial agent’s “emotional state is a point in space, which moves in accordance with the stimuli (carrots and sticks) from the inside and the outside of the agent” (ibid., p. 35). The agent is robotic, and movements (for choreography) are detected. The stimuli change the agent’s affective “character”, which is a point in a space of two dimensions (ibid., p. 38).

The two axes represent the degree of affection of the agent towards itself and towards others, respectively. We call these two axes “Ego” and “Nos”, from the Latin words [for] “I” and “We”. A point placed on the positive x (Ego) axis represents an agent whose character has a good disposition towards itself. A point towards the left (negative) Ego would mean an agent fairly discouraged about itself. The emotion space is usually partitioned into regions […] labeled by the kind of character the agent simulates.
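Purely as a hedged sketch of the kind of data structure just quoted (the update step, the clipping range, the class name `EmotionalState` and the region labels are my own assumptions, not Camurri and Ferrentino’s implementation), an (Ego, Nos) emotional state might be represented as follows.

```python
class EmotionalState:
    """A point in a 2-D (Ego, Nos) space, nudged by stimuli.

    The clipping range and the step sizes are illustrative assumptions;
    the cited paper only says the point moves with 'carrots and sticks'."""

    def __init__(self, ego=0.0, nos=0.0):
        self.ego, self.nos = ego, nos

    def stimulus(self, d_ego, d_nos):
        """Apply a carrot (positive) or a stick (negative) on each axis."""
        self.ego = max(-1.0, min(1.0, self.ego + d_ego))
        self.nos = max(-1.0, min(1.0, self.nos + d_nos))

    def region(self):
        """Label the current quadrant (hypothetical region names)."""
        if self.ego >= 0 and self.nos >= 0:
            return "confident and sociable"
        if self.ego >= 0:
            return "confident but withdrawn"
        if self.nos >= 0:
            return "discouraged but sociable"
        return "discouraged and withdrawn"

state = EmotionalState()
state.stimulus(-0.4, 0.2)   # a stick to self-regard, a carrot towards others
print(state.ego, state.nos, state.region())
```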

62.

    In their textbook, Shiraev and Levy remarked (2007, pp. 22–23): “One of the assumptions in contemporary cross-cultural psychology is that it is not possible to fully understand the psychology of the people in a particular ethnic or any other social group without a complete understanding of the social, historic, political, ideological, and religious premises that have shaped people of this group. Indigenous theories, including indigenous psychology, are characterized by the use of conceptions and methodologies associated exclusively with the cultural group under investigation (Ho, 1998). […] Maybe because of disappointing beliefs that contemporary psychologists cannot really comprehend all other cultures, a growing interest in indigenous psychologies has emerged.”

63.

So Griffiths (2003, pp. 300–301): “One influential argument starts from the widely accepted idea that an emotion involves a cognitive evaluation of the stimulus. In that case, it is argued, cultural differences in how stimuli are represented will lead to cultural differences in emotion. If two cultures think differently about danger, then, since fear involves an evaluation of a stimulus as dangerous, fear in these two cultures will be a different emotion. Adherents of Ekman’s [universalist] basic emotions theory are unimpressed by this argument since they define emotions by their behavioral and physiological characteristics and allow that there is a great deal of variation in what triggers the same emotion in different cultures. Social constructionists also define the domain of emotion in a way that makes basic emotions research less relevant. The six or seven basic emotions seem to require minimal cognitive evaluation of the stimulus. Social constructionists often refuse to regard these physiological responses as emotions in themselves, reserving that term for the broader cognitive state of a person involved in a social situation in which they might be described as, for example, angry or jealous.”

64.

Discussing transactional theories of emotions, Griffiths (2003, p. 299) remarked: “To behave angrily because of the social effects of that behavior is to be angry insincerely. This, however, is precisely what transactional theories of emotion propose: emotions are ‘nonverbal strategies of identity realignment and relationship reconfiguration’ (Parkinson, 1995, p. 295). While this sounds superficially like the better-known idea that emotions are ‘social constructions’ (learnt social roles), the evolutionary rationale for emotions view, and the existence of audience effects in non-human animals, warn against any facile identification of the view that emotions are social transactions with the view that they are learnt or highly variable across cultures. Indeed, the transactional view may seem less paradoxical to many people once the idea that emotions are strategic, social behaviors is separated from the idea that they are learnt behaviors or that they are intentional actions.”

65.

    Jan Plamper, who interviewed three leading practitioners in the history of the emotions, has pointed out: “The history of emotions is a burgeoning field – so much so, that some are invoking an ‘emotional turn’ ” (Plamper, 2010, p. 237).

66.

Stearns (1995, pp. 38–39): “The most belligerent camps in the emotions field involve naturalists or universalists on the one hand, and constructionists on the other. […] [M]any scholars reject compromise […]. Naturalists, often building from Darwinian beliefs about emotions’ role in human survival, tend to argue for a series of innate emotions, essentially uniform (at least from one group of people to the other) and often, as with anger or fear, biologically grounded (Kemper [etc.]). […] Other approaches to naturalism are possible that move away from fixed emotion lists while preserving the importance of an innate, physiological component. Psychoanalyst Daniel Stern (1985), utilizing studies of infants, has posited a set of ‘vitality affects’, defined as very general surges of emotional energy in infants that are then shaped by contacts with adult care-givers into discrete emotions. Possibly some combination approach will turn out to work well, with a few basic emotions (like fear) combined with the more general vitalities that can be moulded into a more variable array including such possibilities as jealousy (found in many cultures but not all, and probably involving a blend of several distinct emotions including grief, anger and fear) or guilt.”

  67.

    In Section 8.4.2.3 we are going to briefly deal with formal models of time.

  68.

    Incidentally, in a study in the psychology of interrogations, Kassin and McNall (1991) discussed promises and threats in police interrogation, by pragmatic implication.

  69.

    On the dynamics of epistemic states, cf. e.g. Gärdenfors (1988). We don’t concern ourselves with the philosophical debate on whether true belief amounts indeed to knowledge (Sartwell, 1992), on epistemic luck (Engel, 1992), and so forth. Trenton Merricks (1995) takes issue with Alvin Plantinga’s (1993a, 1993b) concept of warrant, i.e. – as quoted from Plantinga (1993a) by Merricks (1995, p. 841) with a correction – “that, whatever precisely it is, which makes the difference between knowledge and mere true belief”. “A warranted belief, for our [i.e., Merricks’] purposes, is one that, given its content and context, has enough by way of warrant to be knowledge” (Merricks, 1995, p. 841, fn. 2), which is focal to a debate in the philosophy of knowledge, but is arguably a moot point for jurisprudence. On justified belief within the theory of justification in philosophy, cf Alston (1989), Goldman (1986), Sosa (1991), and Clay and Lehrer (1989). From the literature of AI, see, e.g., Maida (1991) and Ballim and Wilks (1991).

  70.

    Luciano Re Cecconi, born in 1948, was a well-known football player in Italy (his club was Lazio). He was shot dead in Rome in the evening of 18 January 1977 by a jeweller, Bruno Tabocchini, when Re Cecconi carried out a prank by posturing as though he was threatening to rob him. Re Cecconi did so in the mistaken belief that he, being a celebrity, would be promptly recognised and the jeweller would realise he was just joking. Re Cecconi was accompanied by another well-known football player, Pietro Ghedin, and by Giorgio Fraticcioli, the owner of a perfumery. The purpose of the visit to the jeweller’s was for Fraticcioli to deliver two perfume bottles that had been ordered from him. On the spot, it occurred to Re Cecconi, with his collar raised, to pretend that his right hand, inside a pocket of his coat, was a pistol. He exclaimed: “Datemi tutto, questa è una rapina!” (“Give me everything, this is a robbery!”). The jeweller wasn’t a football fan, and didn’t recognise Re Cecconi, all the more so as he hadn’t been looking at his visitors: Re Cecconi had shouted behind the back of the jeweller, so the latter turned and shot Re Cecconi in the chest; he died half an hour later. On falling, Re Cecconi whispered: “Era uno scherzo, era solo uno scherzo” (“It was a prank, just a prank”). Ghedin had raised his own hands, identified himself, turned towards Re Cecconi and told him to stand up as the prank was over, but then noticed that his companion was bleeding.

    The jeweller had been recently robbed twice, so he had a pistol hidden under the till, and he had already had the opportunity to use it in order to defend himself from a robbery (he had shot and wounded two robbers). Tabocchini was arrested, and tried 18 days later for unintentional excessive legitimate defence (“eccesso colposo di legittima difesa”). He was acquitted, as he had shot in putative legitimate defence (“legittima difesa putativa”). Comments about the tragedy pointed out that Re Cecconi was one of the few players of Lazio who didn’t own a firearm.

    That episode is retold at http://it.wikipedia.org/wiki/Luciano_Re_Cecconi and at http://www.laziowiki.org/wiki/La_tragedia_della_morte_di_Re_Cecconi (a site of the Lazio football club). The journalist Enzo Fiorenza published an instant book about that tragedy (Fiorenza, 1977), paradoxically with a publisher called “Centro dell’Umorismo Italia”.

  71.

    http://www.antonioanselmomartino.it/index.php?option=com_content%26task=view%26id=26%26Itemid=64

  72.

    This, like the more extensive report Olson and Wells (2002), also published at the website of Gary Wells (and accessed by me in 2011), was based on Olson’s master’s thesis. Portions of the data in Olson and Wells (2002) were presented at the 2001 Biennial Meeting of the Society for Applied Research in Memory and Cognition. Olson and Wells (2002) included a description of the experimental results. “Participants were 252 students from a large Midwestern university recruited for an experiment titled ‘Police Detective Reasoning Skills’” (ibid., p. 11).

  73.

    Cf. Culhane and Hosch (2004). Incidentally, it is worthwhile to note that Culhane et al. (2004) researched possible bias on the part of crime victims serving as jurors, whereas Culhane and Hosch (2005) researched whether there is bias against the defendant on the part of law enforcement officers serving as jurors. Scott Culhane is affiliated with the Department of Criminal Justice at the University of Wyoming. He earned a Ph.D. in legal psychology at the University of Texas at El Paso in 2005.

  74.

    In some circumstances, it may not even be obvious to a perpetrator that he needs an alibi. But then an innocent person may also not have a provable alibi. Don Vito Cascio Ferro (1862–1943) had been an anarchist agitator in Sicily in 1892; he afterwards became a mafia boss, and moved to the United States, where, welcomed by the Mano Nera criminal organisation, he allegedly was the one who introduced the practice of protection money (the racket known in Sicilian as u pizzu, i.e., the “beak” that extortionists want to wet). Having moved back to Sicily in 1904, he became well connected with persons in the institutions. When the New York City police detective Lieutenant Joe Petrosino came to Italy to investigate him, Cascio Ferro had him stalked all the way. In the evening of 12 March 1909, Petrosino was killed in Piazza Marina in Palermo. (Eventually, New York City Hall dismissed the police chief, who was considered responsible for the secret of Petrosino’s trip not being kept.) Baldassare Ceola, a Northerner who was questor (police chief of the province) in Palermo and had been questor in Milan earlier on, had Cascio Ferro arrested. Cascio Ferro appeared surprised, and did not even have an alibi. But the case was taken away from Ceola, who was moved elsewhere with the rank of prefect (governor of a province). The inquiry ended with Cascio Ferro not being prosecuted, because the evidence against him was considered insufficient. It was only in 1930 that Cascio Ferro got a life sentence for correità morale (as a moral accessory) in two other murders. While in prison, Cascio Ferro stated that he was the one who carried out Petrosino’s murder, and that this was the only time he had personally killed somebody. He claimed that on the given evening, he was hosted at dinner by a member of Parliament, that he left for a while in order to kill Petrosino, and that then he returned to the dinner. That statement was published in the New York Times on 6 July 1942. Arrigo Petacco, in his biography of Petrosino (1972), disbelieved Cascio Ferro’s claim, remarking that at the time of the murder Cascio Ferro was indeed a guest for dinner of a member of Parliament, but that the place was Burgio, not Palermo, so Cascio Ferro would not have had the time to also be in Palermo to kill Petrosino and then return to that dinner. It is usually conceded that Cascio Ferro had some role in the murder (Pallotta, 1977, pp. 24–28, 106–107). It is interesting that being a guest for dinner could be an alibi, unless a “momentary” absence is noticed. But when arrested for Petrosino’s murder, Cascio Ferro did not have an alibi.

  75.

    Olson and Wells (2002, p. 27) supplied this definition: “The term alibi provider refers to the suspect or defendant who is being questioned regarding his or her whereabouts at the time of the crime. We call a person whose statements are put forward to support the alibi an alibi corroborator. Although an alibi corroborator is ‘providing an alibi’ for the suspect in the colloquial sense, we reserve the term alibi provider for the suspect him or herself.”

  76.

    Classifiers will be discussed in Chapter 6, which is about data mining.

  77.

    Lior Rokach and Oded Maimon’s (2008) book is the first one entirely dedicated to decision trees in data mining.

  78.

    Neural networks are the subject of Section 6.1.14 in the present book.

  79.

    Fuzzy set theory uses the standard logical operators ∧ (and), ∨ (or), ∼ (not). Thus given truth values (or membership values) μ(p) for p and μ(q) for q, we can develop truth values (or membership values) for p∧q, p∨q and ∼p. These values are determined by

    (a) μ(∼p) = 1 − μ(p),

    (b) μ(p∧q) = min{μ(p), μ(q)},

    (c) μ(p∨q) = max{μ(p), μ(q)}.

    Fuzzy logic is the subject of Section 6.1.15 in this book. Fuzzy logic is a many-valued propositional logic in which each proposition P, rather than taking the value T or F, has attached a degree of truth, a value between 0 and 1: it would take the value 0 if it were definitely false and 1 if it were definitely true. Logical operators and probability theory are then combined to model reasoning with uncertainty. Fuzzy rules capture something of the uncertainty inherent in the way in which language is used to construct rules. Fuzzy logic and statistical techniques can be used in dealing with uncertain reasoning.
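
    To make the min/max/complement operators above concrete, here is a minimal sketch in Python (my own illustration, not code from any system cited in this book; the membership values are invented):

      def f_not(mu_p):
          # Fuzzy negation: membership value of not-p.
          return 1.0 - mu_p

      def f_and(mu_p, mu_q):
          # Fuzzy conjunction: membership value of (p and q), by the minimum rule.
          return min(mu_p, mu_q)

      def f_or(mu_p, mu_q):
          # Fuzzy disjunction: membership value of (p or q), by the maximum rule.
          return max(mu_p, mu_q)

      # Invented membership values, e.g. p = "the witness was close to the scene",
      # q = "the lighting was good".
      mu_p, mu_q = 0.7, 0.4
      print(f_and(mu_p, mu_q))   # 0.4
      print(f_or(mu_p, mu_q))    # 0.7
      print(f_not(mu_p))         # approximately 0.3

    Note that the min and max rules operate directly on the two degrees of membership, with no need for a joint distribution over p and q; this is one respect in which fuzzy membership values behave differently from probabilities.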

    Philipps and Sartor (1999) applied fuzzy reasoning as well as neural networks in an AI & Law context. Stranieri and Zeleznikow (2005a) remarked: “[Philipps & Sartor, 1999] argue that fuzzy logic is an ideal tool for modelling indeterminancy. But what is indeterminancy? Indeterminancy is not uncertainty. To quote the Roman maxim – Mater semper certa est, pater semper incertus – one can never be certain that a man was the real father of a child, even if he was the mother’s husband. But the concept of a father is certainly determinate.”

  80.

    Concerning vagueness, see in Sections 4.6.2.2 and 6.1.13.12 below. Fuzzy logic is discussed in Section 6.1.15.

  81.

    Xu, Kaoru, and Yoshino (1999) constructed a case-based reasoner to provide advice about contracts under the United Nations Convention on Contracts for the International Sale of Goods (CISG). They adopted a fuzzy approach to case-based representation and inference in CISG. Philipps and Sartor (1999) applied fuzzy reasoning as well as neural networks in an AI & Law context.

  82.

    Shortliffe (1976) is a book about MYCIN.

  83.

    “How to specify the appropriate reference class for determining hypothesis confirmation has been a prominent issue in the philosophy of science. Hans Reichenbach (1949, p. 374) suggested we choose the smallest class for which reliable statistics were available; Wesley Salmon, by contrast, advocated that for single cases we ought to select the broadest homogeneous class. For a discussion of these positions see Salmon (1967, pp. 91, 124)” (Allen & Pardo, 2007a, p. 112, fn. 9). In legal theory, within the Bayesianist camp Tillers (2005) discussed reference classes.

  84.

    See Allen (1997, 2001b, 2003), Allen and Lively (2003 [2004]), and Allen and Pardo (2007a, 2008).

  85.

    That same Bayesianist referee also claimed: “The Bayesian side of things is clearly in the ascendant, which the article might wish to note. And the largest reason for that, which the article might also wish to note […] due to the availability of greater computational capacity, leading to success for Bayesian net technology in modeling, and Markov Chain Monte Carlo simulation methods in optimization and learning”.

  86.

    See Allen’s paper (2008a) “Explanationism All the Way Down”; cf. Allen and Pardo (2008). Allen claims (2008a, p. 325):

    A more promising approach to understanding juridical proof is that it is a form of inference to the best explanation. Conceiving of cases as involving the relative plausibility of the parties’ claims (normally provided in story or narrative form) substantially resolves all the paradoxes and difficulties […]

  87.

    In an e-list posting from 2 July 2001 (at bayesian-evidence@vuw.ac.nz), Peter Tillers, a legal scholar who is a prominent supporter of Bayesianism in the Bayesianism debate, raised concerns about the computational tractability of real-world problems of evidence; a fuller quotation from that posting is given in the next note.

  88.

    Here is a fuller quotation from Tillers’ posting (the brackets are Tillers’ own):

    I think the paper’s authors assume a bit too readily that evidence and inference problems in the courtroom are computationally tractable. Even with HUGIN, some real-world cases – many real-world cases – will be computationally intractable (even with the use of very powerful computers) unless the cases – the problems of evidence – are simplified. And the question of how complex inference problems should be simplified to make them computationally-tractable while assuring that the resulting probability computations are still informative and not misleading is not an easy one. (If the simplification cannot be done “mechanically” or “objectively,” there is little to be said for the proposition that “experts” should decide how otherwise computationally-intractable complex [courtroom] cases and problems should be simplified.)

    My quoting offhand comments from an email, or rather from an informal posting to a scholars’ e-list, or indeed from a confidential referee report, itself calls for comment. Readers will have noticed that this book has a huge bibliography. When such a wide range of material is to be covered, it is very difficult to fully satisfy all the criteria that are clear, or even obvious, to specialists in given domains. Even with measures taken to be as precise as possible, a rather mundane problem that may occur sometimes, and that I freely acknowledge, is that citations are exemplificative, without following the historical development of the debate within the given specialism. For example, it does happen sometimes that I cite rather derivative work, while not citing some seminal work. It should be clear that this is not to deny credit to the latter. This book is already long as it is, and we cannot aim at tracing the history of all the disciplines involved.

  89.

    Ron Allen answers his own question 4, namely, “Whether there are any juridical fact-finding contexts in which Bayes’ theorem might be useful”, this way: “The Bayesian skeptic is not this skeptical. There may very well be situations involving virtually purely statistical evidential bases in which Bayes’ theorem would be a useful analytical tool. It is even possible that Bayes’ theorem might prove useful in some extremely impoverished nonstatistical evidential settings. The skeptical claim, by contrast, doubts that Bayes’ theorem is very useful for real human decision making in the typical juridical context involving a rich, highly complex set of interdependent pieces of evidence” (ibid., p. 258).
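
    To fix ideas about what using Bayes’ theorem as an analytical tool on a purely statistical evidential basis would amount to, here is a minimal worked sketch in Python (the numbers are invented and purely illustrative; this is not an example taken from Allen):

      def posterior(prior, p_e_given_h, p_e_given_not_h):
          # Bayes' theorem for a single item of evidence E and hypothesis H:
          # P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|not-H) P(not-H)]
          numerator = p_e_given_h * prior
          return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

      # Invented figures: a prior of 0.01 for the hypothesis, evidence that is
      # certain under H, and a 0.001 random-match probability under not-H.
      print(posterior(prior=0.01, p_e_given_h=1.0, p_e_given_not_h=0.001))
      # prints approximately 0.91

    The arithmetic is trivial for a single statistical item, and the posterior is visibly sensitive to the prior; the skeptical claim quoted above concerns the very different situation of a rich, highly complex set of interdependent pieces of evidence.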

  90.

    “Dempster-Shafer theory [Shafer, 1976] has been developed to handle partially specified domains. It distinguishes between uncertainty and ignorance by creating belief functions. Belief functions allow the user to bound the assignment of probabilities to certain events, rather than give events specific probabilities. Belief functions satisfy axioms that are weaker than those for probability theory. When the probabilistic values of the beliefs that a certain event occurred are exact, then the belief value is exactly the probability that the event occurred. In this case, Dempster-Shafer theory and probability theory provide the same conclusions.” (Stranieri & Zeleznikow, 2005a).
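
    As a concrete illustration of mass assignments and belief functions, here is a minimal sketch in Python of Dempster’s rule of combination (a toy example of my own, with invented masses; it is not drawn from Stranieri and Zeleznikow or from any system discussed in this book):

      from itertools import product

      def combine(m1, m2):
          # Dempster's rule of combination. m1 and m2 map frozensets
          # (focal elements, i.e. subsets of the frame) to mass values.
          combined, conflict = {}, 0.0
          for (b, x), (c, y) in product(m1.items(), m2.items()):
              a = b & c
              if a:
                  combined[a] = combined.get(a, 0.0) + x * y
              else:
                  conflict += x * y   # mass falling on the empty intersection
          # Normalise by the non-conflicting mass 1 - K.
          return {a: v / (1.0 - conflict) for a, v in combined.items()}

      def belief(m, hypothesis):
          # Bel(A) = sum of the masses of all focal elements contained in A.
          return sum(v for a, v in m.items() if a <= hypothesis)

      # Invented frame of discernment: the culprit is s1, s2 or s3.
      frame = frozenset({"s1", "s2", "s3"})
      # Source 1 gives 0.6 support to {s1}; the rest is ignorance (mass on the frame).
      m1 = {frozenset({"s1"}): 0.6, frame: 0.4}
      # Source 2 gives 0.7 support to {s1, s2}; the rest is ignorance.
      m2 = {frozenset({"s1", "s2"}): 0.7, frame: 0.3}
      m12 = combine(m1, m2)
      print(belief(m12, frozenset({"s1"})))   # approximately 0.6 (= 0.42 + 0.18)

    In this example there is no conflict (K = 0), and the mass left on the whole frame records ignorance rather than being forced onto particular outcomes; it is this ability to withhold commitment that distinguishes belief functions from a single probability distribution.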

  91.

    The relations between causality, propensity, and probability, which take up the first part of Snow and Belis (2002), were also discussed by Marianne Belis, in French, in Belis (1995).

  92.

    See Belis (1973), and Belis and Snow (1998).

  93.

    Recursion is a mode of processing by which a procedure carries out some operations and then invokes itself, until a given condition is satisfied.
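
    A minimal Python illustration of this definition (a standard textbook example, not code from Snow and Belis):

      def factorial(n):
          # The procedure carries out some operations (a comparison and a
          # multiplication) and then invokes itself, until the given
          # condition n <= 1 is satisfied.
          if n <= 1:
              return 1
          return n * factorial(n - 1)

      print(factorial(5))   # prints 120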

  94.

    In his formative period in the 1990s, Eyal Shimony’s disciplinary affiliation within AI research was closely associated with belief networks and uncertainty as well as with abductive reasoning (e.g., Charniak & Shimony, 1990, 1994; Shimony & Charniak, 1990; Shimony, 1993; Santos & Shimony, 1994). These are subjects he is still pursuing in his current research (e.g., Shimony & Domshlak, 2003; Domshlak & Shimony, 2004).

  95.

    Judea Pearl began his paper (2001) by claiming (ibid., p. 19, his brackets):

    I turned Bayesian in 1971, as soon as I began reading Savage’s monograph The Foundations of Statistical Inference [Savage, 1962]. The arguments were unassailable: (i) It is plain silly to ignore what we know, (ii) It is natural and useful to cast what we know in the language of probabilities, and (iii) If our subjective probabilities are erroneous, their impact will get washed out in due time, as the number of observations increases.

    Thirty years later, I am still a devout Bayesian in the sense of (i), but I now doubt the wisdom of (ii) and I know that, in general, (iii) is false. Like most Bayesians, I believe that the knowledge we carry in our skulls, be its origin experience, schooling or hearsay, is an invaluable resource in all human activity, and that combining this knowledge with empirical data is the key to scientific enquiry and intelligent behavior. Thus, in this broad sense, I am still a Bayesian. However, in order to be combined with data, our knowledge must first be cast in some formal language, and what I have come to realize in the past ten years is that the language of probability is not suitable for the task; the bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships. Specifically, the building blocks of our scientific and everyday knowledge are elementary facts such as “mud does not cause rain” and “symptoms do not cause disease” and those facts, strangely enough, cannot be expressed in the vocabulary of probability calculus. It is for this reason that I consider myself only a half-Bayesian.

  96.

    See in Section 5.1.2 below. “The distinction between the structure of proof and a theory of evidence is simple. The structure of proof determines what must be proven. In the conventional [probabilistic] theory [which Allen attacks] this is elements to a predetermined probability, and in the relative plausibility theory [which Ron Allen approves of] that one story or set of stories is more plausible than its competitors (and in criminal cases that there is no plausible competitor). A theory of evidence indicates how this is done, what counts as evidence and perhaps how it is processed” (Allen, 1994, p. 606). The central thesis of Allen (1991) was summarised in Allen’s paper ‘Explanationism All the Way Down’ (2008a, p. 325) as: “A more promising approach to understanding juridical proof is that it is a form of inference to the best explanation. Conceiving of cases as involving the relative plausibility of the parties’ claims (normally provided in story or narrative form) substantially resolves all the paradoxes and difficulties  […]”.  In Allen (2008b), the relationship between juridical proof and inference to the best explanation (IBE) was thoroughly examined.

References

  • Adderley, R., & Musgrove, P. (2003a). Modus operandi modeling of group offending: A case study. Section 6.12 In J. Mena (Ed.), Investigative data mining for security and criminal detection (pp. 179–195). Amsterdam & Boston: Butterworth-Heinemann (of Elsevier).

  • Alchourrón, C. E., Gärdenfors, P., & Makinson, D. (1985). On the logic of theory change: Partial meet contraction and revision functions. The Journal of Symbolic Logic, 50, 510–530.

  • Allen, J. F. (1983a). Recognizing intentions from natural language utterances. Chapter 2 In M. Brady & R. C. Berwick (Eds.), Computational models of discourse (pp. 108–166). Cambridge, MA: MIT Press.

  • Allen, R., & Redmayne, M. (Eds.). (1997). Bayesianism and Juridical Proof, special issue, The International Journal of Evidence and Proof, 1, 253–360. (London: Blackstone)

  • Allen, R. J. (1991). The nature of juridical proof. Cardozo Law Review, 13, 373–422.

  • Allen, R. J. (1994). Factual ambiguity and a theory of evidence. Northwestern University Law Review, 88, 604–640.

  • Allen, R. J. (1997). Rationality, algorithms and juridical proof: A preliminary inquiry. International Journal of Evidence and Proof, 1, 254–275.

  • Allen, R. J. (2001b). Clarifying the burden of persuasion and Bayesian decision rules: A response to Professor Kaye. International Journal of Evidence and Proof, 4, 246–259.

  • Allen, R. J. (2003). The error of expected loss minimization. Law, Probability & Risk, 2, 1–7.

  • Allen, R. J. (2008b). Juridical proof and the best explanation. Law & Philosophy, 27, 223–268.

  • Allen, R. J., & Lively, S. (2003 [2004]). Burdens of persuasion in civil cases: Algorithms v. explanations. MSU Law Review, 2003, 893–944.

  • Allen, R. J., & Pardo, M. S. (2007a). The problematic value of mathematical models of evidence. Journal of Legal Studies, 36, 107–140.

  • Allen, R. J., & Pardo, M. S. (2008). Juridical proof and the best explanation. Law & Philosophy, 27, 223–268.

  • Alston, W. P. (1989). Epistemic justification. Ithaca, NY: Cornell University Press.

  • Appelbaum, P. S., & Kemp, K. N. (1982). The evolution of commitment law in the nineteenth century: A reinterpretation. Law and Human Behavior, 6(3/4), 343–354.

  • Åqvist, L. (1992). Towards a logical theory of legal evidence: Semantic analysis of the Bolding-Ekelöf degrees of evidential strength. In A. A. Martino (Ed.), Expert systems in law (pp. 67–86). Amsterdam: North-Holland.

  • Asher, N., & Sablayrolles, P. (1995). A typology and discourse semantics for motion verbs and spatial PPs in French. Journal of Semantics, 12(2), 163–209.

  • Ballim, A., & Wilks, Y. (1991). Artificial believers: The ascription of belief. Hillsdale, NJ: Erlbaum.

  • Baron, J. (1994). Nonconsequentialist decisions. With open peer commentary and the author’s response. Behavioral and Brain Sciences, 17(1), 1–42.

  • Belis, M. (1973). On the causal structure of random processes. In R. J. Bogdan & I. Niiniluoto (Eds.), Logic, language, and probability (pp. 65–77). Dordrecht, The Netherlands: Reidel (now Springer).

  • Belis, M. (1995). Causalité, propension, probabilité. Intellectica, 1995/2, 21, 199–231. http://www.intellectica.org/archives/n21/21_11_Belis.pdf

  • Bell, B. E., & Loftus, E. F. (1988). Degree of detail of eyewitness testimony and mock juror judgments. Journal of Applied Social Psychology, 18, 1171–1192.

  • Bell, B. E., & Loftus, E. F. (1989). Trivial persuasion in the courtroom: The power of (a few) minor details. Journal of Personality and Social Psychology, 56, 669–679.

  • Benferhat, S., Dubois, D., & Prade, H. (2001). A computational model for belief change. In M. A. Williams & H. Rott (Eds.), Frontiers in belief revision (pp. 109–134). (Applied Logic Series, 22). Dordrecht: Kluwer.

  • Ben-Menahem, Y. (1990). The Inference to the best explanation. Erkenntnis, 33(3), 319–344.

  • Bennett, B. (1994). Spatial reasoning with propositional logics. In J. Doyle, E. Sandewall, & P. Torasso (Eds.), Principles of Knowledge Representation and reasoning: Proceedings of the fourth international conference (KR94). San Francisco: Morgan Kaufmann.

  • Bennett, W. L., & Feldman, M. S. (1981). Reconstructing reality in the courtroom: Justice and judgement in American culture. New Brunswick, NJ: Rutgers University Press; London: Tavistock.

  • Bennun, M. E. (1996). Computerizing criminal law: Problems of evidence, liability and mens rea. Information & Communications Technology Law, 5(1), 29–44.

  • Blackman, S. J. (1988). Expert systems in case-based law: The rule against hearsay. LL.M. thesis, Faculty of Law, University of British Columbia, Vancouver, BC.

  • Bolding, P. O. (1960). Aspects of the burden of proof. Scandinavian Studies in Law, 4, 9–28.

  • BonJour, L. (1998). The elements of coherentism. In L. M. Alcoff (Ed.), Epistemology: The big questions (pp. 210–231). Oxford: Blackwell. (Page numbers in the citation refer to the Alcoff volume.) (Originally in: BonJour, L. (1985). The structure of empirical knowledge (pp. 87–110). Cambridge, MA: Harvard University Press.)

  • Brainerd, C. J., & Reyna, V. F. (2004). Fuzzy-trace theory and memory development. Developmental Review, 24, 396–439.

  • Byrne, M. D. (1995). The convergence of explanatory coherence and the story model: A case study in juror decision. In J. D. Moore & J. F. Lehman (Eds.), Proceedings of the 17th annual conference of the cognitive science society (pp. 539–543). Hillsdale, NJ: Lawrence Erlbaum.

  • Cabras, C. (1996). Un mostro di carta. In C. Cabras (Ed.), Psicologia della prova (pp. 233–258). Milan: Giuffrè.

  • Callan, R. (1999). The essence of neural networks. Hemel Hempstead: Prentice Hall Europe.

  • Callen, C. R. (2002). Othello could not optimize: Economics, hearsay, and less adversary systems. In M. MacCrimmon & P. Tillers (Eds.), The dynamics of judicial proof: Computation, logic, and common sense (pp. 437–453). (Studies in Fuzziness and Soft Computing, Vol. 94). Heidelberg: Physica-Verlag.

  • Camurri, A., & Ferrentino, P. (1999). Interactive environments for music and multimedia. Multimedia Systems, 7(1), 32–47.

  • Cassinis, R., Morelli, L. M., & Nissan, E. (2007). Emulation of human feelings and behaviours in an animated artwork. International Journal on Artificial Intelligence Tools, 16(2), 291–375. Full-page contents of the article on p. 158.

  • Castelfranchi, C., & Falcone, R. (1998). Towards a theory of delegation for agent-based systems. Robotics and Autonomous Systems, 24, 141–157.

  • Castelfranchi, C., & Falcone, R. (2010). Trust theory: A socio-cognitive and computational approach. Chichester: Wiley.

  • Chaiken, S. (1987). The heuristic model of persuasion. In M. P. Zanna, J. M. Olson, & C. P. Herman (Eds.), Social influence: The Ontario symposium (Vol. 5, pp. 3–39). Hillsdale, NJ: Erlbaum.

  • Chaiken, S., Liberman, A., & Eagly, A. H. (1989). Heuristic and systematic information processing within and beyond the persuasion context. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 212–252). New York: Guilford Press.

  • Chaiken, S., Wood, W., & Eagly, A. H. (1996). Principles of persuasion. In E. T. Higgins & A. Kruglanski (Eds.), Social psychology: Handbook of basic mechanisms and processes (pp. 702–742). New York: Guilford Press.

  • Charniak, E., & Shimony, S. E. (1994). Cost-based abduction and MAP explanation. Artificial Intelligence, 66, 345–374.

  • Cialdini, R. (1993). Influence: Science and practice (3rd ed.). New York: HarperCollins.

  • Ciampolini, A., & Torroni, P. (2004). Using abductive logic agents for modelling judicial evaluation of criminal evidence. Applied Artificial Intelligence, 18(3/4), 251–275.

  • Clark, R. A., & Delia, J. G. (1976). The development of functional persuasive skills in childhood and early adolescence. Child Development, 47, 1008–1014.

  • Clay, M., & Lehrer, K. (Eds.). (1989). Knowledge and skepticism. Boulder, CO: Westview Press.

  • Cohen, P. R., & Levesque, H. J. (1990). Intention is choice with commitment. Artificial Intelligence, 42(2/3), 213–261.

  • Colby, K. M. (1975). Artificial paranoia. Oxford: Pergamon Press.

  • Colby, K. M. (1981). Modeling a paranoid mind. The Behavioral and Brain Sciences, 4(4), 515–560.

  • Colwell, K., Hiscock-Anisman, C., Memon, A., Woods, D., & Yaeger, H. (2006). Strategies of impression management among deceivers and truth tellers: How liars attempt to convince. American Journal of Forensic Psychology, 24(2), 31–38.

  • Conte, R., & Paolucci, M. (2002). Reputation in artificial societies. Social beliefs for social order. Dordrecht: Kluwer.

  • Culhane, S. E., & Hosch, H. M. (2004). An alibi witness’s influence on juror’s decision making. Journal of Applied Social Psychology, 34, 1604–1616.

  • Culhane, S. E., & Hosch, H. M. (2005). Law enforcement officers serving as jurors: Guilty because charged? Psychology, Crime and Law, 11, 305–313.

  • Deffenbacher, K. A., & Loftus, E. F. (1982). Do jurors share a common understanding concerning eyewitness behaviour? Law and Human Behavior, 6, 15–29.

  • de Kleer, J. (1986). An assumption-based TMS. Artificial Intelligence, 28, 127–162.

  • Dershowitz, A. M. (1986). Reversal of fortune: Inside the von Bülow case. New York: Random House.

  • Dolnik, L., Case, T. I., & Williams, K. D. (2003). Stealing thunder as a courtroom tactic revisited: Processes and boundaries. Law and Human Behavior, 27(3), 267–287.

  • Doyle, J. (1979). A truth maintenance system. Artificial Intelligence, 12, 231–272.

  • Dragoni, A. F., & Animali, S. (2003). Maximal consistency, theory of evidence, and Bayesian conditioning in the investigative domain. Cybernetics and Systems, 34(6/7), 419–465.

  • Dragoni, A. F., Giorgini, P., & Nissan, E. (2001). Distributed belief revision as applied within a descriptive model of jury deliberations. In a special issue on “Artificial Intelligence and Law”, Information & Communications Technology Law, 10(1), 53–65.

  • Dyer, M. G. (1983a). In-depth understanding: A computer model of integrated processing of narrative comprehension. Cambridge, MA: The MIT Press.

  • Dyer, M. G. (1983b). The role of affect in narratives. Cognitive Science, 7, 211–242.

  • Dyer, M. G. (1987). Emotions and their computations: Three computer models. Cognition and Emotion, 1(3), 323–347.

  • Edwards, D., & Potter, J. (1995). Attribution. Chapter 4 In R. Harré & P. Stearns (Eds.), Discursive psychology in practice (pp. 87–119). London and Thousand Oaks, CA: Sage.

  • Einhorn, H. J., & Hogarth, R. M. (1985). Ambiguity and uncertainty in probabilistic inference. Psychological Review, 92, 433–461.

  • Ekelöf, P. O. (1964). Free evaluation of evidence. Scandinavian Studies in Law (Faculty of Law, Stockholm University), 8, 45–66.

  • Engel, M. (1992). Is epistemic luck compatible with knowledge? Southern Journal of Philosophy, 30, 59–75.

  • Eshghi, K., & Kowalski, R. (1989). Abduction compared with negation by failure. In G. Levi & M. Martelli (Eds.), Sixth international conference on logic programming (pp. 234–254). Cambridge, MA: MIT Press.

  • Faught, W. S. (1978). Conversational action patterns in dialogs. In D. A. Waterman & F. Hayes-Roth (Eds.), Pattern-directed inference systems (pp. 383–397). Orlando, FL: Academic.

  • Fenton, N. E., & Neil, M. (2000). The jury observation fallacy and the use of Bayesian networks to present probabilistic legal arguments. Mathematics Today: Bulletin of the Institute of Mathematics and its Application (IMA), 36(6), 180–187. Paper posted on the Web at http://www.agena.co.uk/resources.html

  • Festinger, L. (1957). A theory of cognitive dissonance. Evanston, IL: Row Peterson. Reissues of the same edition, Stanford, California: Stanford University Press, 1962, 1970; London: Tavistock Publications, 1962. Revised and enlarged German translation: Theorie der kognitiven Dissonanz (1978).

  • Fikes, R. E., & Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 189–208.

  • Fillmore, C. J. (1968). The case for case. In E. Bach & R. T. Harms (Eds.), Universals in linguistic theory. New York: Holt, Rinehart and Winston.

  • Findlay, M., & Duff, P. (Eds.). (1988). The jury under attack. London: Butterworths.

  • Fiorenza, E. (1977). Re Cecconi: La morte assurda. (“Instant book” series.) Rome: Editore Centro dell’Umorismo Italia.

  • Fox, R., & Josephson, J. R. (1994). Software: PEIRCE-IGTT. In J. R. Josephson & S. G. Josephson (Eds.), Abductive inference: Computation, philosophy, technology (pp. 215–223). Cambridge: Cambridge University Press.

  • Fung, T. H., & Kowalski, R. (1997). The IFF proof procedure for abductive logic programming. Journal of Logic Programming, 33(2), 151–165.

  • Gabbert, F., Memon, A., & Allan, K. (2003). Memory conformity: Can eyewitnesses influence each other’s memories for an event? Applied Cognitive Psychology, 17, 533–544.

  • Gabbert, F., Memon, A., Allan, K., & Wright, D. (2004). Say it to my face: Examining the effects of socially encountered misinformation. Legal and Criminological Psychology, 9, 215–227.

  • Gaines, D. M., Brown, D. C., & Doyle, J. K. (1996). A computer simulation model of juror decision making. Expert Systems With Applications, 11(1), 13–28.

  • Gärdenfors, P. (1988). Knowledge in flux: Modeling the dynamics of epistemic states. Cambridge, MA: MIT Press.

  • Garry, M., Manning, C., Loftus, E. F., & Sherman, S. J. (1996). Imagination inflation: Imagining a childhood event inflates confidence that it occurred. Psychonomic Bulletin and Review, 3, 208–214. Posted on the Web at: http://faculty.washington.edu/eloftus/Articles/Imagine.htm

  • Garven, S., Wood, J., Malpass, R., & Shaw, III, J. (1998). More than suggestion: The effect of interviewing techniques from the McMartin Preschool case. Journal of Applied Psychology, 83, 347–359.

  • Gilbert, D. T., & Malone, D. S. (1995). The correspondence bias. Psychological Bulletin, 117, 21–38.

  • Gilman, S. L. (1984). Jews and mental illness: Medical metaphors, anti-Semitism and the Jewish response. Journal of the History of the Behavioral Sciences, 20, 150–159. Reprinted in his Disease and Representation: Images of Illness from Madness to AIDS. Ithaca, NY: Cornell University Press. (Also in Italian, Bologna: Il Mulino, 1993.)

  • Goldman, A. I. (1986). Epistemology and cognition. Cambridge, MA: Harvard University Press.

  • Goldsmith, R. W. (1989). Potentialities for practical, instructional and scientific purposes of computer aids to evaluating judicial evidence in terms of an evidentiary value model. In A. A. Martino (Ed.), Pre-proceedings of the third international conference on “Logica, Informatica, Diritto: Legal Expert Systems”, Florence, 1989 (2 vols. + Appendix) (Vol. 1, pp. 317–329). Florence: Istituto per la Documentazione Giuridica, Consiglio Nazionale delle Ricerche.

  • Grant, J., Kraus, S., & Perlis, D. (2005). A logic-based model of intention formation and action for multi-agent subcontracting. Artificial Intelligence, 163(2), 163–201.

  • Griffiths, P. E. (2003). Emotions. Chapter 12 In S. P. Stich & T. A. Warfield (Eds.), The Blackwell guide to philosophy of mind (pp. 288–308). Oxford: Blackwell.

  • Grosz, B., & Kraus, S. (1996). Collaborative plans for complex group action. Artificial Intelligence, 86(2), 269–357.

  • Gulotta, G. (2004). Differenti tattiche persuasive. In G. Gulotta & L. Puddu (Eds.), La persuasione forense: strategie e tattiche (pp. 85–148). Milan: Giuffrè, with a consolidated bibliography on pp. 257–266.

  • Halliwell, J., Keppens, J., & Shen, Q. (2003). Linguistic Bayesian Networks for reasoning with subjective probabilities in forensic statistics. In G. Sartor (Ed.), Proceedings of the ninth International Conference on Artificial Intelligence and Law (ICAIL 2003), Edinburgh, Scotland, 24–28 June 2003 (pp. 42–50). New York: ACM Press.

  • Hamkins, J. D., & Löwe, B. (2008). The modal logic of forcing. Transactions of the American Mathematical Society, 360, 1793–1817.

  • Han, J., & Kamber, M. (2001). Data mining: Concepts and techniques. San Francisco: Morgan Kaufmann.

  • Harley, E. M., Carlsen, K. A., & Loftus, G. R. (2004). The “saw-it-all-along” effect: Demonstrations of visual hindsight bias. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 960–968.

  • Harman, G. H. (1965). Inference to the best explanation. Philosophical Review, 74(1), 88–95.

  • Harman, G. H. (1968). Enumerative induction as inference to the best explanation. Journal of Philosophy, 65(18), 529–533.

  • Harris, M. D. (1985). Introduction to natural language processing. Reston, VA: Reston Publ. Co.

  • Hastie, R. (Ed.). (1993). Inside the juror: The psychology of juror decision making. (Cambridge Series on Judgment and Decision Making). Cambridge: Cambridge University Press, 1993 (hard cover), 1994 (paperback).

  • Hastie, R., Penrod, S. D., & Pennington, N. (1983). Inside the jury. Cambridge, MA: Harvard University Press.

  • Henrion, M., Provan, G., Del Favero, B., & Sanders, G. (1994). An experimental comparison of numerical and qualitative probabilistic reasoning. In R. Lopez de Mántaras & D. Poole (Eds.), Uncertainty in artificial intelligence: Proceedings of the tenth conference, July 1994 (pp. 319–326). San Mateo, CA: Morgan Kaufmann.

  • Hill, C., Memon, A., & McGeorge, P. (2008). The role of confirmation bias in suspect interviews: A systematic evaluation. Legal & Criminological Psychology, 13, 357–371.

  • Ho, D. (1998). Indigenous psychologies: Asian perspectives. Journal of Cross-Cultural Psychology, 29(1), 88–103.

  • Holstein, J. A. (1985). Jurors’ interpretation and jury decision making. Law and Human Behavior, 9, 83–100.

  • Howe, M., Candel, I., Otgaar, H., Malone, C., & Wimmer, M. C. (2010). Valence and the development of immediate and long-term false memory illusions. Memory, 18, 58–75. http://www.personeel.unimaas.nl/henry.otgaar/HoweOtgaar%20MEMORY%202010.pdf

  • Howe, M. L. (2005). Children (but not adults) can inhibit false memories. Psychological Science, 16, 927–931.

  • Izard, C. E. (1977). Human emotions. (“Emotions, Personality, and Psychotherapy” Series). New York: Plenum.

  • Izard, C. E. (1982). Comments on emotion and cognition: Can there be a working relationship?. In M. S. Clark & S. T. Fiske (Eds.), Affect and cognition. Hillsdale, NJ: Lawrence Erlbaum.

  • Jackson, B. S. (1998a). Bentham, truth and the semiotics of law. In M. D. A. Freeman (Ed.), Legal theory at the end of the millennium (pp. 493–531). (Current Legal Problems 1998, Vol. 51). Oxford: Oxford University Press.

  • Jackson, B. S. (1998b). On the atemporality of legal time. In F. Ost & M. van Hoecke (Eds.), Temps et Droit. Le droit a t il pour vocation de durer? (pp. 225–246). Brussels: E. Bruylant.

  • Jackson, B. S. (1998c). Truth or proof?: The criminal verdict. International Journal for the Semiotics of Law, 11(3), 227–273.

  • Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114, 3–28.

  • Jøsang, A., & Bondi, V. A. (2000). Legal reasoning with subjective logic. Artificial Intelligence and Law, 8, 289–315.

  • Josephson, J. R., & Josephson, S. G. (Eds.). (1994). Abductive inference: Computation, philosophy, technology. Cambridge: Cambridge University Press.

  • Kadane, J., & Schum, D. (1996). A probabilistic analysis of the Sacco and Vanzetti evidence. New York: Wiley.

  • Kakas, T., Kowalski, R., & Toni, F. (1992). Abductive logic programming. Journal of Logic and Computation, 2(6), 719–770.

  • Kakas, T., Kowalski, R., & Toni, F. (1998). The role of logic programming in abduction. In D. Gabbay, C. J. Hogger, & J. A. Robinson (Eds.), Handbook of logic in artificial intelligence and programming (Vol. 5, pp. 235–324). Oxford: Oxford University Press.

  • Kassin, S. M., & McNall, K. (1991). Police interrogations and confessions: Communicating promises and threats by pragmatic implication. Law and Human Behavior, 15, 233–251.

  • Kassin, S. M., Goldstein, C. J., & Savitsky, K. (2003). Behavioral confirmation in the interrogation room: On the dangers of presuming guilt. Law and Human Behavior, 27, 187–203.

  • Kitayama, Sh., & Markus, H. R. (Eds.). (1994). Emotion and culture: Empirical studies of mutual influence. Washington, DC: American Psychological Association.

  • Kraus, S. (1996). An overview of incentive contracting. Artificial Intelligence, 83(2), 297–346.

  • Lagerwerf, L. (1998). Causal connectives have presuppositions: Effects on coherence and discourse structure. Doctoral dissertation, Netherlands Graduate School of Linguistics, Vol. 10. The Hague, The Netherlands: Holland Academic.

  • Lane, S. M., & Zaragoza, M. S. (2007). A little elaboration goes a long way: The role of generation in eyewitness suggestibility. Memory & Cognition, 35(6), 125–126.

  • Leippe, M. R. (1985). The influence of eyewitness nonidentifications on mock jurors’ judgments of a court case. Journal of Applied Social Psychology, 15, 656–672.

  • Levitt, T. S., & Laskey, K. B. (2002). Computational inference for evidential reasoning in support of judicial proof. In M. MacCrimmon & P. Tillers (Eds.), The dynamics of judicial proof: Computation, logic, and common sense (pp. 345–383). (Studies in Fuzziness and Soft Computing, Vol. 94). Heidelberg, Germany: Physica-Verlag.

  • Linde, C. (1993). Life stories: The creation of coherence. New York: Oxford University Press.

  • Lindsay, R. C. L., Lim, R., Marando, L., & Cully, D. (1986). Mock-juror evaluations of eyewitness testimony: A test of metamemory hypotheses. Journal of Applied Social Psychology, 15, 447–459.

  • Lipton, P. (2004). Inference to the best explanation (2nd ed.) (revised, augmented). London & New York: Routledge.

  • Lipton, P. (2007). Alien abduction: Inference to the best explanation and the management of testimony. Episteme, 4(3), 238–251.

  • Loftus, E. F. (1974). Reconstructing memory: The incredible witness. Psychology Today, 8, 116–119.

  • Loftus, E. F. (1975). Leading questions and the eye witness report. Cognitive Psychology, 7, 560–572.

  • Loftus, E. F. (1976). Unconscious transference in eyewitness identification. Law and Psychology Review, 2, 93–98.

  • Loftus, E. F. (1979). Eyewitness testimony. Cambridge, MA: Harvard University Press. (Revised edn.: 1996).

  • Loftus, E. F. (1981a). Eyewitness testimony: Psychological research and legal thought. In N. Morris & M. Tonry (Eds.), Crime and justice 3. Chicago: University of Chicago Press.

  • Loftus, E. F. (1981b). Mentalmorphosis: Alteration in memory produced by the bonding of new information to old. In J. Long & A. Baddeley (Eds.), Attention and performance IX (pp. 417–434). Hillsdale, NJ: Lawrence Erlbaum Associates.

  • Loftus, E. F. (1983). Silence is not golden. American Psychologist, 38, 9–15.

  • Loftus, E. F. (1986a). Experimental psychologist as advocate or impartial educator. Law and Human Behavior, 10, 63–78.

  • Loftus, E. F. (1986b). Ten years in the life of an expert witness. Law and Human Behavior, 10, 241–263.

  • Loftus, E. F. (1991). Resolving legal questions with psychological data. American Psychologist, 46, 1046–1048.

  • Loftus, E. F. (1993a). The reality of repressed memories. American Psychologist, 48, 518–537. http://faculty.washington.edu/eloftus/Articles/lof93.htm

  • Loftus, E. F. (1993b). Psychologists in the eyewitness world. American Psychologist, 48, 550–552.

  • Loftus, E. F. (Sept. 1997). Creating false memories. Scientific American, 277, 70–75. http://faculty.washington.edu/eloftus/Articles/sciam.htm

  • Loftus, E. F. (1998). The price of bad memories. Skeptical Inquirer, 22, 23–24.

  • Loftus, E. F. (2002). Memory faults and fixes. Issues in Science and Technology, 18(4), National Academies of Science, 2002, pp. 41–50. http://faculty.washington.edu/eloftus/Articles/IssuesInScienceTechnology02%20vol%2018.pdf

  • Loftus, E. F. (2003b). Make-believe memories. American Psychologist, 58(11), 867–873. Posted at: http://faculty.washington.edu/eloftus/Articles/AmerPsychAward+ArticlePDF03%20(2).pdf

  • Loftus, E. F. (2005). Planting misinformation in the human mind: A 30-year investigation of the malleability of memory. Learning and Memory, 12, 361–366.

  • Loftus, E. F., Donders, K., Hoffman, H. G., & Schooler, J. W. (1989). Creating new memories that are quickly accessed and confidently held. Memory and Cognition, 17, 607–616.

  • Loftus, E. F., & Doyle, J. M. (1997). Eyewitness testimony: Civil and criminal. Charlottesville, VA: Lexis Law Publishing.

  • Loftus, E. F., & Greene, E. (1980). Warning: Even memory for faces may be contagious. Law and Human Behavior, 4, 323–334.

  • Loftus, E. F., & Hoffman, H. G. (1989). Misinformation and memory: The creation of new memories. Journal of Experimental Psychology: General, 118, 100–104. http://faculty.washington.edu/eloftus/Articles/hoff.htm

  • Loftus, E. F., & Ketcham, K. (1994). The Myth of repressed memory: False memories and allegations of sexual abuse. New York: St. Martin’s Press.

  • Loftus, E. F., & Loftus, G. R. (1980). On the permanence of stored information in the brain. American Psychologist, 35, 409–420.

  • Loftus, E. F., Loftus, G. R., & Messo, J. (1987). Some facts about ‘weapon focus’. Law and Human Behavior, 11, 55–62.

  • Loftus, E. F., Miller, D. G., & Burns, H. J. (1978). Semantic integration of verbal information into a visual memory. Journal of Experimental Psychology: Human Learning and Memory, 4, 19–31.

  • Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behaviour, 13, 585–589.

  • Loftus, E. F., & Pickrell, J. E. (1995). The formation of false memories. Psychiatric Annals, 25(12), 720–725.

  • Loftus, E. F., & Rosenwald, L. A. (1993) Buried memories, shattered lives. American Bar Association Journal, 79, 70–73.

  • Loftus, E. F., Weingardt, J. W., & Wagenaar, W. A. (1985). The fate of memory: Comment on McCloskey and Zaragoza. Journal of Experimental Psychology: General, 114, 375–380.

  • Luger, G. F., & Stubblefield, W. A. (1998). Artificial intelligence: Structures and strategies for complex problem solving (3rd ed.). Reading, MA: Addison Wesley Longman.

  • Luus, C. A. E., & Wells, G. L. (1994). The malleability of eyewitness confidence: Co witness and perseverance effects. Journal of Applied Psychology, 79, 714–723.

  • MacCrimmon, M. (1989). Facts, stories and the hearsay rule. In A. A. Martino (Ed.), Pre-proceedings of the third international conference on “Logica, Informatica, Diritto: Legal Expert Systems”, Florence, 1989 (2 vols. + Appendix) (Vol. 1, pp. 461–475). Florence: Istituto per la Documentazione Giuridica, Consiglio Nazionale delle Ricerche.

  • MacCrimmon, M., & Tillers, P. (Eds.). (2002). The dynamics of judicial proof: Computation, logic, and common sense. (Studies in Fuzziness and Soft Computing, Vol. 94). Heidelberg: Physica-Verlag.

  • Maida, A. S. (1991). Maintaining mental models of agents who have existential misconceptions. Artificial Intelligence, 50, 331–383.

  • Manschreck, T. C. (1983). Modeling a paranoid mind: A narrower interpretation of the results. [A critique of Colby (1981).] The Behavioral and Brain Sciences, 6(2), 340–341. [Answered by Colby (1983).]

  • Martino, A. A. (1997). Quale logica per la politica. In A. A. Martino (Ed.), Logica delle norme (pp. 5–21). Pisa, Italy: SEU: Servizio Editoriale Universitario di Pisa, on behalf of Università degli Studi di Pisa, Facoltà di Scienze Politiche. English translation: A logic for politics. Accessible online at a site of his publications: http://www.antonioanselmomartino.it/index.php?option=com_content%26task=view%26id=26%26Itemid=64

  • Martins, J. P. (1990). The truth, the whole truth, and nothing but the truth: An indexed bibliography to the literature of truth maintenance systems. AI Magazine, 11(5), 7–25.

  • Martins, J. P., & Shapiro, S. C. (1988). A model for belief revision. Artificial Intelligence, 35, 25–79.

  • Mazzoni, G. A. L., Loftus, E. F., & Kirsch, I. (2001). Changing beliefs about implausible autobiographical events: A little plausibility goes a long way. Journal of Experimental Psychology: Applied, 7, 51–59. Posted at: http://faculty.washington.edu/eloftus/Articles/mazzloft.htm

  • McAllister, H. A., & Bregman, N. J. (1989). Juror underutilization of eyewitness nonidentifications: A test of the disconfirmed expectancy explanation. Journal of Applied Social Psychology, 19, 20–29.

  • McCabe, S. (1988). Is jury research dead? In M. Findlay & P. Duff (Eds.), The Jury under attack (pp. 27–39). London: Butterworths.

  • McClelland, J. L., & Rumelhart, D. E. (1989). Explorations in parallel distributed processing. Cambridge, MA: The MIT Press.

  • McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59(1), 1–16.

  • McNally, R. J. (2003). Remembering trauma. Cambridge, MA: Harvard University Press.

  • Meade, M. L., & Roediger, H. L., III. (2002). Explorations in the social contagion of memory. Memory & Cognition, 30, 995–1009.

  • Meissner, C., & Kassin, S. (2002). He’s guilty: Investigator bias in judgements of truth and deception. Law and Human Behavior, 26, 469–480.

  • Meldman, J. A. (1975). A preliminary study in computer-aided legal analysis. Dissertation. Technical Report MAC TR 157. Cambridge, MA: Massachusetts Institute of Technology.

  • Memon, A., & Wright, D. (1999). The search for John Doe 2: Eyewitness testimony and the Oklahoma bombing. The Psychologist, 12, 292–295.

  • Merricks, T. (1995). Warrant entails truth. Philosophy and Phenomenological Research, 55(4), 841–855.

  • Meudell, P. R., Hitch, G. J., & Boyle, M. M. (1995). Collaboration in recall: Do pairs of people cross cue each other to produce new memories? The Quarterly Journal of Experimental Psychology, 48a, 141–152.

  • Michon, J. A., & Pakes, F. J. (1995). Judicial decision-making: A theoretical perspective. Chapter 6.2 In R. Bull & D. Carson (Eds.), Handbook of psychology in legal contexts (pp. 509–525). Chichester: Wiley.

  • Monahan, J., & Loftus, E. F. (1982). The psychology of law. Annual Review of Psychology, 33, 441–475.

  • Moulin, B. (1992). A conceptual graph approach for representing temporal information in discourse. Knowledge-Based Systems, 5(3), 183–192.

  • Musatti, C. L. (1931). Elementi di psicologia della testimonianza (1st ed.). Padova, Italy: CEDAM, 1931. Second edition, with comments added by the author, Padova: Liviana Editrice, 1989.

  • Nebel, B. (1994). Base revision operations and schemes: semantics, representation, and complexity. In A. G. Cohn (Ed.), Proceedings of the 11th European conference on artificial intelligence. New York: Wiley.

  • Nicoloff, F. (1989). Threats and illocutions. Journal of Pragmatics, 13(4), 501–522.

  • Nissan, E. (1995a). Meanings, expression, and prototypes. Pragmatics & Cognition, 3(2), 317–364.

  • Nissan, E. (1995b). SEPPHORIS: An augmented hypergraph-grammar representation for events, stipulations, and legal prescriptions. Law, Computers, and Artificial Intelligence, 4(1), 33–77.

  • Nissan, E. (1996). From ALIBI to COLUMBUS. In J. Hulstijn & A. Nijholt (Eds.), Automatic interpretation and generation of verbal humor: Proceedings of the 12th Twente workshop on language technology, Twente (pp. 69–85). Enschede, The Netherlands: University of Twente.

  • Nissan, E. (1997a). Notions of place: A few considerations. In A. A. Martino (Ed.), Logica delle norme (pp. 256–302). Pisa, Italy: SEU.

  • Nissan, E. (1997b). Notions of place, II. In A. A. Martino (Ed.), Logica delle norme (pp. 303–361). Pisa, Italy: SEU.

  • Nissan, E. (1997c). Emotion, culture, communication. Pragmatics & Cognition, 5(2), 355–369.

  • Nissan, E. (2000a). Artificial intelligence and criminal evidence: A few topics. In C. M. Breur, M. M. Kommer, J. F. Nijboer, & J. M. Reijntjes (Eds.), New trends in criminal investigation and evidence, Vol. 2 = Proceedings of the second world conference on new trends in criminal investigation and evidence, Amsterdam, 10–15 December 1999 (pp. 495–521). Antwerp, Belgium: Intersentia.

  • Nissan, E. (2003f). Review of Hastie (1993). Cybernetics and Systems, 34(6/7), 551–558.

  • Nissan, E. (2004). Legal evidence scholarship meets artificial intelligence. [Reviewing MacCrimmon & Tillers (2002).] Applied Artificial Intelligence, 18(3/4), 367–389.

  • Nissan, E. (2009c). Computational models of the emotions: from models of the emotions of the individual, to modelling the emerging irrational behaviour of crowds. AI & Society: Knowledge, Culture and Communication, 24(4), 403–414.

  • Nissan, E. (2009d). Review of: A. Adamatzky, Dynamics of Crowd-Minds: Patterns of Irrationality in Emotions, Beliefs and Actions (World Scientific Series on Nonlinear Science, Series A, Vol. 54), Singapore, London, and River Edge, NJ: World Scientific, 2005. Pragmatics & Cognition, 17(2), 472–481.

  • Nissan, E. (2010e). Ethnocultural barriers medicalized: A critique of Jacobsen. Journal of Indo-Judaic Studies, 11, 75–119.

  • Nissan, E., Gini, G., & Colombetti, M. (2008) [2009]. Guest editorial: Marco Somalvico Memorial Issue. Annals of Mathematics and Artificial Intelligence, 54(4), 257–264. doi:10.1007/s10472-008-9102-9

  • Nissan, E., & Martino, A. A. (2004b). Artificial intelligence and formalisms for legal evidence: An introduction. Applied Artificial Intelligence, 18(3/4), 185–229.

  • Nissan, E., & Rousseau, D. (1997). Towards AI formalisms for legal evidence. In Z. W. Ras & A. Skowron (Eds.), Foundations of intelligent systems: Proceedings of the 10th international symposium, ISMIS’97 (pp. 328–337). Berlin: Springer.

  • Nissan, E., & Shemesh, A. O. (2010). Saturnine traits, melancholia, and related conditions as ascribed to Jews and Jewish culture (and Jewish responses) from Imperial Rome to high modernity. In A. Grossato (Ed.), Umana, divina malinconia, special issue on melancholia, Quaderni di Studi Indo-Mediterranei, 3 (pp. 97–128). Alessandria, Piedmont, Italy: Edizioni dell’Orso.

  • Nowakowska, M. (1973a). A formal theory of actions. Behavioral Science, 18, 393–416.

  • Nowakowska, M. (1973b). Language of motivation and language of actions. The Hague: Mouton.

  • Nowakowska, M. (1976a). Action theory: Algebra of goals and algebra of means. Design Methods and Theories, 10(2), 97–102.

  • Nowakowska, M. (1976b). Towards a formal theory of dialogues. Semiotica, 17(4), 291–313.

  • Nowakowska, M. (1978). Formal theory of group actions and its applications. Philosophica, 21, 3–32.

  • Nowakowska, M. (1984). Theories of research (2 Vols.). Seaside, CA: Intersystems Publications.

  • Nowakowska, M. (1986). Cognitive sciences: Basic problems, new perspectives, and implications for artificial intelligence. Orlando, FL: Academic.

  • Olson, E. A., & Wells, G. L., (2002). What makes a good alibi? A proposed taxonomy. Ames, IA: Iowa State University, n.d. (but 2002). Portions of the data in this report were presented at the 2001 Biennial Meeting of the Society for Applied Research in Memory and Cognition. http://www.psychology.iastate.edu/~glwells/alibi_taxonomy.pdf

  • Otgaar, H. (2009). Not all false memory paradigms are appropriate in court. In L. Strömwall & P.A. Granhag (Eds.), Memory: Reliability and personality (pp. 37–46). Göteborg, Sweden: Göteborg University.

  • Otgaar, H., & Smeets, T. (2010). Adaptive memory: Survival processing increases both true and false memory in adults and children. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 1010–1016. http://www.personeel.unimaas.nl/henry.otgaar/Otgaar_AdaptiveMemoryFalseMemory_2010_JEPLMC.pdf

  • Otgaar, H. P., Candel, I., & Merckelbach, H. (2008). Children’s false memories: Easier to elicit for a negative than a neutral event. Acta Psychologica, 128, 350–354. http://www.personeel.unimaas.nl/henry.otgaar/Otgaar_ChildrensFalseMemoriesNegativeNeutral3_2008_AP.pdf

  • Otgaar, H. P., Candel, I., Merckelbach, H., & Wade, K. A. (2009). Abducted by a UFO: Prevalence information affects young children’s false memories for an implausible event. Applied Cognitive Psychology, 23, 115–125. http://www.personeel.unimaas.nl/henry.otgaar/Otgaar_PrevalenceUFOChildrensfalsememories3_2009_ACP.pdf

  • Pallotta, G. (1977). Dizionario storico della mafia. (Paperbacks società d’oggi, 8.) Rome: Newton Compton Editori.

  • Papageorgis, D., & McGuire, W. J. (1961). The generality of immunity to persuasion produced by pre-exposure to weakened counterarguments. Journal of Abnormal and Social Psychology, 62, 475–481.

  • Pardo, M. S. (2005). The field of evidence and the field of knowledge. Law and Philosophy, 24, 321–391.

  • Parkinson, B. (1995). Ideas and realities of emotion. London: Routledge.

  • Parry, A. (1991). A universe of stories. Family Process, 30(1), 37–54.

  • Pattenden, R. (1993). Conceptual versus pragmatic approaches to hearsay. Modern Law Review, 56(2), 138–156.

  • Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Mateo, CA: Morgan-Kaufmann.

  • Pearl, J. (2001). Bayesianism and causality, and why I am only a half-Bayesian. In D. Corfield & J. Williamson (Eds.), Foundations of Bayesianism (pp. 19–36). (Kluwer Applied Logic Series, 24). Dordrecht, The Netherlands: Kluwer. http://ftp.cs.ucla.edu/pub/stat_ser/r284-reprint.pdf

  • Pennington, N., & Hastie, R. (1981). Juror decision-making models: The generalization gap. Psychological Bulletin, 89, 246–287.

  • Pennington, N., & Hastie, R. (1986). Evidence evaluation in complex decision making. Journal of Personality and Social Psychology, 51, 242–258.

  • Pennington, N., & Hastie, R. (1988). Explanation-based decision making: Effects of memory structure on judgment. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 521–533.

  • Pennington, N., & Hastie, R. (1992). Explaining the evidence: Tests of the story model for juror decision making. Journal of Personality and Social Psychology, 62, 189–206.

  • Pennington, N., & Hastie, R. (1993). The story model for juror decision making. In R. Hastie (Ed.), Inside the Juror: The psychology of juror decision making (pp. 192–221). Cambridge, England: Cambridge University Press.

  • Penrod, S., Loftus, E., & Winkler, J. (1982). The reliability of witness testimony: A psychological perspective. In N. L. Kerr & R. M. Bray (Eds.), The criminal justice system (pp. 119–168). New York: Academic.

  • Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to attitude change. New York: Springer.

  • Petty R. E., Wegener, D. T., & White, P. H. (1998). Flexible correction processes in social judgment: implications for persuasion. Social Cognition, 16, 93–113.

  • Plamper, J. (2010). The history of emotions: An interview with William Reddy, Barbara Rosenwein, and Peter Stearns. History and Theory, 49, 237–265.

  • Plantinga, A. (1993a). Warrant: The current debate. Oxford: Oxford University Press.

  • Pollock, J. L. (2010). Defeasible reasoning and degrees of justification. Argument & Computation, 1(1), 7–22.

  • Poole, D. (1989). Explanation and prediction: An architecture for default and abductive reasoning. Computational Intelligence, 5(2), 97–110.

  • Poole, D. (2002). Logical argumentation, abduction and Bayesian decision theory: A Bayesian approach to logical arguments and its application to legal evidential reasoning. In M. MacCrimmon & P. Tillers (Eds.), The dynamics of judicial proof: Computation, logic, and common sense (pp. 385–396). (Studies in Fuzziness and Soft Computing, Vol. 94). Heidelberg: Physica-Verlag.

  • Porter, S., & Yuille, J. C. (1995). Credibility assessment of criminal suspects through statement analysis. Psychology, Crime, and Law, 1, 319–331.

  • Porter, S., & Yuille, J. C. (1996). The language of deceit: An investigation of the verbal clues to deception in the interrogation context. Law and Human Behavior, 20, 443–459.

  • Prakken, H., & Renooij, S. (2001). Reconstructing causal reasoning about evidence: A case study. In B. Verheij, A. R. Lodder, R. P. Loui, & A. J. Muntjwerff (Eds.), Legal knowledge and information systems. Jurix 2001: The 14th annual conference (pp. 131–137). Amsterdam: IOS Press.

  • Principe, G., & Ceci, S. (2002). I saw it with my own ears: The effect of peer conversations on children’s reports of non-experienced events. Journal of Experimental Child Psychology, 83, 1–25.

  • Randell, D. A., & Cohn, A. G. (1992). Exploiting lattices in a theory of space and time. Computers and Mathematics with Applications, 23(6/9), 459–476. Also in: Lehmann, F. (Ed.). Semantic networks. Oxford: Pergamon Press. The book was also published as a special issue of Computers and Mathematics with Applications, 23(6–9).

  • Reddy, W. M. (1997). Against constructionism: The historical ethnography of emotions. Current Anthropology, 38(2), 327–351.

  • Reichenbach, H. (1949). The theory of probability. Berkeley, CA: University of California Press.

  • Rousseau, D. (1995). Modélisation et simulation de conversations dans un univers multi-agent. Ph.D. Dissertation. Technical Report #993, Montreal, Canada: Department of Computer Science and Operational Research, University of Montreal.

  • Sabater, J., & Sierra, C. (2005). Review on computational trust and reputation models. Artificial Intelligence Review, 24, 33–60.

  • Salmon, W. C. (1967). The foundations of scientific inference. Pittsburgh, PA: University of Pittsburgh Press.

  • Santos, E., Jr., & Shimony, S. E. (1994). Belief updating by enumerating high-probability independence-based assignments. In R. Lopez de Mántaras & D. Poole (Eds.), Uncertainty in artificial intelligence: Proceedings of the tenth conference (pp. 506–513). San Mateo, CA: Morgan Kaufmann.

  • Sappington, D. (1984). Incentive contracting with asymmetric and imperfect precontractual knowledge. Journal of Economic Theory, 34, 52–70.

  • Sartwell, C. (1992). Why knowledge is merely true belief. Journal of Philosophy, 89, 167–180.

  • Savage, L. J. (1962). The foundations of statistical inference. London: Methuen and Co. Ltd.

  • Sawyer, A. G. (1981). Repetition, cognitive responses and persuasion. In R. E. Petty, T. M. Ostrom, & T. C. Brock (Eds.), Cognitive responses in persuasion (pp. 237–261). Hillsdale, NJ: Erlbaum.

  • Schank, R. C. (1972). Conceptual dependency: A theory of natural language understanding. Cognitive Psychology, 3, 552–631.

  • Schank, R. C. (1986). Explanation patterns: Understanding mechanically and creatively. Hillsdale, NJ: Lawrence Erlbaum Associates.

  • Schank, R. C., & Riesbeck, C. K. (Eds.). (1981). Inside computer understanding: Five programs plus miniatures. Hillsdale, NJ: Lawrence Erlbaum Associates.

  • Schooler, J. W., Gerhard, D., & Loftus, E. F. (1986). Qualities of the unreal. Journal of Experimental Psychology: Learning, Memory and Cognition, 12, 171–181.

  • Schum, D. A. (1989). Knowledge, credibility, and probability. Journal of Behavioral Decision Making, 2, 39–62.

  • Schum, D. A. (1994). The evidential foundations of probabilistic reasoning. (Wiley Series in Systems Engineering.) New York: Wiley. Reprinted, Evanston, IL: Northwestern University Press, 2001.

  • Schum, D. A., & Martin, A. W. (1982). Formal and empirical research on cascaded inference in jurisprudence. Law and Society Review, 17, 105–151.

  • Schum, D., & Tillers, P. (1989). Marshalling evidence throughout the process of fact investigation: A simulation. Report Nos. 89-01 through 89-04, supported by NSF Grant No. SES 8704377. New York: Cardozo School of Law.

  • Searle, J. (1969). Speech acts: An essay in the philosophy of language. Cambridge: Cambridge University Press.

  • Shafer, G. (1976). A mathematical theory of evidence. Princeton, NJ: Princeton University Press.

  • Shapira, R. A. (2002). Saving Desdemona. In M. MacCrimmon & P. Tillers (Eds.), The dynamics of judicial proof: Computation, logic, and common sense (pp. 419–435). (Studies in Fuzziness and Soft Computing, Vol. 94.) Heidelberg: Physica-Verlag.

  • Shimony, S. E. (1993). The role of relevance in explanation. I: Irrelevance as statistical independence. International Journal of Approximate Reasoning, 8(4), 281–324.

  • Shimony, S. E., & Charniak, E. (1990). A new algorithm for finding MAP assignments to belief networks. In P. P. Bonissone, M. Henrion, L. N. Kanal, & J. F. Lemmer (Eds.), Uncertainty in artificial intelligence: Proceedings of the sixth conference (pp. 185–193). Amsterdam: North-Holland.

  • Shimony, S. E., & Domshlak, C. (2003). Complexity of probabilistic reasoning in directed-path singly connected Bayes networks. Artificial Intelligence, 151, 213–225.

  • Shimony, S. E., & Nissan, E. (2001). Kappa calculus and evidential strength: A note on Åqvist’s logical theory of legal evidence. Artificial Intelligence and Law, 9(2/3), 153–163.

  • Shortliffe, E. H. (1976). Computer-based medical consultations: MYCIN. New York: Elsevier.

  • Shortliffe, E. H., & Buchanan, B. G. (1975). A model of inexact reasoning in medicine. Mathematical Biosciences, 23, 351–379.

  • Shyu, C. H., Fu, C.-M., Cheng, T., & Lee, C. H. (1989). A heuristic evidential reasoning model. In A. A. Martino (Ed.), Pre-proceedings of the third international conference on “Logica, Informatica, Diritto: Legal Expert Systems”, Florence, 1989 (2 vols. + Appendix) (Vol. 1, pp. 661–670). Florence: Istituto per la Documentazione Giuridica, Consiglio Nazionale delle Ricerche.

  • Skagerberg, E. M. (2007). Co-witness feedback in line-ups. Applied Cognitive Psychology, 21, 489–497.

  • Snow, P., & Belis, M. (2002). Structured deliberation for dynamic uncertain inference. In M. MacCrimmon & P. Tillers (Eds.), The dynamics of judicial proof: Computation, logic, and common sense (pp. 397–416). (Studies in Fuzziness and Soft Computing, Vol. 94.) Heidelberg: Physica-Verlag.

  • Sosa, E. (1991). Knowledge in perspective. Cambridge: Cambridge University Press.

  • Sowa, J. F. (1984). Conceptual structures: Information processing in mind and machine. Reading, MA: Addison-Wesley.

  • Spohn, W. (1988). A dynamic theory of epistemic states. In W. L. Harper & B. Skyrms (Eds.), Causation in decision, belief change, and statistics (pp. 105–134). Dordrecht, The Netherlands: Reidel (Kluwer).

  • Spooren, W. (2001). Review of Lagerwerf (1998). Journal of Pragmatics, 33, 137–141.

  • Stearns, C. Z., & Stearns, P. N. (1986). Anger: The struggle for emotional control in America’s history. Chicago: University of Chicago Press.

  • Stearns, C. Z., & Stearns, P. N. (1988). Emotion and social change: Toward a new psychohistory. New York: Holmes & Meier.

  • Stearns, P. N. (1989). Jealousy: The evolution of an emotion in American history. New York: New York University Press.

  • Stearns, P. N. (1994). American cool: Constructing a twentieth-century emotional style. (The History of Emotions, 3). New York: New York University Press.

  • Stearns, P. N. (1995). Emotion. Chapter 2 in R. Harré & P. Stearns (Eds.), Discursive psychology in practice. London: Sage.

  • Stearns, P. N. & Haggerty, T. (1991). The role of fear: Transitions in American emotional standards for children, 1850–1950. American Historical Review, 96, 63–94.

  • Stein, A. (2001). Of two wrongs that make a right: Two paradoxes of the Evidence Law and their combined economic justification. Texas Law Review, 79, 1199–1234.

  • Stern, D. N. (1985). The interpersonal world of the infant: A view from psychoanalysis and developmental psychology. New York: Basic Books.

  • Stiff, J. B. (1994). Persuasive communication. New York: Guilford.

  • Strange, D., Sutherland, R., & Garry, M. (2006). Event plausibility does not determine children’s false memories. Memory, 14, 937–951.

  • Stranieri, A., & Zeleznikow, J. (2005a). Knowledge discovery from legal databases. (Springer Law and Philosophy Library, 69.) Dordrecht, The Netherlands: Springer.

  • Tavris, C. (2002). The high cost of skepticism. Skeptical Inquirer, 26(4), 41–44 (July/August 2002).

  • Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12(3), 435–467. Commentaries and riposte up to p. 502.

  • Thagard, P. (2000a). Coherence in thought and action. Cambridge, MA: The MIT Press.

  • Thagard, P. (2004). Causal inference in legal decision making: Explanatory coherence vs. Bayesian networks. Applied Artificial Intelligence, 18(3/4), 231–249.

  • Thomas, E. A. C., & Hogue, A. (1976). Apparent weight of evidence, decision criteria, and confidence ratings in juror decision-making. Psychological Review, 83, 442–465.

  • Tillers, P. (2005). If wishes were horses: Discursive comments on attempts to prevent individuals from being unfairly burdened by their reference classes. Law, Probability, and Risk, 4, 33–49.

  • Tillers, P., & Schum, D. (1998). A theory of preliminary fact investigation. In S. Brewer & R. Nozick (Eds.), The philosophy of legal reasoning: Scientific models of legal reasoning. New York: Garland.

  • Toni, F., & Kowalski, R. (1995). Reduction of abductive logic programs to normal logic programs. In L. Sterling (Ed.), Proceedings of the 12th international conference on logic programming (pp. 367–381). Cambridge, MA: MIT Press.

  • Twining, W. L. (1999). Necessary but dangerous? Generalizations and narrative in argumentation about ‘facts’ in criminal process. Chapter 5 in M. Malsch & J. F. Nijboer (Eds.), Complex cases: Perspectives on the Netherlands criminal justice system (pp. 69–98). (Series Criminal Sciences). Amsterdam: THELA THESIS.

  • Vila, L., & Yoshino, H. (2005). Time in automated legal reasoning. In M. Fisher, D. Gabbay, & L. Vila (Eds.), Handbook of temporal reasoning in artificial intelligence (electronic resource; Foundations of Artificial Intelligence, 1). Amsterdam: Elsevier.

  • Vrij, A. (2000). Detecting lies and deceit: The psychology of lying and implications for professional practice. Wiley Series on the Psychology of Crime, Policing and Law. Chichester, West Sussex, England: Wiley. Second edition: 2008.

  • Wade, K. A., Garry, M., Read, J. D., & Lindsay, D. S. (2002). A picture is worth a thousand lies: Using false photographs to create false childhood memories. Psychonomic Bulletin & Review, 9, 597–603.

  • Walton, D. (2010). A dialogue model of belief. Argument & Computation, 1(1), 23–46.

  • Walton, D. N. (2004). Abductive reasoning. Tuscaloosa, AL: University of Alabama Press.

  • Wells, G. L., & Olson, E. A. (2001). The psychology of alibis or Why we are interested in the concept of alibi evidence. Ames, IA: Iowa State University, January 2001. http://www.psychology.iastate.edu/~glwells/alibiwebhtml.htm

  • Williams, K. D., & Dolnik, L. (2001). Revealing the worst first. In J. P. Forgas & K. D. Williams (Eds.), Social influence: Direct and indirect processes (pp. 213–231). Lillington, NC: Psychology Press.

  • Wooldridge, M. (2002). An introduction to multiagent systems. Chichester: Wiley. 2nd edition, 2009. [Page numbers as referred to in this book are to the 1st edition.]

  • Xu, M., Hirota, K., & Yoshino, H. (1999). A fuzzy theoretical approach to case-based representation and inference in CISG. Artificial Intelligence and Law, 7(2/3), 115–128.

  • Young, P., & Holmes, R. (1974). The English civil war: A military history of the three civil wars 1642–1651. London: Eyre Methuen; Ware, Hertfordshire: Wordsworth Editions, 2000.

  • Benferhat, S., Cayrol, C., Dubois, D., Lang, J., & Prade, H. (1993). Inconsistency management and prioritized syntax-based entailment. In Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI’93), pp. 640–645.

  • Gaines, D. M. (1994). Juror simulation. BSc Project Report, Computer Science Department, Worcester Polytechnic Institute.

  • Prakken, H. (2005). Coherence and flexibility in dialogue games for argumentation. Journal of Logic and Computation, 15, 1009–1040.

  • Undeutsch, U. (1982). Statement reality analysis. In A. Trankell (Ed.), Reconstructing the past: The role of psychologists in criminal trials (pp. 27–56). Deventer, The Netherlands: Kluwer (now Dordrecht & Berlin: Springer).

  • de Kleer, J. (1984). Choices without backtracking. In Proceedings of the fourth national conference on artificial intelligence, Austin, TX. Menlo Park, CA: AAAI Press, pp. 79–84.

  • de Kleer, J. (1988). A general labeling algorithm for assumption-based truth maintenance. In Proceedings of the 7th national conference on artificial intelligence, pp. 188–192.

  • Jameson, A. (1983). Impression monitoring in evaluation-oriented dialog: The role of the listener’s assumed expectations and values in the generation of informative statements. In Proceedings of the eighth International Joint Conference on Artificial Intelligence (IJCAI’83), Karlsruhe, Germany. San Mateo, CA: Morgan Kaufmann, Vol. 2, pp. 616–620. http://ijcai.org/search.php

  • Iacoviello, F. M. (1997). La motivazione della sentenza penale e il suo controllo in cassazione. Milan: Giuffrè.

  • Iacoviello, F. M. (2006). Regole più chiare sui vizi di motivazione. In Il Sole 24 Ore, Guida al Diritto, 10/2006, p. 96.

  • McNeal, G. S. (2007). Unfortunate legacies: Hearsay, ex parte affidavits and anonymous witnesses at the IHT [i.e., Iraqi High Tribunal]. In G. Robertson (Ed.), Fairness and evidence in war crimes trials. Special issue of International Commentary on Evidence, 4(1). The Berkeley Electronic Press (article accessible on the Web at this address: http://www.bepress.com/ice/vol4/iss1/art5)

  • Stein, A. (2000). Evidential rules for criminal trials: Who should be in charge? In S. Doran & J. Jackson (Eds.), The judicial role in criminal proceedings (pp. 127–143). Oxford: Hart Publishing.

  • Bex, F. J., & Walton, D. (2010). Burdens and standards of proof for inference to the best explanation. In R. Winkels (Ed.), Legal knowledge and information systems. JURIX 2010: The 23rd annual conference (pp. 37–46). (Frontiers in Artificial Intelligence and Applications, 223.) Amsterdam: IOS Press.

  • Fischhoff, B., & Beyth, R. (1975). “I knew it would happen”: Remembered probabilities of once-future things. Organizational Behavior and Human Performance, 13, 1–16.

  • Merton, R. K. (1948) The self-fulfilling prophecy. The Antioch Review, 8, 193–210.

  • Twining, W. (1997). Freedom of proof and the reform of criminal evidence. In E. Harnon & A. Stein (Eds.), Rights of the accused, crime control and protection of victims, special volume of the Israel Law Review, 31(1–3), 439–463.

  • Carr, D. (2008). Narrative explanation and its malcontents. History and Theory, 47, 19–30.

  • Wade, K. A., Sharman, S. J., Garry, M., Memon, A., Merckelbach, H., & Loftus, E. (2007). False claims about false memories. Consciousness and Cognition, 16, 18–28.

  • Loftus, E. F. (1987). Trials of an expert witness. In the My Turn column, in Newsweek, 109, 29 June 1987, pp. 10–11.

  • Neimark, J. (1996). The diva of disclosure, memory researcher Elizabeth Loftus. Psychology Today, 29(1). Article downloadable from: http://faculty.washington.edu/eloftus/Articles/psytoday.htm

  • Keppens, J., Shen, Q., & Schafer, B. (2005). Probabilistic abductive computation of evidence collection strategies in crime investigation. In Proceedings of the 10th international conference on artificial intelligence and law, pp. 215–224.

  • Kuflik, T., Nissan, E., & Puni, G. (1989). Finding excuses with ALIBI: Alternative plans that are deontically more defensible. In Proceedings of the International Symposium on Communication, Meaning and Knowledge vs. Information Technology, Lisbon, September. Then again in Computers and Artificial Intelligence, 10(4), 297–325, 1991. Then in a selection from the Lisbon conference: Lopes Alves, J. (Ed.). (1992). Information technology & society: Theory, uses, impacts (pp. 484–510). Lisbon: Associação Portuguesa para o Desenvolvimento das Comunicações (APDC), & Sociedade Portuguesa de Filosofia (SPF).

  • Lutomski, L. S. (1989). The design of an attorney’s statistical consultant. In Proceedings of the second international conference of artificial intelligence and law. New York: ACM Press, pp. 224–233.

  • Fakher-Eldeen, F., Kuflik, T., Nissan, E., Puni, G., Salfati, R., Shaul, Y., et al. (1993). Interpretation of imputed behaviour in ALIBI (1 to 3) and SKILL. Informatica e Diritto (Florence), Year 19, 2nd Series, 2(1/2), 213–242.

  • Nissan, E., Cassinis, R., & Morelli, L. M. (2008). Have computation, animatronics, and robotic art anything to say about emotion, compassion, and how to model them? The survivor project. Pragmatics & Cognition, 16(1), 3–37 (2008). As a continuation of 15(3) (2007), special issue on “Mechanicism and autonomy: What can robotics teach us about human cognition and action?”, third in the series Cognition and Technology.

  • Nissan, E., & Dragoni, A. F. (2000). Exoneration, and reasoning about it: A quick overview of three perspectives. Session on Intelligent Decision Support for Legal Practice (IDS 2000), in Proceedings of the international ICSC congress “Intelligent Systems & Applications” (ISA’2000), Wollongong, Australia, December 2000, Vol. 1, pp. 94–100.

  • Shebelsky, R. C. (1991). [Joke under the rubric ‘Laughter, the Best Medicine’.] Reader’s Digest (U.S. edition), November 1991, p. 103.

  • Uther, H.-J. (2004). The types of international folktales: A classification and bibliography. Based on the system of Antti Aarne and Stith Thompson. Part I: Animal Tales, Tales of Magic, Religious Tales, and Realistic Tales, with an Introduction. Part II: Tales of the Stupid Ogre, Anecdotes and Jokes, and Formula Tales. Part III: Appendices. (Folklore Fellows Communications, Vols. 284–286.) Helsinki, Finland: Suomalainen Tiedeakatemia = Academia Scientiarum Fennica.

  • Sycara, K. P. (1998). Multiagent systems. AI Magazine, Summer 1998, pp. 79–92.

  • Rousseau, D. (1996). Personality in synthetic agents. Technical Report KSL-96-21, Knowledge Systems Laboratory, Stanford University.

  • Rousseau, D., Moulin, B., & Lapalme, G. (1997). Interpreting communicative acts and building a conversational model. Journal of Natural Language Engineering.

  • Moulin, B., & Rousseau, D. (1994). A multi-agent approach for modelling conversations. In Proceedings of the international avignon conference AI 94, Natural language processing sub-conference, Paris, France, June 1994, pp. 35–50.

  • Gardner, A. von der Lieth. (1987). An artificial intelligence approach to legal reasoning. Cambridge, MA: The MIT Press.

  • Hayes-Roth, B., & van Gent, R. (1997). Story-making and improvisational puppets. In W. L. Johnson (Ed.), Autonomous Agents ’97 (pp. 1–7), Marina del Rey, CA. New York: ACM Press.

  • Poulin, D., Mackaay [sic], E., Bratley, P., & Frémont, J. (1992). Time server: A legal time specialist. In A. Martino (Ed.), Expert systems in law (pp. 295–312). Amsterdam: North-Holland.

  • Knight, B., Ma, J., & Nissan, E. (1998). Representing temporal knowledge in legal discourse. In A. A. Martino & E. Nissan (Eds.), Formal models of legal time, special issue, Information and Communications Technology Law, 7(3), 199–211.

  • Zarri, G. P. (1998). Representation of temporal knowledge in events: The formalism, and its potential for legal narratives. In A. A. Martino & E. Nissan (Eds.), Formal models of legal time, special issue, Information and Communications Technology Law, 7(3), 213–241.

  • Farook, D. Y., & Nissan, E. (1998). Temporal structure and enablement representation for mutual wills: A Petri-net approach. In A. A. Martino & E. Nissan (Eds.), Formal models of legal time, special issue, Information and Communications Technology Law, 7(3), 243–267.

  • Valette, R., & Pradin-Chézalviel, B. (1998). Time Petri nets for modelling civil litigation. In A. A. Martino & E. Nissan (Eds.), Formal models of legal time, special issue, Information and Communications Technology Law, 7(3), 269–280.

  • Cohn, A. G., Gotts, N. M., Cui, Z., Randell, D. A., Bennett, B., & Gooday, J. M. (1994). Exploiting temporal continuity in qualitative spatial calculi. In R. G. Golledge & M. J. Egenhofer (Eds.), Spatial and temporal reasoning in geographical information systems. Amsterdam: Elsevier.

  • Nowakowski [sic], M. (1980). Possibility distributions in the linguistic theory of actions. International Journal of Man-Machine Studies, 12, 229–239.

  • Colby, K. M. (1983). Limits on the scope of PARRY as a model of paranoia. [Response to Manschreck (1983).] The Behavioral and Brain Sciences, 6(2), 341–342.

  • Pollock, J. (1989). How to build a person: A prolegomenon. Cambridge, MA: Bradford (MIT Press).

  • Izard, C. E. (1971). The face of emotion. New York: Appleton-Century-Crofts.

  • Faught, W. S. (1975). Affect as motivation for cognitive and conative processes. In Proceedings of the fourth international joint conference on artificial intelligence, Tbilisi, Georgia, USSR, pp. 893–899.

  • Shiraev, E., & Levy, D. (2007). Cross-cultural psychology: Critical thinking and contemporary applications (3rd ed.). Boston: Allyn and Bacon.

  • Plantinga, A. (1993b). Warrant and proper function. Oxford: Oxford University Press.

  • Culhane, S. E., & Hosch, H. M. (2002). An alibi witness’s influence on jurors’ verdicts. University of Texas-El Paso. [Cited before publication in a passage I quoted from Olson & Wells (2002).]

  • Petacco, A. (1972). Joe Petrosino. (In Italian.) Milan: Arnoldo Mondadori Editore.

  • Rokach, L., & Maimon, O. Z. (2008). Data mining with decision trees: Theory and applications. (Series in Machine Perception and Artificial Intelligence, Vol. 69.) Singapore: World Scientific.

  • Balding, D. J., & Donnelly, P. (1995). Inferring identity from DNA profile evidence. Proceedings of the National Academy of Sciences, USA, 92(25), 11741–11745.

  • Philipps, L. (1999). Approximate syllogisms: On the logic of everyday life. Artificial Intelligence and Law, 7(2/3), 227–234.

  • Allen, R. J. (2008a). Explanationism all the way down. Episteme, 3(5), 320–328.

  • Belis, M., & Snow, P. (1998). An intuitive data structure for the representation and explanation of belief and evidentiary support. In Proceedings of the seventh international conference on Information Processing and Management of Uncertainty in knowledge-based systems (IPMU 1998), Paris, 6–10 July 1998. Paris: EDK, pp. 64–71.

  • Pearl, J. (1993). From conditional oughts to qualitative decision theory. In Uncertainty in AI: Proceedings of the Ninth Conference, Washington, DC, July 1993, pp. 12–20.

  • Kvart, I. (1994). Overall positive causal impact. Canadian Journal of Philosophy, 24(2), 205–227.

  • Cozman, F. J. (2001). JavaBayes: Bayesian networks in Java. http://www-2.cs.cmu.edu/~javabayes/

  • Fox, F. (1971, April). Quaker, Shaker, rabbi: Warder Cresson, the story of a Philadelphia mystic. Pennsylvania Magazine of History and Biography, 147–193.

  • Nissan, E., & Shimony, S. E. (1997). VegeDog: Formalism, vegetarian dogs, and partonomies in transition. Computers and Artificial Intelligence, 16(1), 79–104.

  • Atkinson, K., & Bench-Capon, T. J. M. (2007a). Argumentation and standards of proof. In Proceedings of the 11th International Conference on Artificial Intelligence and Law (ICAIL 2007), Stanford, CA, June 4–8, 2007. New York: ACM Press, pp. 107–116.

  • Keppens, J., Shen, Q., & Lee, M. (2005). Compositional Bayesian modelling and its application to decision support in crime investigation. In Proceedings of the 19th international workshop on qualitative reasoning, pp. 138–148.

  • Nissan, E. (2011a). The rod and the crocodile: Temporal relations in textual hermeneutics: An application of Petri nets to semantics. Semiotica, 184(1/4), 187–227.

  • Budescu, D. V., & Wallsten, T. S. (1985). Consistency in interpretation of probabilistic phrases. Organizational Behavior and Human Decision Processes, 36, 391–405.

  • Davis, G., & Pei, J. (2003). Bayesian networks and traffic accident reconstruction. Proceedings of the ninth international conference on artificial intelligence and law, Edinburgh, Scotland (pp. 171–176). New York: ACM Press.

  • Grady, G., & Patil, R. S. (1987). An expert system for screening employee pension plans for the Internal Revenue Service. Proceedings of the first international conference on artificial intelligence and law (pp. 137–143). New York: ACM Press.

  • Heckerman, D. (1997) Bayesian networks for data mining. Data Mining and Knowledge Discovery, 1, 79–119.

  • Reddy, W. M. (2001). The navigation of feeling: A framework for the history of emotions. Cambridge: Cambridge University Press.

  • Stearns, P. N., & Stearns, C. Z. (1985). Emotionality: Clarifying the history of emotions and emotional standards. American Historical Review, 90, 813–836.

  • Wallsten, T. S., Budescu, D. V., Rapoport, A., Zwick, R., & Forsyth, B. (1986). Measuring the vague meanings of probability terms. Journal of Experimental Psychology: General, 115(4), 348–365.

  • Zimmer, A. C. (1984). A model for the interpretation of verbal predictions. International Journal of Man-Machine Studies, 20, 121–134.

  • Nourkova, V. V., Bernstein, D. M., & Loftus, E. F. (2004). Altering traumatic memories. Cognition and Emotion, 18, 575–585.

  • Martins, J. P., & Shapiro, S. C. (1983). Reasoning in multiple belief spaces. In Proceedings of the eighth International Joint Conference on Artificial Intelligence (IJCAI’83), Karlsruhe, Germany. San Mateo, CA: Morgan Kaufmann, pp. 370–373. http://ijcai.org/search.php

Author information

Correspondence to Ephraim Nissan.

Copyright information

© 2012 Springer Science+Business Media B.V.

Cite this chapter

Nissan, E. (2012). Models of Forming an Opinion. In: Computer Applications for Handling Legal Evidence, Police Investigation and Case Argumentation. Law, Governance and Technology Series, vol 5. Springer, Dordrecht. https://doi.org/10.1007/978-90-481-8990-8_2
