## Abstract

This article pursues two goals: to show that probability is not the only way of dealing with uncertainty (and, moreover, that there are kinds of uncertainty which for principled reasons cannot be addressed with probabilistic means); and to provide evidence that logic-based methods can effectively support reasoning with uncertainty. For the latter claim, two paradigmatic examples are presented: logic programming with Kleene semantics for modelling reasoning from information in a discourse to an interpretation of the state of affairs of the intended model, and a neural-symbolic implementation of input/output logic for dealing with uncertainty in dynamic normative contexts.


## Notes

However, these authors—somewhat paradoxically—in the end come to the view that whichever kind of uncertainty is at issue, probability is the framework for modelling it; cf. Sect. 2.2 for some considerations on the corresponding argument and conclusion.

In terms of concrete examples, the work by Nilsson (1986) comes to mind as a prominent instance falling within the domain of Halpern’s plausibility measures.

*Intended model* is the psychological notion which corresponds to *minimal model*. *Model* is to be read as semantic model. *Preferred model* is the logical notion used by Shoham (1987). Keeping in mind that these terms belong to different fields, we use intended, minimal and preferred as synonyms. *Abnormality* is a technical term for exceptions and should not be taken as having any other overtones. Some terminology: in the conditional \(p \wedge \lnot ab \rightarrow q\), *ab* is the schematic abnormality clause. A distinct *ab* is indexed to each conditional, and stands for a disjunction of a list of defeaters for that conditional; CWA\(_{ab}\) is the CWA applied to abnormality clauses; \(\lnot\) is the 3-valued Kleene connective, whereas negation-as-failure is an inference pattern that results in negative conclusions by CWA reasoning from absence of positive evidence; falsum (\(\bot\)) and verum (\(\top\)) are proposition symbols which always take the values false and true, respectively; turnstile (\(\vdash\)) and semantic turnstile (\(\models\)) indicate syntactic and semantic consequence, respectively.

Another remark concerning terminology: while in this context the use of the terms ‘head’ and ‘body’ is commonplace in computer science, in the following we restrict ourselves to ‘antecedent’ and ‘consequent’ in order to maintain homogeneity with terminology in philosophy and logic.
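As an illustrative sketch of ours (not part of the article), the strong Kleene connectives mentioned above can be written down directly, with a third truth value `U` for "undefined" alongside the classical two; note how an undefined abnormality clause leaves the antecedent of \(p \wedge \lnot ab \rightarrow q\) undefined rather than false:

```python
# Minimal sketch of strong Kleene three-valued connectives.
U = "U"  # the third truth value: undefined / unknown

def k_not(p):
    """Kleene negation: swaps True and False, leaves U undefined."""
    if p == U:
        return U
    return not p

def k_and(p, q):
    """Kleene conjunction: False dominates; otherwise U propagates."""
    if p is False or q is False:
        return False
    if p == U or q == U:
        return U
    return True

# With ab undefined, the antecedent p ∧ ¬ab is undefined, not false.
print(k_and(True, k_not(U)))  # U
```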

Here, \(\leftrightarrow\) denotes a classical biconditional in the object language.

Basically, in LP, queries can be answered in time linear in the length of the shortest inferential path in the KB.
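To make this concrete, here is a small sketch of ours (not the article's implementation) of backward chaining over propositional LP rules: a query succeeds by following one inferential path through the KB, so the work done on a chain-shaped KB is proportional to the path length.

```python
# Sketch: backward chaining over propositional rules.
# kb maps each consequent to a list of alternative antecedent lists.
def holds(query, kb, facts):
    """Return True if `query` follows from `facts` via the rules in `kb`."""
    if query in facts:
        return True
    for antecedents in kb.get(query, []):
        if all(holds(a, kb, facts) for a in antecedents):
            return True
    return False

# Chain-shaped KB: answering the query for q walks a path of length 3.
kb = {"q": [["p"]], "p": [["r"]], "r": [["s"]]}
print(holds("q", kb, facts={"s"}))  # True
```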

This direction of reasoning is from effect to cause, from goals to subgoals, or simply backwards in time. The typical backward inferences are modus tollens (\(p \rightarrow q, \lnot q\, \models \lnot p\)) and affirmation of the consequent (\(p \rightarrow q, q\, \models p\)), initiated by the consequent. The psychological findings that these inferences are more difficult might well be a result of the micro-scale of the tasks being used (Sloman and Lagnado 2015), or of the slightly more complex form of the \(\hbox {CWA}_{ab}\) needed (Stenning and van Lambalgen 2008, pp. 176–177).
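As a side illustration of ours: a brute-force truth-table check confirms that modus tollens is classically valid while affirmation of the consequent is not — the latter is only licensed under closed-world (completion-style) reasoning of the kind discussed in the text.

```python
from itertools import product

def implies(p, q):
    """Classical material implication."""
    return (not p) or q

# Enumerate all valuations of p, q and test validity of each inference:
# an inference is valid iff no valuation makes premises true, conclusion false.
mt_valid = all((not p) or not (implies(p, q) and (not q))
               for p, q in product([True, False], repeat=2))
ac_valid = all(p or not (implies(p, q) and q)
               for p, q in product([True, False], repeat=2))
print(mt_valid, ac_valid)  # True False
```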

Nota bene: this stands in stark contrast to Oaksford and Chater (1998)’s conventional characterisation of CL, quoted above, as ‘reasoning in certainty’. When this way of conceptualising monotonic CL in contrast to nonmonotonic logics was introduced in the mid-1970s and 1980s—for instance, in the wake of Minsky (1974)’s frames or McCarthy (1980)’s circumscription approach—the underlying concern was not with characterising kinds of uncertainty, but with contrasting two systems on the one specific property of (non)monotonicity.

Terminology yet again: for the purpose of this article we use law, norm, rule, etc. as synonymous.

To give an intuitive example of a CTD, we report the so-called *dog-sign* example by Prakken and Sergot (1997), already hinted at in the introduction: “Suppose that: there must be no dog around the house, and if there is no dog, there must be no warning sign, but if there is a dog, there must be a warning sign.” Obviously, if there is a dog, the conditional obligation that there must be no sign does not become unconditional, since its condition is not fulfilled. On the other hand, it can also be inferred that if no obligations are violated, there will be no sign (modulo exceptions, of course).

Makinson and van der Torre (2003a) consider three kinds of permissive norms, namely negative, positive, and static positive permission. In this article we restrict discussions to the above, and should note that much future work is left to be done when it comes to the provision of connectionist representations for normative and deontic reasoning systems.
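The detachment behaviour in the dog-sign example can be sketched as follows. This is our own toy encoding, not the paper's formalism: each norm is a (condition, obligation) pair, and an obligation detaches only when its condition holds in the current situation.

```python
# Toy encoding of the dog-sign scenario as conditional norms.
norms = [
    (frozenset(),           "no_dog"),   # unconditionally: no dog
    (frozenset({"no_dog"}), "no_sign"),  # if no dog: no warning sign
    (frozenset({"dog"}),    "sign"),     # contrary-to-duty: if dog, sign
]

def detached(facts, norms):
    """Obligations whose conditions are satisfied by the given facts."""
    return {ob for cond, ob in norms if cond <= facts}

# With a dog present, 'sign' detaches but 'no_sign' does not,
# since its condition 'no_dog' is not fulfilled.
print(detached({"dog"}, norms))     # contains no_dog and sign
print(detached({"no_dog"}, norms))  # contains no_dog and no_sign
```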

Given a rule, e.g. \(B \leftarrow A\), input and output vectors are created having ‘1’ in the position corresponding to *A* in the input vector, and ‘1’ in the position corresponding to *B* in the output vector.

The presented approach to LP modelling of discourse does not tackle the learning of KB rules, as discourse comprehension is generally assumed to proceed with a mature KB. An account of learning is nevertheless an important goal for LP models of discourse.
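The vector encoding described above can be sketched as follows (our illustration, with an assumed fixed ordering of proposition symbols): for a rule \(B \leftarrow A\), the encoder sets '1' at *A*'s position in the input vector and '1' at *B*'s position in the output vector.

```python
# Sketch: encoding a propositional rule as input/output training vectors.
atoms = ["A", "B", "C"]                      # fixed ordering of symbols
index = {a: i for i, a in enumerate(atoms)}  # symbol -> vector position

def encode_rule(antecedents, consequent):
    """Return (input_vector, output_vector) for a rule consequent <- antecedents."""
    x = [0] * len(atoms)
    y = [0] * len(atoms)
    for a in antecedents:
        x[index[a]] = 1
    y[index[consequent]] = 1
    return x, y

print(encode_rule(["A"], "B"))  # ([1, 0, 0], [0, 1, 0])
```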

## References

Alchourrón, C. E., Gärdenfors, P., & Makinson, D. (1985). On the logic of theory change: Partial meet contraction and revision functions. *The Journal of Symbolic Logic*, *50*(2), 510–530.

Antoniou, G., Billington, D., & Maher, M. (1998). Sceptical logic programming based default reasoning: Defeasible logic rehabilitated. In R. Miller & M. Shanahan (Eds.), *COMMONSENSE 98, The 4th symposium on logical formalizations of commonsense reasoning*, London.

Apt, K. R., & Pedreschi, D. (1993). Reasoning about termination of pure Prolog programs. *Information and Computation*, *106*, 109–157.

Baggio, G., Stenning, K., & van Lambalgen, M. (2016). The cognitive interface. In M. Aloni & P. Dekker (Eds.), *Cambridge handbook of formal semantics*. Cambridge: Cambridge University Press.

Boella, G., & van der Torre, L. (2005). Permission and authorization in normative multiagent systems. In *Procs. of int. conf. on artificial intelligence and law ICAIL* (pp. 236–237).

Boella, G., & van der Torre, L. (2006). A game theoretic approach to contracts in multiagent systems. *IEEE Transactions on Systems, Man, and Cybernetics, Part C*, *36*(1), 68–79.

Boella, G., Pigozzi, G., & van der Torre, L. (2009). Normative framework for normative system change. In *8th Int. joint conf. on autonomous agents and multiagent systems AAMAS 2009, IFAAMAS* (pp. 169–176).

Bradley, R., & Drechsler, M. (2014). Types of uncertainty. *Erkenntnis*, *79*, 1225–1248.

Doets, K. (1994). *From logic to logic programming*. Cambridge, MA: MIT Press.

Gabbay, D., Horty, J., Parent, X., van der Meyden, R., & van der Torre, L. (Eds.). (2013). *Handbook of deontic logic and normative systems*. London: College Publications.

Garcez, A., Broda, K., & Gabbay, D. M. (2001). Symbolic knowledge extraction from trained neural networks: A sound approach. *Artificial Intelligence*, *125*, 155–207.

Garcez, A., Broda, K., & Gabbay, D. (2002). *Neural-symbolic learning systems: Foundations and applications. Perspectives in neural computing*. Berlin: Springer.

Garcez, A., Gabbay, D., & Lamb, L. (2005). Value-based argumentation frameworks as neural-symbolic learning systems. *Journal of Logic and Computation*, *15*(6), 1041–1058.

Garcez, A., Lamb, L. C., & Gabbay, D. M. (2009). *Neural-symbolic cognitive reasoning*. Berlin: Springer.

Garcez, A., Besold, T. R., de Raedt, L., Földiak, P., Hitzler, P., Icard, T., et al. (2015). Neural-symbolic learning and reasoning: Contributions and challenges. In *AAAI Spring 2015 symposium on knowledge representation and reasoning: Integrating symbolic and neural approaches*, AAAI technical reports (Vol. SS-15-03). AAAI Press.

Gelfond, M., & Lifschitz, V. (1988). The stable model semantics for logic programming. In *Proceedings of the 5th logic programming symposium* (pp. 1070–1080). MIT Press.

Gelfond, M., & Lifschitz, V. (1991). Classical negation in logic programs and disjunctive databases. *New Generation Computing*, *9*, 365–385.

Gigerenzer, G., Todd, P. M., & The ABC Research Group. (1999). *Simple heuristics that make us smart*. Oxford: Oxford University Press.

Gigerenzer, G., Hertwig, R., & Pachur, T. (2011). *Heuristics: The foundations of adaptive behavior*. Oxford: Oxford University Press.

Graves, A., Mohamed, A., & Hinton, G. E. (2013). Speech recognition with deep recurrent neural networks. CoRR, arXiv:1303.5778.

Halpern, J. (2005). *Reasoning about uncertainty*. Cambridge, MA: MIT Press.

Hansen, J. (2006). Deontic logics for prioritized imperatives. *Artificial Intelligence and Law*, *14*(1–2), 1–34.

Haykin, S. (1999). *Neural networks: A comprehensive foundation*. Upper Saddle River: Prentice Hall.

Horty, J. F. (1993). Deontic logic as founded on nonmonotonic logic. *Annals of Mathematics and Artificial Intelligence*, *9*(1–2), 69–91.

Jörgensen, J. (1937). Imperatives and logic. *Erkenntnis*, *7*, 288–296.

Juslin, P., Nilsson, H., & Winman, A. (2009). Probability theory, not the very guide of life. *Psychological Review*, *116*(4), 856–874.

Kahneman, D., & Tversky, A. (1982). The concept of probability in psychological experiments. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), *The concept of probability in psychological experiments* (pp. 509–520). Cambridge: Cambridge University Press.

Kern-Isberner, G., & Lukasiewicz, T. (2017). Many facets of reasoning under uncertainty, inconsistency, vagueness, and preferences: A brief survey. *Künstliche Intelligenz*. doi:10.1007/s13218-016-0480-6.

Knight, F. (1921). *Risk, uncertainty and profit*. New York: Hart, Schaffner and Marx.

Kowalski, R. A. (1988). The early years of logic programming. *Communications of the ACM*, *31*, 38–42.

Kraus, S., Lehmann, D., & Magidor, M. (1990). Nonmonotonic reasoning, preferential models and cumulative logics. *Artificial Intelligence*, *44*(1), 167–207.

Lindahl, L., & Odelstad, J. (2003). Normative systems and their revision: An algebraic approach. *Artificial Intelligence and Law*, *11*(2–3), 81–104.

Lloyd, J. W. (1987). *Foundations of logic programming*. Berlin: Springer.

Makinson, D., & van der Torre, L. (2000). Input/output logics. *Journal of Philosophical Logic*, *29*(4), 383–408.

Makinson, D., & van der Torre, L. (2001). Constraints for input-output logics. *Journal of Philosophical Logic*, *30*(2), 155–185.

Makinson, D., & van der Torre, L. (2003a). Permissions from an input-output perspective. *Journal of Philosophical Logic*, *32*(4), 391–416.

Makinson, D., & van der Torre, L. (2003b). What is input/output logic? In B. Löwe, W. Malzkorn, & T. Räsch (Eds.), *Foundations of the formal sciences II: Applications of mathematical logic in philosophy and linguistics, Trends in logic* (Vol. 17). Kluwer.

McCarthy, J. (1980). Circumscription: A form of non-monotonic reasoning. *Artificial Intelligence*, *13*(1), 27–39.

Minsky, M. (1974). A framework for representing knowledge. Tech. Rep. 306, AI Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA.

Mousavi, S., & Gigerenzer, G. (2014). Risk, uncertainty, and heuristics. *Journal of Business Research*, *67*, 1671–1678.

Mozina, M., Zabkar, J., & Bratko, I. (2007). Argument based machine learning. *Artificial Intelligence*, *171*(10–15), 922–937.

Nilsson, N. J. (1986). Probabilistic logic. *Artificial Intelligence*, *28*(1), 71–87.

Nute, D. (1994). Defeasible logic. In D. Gabbay & J. Robinson (Eds.), *Handbook of logic in artificial intelligence and logic programming* (Vol. 3, pp. 353–396). Oxford: Oxford University Press.

Nute, D. (Ed.). (1997). *Defeasible deontic logic, Synthese library* (Vol. 263). Alphen aan den Rijn: Kluwer.

Oaksford, M., & Chater, N. (1998). *Rationality in an uncertain world: Essays in the cognitive science of human understanding*. Hove: Psychology Press.

Pearl, J. (2000). *Causality: Models, reasoning, and inference*. Cambridge: Cambridge University Press.

Pijnacker, J., Geurts, B., van Lambalgen, M., Buitelaar, J., & Hagoort, P. (2010). Exceptions and anomalies: An ERP study on context sensitivity in autism. *Neuropsychologia*, *48*, 2940–2951.

Pinosio, R. (in prep.). A common core shared by logic programming and probabilistic causal models.

Prakken, H., & Sergot, M. (1997). Dyadic deontic logic and contrary-to-duty obligations. In D. Nute (Ed.), *Defeasible deontic logic* (pp. 223–262). Berlin: Springer.

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. Rumelhart, J. McClelland, & PDP Research Group (Eds.), *Parallel distributed processing* (Vol. 1, pp. 318–362). Cambridge: MIT Press.

Sen, S., & Airiau, S. (2007). Emergence of norms through social learning. In *Procs. of the 20th international joint conference on artificial intelligence—IJCAI* (pp. 1507–1512).

Shanahan, M. (2002). Reinventing Shakey. In J. Minker (Ed.), *Logic-based artificial intelligence*. Dordrecht: Kluwer.

Shoham, Y. (1987). A semantical approach to non-monotonic logics. In *Proceedings of the tenth international joint conference on artificial intelligence (IJCAI)* (pp. 388–392).

Shoham, Y., & Tennenholtz, M. (1997). On the emergence of social conventions: Modeling, analysis, and simulations. *Artificial Intelligence*, *94*(1–2), 139–166.

Sloman, S., & Lagnado, D. (2015). Causality in thought. *Annual Review of Psychology*, *66*, 1–25.

Stenning, K., & van Lambalgen, M. (2008). *Human reasoning and cognitive science*. Cambridge, MA: MIT Press.

Stenning, K., & van Lambalgen, M. (2010). The logical response to a noisy world. In M. Oaksford (Ed.), *Cognition and conditionals: Probability and logic in human thought* (pp. 85–102). Oxford: Oxford University Press.

Stenning, K., & Varga, A. (2016). Many logics for the many things that people do in reasoning. In L. Ball & V. Thompson (Eds.), *International handbook of thinking and reasoning*. Abingdon-on-Thames: Psychology Press.

Stenning, K., Martignon, L., & Varga, A. (2017). Adaptive reasoning: Integrating fast and frugal heuristics with a logic of interpretation. *Decision*.

Tosatto, S. C., Boella, G., van der Torre, L., & Villata, S. (2012). Abstract normative systems: Semantics and proof theory. In G. Brewka, T. Eiter, & S. A. McIlraith (Eds.), *Principles of knowledge representation and reasoning: Proceedings of the thirteenth international conference*. AAAI Press.

Towell, G. G., & Shavlik, J. W. (1994). Knowledge-based artificial neural networks. *Artificial Intelligence*, *70*(1), 119–165.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. *Science*, *185*(4157), 1124–1131.

Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. *Psychological Review*, *90*(4), 293.

van der Torre, L. (1997). Reasoning about obligations. PhD thesis, Erasmus University Rotterdam.

van der Torre, L., & Tan, Y. (1999). Deontic update semantics. In P. McNamara & H. Prakken (Eds.), *Norms, logics and information systems: New studies on deontic logic and computer science*. Amsterdam: IOS Press.

van der Torre, L. (2010). Deontic redundancy: A fundamental challenge for deontic logic. In *Deontic logic in computer science, 10th international conference (\(\Delta\)EON 2010)*.

van Lambalgen, M., & Hamm, F. (2004). *The proper treatment of events*. Oxford: Blackwell.

Varga, A. (2013). A formal model of infants’ acquisition of practical knowledge from observation. PhD thesis, Central European University, Budapest.

von Wright, G. H. (1951). Deontic logic. *Mind*, *60*, 1–15.

Weston, J., Chopra, S., & Bordes, A. (2014). Memory networks. CoRR, arXiv:1410.3916.

## Acknowledgements

We want to thank the following people for their indispensable contributions to different parts of the work reported in this article: Guido Boella, Silvano Colombo Tosatto, Valerio Genovese, Laura Martignon, Alan Perotti, and Alexandra Varga.


## About this article

### Cite this article

Besold, T. R., Garcez, A. d., Stenning, K., *et al.* Reasoning in Non-probabilistic Uncertainty: Logic Programming and Neural-Symbolic Computing as Examples. *Minds & Machines*, *27*, 37–77 (2017). https://doi.org/10.1007/s11023-017-9428-3


### Keywords

- Uncertainty in reasoning
- Interpretation
- Logic programming
- Dynamic norms
- Neural-symbolic integration