Abstract
The Little Nell Problem was formulated by Drew McDermott in 1982. It reveals unexpected complexities in the interaction between a planning agent's beliefs and intentions. This paper discusses the problem and proposes a solution.
Notes
Of course, an intention can be based on multiple goals. But if we allow goals to be conjunctive, we can assume without loss of generality that intentions are based on a single goal. That is what I will do. Also, I will assume that goals are dated propositions—propositions to the effect that a state obtains at one or more specified times. When I say that “a goal will occur,” this means only that its associated times are later than some reference time, which is specified by the context.
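The conventions in this note can be illustrated with a small sketch. The names below (`DatedProposition`, `conjoin`, `will_occur`) are not from the paper; they simply model goals as dated propositions, conjoin several goals into one without loss of generality, and read "a goal will occur" as all of its associated times being later than a contextually supplied reference time (one natural reading of the note).

```python
from dataclasses import dataclass

# Illustrative sketch only; these names are assumptions, not the paper's.
# A "dated proposition" pairs a state description with the times at
# which that state is required to obtain.

@dataclass(frozen=True)
class DatedProposition:
    state: str               # e.g. "Nell is off the tracks"
    times: frozenset         # times at which the state must obtain

def conjoin(goals):
    """Treat several goals as a single conjunctive goal, so that an
    intention can be based on one goal without loss of generality."""
    return frozenset(goals)

def will_occur(goal, reference_time):
    """A goal 'will occur' iff its associated times are later than
    the reference time specified by the context."""
    return all(t > reference_time for p in goal for t in p.times)

g1 = DatedProposition("Nell is rescued", frozenset({5}))
g2 = DatedProposition("Dudley is at the tracks", frozenset({4}))
goal = conjoin([g1, g2])
print(will_occur(goal, 3))  # True: both associated times are later than 3
```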
Not everyone accepts this condition on intentions, and in fact in planning theory it is useful to think of goals as degenerate partial plans. (Here, I call these things ‘tentative goals’.) By analogy, it might also seem sensible to allow degenerate intentions. Readers who are uncomfortable with this way of framing the process of intention formation can think of it as a terminological decision.
See Thomason (1984).
For an account of how this works with reasons for belief, see Horty (2012).
It might well be possible to modify Cohen and Levesque’s theory to accommodate more of the mechanisms for intention revision listed in (2), but I believe that a thoroughgoing attempt to do this would have to result in a very different theory, because of the importance they attach to the persistence of intentions.
See, especially, Turner (1999) and the references cited there.
But, as a referee observes, it points in the direction of a different problem, having to do with commitment cycles in multiagent reasoning.
As a referee pointed out, a similar idea is incorporated in causal-link planners, a popular type of planning algorithm. See Barrett and Weld (1994).
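The causal-link idea mentioned here can be sketched minimally. The names below are hypothetical and not drawn from Barrett and Weld's implementation: a link records that a producer step achieves a condition a consumer step needs, and any step that could undo that condition "threatens" the link, which the planner must resolve with ordering constraints.

```python
from dataclasses import dataclass

# Minimal illustrative sketch of a causal link (hypothetical names).

@dataclass(frozen=True)
class Step:
    name: str
    adds: frozenset       # conditions the step makes true
    deletes: frozenset    # conditions the step makes false

@dataclass(frozen=True)
class CausalLink:
    producer: Step        # step that achieves the condition
    condition: str        # the protected condition
    consumer: Step        # step that needs the condition

def threatens(step, link):
    """A step threatens a link if it deletes the protected condition;
    a planner would then order it before the producer or after the
    consumer to keep the link intact."""
    return link.condition in step.deletes

rescue = Step("rescue-nell", adds=frozenset({"nell-safe"}), deletes=frozenset())
celebrate = Step("celebrate", adds=frozenset(), deletes=frozenset())
train = Step("train-arrives", adds=frozenset(), deletes=frozenset({"nell-safe"}))

link = CausalLink(rescue, "nell-safe", celebrate)
print(threatens(train, link))  # True: the train would undo "nell-safe"
```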
References
Barrett, A., & Weld, D. S. (1994). Partial-order planning: Evaluating possible efficiency gains. Artificial Intelligence, 67(1), 71–112.
Bratman, M. E. (1984). Two faces of intention. The Philosophical Review, 93(3), 375–405.
Cohen, P. R., & Levesque, H. J. (1990). Intention is choice with commitment. Artificial Intelligence, 42(3), 213–261.
de Kleer, J. (1986). An assumption-based TMS. Artificial Intelligence, 28(1), 127–162.
Goodman, N. (1947). The problem of counterfactual conditionals. The Journal of Philosophy, 44, 113–118.
Haas, A. (1985). Possible events, actual events, and robots. Computational Intelligence, 1(2), 59–70.
Horty, J. F. (2012). Reasons as defaults. Oxford: Oxford University Press.
Lent, J., & Thomason, R. H. (2015). Action models for conditionals. Journal of Logic, Language, and Information, 24(2), 211–231.
Lewis, D. K. (1973). Counterfactuals. Cambridge, MA: Harvard University Press.
McDermott, D. (1982). A temporal logic for reasoning about processes and plans. Cognitive Science, 6, 101–155.
Mitchell, T. M. (1978). Version spaces: An approach to concept learning. Ph.D. dissertation, Computer Science Department, Stanford University, Stanford, CA.
Thomason, R. H. (1984). Combinations of tense and modality. In D. Gabbay & F. Günthner (Eds.), Handbook of philosophical logic, Volume II: Extensions of classical logic (pp. 135–165). Dordrecht: D. Reidel.
Turner, H. (1999). A logic of universal causation. Artificial Intelligence, 113(1–2), 87–123.
Acknowledgments
I am grateful to two referees of this paper for helpful comments.
Thomason, R.H. The Little Nell Problem: reasonable and resolute maintenance of agent intentions. Synthese 195, 433–440 (2018). https://doi.org/10.1007/s11229-016-1229-3