Requirements for belief models in cooperative dialogue

Abstract

Models of rationality typically rely on underlying logics that allow simulated agents to entertain beliefs about one another to any depth of nesting. Such models seem to be overly complex when used for belief modelling in environments in which cooperation between agents can be assumed, i.e., most HCI contexts. We examine some existing dialogue systems and find that deeply-nested beliefs are seldom supported, and that where present they appear to be unnecessary except in some situations involving deception.

The use of nested beliefs is associated with nested reasoning (i.e., reasoning about other agents' reasoning). We argue that for cooperative dialogues, representations of individually nested beliefs at the third level (i.e., what A thinks B thinks A thinks B thinks) and beyond are in principle unnecessary unless directly available from the environment, because the corresponding nested reasoning is redundant.

Since cooperation sometimes requires that agents reason about what is mutually believed, we propose a representation in which the second and all subsequent nesting levels are merged into a single category. In situations affording individual deeply-nested beliefs, such a representation restricts agents to human-like referring and repair strategies, where an unrestricted agent might make an unrealistic and perplexing utterance.
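To make the proposed representation concrete, the following sketch is our own illustrative Python, not the system described in the paper; the class and method names are hypothetical. It stores an agent's beliefs so that only the first nesting level (what A believes B believes) is represented individually, while the second and all deeper nesting levels are folded into a single mutual-belief category.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class BeliefStore:
    """Beliefs agent A holds with respect to one dialogue partner B.

    Only the first nesting level is stored individually; the second and
    all subsequent levels are merged into a single 'mutual' category,
    as proposed in the abstract.
    """
    own: set = field(default_factory=set)            # level 0: what A believes
    about_partner: set = field(default_factory=set)  # level 1: what A believes B believes
    mutual: set = field(default_factory=set)         # level 2 and deeper: mutually believed

    def attribute(self, proposition: str, depth: int) -> None:
        """Record a belief attributed at a given nesting depth."""
        if depth == 0:
            self.own.add(proposition)
        elif depth == 1:
            self.about_partner.add(proposition)
        else:
            # No individual representation beyond the first nesting level:
            # deeper attributions collapse into the shared category.
            self.mutual.add(proposition)


store = BeliefStore()
store.attribute("the red block is on the table", depth=1)  # A thinks B thinks p
store.attribute("the red block is on the table", depth=3)  # collapsed into mutual belief
```

Under this scheme an agent can still reason about what is mutually believed, but it cannot form the distinct deeply nested attributions that would license the unrealistic referring or repair moves mentioned above.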