User Modeling and User-Adapted Interaction, Volume 17, Issue 5, pp. 439–474

Inferences, suppositions and explanatory extensions in argument interpretation

  • Sarah George
  • Ingrid Zukerman
  • Michael Niemann
Original Paper


Abstract

We describe a probabilistic approach for the interpretation of user arguments that integrates three aspects of an interpretation: inferences, suppositions and explanatory extensions. Inferences fill in information that connects the propositions in a user’s argument, suppositions postulate new information that is likely believed by the user and is necessary to make sense of his or her argument, and explanatory extensions postulate information the user may have implicitly considered when constructing his or her argument. Our system receives as input an argument entered through a web interface, and produces an interpretation in terms of its underlying knowledge representation—a Bayesian network. Our evaluations show that suppositions and explanatory extensions are necessary components of interpretations, and that users consider appropriate the suppositions and explanatory extensions postulated by our system.
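The abstract states that interpretations are produced over a Bayesian network, the system's underlying knowledge representation. As a rough illustration of the kind of probabilistic reasoning involved, the following is a minimal sketch of posterior computation by enumeration over a toy boolean network. The node names and probability values here are invented for illustration only; they are not taken from the paper's actual knowledge base or algorithm.

```python
from itertools import product

# Toy Bayesian network over boolean variables, stored as
# {node: (parents, cpt)}, where cpt maps a tuple of parent values
# to P(node = True | parents). All names and numbers are hypothetical.
NET = {
    "OpportunityToMurder": ((), {(): 0.3}),
    "MeansToMurder":       ((), {(): 0.2}),
    "MurderedVictim": (
        ("OpportunityToMurder", "MeansToMurder"),
        {(True, True): 0.9, (True, False): 0.2,
         (False, True): 0.1, (False, False): 0.01},
    ),
}

def joint(assignment):
    """P(full assignment) as a product of local CPT entries."""
    p = 1.0
    for node, (parents, cpt) in NET.items():
        p_true = cpt[tuple(assignment[par] for par in parents)]
        p *= p_true if assignment[node] else 1.0 - p_true
    return p

def posterior(query, evidence):
    """P(query = True | evidence) by enumerating the hidden variables."""
    hidden = [n for n in NET if n != query and n not in evidence]
    numerator = denominator = 0.0
    for values in product([True, False], repeat=len(hidden)):
        partial = dict(evidence, **dict(zip(hidden, values)))
        for query_value in (True, False):
            partial[query] = query_value
            p = joint(partial)
            denominator += p
            if query_value:
                numerator += p
    return numerator / denominator
```

For example, `posterior("MurderedVictim", {"OpportunityToMurder": True})` weighs both settings of the unobserved `MeansToMurder` node, giving 0.2 × 0.9 + 0.8 × 0.2 = 0.34. An interpretation system in the spirit described by the abstract would search over ways of mapping a user's stated propositions onto such network nodes, scoring candidate mappings by probabilities of this kind.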


Keywords: Discourse interpretation · Suppositions · Explanatory extensions · Probabilistic approach · Bayesian networks





Copyright information

© Springer Science+Business Media B.V. 2007

Authors and Affiliations

  • Sarah George¹,²
  • Ingrid Zukerman¹
  • Michael Niemann¹

  1. Faculty of Information Technology, Monash University, Clayton, Australia
  2. CVS Dude, Toowong, Australia
