An overview of cooperative answering in databases

  • Jack Minker
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1495)


The field of cooperative answering goes back to work started by Joshi and Webber [12] in natural language processing in the early 1980s at the University of Pennsylvania. The work was applied to databases and information systems at the University of Pennsylvania by Kaplan [14, 15] and Mays [17]. Other early work at the University of Pennsylvania and at other universities is discussed in [13, 16, 18, 24, 25, 26]. Databases and knowledge base systems are often difficult to use because they do not attempt to cooperate with their users. A database or knowledge base query system provides literal answers to the queries posed to it. Such answers may not always be the best answers: an answer that includes extra or alternative information may be more useful, and less misleading, to a user.

This lecture surveys foundational work that has been done toward developing database and knowledge base systems with the ability to exhibit cooperative behavior. In the 1970s, Grice [11] proposed maxims of cooperative conversation. These maxims provide the starting point for the field of cooperative answering.

To develop a general system for data and knowledge bases, it is important to specify both the sources of information needed to provide cooperative behavior and what constitutes cooperative behavior. Several sources of knowledge apply. The basic knowledge in a system is given by explicit data, referred to as data in a relational database or as facts (extensional data) in a deductive database; by general rules that permit new relations (or new predicates) to be derived from existing data, referred to as views in relational databases and as intensional data in deductive databases; and by integrity constraints, which must be consistent with the extensional and intensional data. Integrity constraints may be obtained from the user or the database administrator, or through a data mining capability. Whereas integrity constraints must be consistent with every instance of the database schema, another source of knowledge is state constraints, which apply to the current state but need not apply to a subsequent state after an update. Two additional sources of knowledge arise from information about the users. One is the class of the user, for example, an engineer or a child, each of whom expects different kinds of answers to queries; the other is user constraints, which must be satisfied. User constraints need not be consistent with the database, but reflect the interests, preferences and desires of the user.
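These knowledge sources can be sketched with a toy deductive database in Python. All relation names, tuples, and the constraint below are invented for illustration and are not drawn from any of the systems surveyed.

```python
# A minimal sketch of the knowledge sources: extensional facts, an
# intensional rule (a view), and an integrity constraint.  The
# relations ("parent", "grandparent") are hypothetical examples.

# Extensional data: explicit facts (tuples in a relational database).
facts = {
    ("parent", ("ann", "bob")),
    ("parent", ("bob", "carl")),
}

# Intensional data: a rule deriving new tuples from existing facts,
# analogous to a relational view:
#   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
def grandparent():
    return {
        (x, z)
        for (r1, (x, y1)) in facts if r1 == "parent"
        for (r2, (y2, z)) in facts if r2 == "parent" and y1 == y2
    }

# An integrity constraint, which must hold in every database state:
# no individual may be their own parent.
def check_integrity():
    return all(x != y for (r, (x, y)) in facts if r == "parent")

print(grandparent())       # {("ann", "carl")}
print(check_integrity())   # True
```

A state constraint, by contrast, would be checked only against the current contents of `facts` and could be dropped after an update.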

Alternative cooperative behaviors are explained and illustrated by examples. The cooperative behaviors discussed are:

  1. Misconceptions. A misconception is a query for which it is not possible, in any state of the database, to have an answer.

  2. State Misconceptions. A state misconception is a query for which it is not possible to have an answer, given the current state of the database.

  3. False Presuppositions. A query has a false presupposition when it fails to have an answer, but is neither a misconception nor a state misconception.

  4. Intensional Answers. An intensional answer is a generalization of a query which provides a rule that satisfies the query, but does not necessarily provide data that satisfies the query.

  5. Relaxed Answers. A relaxed answer to a query is an answer that may or may not satisfy the original query, but provides an alternative answer that may meet the needs of the user.

  6. Scalar Implicature. A scalar implicature is an answer that gives a range of values that may meet the query.

  7. User Goals, Interests and Preferences. User goals, interests and preferences should be adhered to when answering a query.
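As a rough illustration, two of these behaviors can be sketched in a few lines of Python. The constraint, taxonomy, relation names, and data below are invented for the example and do not come from any of the systems surveyed.

```python
# Sketch of two cooperative behaviors: detecting a misconception
# against a (hypothetical) integrity constraint, and producing a
# relaxed answer via a small type-abstraction taxonomy.

# Hypothetical integrity constraint: teaching assistants are students,
# and students never teach courses, so ta(X) and teaches(X, C) cannot
# both hold in any database state.
def is_misconception(query_atoms):
    """A query asking for ta(X) and teaches(X, C) together can never
    have an answer, in any state: a misconception."""
    preds = {p for (p, _) in query_atoms}
    return "ta" in preds and "teaches" in preds

# Relaxation: if a query for a specific value fails, retry with a more
# general value taken from a type-abstraction hierarchy.
taxonomy = {"sports_car": "car", "car": "vehicle"}
db = {("for_sale", "car"), ("for_sale", "truck")}

def relaxed_answer(pred, value):
    while value is not None:
        if (pred, value) in db:
            return (pred, value)
        value = taxonomy.get(value)   # climb to the more general type
    return None

print(is_misconception([("ta", "x"), ("teaches", ("x", "c"))]))  # True
print(relaxed_answer("for_sale", "sports_car"))  # ("for_sale", "car")
```

The relaxed answer `("for_sale", "car")` does not satisfy the literal query for a sports car, but may still meet the user's underlying need.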

A brief description is provided of three systems that have been implemented, which exhibit cooperative behavior for relational and deductive databases. The systems and their features are:

  1. Cooperative AnsweRing Meta Interpreter (CARMIN), developed at the University of Maryland by Minker and his students [8, 9, 10]. Its features are:

    (a) Misconceptions

    (b) State Misconceptions

    (c) False Presuppositions

    (d) Intensional Answers

    (e) Relaxed Answers

    (f) User Goals, Interests and Limited Preferences

  2. CoBase, developed at UCLA by Chu and his students [1–5]. A language, CoSQL, has been implemented and interfaced with an existing database (Oracle) and SQL. Its features are:

    (a) Intensional Answers

    (b) Relaxed Answers

    (c) User Goals, Inferences and Preferences

  3. FLEX, developed at George Mason University by Motro [19–23]. Its features are:

    (a) Well-Formedness Test of Queries and Automatic Modification

    (b) Relaxed Queries

    (c) False Presuppositions


For a discussion of the state of the art of cooperative answering systems, see [6].

A functional description is provided of a cooperative database system. It is currently possible to develop a cooperative capability that interfaces with any of the existing database systems. Since proposed and future versions of relational databases will include capabilities to handle recursion and semantic query optimization, the ability to include cooperative capabilities will, in the not too distant future, be incorporated into such systems.
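One possible shape for such a cooperative capability is sketched below as a Python front end wrapped around an existing query interface. The class, its methods, and the sample data are assumptions made for illustration, not the design of any of the surveyed systems.

```python
# Hypothetical cooperative front end: it intercepts queries to an
# existing database, checks them against integrity constraints
# (misconception detection), and relaxes them when the answer is empty.

class CooperativeFrontEnd:
    def __init__(self, execute, constraints, relaxations):
        self.execute = execute          # callable: query -> list of rows
        self.constraints = constraints  # callables: query -> error message or None
        self.relaxations = relaxations  # callables: query -> relaxed query

    def answer(self, query):
        # 1. Misconception check: does the query violate a constraint?
        for check in self.constraints:
            error = check(query)
            if error:
                return {"answer": [], "explanation": error}
        # 2. Literal answer from the underlying database.
        rows = self.execute(query)
        if rows:
            return {"answer": rows, "explanation": None}
        # 3. Empty answer: try relaxed variants of the query.
        for relax in self.relaxations:
            relaxed = relax(query)
            rows = self.execute(relaxed)
            if rows:
                return {"answer": rows,
                        "explanation": f"no exact match; relaxed to {relaxed!r}"}
        return {"answer": [], "explanation": "no answer, even after relaxation"}

# Usage with a toy in-memory "database" and one relaxation rule.
toy_db = {"flights to Boston": [], "flights to Massachusetts": ["BA123"]}
fe = CooperativeFrontEnd(
    execute=lambda q: toy_db.get(q, []),
    constraints=[],
    relaxations=[lambda q: q.replace("Boston", "Massachusetts")],
)
result = fe.answer("flights to Boston")
print(result["answer"])  # ['BA123']
```

Because the front end only needs an `execute` callable, the same wrapper could, in principle, sit in front of any existing database system, which is the point made above.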




References

  1. W. W. Chu and Q. Chen. A structured approach for cooperative query answering. IEEE Transactions on Knowledge and Data Engineering, 6(5), Oct. 1994.
  2. W. W. Chu, Q. Chen, and A. Y. Hwang. Query answering via cooperative data inference. Journal of Intelligent Information Systems (JIIS), 3(1):57–87, Feb. 1994.
  3. W. W. Chu, Q. Chen, and R.-C. Lee. Cooperative query answering via type abstraction hierarchy. In S. M. Deen, editor, Cooperating Knowledge Based Systems 1990, pages 271–290. Springer-Verlag, University of Keele, U.K., 1991.
  4. W. W. Chu, Q. Chen, and M. A. Merzbacher. CoBase: A cooperative database system. In R. Demolombe and T. Imielinski, editors, Nonstandard Queries and Nonstandard Answers, Studies in Logic and Computation 3, chapter 2, pages 41–73. Clarendon Press, Oxford, 1994.
  5. W. W. Chu, H. Yang, K. Chiang, M. Minock, G. Chow, and C. Larson. CoBase: A scalable and extensible cooperative information system. Journal of Intelligent Information Systems (JIIS), 6(2/3):223–259, May 1996.
  6. T. Gaasterland, P. Godfrey, and J. Minker. An overview of cooperative answering. Journal of Intelligent Information Systems, 1(2):123–157, 1992. Invited paper.
  7. T. Gaasterland, P. Godfrey, and J. Minker. Relaxation as a platform for cooperative answering. Journal of Intelligent Information Systems, 1:293–321, 1992.
  8. T. Gaasterland, P. Godfrey, J. Minker, and L. Novik. A cooperative answering system. In A. Voronkov, editor, Proceedings of the Logic Programming and Automated Reasoning Conference, Lecture Notes in Artificial Intelligence 624, pages 478–480. Springer-Verlag, St. Petersburg, Russia, July 1992.
  9. T. Gaasterland, P. Godfrey, J. Minker, and L. Novik. Cooperative answers in database systems. In Proceedings of the Space Operations, Applications, and Research Conference, Houston, Texas, Aug. 1992.
  10. P. Godfrey, J. Minker, and L. Novik. An architecture for a cooperative database system. In W. Litwin and T. Risch, editors, Proceedings of the First International Conference on Applications of Databases, Lecture Notes in Computer Science 819, pages 3–24. Springer-Verlag, Vadstena, Sweden, June 1994.
  11. H. Grice. Logic and conversation. In P. Cole and J. Morgan, editors, Syntax and Semantics. Academic Press, 1975.
  12. A. Joshi, B. Webber, and I. Sag, editors. Elements of Discourse Understanding. Cambridge University Press, 1981.
  13. A. K. Joshi, B. L. Webber, and R. M. Weischedel. Living up to expectations: Computing expert responses. In Proceedings of the National Conference on Artificial Intelligence, pages 169–175, University of Texas at Austin, Aug. 1984. The American Association for Artificial Intelligence.
  14. S. J. Kaplan. Appropriate responses to inappropriate questions. In Joshi et al. [12], pages 127–144.
  15. S. J. Kaplan. Cooperative responses from a portable natural language query system. Artificial Intelligence, 19(2):165–187, Oct. 1982.
  16. W. Lehnert. A computational theory of human question answering. In Joshi et al. [12], pages 145–176.
  17. E. Mays. Correcting misconceptions about database structure. In Proceedings of the CSCSI '80, 1980.
  18. K. McKeown. Generating Natural Language Text in Response to Questions about Database Queries. PhD thesis, University of Pennsylvania, 1982.
  19. A. Motro. Extending the relational model to support goal queries. In Proceedings from the First International Workshop on Expert Database Systems, pages 129–150. Benjamin/Cummings, 1986.
  20. A. Motro. SEAVE: A mechanism for verifying user presuppositions in query systems. ACM Transactions on Office Information Systems, 4(4):312–330, Oct. 1986.
  21. A. Motro. Using constraints to provide intensional answers to relational queries. In Proceedings of the Fifteenth International Conference on Very Large Data Bases, Aug. 1989.
  22. A. Motro. FLEX: A tolerant and cooperative user interface to databases. IEEE Transactions on Knowledge and Data Engineering, 2(2):231–246, June 1990.
  23. A. Motro. Panorama: A database system that annotates its answers to queries with their properties. Journal of Intelligent Information Systems (JIIS), 7(1):51–74, Sept. 1996.
  24. M. E. Pollack. Generating expert answers through goal inference. Technical report, SRI International, Stanford, California, Oct. 1983.
  25. M. E. Pollack, J. Hirschberg, and B. Webber. User participation in the reasoning processes of expert systems. In Proceedings of the American Association of Artificial Intelligence, 1982.
  26. B. L. Webber and E. Mays. Varieties of user misconceptions: Detection and correction. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pages 650–652, Karlsruhe, Germany, Aug. 1983.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Jack Minker, Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park
