
Abstract

We propose a new approach to communication between agents that perform inductive inference. Consider a community of agents in which each agent has only a limited view of the overall world. When an agent in this community induces a hypothesis, that hypothesis necessarily reflects the agent's partial view. If the agent communicates the hypothesis to another agent, and the hypothesis conflicts with the receiving agent's view of the world, the receiving agent must modify or discard it.

Previous systems have used voting methods or theory refinement techniques to integrate these partial hypotheses. However, both mechanisms risk destroying parts of a hypothesis that are in fact correct. Our proposal is that an agent should communicate the bounds of an induced hypothesis along with the hypothesis itself. These bounds allow a hypothesis to be judged in the context from which it was formed.

This paper examines the use of version space boundary sets to represent these bounds. Boundary sets can be manipulated with set operations, and these operations can be used to evaluate and integrate multiple partial hypotheses. We describe a simple implementation of this approach and draw some conclusions about its practicality. Finally, we describe a tentative set of KQML operators for communicating hypotheses and their bounds.
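As an illustration of how boundary-set operations can evaluate and integrate partial hypotheses, the sketch below merges (intersects) two version spaces, each given as an (S, G) pair of boundary sets. This is a minimal sketch only, assuming a toy conjunctive attribute language in which a hypothesis is a tuple of attribute values or '?' wildcards; it is not the implementation described in the paper, and the function names and merge rules here are illustrative assumptions.

```python
# Minimal sketch of version space merging over boundary sets.
# Hypotheses are tuples of attribute values, with '?' meaning "any value".

def more_general(h1, h2):
    """True if hypothesis h1 is at least as general as h2."""
    return all(a == '?' or a == b for a, b in zip(h1, h2))

def lgg(h1, h2):
    """Least general generalization: keep agreeing values, wildcard the rest."""
    return tuple(a if a == b else '?' for a, b in zip(h1, h2))

def gcs(h1, h2):
    """Greatest common specialization, or None if the hypotheses conflict."""
    out = []
    for a, b in zip(h1, h2):
        if a == '?':
            out.append(b)
        elif b == '?' or a == b:
            out.append(a)
        else:
            return None                     # contradictory attribute values
    return tuple(out)

def merge(vs1, vs2):
    """Intersect two version spaces, each given as an (S, G) pair of sets."""
    S1, G1 = vs1
    S2, G2 = vs2
    # New S: minimal generalizations of pairs from S1 x S2 that still lie
    # below some member of both G boundaries.
    S = {lgg(s1, s2) for s1 in S1 for s2 in S2}
    S = {s for s in S
         if any(more_general(g, s) for g in G1)
         and any(more_general(g, s) for g in G2)}
    S = {s for s in S if not any(s != t and more_general(s, t) for t in S)}
    # New G: maximal common specializations of pairs from G1 x G2 that still
    # cover some member of the new S boundary.
    G = {h for g1 in G1 for g2 in G2 if (h := gcs(g1, g2)) is not None}
    G = {g for g in G if any(more_general(g, s) for s in S)}
    G = {g for g in G if not any(g != t and more_general(t, g) for t in G)}
    return S, G

# Example: two agents with different partial views exchange their bounds.
vs_a = ({('circle', 'small')}, {('circle', '?')})   # agent A's (S, G)
vs_b = ({('circle', 'small')}, {('?', 'small')})    # agent B's (S, G)
print(merge(vs_a, vs_b))
# -> ({('circle', 'small')}, {('circle', 'small')})
```

Because each agent ships its S and G boundaries rather than a single chosen hypothesis, the receiver can see whether the merged space has collapsed to a single hypothesis or become empty (a genuine conflict), rather than simply discarding a hypothesis that is merely under-constrained.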

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Winton Davies
  • Peter Edwards

  Department of Computing Science, King's College, University of Aberdeen, Aberdeen, Scotland, UK
