Autonomous Agents and Multi-Agent Systems, Volume 29, Issue 2, pp. 266–304

Coordinated inductive learning using argumentation-based communication

Abstract

This paper focuses on coordinated inductive learning: how agents with inductive learning capabilities can coordinate their learnt hypotheses with other agents. Coordination in this context means that the hypothesis learnt by one agent is consistent with the data known to the other agents. To address this problem, we present A-MAIL, an argumentation approach that allows agents to argue about hypotheses learnt by induction. A-MAIL integrates, in a single framework, the capabilities of learning from experience, communication, hypothesis revision and argumentation. The A-MAIL approach is therefore a step further toward autonomous agents with learning capabilities that can use, communicate and reason about the knowledge they learn from examples.

Keywords

Multiagent systems · Computational argumentation · Inductive learning · Learning from communication · Learning from argumentation · Coordinated inductive learning

Acknowledgments

Research partially funded by the projects Next-CBR (TIN2009-13692-C03-01) and Cognitio (TIN2012-38450-C03-03), both co-funded with FEDER, Agreement Technologies (CONSOLIDER CSD2007-0022), and by the Grants 2009-SGR-1433 and 2009-SGR-1434 of the Generalitat de Catalunya.

Copyright information

© The Author(s) 2014

Authors and Affiliations

  1. Computer Science Department, Drexel University, Philadelphia, USA
  2. IIIA (Artificial Intelligence Research Institute), CSIC (Spanish Council for Scientific Research), Bellaterra, Catalonia, Spain