Multiagent Incremental Learning in Networks



This paper investigates incremental multiagent learning in structured networks. Learning examples are incrementally distributed among the agents, and the objective is to build a common hypothesis that is consistent with all the examples present in the system, despite communication constraints. Recently, different mechanisms have been proposed that allow groups of agents to coordinate their hypotheses. Although these mechanisms have been shown to guarantee (theoretically) convergence to globally consistent states of the system, other notions of effectiveness can be considered to assess their quality. Furthermore, this guaranteed property should not come at the price of a great loss of efficiency (for instance, a prohibitive communication cost). We explore these questions theoretically and experimentally (using different Boolean formula learning problems).
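The kind of coordination mechanism discussed above can be illustrated with a toy simulation: agents each hold a share of the examples, one agent proposes a hypothesis, and peers answer with counterexamples until the hypothesis is consistent with every example in the system. This is a minimal sketch under illustrative assumptions (a monomial target concept, a round-robin example distribution, a simple propose/critique loop), not the specific protocol studied in the paper.

```python
import itertools

N_VARS = 4
TARGET = {0, 1}  # hypothetical target concept: x0 AND x1


def label(x):
    # ground-truth labeling by the assumed target monomial
    return all(x[i] for i in TARGET)


def predict(hyp, x):
    # hyp is a set of variable indices; the hypothesis is their conjunction
    return all(x[i] for i in hyp)


def revise(hyp, examples):
    # classic monomial update: drop any literal contradicted by a positive example
    for x, y in examples:
        if y:
            hyp = {i for i in hyp if x[i]}
    return hyp


def counterexample(hyp, examples):
    # return some stored example the hypothesis misclassifies, if any
    for x, y in examples:
        if predict(hyp, x) != y:
            return (x, y)
    return None


# distribute all labeled examples round-robin among 3 agents
all_examples = [(x, label(x)) for x in itertools.product((0, 1), repeat=N_VARS)]
agents = [all_examples[i::3] for i in range(3)]

hyp = set(range(N_VARS))  # start from the most specific monomial
messages = 0
learner = 0
while True:
    hyp = revise(hyp, agents[learner])
    # propose the hypothesis to peers; a peer answers with a counterexample
    cex = None
    for peer in range(len(agents)):
        if peer == learner:
            continue
        messages += 1
        cex = counterexample(hyp, agents[peer])
        if cex:
            agents[learner].append(cex)  # incorporate the critique and retry
            break
    if cex is None:
        break  # globally consistent: no peer can refute the hypothesis

print(hyp)      # → {0, 1}
print(messages)
```

The `messages` counter hints at the efficiency question raised above: convergence to a globally consistent state is only half the story, since the number of hypothesis/counterexample exchanges is the communication cost one would want to keep low.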