Learning to Negotiate Optimally in Non-stationary Environments

  • Vidya Narayanan
  • Nicholas R. Jennings
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4149)


We adopt the Markov chain framework to model bilateral negotiations between agents in dynamic environments and use Bayesian learning to enable them to learn an optimal strategy under incomplete information. Specifically, an agent learns the optimal strategy to play against an opponent whose strategy varies with time, assuming no prior information about its negotiation parameters. In so doing, we present a new framework for adaptive negotiation in such non-stationary environments and develop a novel learning algorithm, guaranteed to converge, that an agent can use to negotiate optimally over time. We have implemented our algorithm and shown that it converges quickly in a wide range of cases.
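The paper's algorithm is not reproduced on this page, so the following is only a minimal sketch of the core idea as the abstract describes it: maintain a Bayesian belief over a finite set of candidate opponent strategies, and discount stale evidence so the belief can keep tracking an opponent whose strategy changes over time. The hypothesis set of concession curves, the forgetting factor DECAY, and the Gaussian likelihood model are all illustrative assumptions, not the authors' published method.

```python
# Hypothetical sketch: Bayesian opponent modelling with exponential
# forgetting, so the belief can track a non-stationary opponent.
# The strategy hypotheses, DECAY, and SIGMA are illustrative
# assumptions, not the algorithm from the paper.
import math

# Candidate opponent strategies: each maps a round t to an offer in [0, 1].
HYPOTHESES = {
    "boulware": lambda t: 1.0 - 0.3 * (t / 10.0) ** 3,    # concedes late
    "linear":   lambda t: 1.0 - 0.5 * (t / 10.0),         # concedes steadily
    "conceder": lambda t: 1.0 - 0.5 * (t / 10.0) ** 0.3,  # concedes early
}

DECAY = 0.9   # forgetting factor (< 1): old evidence fades, letting the
              # belief re-adapt when the opponent switches strategy
SIGMA = 0.05  # assumed observation noise on offers (Gaussian likelihood)

def likelihood(observed, predicted):
    """Gaussian likelihood of an observed offer given a hypothesis's prediction."""
    return math.exp(-((observed - predicted) ** 2) / (2 * SIGMA ** 2))

def update_belief(belief, t, observed_offer):
    """One Bayesian update with exponential forgetting for non-stationarity."""
    posterior = {}
    for name, strategy in HYPOTHESES.items():
        # Flatten the prior (power < 1), then weight by the new evidence.
        posterior[name] = (belief[name] ** DECAY) * likelihood(
            observed_offer, strategy(t)
        )
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Usage: uniform prior, then update as the opponent's offers arrive.
# The offer stream is made up; note the drop at round 4, as if the
# opponent switched from a boulware to a conceding strategy.
belief = {name: 1.0 / len(HYPOTHESES) for name in HYPOTHESES}
for t, offer in enumerate([0.99, 0.98, 0.96, 0.80, 0.70], start=1):
    belief = update_belief(belief, t, offer)
    print(f"round {t}: {({k: round(v, 3) for k, v in belief.items()})}")
```

In the full framework, this belief would in turn drive the agent's own offer selection, for example by best-responding to the most probable opponent strategy within the Markov chain model of the negotiation.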


Keywords: Multiagent Systems, Negotiation Process, Bayesian Learning, Learning Agents, Automated Negotiation





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Vidya Narayanan (1)
  • Nicholas R. Jennings (1)
  1. Intelligence, Agents, Multimedia, School of Electronics and Computer Science, University of Southampton, UK
