
Simulating Sellers’ Behavior in a Reverse Auction B2B Exchange

  • Subhajyoti Bandyopadhyay
  • Alok R. Chaturvedi
  • John M. Barron
  • Jackie Rees
  • Shailendra Mehta
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2660)

Abstract

Previous research on reverse-auction B2B exchanges found that in an environment where the sellers can collectively meet total demand, with the final (i.e., highest-priced) seller serving only a residual, the sellers resort to a mixed-strategy equilibrium [2]. While price randomization in industrial bids is an accepted norm, it may be argued that managers in practice do not perform advanced game-theoretic calculations when bidding for an order. More likely, managers learn the strategy and over time converge toward the theoretical equilibrium. To test this assertion, we model the two-player game in a synthetic environment in which the agents use a simple reinforcement-learning algorithm that places progressively more weight on the price bands where they earn higher profits. We find that after a sufficient number of iterations, the agents do indeed converge toward the theoretical equilibrium.
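The learning dynamic described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a Roth-Erev-style rule in which each seller's propensity for a price band grows by the profit realized from it, with hypothetical values for the number of bands, capacities, and total demand (the lower bidder sells its full capacity; the higher bidder serves only the residual demand, as in the setting the abstract describes).

```python
import random

def simulate(n_bands=10, n_rounds=5000, demand=2.0, capacity=1.5, seed=42):
    """Two sellers repeatedly pick a price band in a reverse auction.
    Roth-Erev-style reinforcement: a band's propensity grows by the
    profit it yields, so profitable bands are chosen more often."""
    rng = random.Random(seed)
    prices = [(k + 1) / n_bands for k in range(n_bands)]  # bands in (0, 1]
    prop = [[1.0] * n_bands for _ in range(2)]            # initial propensities

    def choose(p):
        # Sample a band index with probability proportional to its propensity.
        total = sum(p)
        r = rng.random() * total
        acc = 0.0
        for i, w in enumerate(p):
            acc += w
            if r <= acc:
                return i
        return len(p) - 1

    for _ in range(n_rounds):
        a, b = choose(prop[0]), choose(prop[1])
        bids = sorted([(prices[a], 0), (prices[b], 1)])  # ties go to seller 0
        (p_lo, lo), (p_hi, hi) = bids
        profit = [0.0, 0.0]
        profit[lo] = p_lo * min(capacity, demand)           # low bid sells capacity
        profit[hi] = p_hi * max(0.0, demand - capacity)     # high bid gets residual
        prop[0][a] += profit[0]
        prop[1][b] += profit[1]

    # Each seller's empirical mixed strategy over the price bands.
    mix = [[w / sum(p) for w in p] for p in prop]
    return prices, mix
```

Under this rule the normalized propensities form each agent's empirical mixed strategy, which can then be compared against the theoretical equilibrium distribution.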

Keywords

Nash Equilibrium, Reinforcement Learning, Artificial Agent, Reservation Price, Reverse Auction
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Allen, B., Deneckere, R., Faith, T. and Kovenock, D., “Capacity precommitment as a barrier to entry: a Bertrand-Edgeworth approach,” The North American Winter Meetings of the Econometric Society, January 1992
  2. Bandyopadhyay, S., Barron, J.M. and Chaturvedi, A.R., “A game-theoretic analysis of competition among sellers in an online exchange,” Working Paper, 2002
  3. Bell, A.M., “Reinforcement Learning Rules in a Repeated Game,” Computational Economics, Vol. 18, pp. 89–111, 2001
  4. Carriero, N. and Gelernter, D., “How to write parallel programs: A guide to the perplexed,” ACM Computing Surveys, Vol. 21, No. 3, pp. 323–357, September 1989
  5. Chaturvedi, A. and Mehta, S., The SEAS Environment 2, Technical Report, Purdue University, West Lafayette, IN, 2002
  6. Chaturvedi, A.R. and Mehta, S., “Simulations in Economics and Management: Using the SEAS Simulation Environment,” Communications of the ACM, March 1999
  7. Epstein, J.M. and Axtell, R., Growing Artificial Societies: Social Science from the Bottom Up, Brookings Institution Press, Washington, DC, 1996
  8. Erev, I. and Rapoport, A., “Coordination, ‘Magic,’ and Reinforcement Learning in a Market Entry Game,” Games and Economic Behavior, Vol. 23, pp. 146–175, 1998
  9. Erev, I. and Roth, A.E., “Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria,” The American Economic Review, Vol. 88, No. 4, pp. 848–881, 1998
  10. Helper, S. and MacDuffie, J.P., “B2B and Modes of Exchange: Evolutionary and Transformative Effects,” in The Global Internet Economy, edited by Bruce Kogut (forthcoming)
  11. Kerrigan, R., Roegner, E.V., Swinford, D.D. and Zawada, C.C., “B2Basics,” McKinsey Quarterly, No. 1, 2001, pp. 45–53
  12. Kreps, D. and Scheinkman, J., “Quantity precommitment and Bertrand competition yield Cournot outcomes,” The Bell Journal of Economics, 1983, pp. 326–337
  13. Kuhn, H.W. and Nasar, S., “Editor’s Introduction to Chapters 5, 6 and 7,” The Essential John Nash, p. 48
  14. Larson, P., “B2B E-Commerce: The Dawning of a Trillion-Dollar Industry,” The Motley Fool’s Internet Report, March 2000
  15. Nash, J., “Non-Cooperative Games,” Annals of Mathematics, Vol. 54, September 1951, pp. 286–295
  16. Oliver, J., “A Machine Learning Approach to Automated Negotiation and Prospects for Electronic Commerce,” Journal of Management Information Systems, Vol. 13, No. 3, pp. 83–112, 1996
  17. Rapoport, A., Daniel, T.E. and Seale, D.A., “Reinforcement-Based Adaptive Learning in Asymmetric Two-Person Bargaining with Incomplete Information,” Experimental Economics, Vol. 1, pp. 221–253, 1998
  18. Roth, A.E. and Erev, I., “Learning in Extensive-Form Games: Experimental Data and Simple Dynamic Models in the Intermediate Term,” Games and Economic Behavior, Vol. 8, No. 1, pp. 164–212, 1995
  19. Sutton, R.S. and Barto, A.G., Reinforcement Learning: An Introduction, The MIT Press, Cambridge, MA, 1998
  20. Thorndike, E.L., “Animal Intelligence: An Experimental Study of the Associative Processes in Animals,” Psychological Monographs, 2(8), 1898

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Subhajyoti Bandyopadhyay (1)
  • Alok R. Chaturvedi (2)
  • John M. Barron (2)
  • Jackie Rees (2)
  • Shailendra Mehta (2)
  1. University of Florida, Gainesville
  2. 1310 Krannert Graduate School of Management, Purdue University, West Lafayette
