Learning Best Response Strategies for Agents in Ad Exchanges

  • Stavros Gerakaris
  • Subramanian Ramamoorthy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11450)


Ad exchanges are widely used in platforms for online display advertising. Autonomous agents operating in these exchanges must learn policies for interacting profitably with a diverse, continually changing, but unknown market. We consider this problem from the perspective of a publisher, strategically interacting with an advertiser through a posted price mechanism. The learning problem for this agent is made difficult by the fact that information is censored, i.e., the publisher knows whether an impression is sold but receives no other quantitative information. We address this problem using the Harsanyi-Bellman Ad Hoc Coordination (HBA) algorithm [1, 3], which conceptualises this interaction as a Stochastic Bayesian Game and arrives at optimal actions by best responding with respect to probabilistic beliefs maintained over a candidate set of opponent behaviour profiles. We adapt and apply HBA to the censored information setting of ad exchanges. Additionally, to address the case of stochastic opponents, we devise a strategy based on a Kaplan-Meier estimator for opponent modelling. We evaluate the proposed method using simulations, in which we show that HBA-KM achieves a substantially better competitive ratio and lower variance of return than baselines, including a Q-learning agent and a UCB-based online learning agent, and performs comparably to the offline optimal algorithm.
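The abstract's key statistical ingredient is the Kaplan-Meier estimator applied to censored sale observations. As an illustrative sketch only (not the paper's implementation), the estimator can be written as follows: each observation pairs a value with a flag indicating whether the exact value was observed (an "event") or only a right-censored lower bound is known, and the survival curve is built by multiplying conditional survival factors at each event value. The function name and data layout here are hypothetical choices for exposition:

```python
def kaplan_meier(observations):
    """Kaplan-Meier product-limit survival estimate.

    observations: iterable of (value, observed) pairs, where observed=True
    means an exact event at `value` and observed=False means the true value
    is right-censored at `value` (known only to exceed it).
    Returns a list of (value, S(value)) step points at the event values.
    """
    data = sorted(observations)
    at_risk = len(data)   # number of observations still "at risk"
    survival = 1.0        # running product-limit estimate S(v)
    steps = []
    i = 0
    while i < len(data):
        v = data[i][0]
        deaths = removed = 0
        # Group all events and censorings that share this value.
        while i < len(data) and data[i][0] == v:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths > 0:
            # Multiply by the conditional survival probability at v.
            survival *= 1.0 - deaths / at_risk
            steps.append((v, survival))
        at_risk -= removed
    return steps
```

In the ad-exchange reading of this sketch, a sale at posted price p tells the publisher only that the buyer's valuation is at least p, which is exactly the right-censoring structure the estimator handles.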


Keywords: Ad exchanges · Stochastic game · Censored observations · Harsanyi-Bellman Ad Hoc Coordination · Kaplan-Meier estimator


  1. Albrecht, S.V., Ramamoorthy, S.: A game-theoretic model and best-response learning method for ad hoc coordination in multiagent systems. In: Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems, pp. 1155–1156. IFAAMAS (2013)
  2. Albrecht, S.V., Ramamoorthy, S.: On convergence and optimality of best-response learning with policy types in multiagent systems. In: Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, pp. 12–21 (2014)
  3. Albrecht, S., Crandall, J., Ramamoorthy, S.: Belief and truth in hypothesised behaviours. Artif. Intell. 235, 63–94 (2016)
  4. Amin, K., Kearns, M., Key, P., Schwaighofer, A.: Budget optimization for sponsored search: censored learning in MDPs. In: Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, pp. 54–63. AUAI Press (2012)
  5. Amin, K., Rostamizadeh, A., Syed, U.: Learning prices for repeated auctions with strategic buyers. In: Advances in Neural Information Processing Systems, pp. 1169–1177 (2013)
  6. Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 47(2–3), 235–256 (2002)
  7. Barrett, S., Stone, P., Kraus, S.: Empirical evaluation of ad hoc teamwork in the pursuit domain. In: Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems, pp. 567–574. IFAAMAS (2011)
  8. Cesa-Bianchi, N., Gentile, C., Mansour, Y.: Regret minimization for reserve prices in second-price auctions. In: Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1190–1204. SIAM (2013)
  9. Cole, R., Roughgarden, T.: The sample complexity of revenue maximization. In: Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pp. 243–252. ACM (2014)
  10. Insights from buyers and sellers on the RTB opportunity. Forrester Consulting, commissioned by Google, White Paper (2011)
  11. Ganchev, K., Nevmyvaka, Y., Kearns, M., Vaughan, J.W.: Censored exploration and the dark pool problem. In: Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, pp. 185–194 (2009)
  12. Ghosh, A., Rubinstein, B.I., Vassilvitskii, S., Zinkevich, M.: Adaptive bidding for display advertising. In: Proceedings of the 18th International Conference on World Wide Web, pp. 251–260. ACM (2009)
  13. Huang, Z., Mansour, Y., Roughgarden, T.: Making the most of your samples. In: Proceedings of the 16th ACM Conference on Economics and Computation, pp. 45–60. ACM (2015)
  14. Kaplan, E.L., Meier, P.: Nonparametric estimation from incomplete observations. J. Am. Stat. Assoc. 53, 457–481 (1958)
  15. Mohri, M., Medina, A.M.: Learning theory and algorithms for revenue optimization in second-price auctions with reserve. In: Proceedings of the 31st International Conference on Machine Learning, pp. 262–270 (2014)
  16. Muthukrishnan, S.: Ad exchanges: research issues. In: Leonardi, S. (ed.) WINE 2009. LNCS, vol. 5929, pp. 1–12. Springer, Heidelberg (2009)
  17. Pin, F., Key, P.: Stochastic variability in sponsored search auctions: observations and models. In: Proceedings of the 12th ACM Conference on Electronic Commerce, pp. 61–70. ACM (2011)
  18. Schain, M., Mansour, Y.: Ad exchange – proposal for a new trading agent competition game. In: David, E., Kiekintveld, C., Robu, V., Shehory, O., Stein, S. (eds.) AMEC/TADA 2012. LNBIP, vol. 136, pp. 133–145. Springer, Heidelberg (2013)
  19. Stone, P., Kaminka, G.A., Kraus, S., Rosenschein, J.S., et al.: Ad hoc autonomous agent teams: collaboration without pre-coordination. In: AAAI (2010)
  20. Watkins, C.J., Dayan, P.: Q-learning. Mach. Learn. 8(3–4), 279–292 (1992)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Informatics, University of Edinburgh, Edinburgh, UK
