
Poissonian Two-Armed Bandit: A New Approach

  • AUTOMATA THEORY
  • Published in: Problems of Information Transmission

Abstract

We consider a new approach to the continuous-time two-armed bandit problem in which incomes are described by Poisson processes. To this end, first, the control horizon is divided into equal consecutive half-intervals on which the strategy remains constant, and the incomes arrive in batches corresponding to these half-intervals. A recursive difference equation is derived for finding the optimal piecewise-constant Bayesian strategy and the corresponding Bayesian risk. The existence of a limiting value of the Bayesian risk as the number of half-intervals grows without bound is established, and a partial differential equation for finding it is derived. Second, unlike previously considered settings of this problem, we analyze the strategy as a function of the current history of the controlled process rather than of the evolution of the posterior distribution. This removes the requirement that the set of admissible parameters be finite, which was imposed in previous settings. Simulation shows that, in practice, partitioning the arriving incomes into 30 batches suffices for finding the Bayesian and minimax strategies and risks. In the minimax setting, it is shown that optimal processing of arriving incomes one by one is no more efficient than optimal batch processing as the control horizon grows without bound.
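
To make the batch-processing scheme concrete, the following minimal sketch simulates a two-armed Poisson bandit whose horizon is split into 30 equal half-intervals, with the chosen arm held constant on each of them. It uses a simple greedy per-batch rule purely for illustration; this is not the optimal Bayesian strategy that the paper obtains from the recursive difference equation, and the rates, the horizon, and the function name are assumptions introduced here.

import numpy as np

def simulate_batched_bandit(lam, horizon=30.0, n_batches=30, seed=0):
    """Simulate a two-armed Poisson bandit under piecewise-constant (batch) control.

    lam       : pair of Poisson income rates (unknown to the controller).
    horizon   : total control time T.
    n_batches : number of equal half-intervals; the chosen arm is held constant
                on each of them (30 is the batch count the paper reports as
                sufficient in practice).
    """
    rng = np.random.default_rng(seed)
    dt = horizon / n_batches          # length of one half-interval
    counts = np.zeros(2)              # total income collected from each arm
    times = np.zeros(2)               # total time each arm has been applied

    for k in range(n_batches):
        if k < 2:
            arm = k                   # apply each arm once to initialize
        else:
            # Greedy choice by current empirical rate (illustrative rule only;
            # the paper derives the optimal Bayesian strategy recursively).
            arm = int(np.argmax(counts / times))
        # Income over the half-interval arrives as a single Poisson batch.
        counts[arm] += rng.poisson(lam[arm] * dt)
        times[arm] += dt

    total_income = counts.sum()
    # Loss relative to the expected income of the better arm.
    regret = max(lam) * horizon - total_income
    return total_income, regret

if __name__ == "__main__":
    income, regret = simulate_batched_bandit(lam=(1.0, 1.3))
    print(f"collected income: {income:.0f}, regret vs. best arm: {regret:.1f}")

Holding the arm fixed over each half-interval is what allows the incomes to be processed as Poisson batch counts rather than one by one, which is the setting analyzed in the paper.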



Acknowledgments

The author is grateful to a reviewer for his/her attention to the paper and valuable remarks.

Funding

Supported in part by the Russian Foundation for Basic Research, project no. 20-01-00062.

Additional information

Translated from Problemy Peredachi Informatsii, 2022, Vol. 58, No. 2, pp. 66–91, https://doi.org/10.31857/S0555292322020065.

About this article

Cite this article

Kolnogorov, A. Poissonian Two-Armed Bandit: A New Approach. Probl Inf Transm 58, 160–183 (2022). https://doi.org/10.1134/S0032946022020065

