Applied Intelligence, Volume 39, Issue 4, pp 782–792

On incorporating the paradigms of discretization and Bayesian estimation to create a new family of pursuit learning automata

  • Xuan Zhang
  • Ole-Christoffer Granmo
  • B. John Oommen


Abstract

There are currently two fundamental paradigms that have been used to enhance the convergence speed of Learning Automata (LA). The first involves utilizing estimates of the reward probabilities, while the second involves discretizing the probability space in which the LA operates. This paper demonstrates how both of these can be utilized simultaneously, and in particular, by using the family of Bayesian estimates that have been proven to have distinct advantages over their maximum likelihood counterparts. The success of LA-based estimator algorithms over the classical, Linear Reward-Inaction (L_RI)-like schemes can be explained by their ability to pursue the actions with the highest reward probability estimates. Without access to reward probability estimates, it makes sense for schemes like the L_RI to first make large exploring steps, and then to gradually turn exploration into exploitation by making progressively smaller learning steps. However, this behavior becomes counter-intuitive when pursuing actions based on their estimated reward probabilities. Learning should then ideally proceed in progressively larger steps, as the reward probability estimates become more accurate. This paper introduces a new estimator algorithm, the Discretized Bayesian Pursuit Algorithm (DBPA), that achieves this by incorporating both of the above paradigms. The DBPA is implemented by linearly discretizing the action probability space of the Bayesian Pursuit Algorithm (BPA) (Zhang et al. in IEA-AIE 2011, Springer, New York, pp. 608–620, 2011). The key innovation of this paper is that the linear discrete updating rules mitigate the counter-intuitive behavior of the corresponding linear continuous updating rules by augmenting them with the reward probability estimates. Extensive experimental results show the superiority of the DBPA over previous estimator algorithms. Indeed, the DBPA is probably the fastest reported LA to date.
Apart from the rigorous experimental demonstration of the strength of the DBPA, the paper also briefly records the proofs of why the BPA and the DBPA are ε-optimal in stationary environments.
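To make the two combined paradigms concrete, the following is a minimal, hypothetical sketch of a DBPA-style learner. It is not the published algorithm: as a simplification it uses the posterior mean of a Beta distribution as each action's reward probability estimate (the BPA/DBPA of the paper uses a Bayesian upper-bound estimate), and it applies a reward-inaction-style discretized pursuit step, moving probability mass toward the currently best-estimated action in fixed multiples of a step size Δ = 1/(rN). All names and parameters (`resolution`, `select_action`, etc.) are illustrative assumptions.

```python
import random


class DiscretizedBayesianPursuit:
    """Hypothetical sketch combining Bayesian estimation with
    discretized pursuit; simplified relative to the published DBPA."""

    def __init__(self, num_actions, resolution=100, rng=None):
        self.r = num_actions
        # Discretization: probabilities change in multiples of delta
        self.delta = 1.0 / (num_actions * resolution)
        self.p = [1.0 / num_actions] * num_actions   # action probabilities
        self.a = [1] * num_actions  # Beta posterior: 1 + rewards observed
        self.b = [1] * num_actions  # Beta posterior: 1 + penalties observed
        self.rng = rng or random.Random()

    def select_action(self):
        """Sample an action according to the action probability vector."""
        u, acc = self.rng.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if u <= acc:
                return i
        return self.r - 1

    def update(self, action, rewarded):
        # Bayesian estimation: update the chosen action's Beta posterior
        if rewarded:
            self.a[action] += 1
        else:
            self.b[action] += 1
            return  # reward-inaction style: pursue only on reward

        # Simplification: posterior mean as the reward probability estimate
        est = [self.a[i] / (self.a[i] + self.b[i]) for i in range(self.r)]
        best = max(range(self.r), key=est.__getitem__)

        # Discretized pursuit: shift mass toward the best-estimated action
        # in steps of delta, keeping probabilities non-negative
        for i in range(self.r):
            if i != best:
                self.p[i] = max(self.p[i] - self.delta, 0.0)
        self.p[best] = 1.0 - sum(self.p[i] for i in range(self.r) if i != best)
```

In a stationary two-action environment with reward probabilities such as 0.9 and 0.2, repeatedly calling `select_action` and `update` drives the probability of the better action toward 1.0 in discrete steps, while the Beta posteriors sharpen, which is the behavior the abstract describes: larger, more confident pursuit steps as the estimates become more accurate.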


Keywords

Learning automata · Pursuit schemes · Bayesian reasoning · Estimator algorithms · Discretized learning · ε-optimality


References

  1. Zhang X, Granmo O-C, Oommen BJ (2012) Discretized Bayesian pursuit—a new scheme for reinforcement learning. In: IEA-AIE 2012, Dalian, China, Jun 2012, pp 784–793
  2. Zhang X, Granmo O-C, Oommen BJ (2011) The Bayesian pursuit algorithm: a new family of estimator learning automata. In: IEA-AIE 2011. Springer, New York, pp 608–620
  3. Thathachar M, Sastry P (1986) Estimator algorithms for learning automata. In: The platinum jubilee conference on systems and signal processing, Bangalore, India, Dec 1986, pp 29–32
  4. Tsetlin M (1963) Finite automata and the modeling of the simplest forms of behavior. Usp Mat Nauk 8:1–26
  5. Narendra KS, Thathachar MAL (1989) Learning automata: an introduction. Prentice Hall, New York
  6. Thathachar M, Arvind M (1997) Solution of Goore game using models of stochastic learning automata. J Indian Inst Sci 76:47–61
  7. Oommen BJ, Granmo O-C, Pedersen A (2006) Empirical verification of a strategy for unbounded resolution in finite player Goore games. In: The 19th Australian joint conference on artificial intelligence, Hobart, Tasmania, Dec 2006, pp 1252–1258
  8. Oommen BJ, Granmo O-C, Pedersen A (2007) Using stochastic AI techniques to achieve unbounded resolution in finite player Goore games and its applications. In: IEEE symposium on computational intelligence and games, Honolulu, HI, Apr 2007
  9. Granmo O-C, Glimsdal S (2012, to appear) Accelerated Bayesian learning for decentralized two-armed bandit based decision making with applications to the Goore game. Appl Intell
  10. Granmo O-C, Oommen BJ, Pedersen A (2012) Achieving unbounded resolution in finite player Goore games using stochastic automata, and its applications. Seq Anal 31:190–218
  11. Narendra KS, Thathachar MAL (1987) Learning automata. Prentice-Hall, Englewood Cliffs
  12. Beigy H, Meybodi MR (2000) Adaptation of parameters of BP algorithm using learning automata. In: Sixth Brazilian symposium on neural networks, Rio de Janeiro, Brazil, Nov 2000
  13. Song Y, Fang Y, Zhang Y (2007) Stochastic channel selection in cognitive radio networks. In: IEEE global telecommunications conference, Washington, DC, USA, Nov 2007, pp 4878–4882
  14. Oommen BJ, Roberts TD (2000) Continuous learning automata solutions to the capacity assignment problem. IEEE Trans Comput 49:608–620
  15. Granmo O-C, Oommen BJ, Myrer S-A, Olsen MG (2007) Learning automata-based solutions to the nonlinear fractional knapsack problem with applications to optimal resource allocation. IEEE Trans Syst Man Cybern, Part B, Cybern 37(1):166–175
  16. Granmo O-C, Oommen BJ, Myrer S-A, Olsen MG (2006) Determining optimal polling frequency using a learning automata-based solution to the fractional knapsack problem. In: The 2006 IEEE international conferences on cybernetics and intelligent systems (CIS) and robotics, automation and mechatronics (RAM), Bangkok, Thailand, Jun 2006, pp 1–7
  17. Granmo O-C, Oommen BJ (2011) Learning automata-based solutions to the optimal web polling problem modeled as a nonlinear fractional knapsack problem. Eng Appl Artif Intell 24(7):1238–1251
  18. Granmo O-C, Oommen BJ (2006) On allocating limited sampling resources using a learning automata-based solution to the fractional knapsack problem. In: The 2006 international intelligent information processing and web mining conference, advances in soft computing, vol 35, Ustron, Poland, Jun 2006, pp 263–272
  19. Granmo O-C, Oommen BJ (2010) Optimal sampling for estimation with constrained resources using a learning automaton-based solution for the nonlinear fractional knapsack problem. Appl Intell 33(1):3–20
  20. Yazidi A, Granmo O-C, Oommen BJ (2012) Service selection in stochastic environments: a learning-automaton based solution. Appl Intell 36:617–637
  21. Vafashoar R, Meybodi MR, Momeni AAH (2012) CLA-DE: a hybrid model based on cellular learning automata for numerical optimization. Appl Intell 36:735–748
  22. Torkestani JA (2012) An adaptive focused web crawling algorithm based on learning automata. Appl Intell 37:586–601
  23. Li J, Li Z, Chen J (2011) Microassembly path planning using reinforcement learning for improving positioning accuracy of a 1 cm³ omni-directional mobile microrobot. Appl Intell 34:211–225
  24. Erus G, Polat F (2007) A layered approach to learning coordination knowledge in multiagent environments. Appl Intell 27:249–267
  25. Hong J, Prabhu VV (2004) Distributed reinforcement learning control for batch sequencing and sizing in just-in-time manufacturing systems. Appl Intell 20:71–87
  26. Kim CO, Kwon I-H, Baek J-G (2008) Asynchronous action-reward learning for nonstationary serial supply chain inventory control. Appl Intell 28:1–16
  27. Lakshmivarahan S (1981) Learning algorithms theory and applications. Springer, New York
  28. Narendra KS, Thathachar MAL (1974) Learning automata–a survey. IEEE Trans Syst Man Cybern 4:323–334
  29. Thathachar MAL, Sastry PS (1985) A class of rapidly converging algorithms for learning automata. IEEE Trans Syst Man Cybern SMC-15:168–175
  30. Sastry PS (1985) Systems of learning automata: estimator algorithms applications. PhD thesis, Dept Elec Eng, Indian Institute of Science
  31. Thathachar MAL, Sastry PS (1984) A new approach to designing reinforcement schemes for learning automata. In: IEEE int conf cybern syst, Bombay, India, Jan 1984, pp 1–7
  32. Granmo O-C (2010) Solving two-armed Bernoulli bandit problems using a Bayesian learning automaton. Int J Intell Comput Cybern 3(2):207–234
  33. Thompson WR (1933) On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 25:285–294
  34. Thathachar MAL, Oommen BJ (1979) Discretized reward-inaction learning automata. J Cybern Inf Sci, pp 24–29
  35. Oommen BJ, Lanctot JK (1990) Discretized pursuit learning automata. IEEE Trans Syst Man Cybern 20:931–938
  36. Oommen BJ, Agache M (2001) Continuous and discretized pursuit learning schemes: various algorithms and their comparison. IEEE Trans Syst Man Cybern, Part B, Cybern 31(3):277–287
  37. Oommen BJ (1986) Absorbing and ergodic discretized two-action learning automata. IEEE Trans Syst Man Cybern SMC-16:282–296
  38. Rajaraman K, Sastry PS (1996) Finite time analysis of the pursuit algorithm for learning automata. IEEE Trans Syst Man Cybern, Part B, Cybern 26:590–598

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Xuan Zhang (1)
  • Ole-Christoffer Granmo (1)
  • B. John Oommen (2, 3)

  1. Department of ICT, University of Agder, Grimstad, Norway
  2. School of Computer Science, Carleton University, Ottawa, Canada
  3. University of Agder, Grimstad, Norway
