On Using the Theory of Regular Functions to Prove the ε-Optimality of the Continuous Pursuit Learning Automaton
There are various families of Learning Automata (LA), such as the Fixed Structure, Variable Structure, and Discretized families. Informally, if the environment is stationary, their ε-optimality is defined as their ability to converge to the optimal action with arbitrarily large probability, provided the learning parameter is sufficiently small (or, for discretized schemes, the resolution is sufficiently large). Of these LA families, Estimator Algorithms (EAs) are certainly the fastest, and within this family, the set of Pursuit algorithms has been considered to comprise the pioneering schemes. The existing proofs of the ε-optimality of all the reported EAs follow the same fundamental principles. Recently, however, it has been reported that these proofs share a common flaw; in other words, researchers have worked with this flawed reasoning for almost three decades. The flaw lies in the condition that apparently supports the so-called "monotonicity" property of the probability of selecting the optimal action, as explained in the paper. In this paper, we provide a new method to prove the ε-optimality of the Continuous Pursuit Algorithm (CPA), which was the pioneering EA. The new proof follows the same outline as the previous proofs, but instead of examining the monotonicity property of the action probabilities, it examines their submartingale property, and then, unlike the traditional approach, invokes the theory of Regular functions to prove the ε-optimality. We believe that the proof is both unique and pioneering, and that it can form the basis for formally demonstrating the ε-optimality of other EAs.
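To make the scheme under discussion concrete, the following is a minimal sketch of the Continuous Pursuit Algorithm as it is commonly described in the LA literature: the automaton maintains maximum-likelihood estimates of each action's reward probability and, at every step, moves its action-probability vector toward the unit vector of the currently best-estimated action. The function and variable names (`run_cpa`, `reward_probs`, `lam`) are illustrative, not taken from the paper.

```python
import random

def run_cpa(reward_probs, lam=0.01, steps=20000, seed=0):
    """Simulate the Continuous Pursuit Algorithm (CPA) -- a minimal sketch.

    reward_probs: true (unknown to the automaton) reward probability per action.
    lam:          learning parameter; epsilon-optimality concerns the limit lam -> 0.
    """
    rng = random.Random(seed)
    r = len(reward_probs)
    p = [1.0 / r] * r                  # action-probability vector
    wins = [0] * r
    tries = [0] * r                    # sufficient statistics for the estimates
    for _ in range(steps):
        # 1. Select an action according to the current probability vector.
        a = rng.choices(range(r), weights=p)[0]
        # 2. Observe the environment's Bernoulli response.
        reward = rng.random() < reward_probs[a]
        tries[a] += 1
        wins[a] += reward
        # 3. Update the maximum-likelihood reward-probability estimates.
        est = [wins[i] / tries[i] if tries[i] else 0.5 for i in range(r)]
        m = max(range(r), key=est.__getitem__)
        # 4. "Pursue": move p toward the unit vector of the best-estimated action.
        p = [(1 - lam) * pi + (lam if i == m else 0.0) for i, pi in enumerate(p)]
    return p
```

Each pursuit update is a convex combination of the old vector and a unit vector, so `p` remains a probability vector throughout; the paper's contribution concerns proving that the probability of the optimal action forms a submartingale rather than a monotone sequence.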
Keywords: Pursuit Algorithms · Continuous Pursuit Algorithm · ε-optimality