VQQL. Applying Vector Quantization to Reinforcement Learning
Reinforcement learning has proven to be a successful set of techniques for finding optimal policies in uncertain and/or dynamic domains, such as RoboCup. One of the problems in applying such techniques arises with large state and action spaces, as is the case with the input information coming from the Robosoccer simulator. In this paper, we describe a new mechanism for solving the state generalization problem in reinforcement learning algorithms. This clustering mechanism is based on the vector quantization technique used for analog-to-digital signal conversion and compression, and on the Generalized Lloyd Algorithm for the design of vector quantizers. Furthermore, we present the VQQL model, which integrates Q-Learning as the reinforcement learning technique and vector quantization as the state generalization technique. We show results from applying this model to learning the interception skill for Robosoccer agents.
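The abstract describes a two-stage pipeline: a codebook of prototype state vectors is designed with the Generalized Lloyd Algorithm, each continuous observation is quantized to the index of its nearest prototype, and tabular Q-Learning is then run over those discrete indices. The sketch below illustrates that pipeline; it is not the paper's implementation, and the environment interface (`env.reset()`, `env.step()` returning state, reward, done), the codebook size, and all hyper-parameters are illustrative assumptions.

```python
import numpy as np

def design_codebook(samples, k, iters=20, seed=0):
    """Generalized Lloyd Algorithm (k-means style): fit k prototype
    vectors to a set of sampled continuous state vectors."""
    rng = np.random.default_rng(seed)
    codebook = samples[rng.choice(len(samples), size=k, replace=False)]
    for _ in range(iters):
        # Nearest-prototype assignment (quantization step).
        dists = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Centroid update step; keep the old prototype if a cell is empty.
        for j in range(k):
            members = samples[assign == j]
            if len(members) > 0:
                codebook[j] = members.mean(axis=0)
    return codebook

def quantize(state, codebook):
    """Map a continuous state vector to the index of its nearest prototype."""
    return int(np.linalg.norm(codebook - state, axis=1).argmin())

def vqql(env, samples, n_actions, k=64, episodes=500,
         alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-Learning over the discrete states produced by the quantizer."""
    rng = np.random.default_rng(seed)
    codebook = design_codebook(samples, k)
    Q = np.zeros((k, n_actions))
    for _ in range(episodes):
        s = quantize(env.reset(), codebook)
        done = False
        while not done:
            # Epsilon-greedy action selection over the quantized state.
            a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
            next_state, reward, done = env.step(a)  # assumed interface
            s2 = quantize(next_state, codebook)
            # Standard Q-Learning update on the quantized state indices.
            Q[s, a] += alpha * (reward + gamma * (0 if done else Q[s2].max()) - Q[s, a])
            s = s2
    return codebook, Q
```

In the setting of the paper, the sample vectors would be states observed from the Robosoccer simulator, and the learned table would encode the interception skill.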
- Book Title: RoboCup-99: Robot Soccer World Cup III, pp 292–303
- Series Title: Lecture Notes in Computer Science
- Publisher: Springer Berlin Heidelberg
- Copyright Holder: Springer-Verlag Berlin Heidelberg
- Author Affiliation: Universidad Carlos III de Madrid, Avda. de la Universidad, 30, 28912, Leganés, Madrid, Spain