Temporal difference learning in Chinese Chess
Reinforcement learning has, in general, not been entirely successful at solving complex real-world problems that can be described by nonlinear functions. Temporal difference learning, however, is a type of reinforcement learning algorithm that has been studied and applied to various prediction problems with promising results. This paper discusses the application of temporal difference learning to training a neural network to play a scaled-down version of Chinese Chess. Preliminary results show that this technique is capable of producing the desired behavior: in test cases involving only a few elements of the game, the network responds favorably, while under added complexity it performs less well but still generally produces reasonable results. These results indicate that temporal difference learning has the potential to solve real-world problems of equal or greater complexity, and continued research is likely to lead to more responsive and accurate systems in the future.
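The abstract describes training a neural-network position evaluator with temporal difference learning. A minimal sketch of the core idea, assuming a TD(0)-style update on a small feed-forward value network (the feature encoding, network size, and trajectory are placeholders, not the paper's actual implementation), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a board position is encoded as a fixed-length
# feature vector, and a one-hidden-layer network predicts its value in [-1, 1].
N_FEATURES = 32
N_HIDDEN = 16
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_FEATURES))
W2 = rng.normal(scale=0.1, size=N_HIDDEN)

def value(x):
    """Predicted value of a position; tanh squashes the output to [-1, 1]."""
    h = np.tanh(W1 @ x)
    return np.tanh(W2 @ h), h

def td0_update(x_t, x_next, reward, done, alpha=0.01, gamma=1.0):
    """One TD(0) step: move V(s_t) toward reward + gamma * V(s_{t+1})."""
    global W1, W2
    v_t, h = value(x_t)
    v_next = 0.0 if done else value(x_next)[0]
    delta = reward + gamma * v_next - v_t      # TD error
    # Gradient of v_t with respect to the weights (backprop through tanh).
    dv = 1.0 - v_t ** 2
    grad_W2 = dv * h
    grad_W1 = np.outer(dv * W2 * (1.0 - h ** 2), x_t)
    W2 += alpha * delta * grad_W2
    W1 += alpha * delta * grad_W1
    return delta

# Stand-in trajectory of positions from one self-played game that ends
# in a win (terminal reward +1); repeated sweeps shrink the TD errors
# and propagate the outcome back toward earlier positions.
trajectory = [rng.normal(size=N_FEATURES) for _ in range(5)]
for _ in range(200):
    for t in range(len(trajectory) - 1):
        td0_update(trajectory[t], trajectory[t + 1], reward=0.0, done=False)
    td0_update(trajectory[-1], trajectory[-1], reward=1.0, done=True)
```

After training, the evaluator rates the winning terminal position well above zero; in a game-playing setting such an evaluator would be queried by the move-selection search rather than trained on a fixed trajectory.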
- 1. S. Thrun. Learning to Play the Game of Chess. Advances in Neural Information Processing Systems, 7:1069–76, 1995.
- 2. M. Schmidt. Temporal Difference Learning and Chess. Technical Report, Aarhus University, Computer Science Department, June 20, 1994.
- 3. M. Schmidt. Neural Networks and Chess. Thesis, Aarhus University, Computer Science Department, July 19, 1993.
- 4. C.L. Isbell. Explorations of the Practical Issues of Learning Prediction Control Tasks Using Temporal Difference Learning Methods. Master's Thesis, Massachusetts Institute of Technology, December 1992.
- 5. G. Tesauro. Temporal Difference Learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995.
- 6. R.S. Sutton. On Step-Size and Bias in Temporal-Difference Learning. Proceedings of the Eighth Yale Workshop on Adaptive and Learning Systems, pp. 91–96, 1994.