Temporal difference learning in Chinese Chess

Conference paper, in: Tasks and Methods in Applied Artificial Intelligence (IEA/AIE 1998)

Abstract

Reinforcement learning has, in general, not been entirely successful at solving complex real-world problems that can be described by nonlinear functions. However, temporal difference learning, a class of reinforcement learning algorithms, has been studied and applied to various prediction problems with promising results. This paper discusses the application of temporal difference learning to training a neural network to play a scaled-down version of Chinese Chess. Preliminary results suggest the technique is promising: in test cases involving only minimal elements of the game, the network responds favorably, while with more complexity it does not perform as well but still generally produces reasonable results. These results indicate that temporal difference learning has the potential to address real-world problems of equal or greater complexity, and continuing research will likely lead to more responsive and accurate systems.
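The abstract describes training a value function with temporal difference learning. As a rough illustration of the core TD(λ) update only (not the authors' actual network; the linear value approximator, function name, and parameter values below are all illustrative assumptions), one learning episode might be sketched as:

```python
import numpy as np

def td_lambda_episode(features, rewards, w, alpha=0.1, lam=0.7, gamma=1.0):
    """One episode of TD(lambda) with a linear value function V(s) = w . x(s).

    features: feature vectors x(s_0) .. x(s_T), where x(s_T) is the terminal
              state (use a zero vector so V(terminal) = 0)
    rewards:  rewards r_1 .. r_T received on each transition
    Returns the updated weight vector.
    """
    w = w.copy()
    z = np.zeros_like(w)                # eligibility trace
    for t in range(len(rewards)):
        x_t, x_next = features[t], features[t + 1]
        delta = rewards[t] + gamma * (w @ x_next) - (w @ x_t)  # TD error
        z = gamma * lam * z + x_t       # decay the trace, mark the visited state
        w = w + alpha * delta * z       # credit earlier states via the trace
    return w
```

A game-playing agent would call this once per game, with the final reward encoding the outcome (e.g. win/loss); in the paper's setting the linear map `w @ x` would be replaced by a neural network and the trace applied to its gradients.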




Editor information

Angel Pasqual del Pobil, José Mira, Moonis Ali


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Trinh, T.B., Bashi, A.S., Deshpande, N. (1998). Temporal difference learning in Chinese Chess. In: Pasqual del Pobil, A., Mira, J., Ali, M. (eds) Tasks and Methods in Applied Artificial Intelligence. IEA/AIE 1998. Lecture Notes in Computer Science, vol 1416. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-64574-8_447


  • DOI: https://doi.org/10.1007/3-540-64574-8_447

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64574-0

  • Online ISBN: 978-3-540-69350-5

  • eBook Packages: Springer Book Archive
