Gatti, C. J., & Embrechts, M. J. (2014). An application of the temporal difference algorithm to the truck backer-upper problem. In Proceedings of the 22nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, 23–25 April. Bruges, Belgium: ESANN.
Gatti, C. J., Embrechts, M. J., & Linton, J. D. (2013). An empirical analysis of reinforcement learning using design of experiments. In Proceedings of the 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, 24–26 April (pp. 221–226). Bruges, Belgium: ESANN.
Loeppky, J. L., Sacks, J., & Welch, W. J. (2009). Choosing the sample size of a computer experiment: A practical guide. Technometrics, 51(4), 366–376.
Nguyen, D., & Widrow, B. (1990a). Neural networks for self-learning control systems. IEEE Control Systems Magazine, 10(3), 18–23.
Nguyen, D., & Widrow, B. (1990b). The truck backer-upper: An example of self-learning in neural networks. In Miller, W. T., Sutton, R. S., & Werbos, P. J. (Eds.), Neural Networks for Control. Cambridge, MA: MIT Press.
Patist, J. P., & Wiering, M. (2004). Learning to play draughts using temporal difference learning with neural networks and databases. In Proceedings of the 13th Belgian-Dutch Conference on Machine Learning, Brussels, Belgium, 8–9 January (pp. 87–94). doi: 10.1007/978-3-540-88190-2_13
Schoenauer, M., & Ronald, E. (1994). Neuro-genetic truck backer-upper controller. In Proceedings of the First IEEE Conference on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Orlando, FL, 27 June–2 July (Vol. 2, pp. 720–723). doi: 10.1109/ICEC.1994.349969
Tesauro, G. (1992). Practical issues in temporal difference learning. Machine Learning, 8(3–4), 257–277.
Thrun, S. (1995). Learning to play the game of chess. In Advances in Neural Information Processing Systems 7 (pp. 1069–1076). Cambridge, MA: MIT Press.
Thrun, S., & Schwartz, A. (1993). Issues in using function approximation for reinforcement learning. In Mozer, M., Smolensky, P., Touretzky, D., Elman, J., & Weigend, A. (Eds.), Proceedings of the 4th Connectionist Models Summer School, Pittsburgh, PA, 2–5 August (pp. 255–263). Hillsdale, NJ: Lawrence Erlbaum.
Vollbrecht, H. (2003). Hierarchical reinforcement learning in continuous state spaces. Unpublished PhD dissertation, University of Ulm, Ulm, Germany.
Wiering, M. A. (2010). Self-play and using an expert to learn to play backgammon with temporal difference learning. Journal of Intelligent Learning Systems & Applications, 2(2), 57–68.
Wiering, M. A., Patist, J. P., & Mannen, H. (2007). Learning to play board games using temporal difference methods (Technical Report UU–CS–2005–048, Institute of Information and Computing Sciences, Utrecht University). Retrieved from http://www.ai.rug.nl/mwiering/group/articles/learning_games_TR.pdf.