ARKAQ-Learning: Autonomous State Space Segmentation and Policy Generation

  • Alp Sardağ
  • H. Levent Akın
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3733)


A real-world environment is often only partially observable by an agent, because of noisy sensors or incomplete perception. Autonomous strategy planning under such uncertainty poses two major challenges: first, autonomously segmenting the state space for a given task, and second, generating complex behaviors that deal with each state segment. This paper proposes a new approach that handles both by combining several techniques, namely ARKAQ-Learning (ART 2-A networks augmented with Kalman Filters and Q-Learning). The algorithm is online and has low space and computational complexity. It was run on several well-known partially observable Markov decision process (POMDP) problems. The World Model Generator reveals the hidden states, mapping the non-Markovian model to an internal Markovian state space, and the Policy Generator builds the optimal policy on this internal Markovian state model.
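The two components named in the abstract can be illustrated as follows: an ART 2-A-style network clusters raw observations into discrete internal states, and tabular Q-learning runs on the resulting category indices. This is a minimal sketch under assumptions, not the paper's implementation: the class and function names (`ART2A`, `q_update`), the vigilance and learning-rate values, and the update rules are illustrative simplifications, and the Kalman-filter smoothing of observations that the paper includes is omitted here.

```python
import numpy as np

class ART2A:
    """Sketch of an ART 2-A-style online clusterer (simplified): inputs are
    L2-normalized, and a new category is created whenever the best prototype
    match falls below the vigilance threshold."""
    def __init__(self, vigilance=0.9, lr=0.1):
        self.vigilance = vigilance
        self.lr = lr
        self.prototypes = []  # one unit-norm prototype per internal state

    def categorize(self, obs):
        # Normalize the observation; ART 2-A compares directions, not magnitudes.
        x = np.asarray(obs, dtype=float)
        x = x / (np.linalg.norm(x) + 1e-12)
        if self.prototypes:
            sims = [float(p @ x) for p in self.prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= self.vigilance:
                # Resonance: nudge the winning prototype toward the input.
                p = self.prototypes[best] + self.lr * (x - self.prototypes[best])
                self.prototypes[best] = p / np.linalg.norm(p)
                return best
        # Mismatch: recruit a new category, i.e. a new internal state.
        self.prototypes.append(x)
        return len(self.prototypes) - 1

def q_update(Q, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step, keyed on internal-state (category) indices."""
    Q.setdefault(s, np.zeros(n_actions))
    Q.setdefault(s_next, np.zeros(n_actions))
    Q[s][a] += alpha * (r + gamma * Q[s_next].max() - Q[s][a])
```

In this sketch the clusterer plays the role of the World Model Generator (observations that resonate with the same prototype share an internal state), and the Q-table over category indices plays the role of the Policy Generator.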


Keywords: Optimal Policy · Goal State · Hidden State · World Model · Markovian State
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Alp Sardağ (1)
  • H. Levent Akın (1)
  1. Department of Computer Engineering, Boğaziçi University, Bebek, Istanbul, Turkey
