Learning the Difference between Partially Observable Dynamical Systems

  • Sami Zhioua
  • Doina Precup
  • François Laviolette
  • Josée Desharnais
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5782)

Abstract

We propose a new approach for estimating the difference between two partially observable dynamical systems. We assume that one can interact with the systems by performing actions and receiving observations. The key idea is to define a Markov Decision Process (MDP) based on the systems to be compared, in such a way that the optimal value of the MDP initial state can be interpreted as a divergence (or dissimilarity) between the systems. This dissimilarity can then be estimated by reinforcement learning methods. Moreover, the optimal policy contains information about the actions which most distinguish the systems. Empirical results show that this approach is useful in detecting both large and small differences, as well as in comparing systems with different internal structure.
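To illustrate the idea in the abstract, here is a minimal toy sketch (not the paper's actual construction): two hypothetical black-box systems (`system_a`, `system_b`) emit one observation per action, the "difference MDP" is collapsed to a single step, and the reward is 1 exactly when the two systems emit different observations. The optimal value of the initial state is then max over actions of the probability that the observations disagree, which a standard Q-learning update can estimate; the maximizing action is the one that most distinguishes the systems. All names and probabilities below are invented for illustration.

```python
import random

# Hypothetical toy systems: each maps an action to a stochastic
# observation. Stand-ins for black-box partially observable systems.
def system_a(action):
    if action == "push":
        return "beep" if random.random() < 0.8 else "silence"
    return "silence"

def system_b(action):
    if action == "push":
        return "beep" if random.random() < 0.5 else "silence"
    return "silence"

def estimate_divergence(actions, episodes=20000, eps=0.1, alpha=0.05):
    """Q-learning on a one-step 'difference MDP': reward 1 when the two
    systems emit different observations, 0 otherwise.  The learned value
    of the best action estimates max_a P[o_A != o_B | a], a simple
    dissimilarity between the systems."""
    q = {a: 0.0 for a in actions}
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(q, key=q.get)
        reward = 1.0 if system_a(a) != system_b(a) else 0.0
        q[a] += alpha * (reward - q[a])  # running average update
    best = max(q, key=q.get)
    return best, q[best]

random.seed(0)
action, value = estimate_divergence(["push", "wait"])
print(action, round(value, 2))  # "push" is the distinguishing action
```

With these toy distributions the observations disagree under "push" with probability 0.8·0.5 + 0.2·0.5 = 0.5 and never disagree under "wait", so the estimate concentrates on "push" with a value near 0.5. In the paper's setting the MDP has many states (observation histories) rather than one, but the principle is the same.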


References

  1. Desharnais, J., Laviolette, F., Zhioua, S.: Testing probabilistic equivalence through reinforcement learning. In: Arun-Kumar, S., Garg, N. (eds.) FSTTCS 2006. LNCS, vol. 4337, pp. 236–247. Springer, Heidelberg (2006)
  2. Kaelbling, L.P., Littman, M.L., Cassandra, A.R.: Planning and acting in partially observable stochastic domains. Artificial Intelligence 101, 99–134 (1998)
  3. Larsen, K.G., Skou, A.: Bisimulation through probabilistic testing. Inf. Comput. 94, 1–28 (1991)
  4. Singh, S., James, M., Rudary, M.: Predictive state representations: a new theory for modeling dynamical systems. In: The 20th Conference on Uncertainty in Artificial Intelligence, Banff, Canada, pp. 512–519 (2004)
  5. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley, Chichester (1991)
  6. Sutton, R.S., Barto, A.G.: Introduction to Reinforcement Learning. MIT Press, Cambridge (1998)
  7. Kearns, M.J., Mansour, Y., Ng, A.Y.: A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Machine Learning 49, 193–208 (2002)
  8. Cassandra, T.: Tony’s POMDP Page (2009), http://www.cs.brown.edu/research/ai/pomdp/
  9. Hoey, J., Bertoldi, A., Poupart, P., Mihailidis, A.: Assisting persons with dementia during handwashing using a partially observable Markov decision process. In: The 5th International Conference on Computer Vision Systems, Bielefeld, March 21–24 (2007)
  10. Desharnais, J., Gupta, V., Jagadeesan, R., Panangaden, P.: Metrics for labeled Markov processes. Theoretical Computer Science 318, 323–354 (2004)
  11. Ferns, N., Castro, P., Panangaden, P., Precup, D.: Methods for computing state similarity in Markov decision processes. In: Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, Cambridge, MA, USA, July 13–16 (2006)
  12. Levinson, S., Rabiner, L.: An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition. The Bell System Technical Journal 62, 1035–1074 (1983)
  13. Juang, B., Rabiner, L.: A probabilistic distance measure for hidden Markov models. AT&T Technical Journal 62, 391–408 (1985)
  14. Blass, A., Gurevich, Y., Nachmanson, L., Veanes, M.: Play to test. Technical report, Microsoft Research (2005)
  15. Veanes, M., Roy, P., Campbell, C.: Online testing with reinforcement learning. In: Havelund, K., Núñez, M., Roşu, G., Wolff, B. (eds.) FATES 2006 and RV 2006. LNCS, vol. 4262, pp. 240–253. Springer, Heidelberg (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Sami Zhioua (1)
  • Doina Precup (1)
  • François Laviolette (2)
  • Josée Desharnais (2)
  1. School of Computer Science, McGill University, Canada
  2. Department of Computer Science and Software Engineering, Laval University, Canada