Learning the Difference between Partially Observable Dynamical Systems
We propose a new approach for estimating the difference between two partially observable dynamical systems. We assume that one can interact with the systems by performing actions and receiving observations. The key idea is to define a Markov Decision Process (MDP) based on the systems to be compared, in such a way that the optimal value of the MDP's initial state can be interpreted as a divergence (or dissimilarity) between the systems. This dissimilarity can then be estimated by reinforcement learning methods. Moreover, the optimal policy will contain information about the actions that most distinguish the systems. Empirical results show that this approach is useful in detecting both large and small differences, as well as in comparing systems with different internal structures.
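To make the idea concrete, the following is a minimal sketch of one possible instantiation, written as a reading of the abstract rather than as the paper's exact construction. It assumes a simple comparison MDP: the same action is sent to both black-box systems, the reward is 1 whenever their observations disagree, and the MDP state is the last joint observation. All names (`NoisySystem`, `dissimilarity_q_learning`) and parameter values are hypothetical, introduced only for illustration. Under this construction, the learned value of the initial state grows with how distinguishable the systems are, and the greedy action at the start indicates which action separates them most.

```python
import random
from collections import defaultdict

class NoisySystem:
    """Toy partially observable system: a hidden bit with noisy observations.
    A purely illustrative stand-in for the black-box systems being compared."""
    def __init__(self, flip_prob):
        self.flip_prob = flip_prob
        self.state = 0

    def reset(self):
        self.state = 0

    def step(self, action):
        if action == 1:          # hidden dynamics: action 1 toggles the bit
            self.state ^= 1
        obs = self.state          # noisy observation of the hidden state
        if random.random() < self.flip_prob:
            obs ^= 1
        return obs

def dissimilarity_q_learning(sys_a, sys_b, actions=(0, 1), episodes=5000,
                             horizon=10, alpha=0.1, gamma=0.9, eps=0.2):
    """Q-learning on an assumed comparison MDP: the same action drives both
    systems, and the reward is 1 whenever their observations disagree. The
    value of the start state then estimates how distinguishable the systems
    are, and the greedy policy picks the most distinguishing actions."""
    Q = defaultdict(float)
    start = ('start',)
    for _ in range(episodes):
        sys_a.reset()
        sys_b.reset()
        state = start
        for _ in range(horizon):
            if random.random() < eps:                     # epsilon-greedy
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(state, x)])
            oa, ob = sys_a.step(a), sys_b.step(a)
            reward = 1.0 if oa != ob else 0.0             # observation mismatch
            next_state = (oa, ob)                         # last joint observation
            best_next = max(Q[(next_state, x)] for x in actions)
            Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
            state = next_state
    value = max(Q[(start, x)] for x in actions)
    best_action = max(actions, key=lambda x: Q[(start, x)])
    return value, best_action

if __name__ == '__main__':
    random.seed(0)
    # Identical systems should score lower than systems with different noise.
    print('identical systems:', dissimilarity_q_learning(NoisySystem(0.1), NoisySystem(0.1)))
    print('different systems:', dissimilarity_q_learning(NoisySystem(0.1), NoisySystem(0.4)))
```

Running the sketch on two copies of the same system yields a noticeably lower start-state value than running it on systems with different observation noise, which mirrors the dissimilarity interpretation described above; the returned greedy action hints at which inputs expose the difference.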