Trace Equivalence Characterization Through Reinforcement Learning

  • Josée Desharnais
  • François Laviolette
  • Krishna Priya Darsini Moturu
  • Sami Zhioua
Conference paper

DOI: 10.1007/11766247_32

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4013)
Cite this paper as:
Desharnais J., Laviolette F., Moturu K.P.D., Zhioua S. (2006) Trace Equivalence Characterization Through Reinforcement Learning. In: Lamontagne L., Marchand M. (eds) Advances in Artificial Intelligence. AI 2006. Lecture Notes in Computer Science, vol 4013. Springer, Berlin, Heidelberg

Abstract

In the context of probabilistic verification, we provide a new notion of trace-equivalence divergence between pairs of Labelled Markov processes. This divergence corresponds to the optimal value of a particular derived Markov Decision Process and can therefore be estimated by Reinforcement Learning methods. Moreover, we provide PAC guarantees on this estimation.
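The paper's derived MDP construction is not reproduced on this page, but the estimation idea it relies on — that the optimal value of an MDP can be approximated by a Reinforcement Learning method such as Q-learning, and checked against exact value iteration — can be sketched on a hypothetical toy MDP. All states, actions, transition probabilities, and parameters below are illustrative assumptions, not taken from the paper:

```python
import random

# Hypothetical toy MDP (illustrative only, not the paper's derived MDP).
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    0: {'a': [(0.9, 0, 0.0), (0.1, 1, 1.0)],
        'b': [(0.5, 0, 0.0), (0.5, 1, 0.5)]},
    1: {'a': [(1.0, 1, 0.0)],
        'b': [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor (assumed)

def step(s, a, rng):
    """Sample a next state and reward from the transition distribution."""
    r, acc = rng.random(), 0.0
    for p, s2, rew in transitions[s][a]:
        acc += p
        if r <= acc:
            return s2, rew
    return s2, rew  # guard against floating-point underflow

def q_learning(episodes=5000, horizon=30, alpha=0.1, eps=0.2, seed=0):
    """Estimate Q* with tabular Q-learning and an epsilon-greedy policy."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in transitions for a in transitions[s]}
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            if rng.random() < eps:
                a = rng.choice(list(transitions[s]))
            else:
                a = max(transitions[s], key=lambda x: Q[(s, x)])
            s2, rew = step(s, a, rng)
            best_next = max(Q[(s2, a2)] for a2 in transitions[s2])
            Q[(s, a)] += alpha * (rew + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

def value_iteration(tol=1e-10):
    """Compute V* exactly, for comparison with the RL estimate."""
    V = {s: 0.0 for s in transitions}
    while True:
        V2 = {s: max(sum(p * (rew + gamma * V[s2]) for p, s2, rew in acts)
                     for acts in transitions[s].values())
              for s in transitions}
        if max(abs(V2[s] - V[s]) for s in V) < tol:
            return V2
        V = V2

Q = q_learning()
V = value_iteration()
estimate = max(Q[(0, a)] for a in transitions[0])
print(f"Q-learning estimate of V*(0): {estimate:.3f}, exact: {V[0]:.3f}")
```

With enough episodes the Q-learning estimate of the optimal value converges toward the exact value; the paper's contribution is a PAC-style bound quantifying how good such an estimate is.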


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  1. IFT-GLO, Université Laval, Québec (QC), Canada
