Faithful and Effective Reward Schemes for Model-Free Reinforcement Learning of Omega-Regular Objectives

  • Conference paper
  • First Online:
Automated Technology for Verification and Analysis (ATVA 2020)

Abstract

Omega-regular properties—specified using linear time temporal logic or various forms of omega-automata—find increasing use in specifying the objectives of reinforcement learning (RL). The key problem that arises is that of faithful and effective translation of the objective into a scalar reward for model-free RL. A recent approach exploits Büchi automata with restricted nondeterminism to reduce the search for an optimal policy for an omega-regular property to that for a simple reachability objective. A possible drawback of this translation is that reachability rewards are sparse, being reaped only at the end of each episode. Another approach reduces the search for an optimal policy to an optimization problem with two interdependent discount parameters. While this approach provides denser rewards than the reduction to reachability, it is not easily mapped to off-the-shelf RL algorithms. We propose a reward scheme that reduces the search for an optimal policy to an optimization problem with a single discount parameter that produces dense rewards and is compatible with off-the-shelf RL algorithms. Finally, we report an experimental comparison of these and other reward schemes for model-free RL with omega-regular objectives.
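To make the reachability-style translation mentioned above concrete, the following is a minimal Python sketch of tabular Q-learning on the product of a toy MDP and a one-state Büchi automaton, in which each accepting transition is redirected to an absorbing accept sink with probability 1 - zeta and reward 1 is collected only there. The toy MDP, the automaton, the helper names, and the parameter values (ZETA, GAMMA, ALPHA, EPSILON) are illustrative assumptions, not the construction, benchmarks, or hyperparameters of the paper; in particular, the paper's proposed single-discount scheme assigns rewards differently.

import random
from collections import defaultdict

# Toy MDP: from any state, action "go" reaches the state labeled "goal"
# with probability 0.8 and otherwise lands in "s0"; "stay" returns to "s0".
ACTIONS = ["go", "stay"]

def mdp_step(s, a):
    if a == "go" and random.random() < 0.8:
        return "s1"
    return "s0"

def label(s):
    return {"goal"} if s == "s1" else set()

# One-state Buechi automaton for "infinitely often goal": the self-loop
# taken on a letter containing "goal" is an accepting transition.
def automaton_step(q, letter):
    return q, ("goal" in letter)

ZETA = 0.99    # probability of continuing after an accepting transition
GAMMA = 0.95   # single discount factor (illustrative choice)
ALPHA = 0.1    # learning rate
EPSILON = 0.1  # exploration rate

Q = defaultdict(float)  # Q-values over product states (s, q) and actions

def choose_action(product_state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(product_state, a)])

for episode in range(5000):
    s, q = "s0", "q0"
    for _ in range(100):
        a = choose_action((s, q))
        s2 = mdp_step(s, a)
        q2, accepting = automaton_step(q, label(s2))
        if accepting and random.random() > ZETA:
            # With probability 1 - ZETA the accepting transition is redirected
            # to the absorbing accept sink: reward 1 is reaped, episode ends.
            Q[((s, q), a)] += ALPHA * (1.0 - Q[((s, q), a)])
            break
        best_next = max(Q[((s2, q2), b)] for b in ACTIONS)
        Q[((s, q), a)] += ALPHA * (GAMMA * best_next - Q[((s, q), a)])
        s, q = s2, q2

Under this sketch the learned Q-value of a product state approximates the (discounted) probability of reaching the accept sink, and the reward is collected only when an episode ends, which is exactly the sparsity issue the abstract points out.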

Acknowledgment

This work utilized resources from the University of Colorado Boulder Research Computing Group, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. This work was supported by the Engineering and Physical Sciences Research Council through grant EP/P020909/1 and by the National Science Foundation through grant 2009022.

Author information

Corresponding author

Correspondence to Mateo Perez.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Hahn, E.M., Perez, M., Schewe, S., Somenzi, F., Trivedi, A., Wojtczak, D. (2020). Faithful and Effective Reward Schemes for Model-Free Reinforcement Learning of Omega-Regular Objectives. In: Hung, D.V., Sokolsky, O. (eds.) Automated Technology for Verification and Analysis. ATVA 2020. Lecture Notes in Computer Science, vol. 12302. Springer, Cham. https://doi.org/10.1007/978-3-030-59152-6_6

  • DOI: https://doi.org/10.1007/978-3-030-59152-6_6

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59151-9

  • Online ISBN: 978-3-030-59152-6

  • eBook Packages: Computer Science, Computer Science (R0)
