Synthesis of Optimal Resilient Control Strategies

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10482)

Abstract

Repair mechanisms are important in resilient systems to keep the system in an operational state after an error has occurred. Usually, constraints are imposed on the repair mechanisms, e.g., concerning the time or resources required (such as energy consumption or other kinds of costs). For systems modeled by Markov decision processes (MDPs), we introduce the concept of resilient schedulers, which represent control strategies guaranteeing that these constraints are always met with at least a given probability. Assigning rewards to the operational states of the system, we then aim at resilient schedulers that maximize the long-run average reward, i.e., the expected mean payoff. We present a pseudo-polynomial algorithm that decides whether a resilient scheduler exists and, if so, yields an optimal resilient scheduler. We also show that the problem of deciding whether a resilient scheduler exists is already PSPACE-hard.
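To make the objective concrete, the optimization problem can be read as follows (a minimal sketch in standard MDP notation; the symbols rew, Err, Oper, B, and p are illustrative, and the paper's precise formalization of the repair constraints may differ). A resilient scheduler σ maximizes the expected mean payoff from a state s,

\[ \mathrm{MP}^{\sigma}(s) \;=\; \liminf_{n\to\infty} \frac{1}{n}\, \mathbb{E}^{\sigma}_{s}\Big[\sum_{i=0}^{n-1} \mathit{rew}(S_i)\Big], \]

where rew assigns rewards to the operational states, subject to a resilience constraint of the form

\[ \Pr^{\sigma}\big(\text{a state in } \mathit{Oper} \text{ is reached within budget } B\big) \;\ge\; p \quad \text{whenever a state in } \mathit{Err} \text{ is entered}, \]

with B bounding the time or accumulated repair cost and p the given probability threshold.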


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. TU Dresden, Dresden, Germany
  2. Masaryk University, Brno, Czech Republic
