Process-based risk measures and risk-averse control of discrete-time systems

  • Jingnan Fan
  • Andrzej Ruszczyński
Full Length Paper, Series B

Abstract

For controlled discrete-time stochastic processes we introduce a new class of dynamic risk measures, which we call process-based. Their main feature is that they measure the risk of processes that are functions of the history of a base process. We introduce a new concept of conditional stochastic time consistency and derive the structure of process-based risk measures enjoying this property. We show that they can be equivalently represented by a collection of static law-invariant risk measures on the space of functions of the state of the base process. We apply this result to controlled Markov processes and derive dynamic programming equations. We also derive dynamic programming equations for multistage stochastic programming with decision-dependent distributions.
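
The dynamic programming equations mentioned above replace the expectation in the classical Bellman equation with a one-step conditional risk measure. The following is a minimal illustrative sketch, not the paper's construction: it runs risk-averse value iteration on a hypothetical two-state, two-action MDP, using the mean–semideviation measure as the one-step risk measure. All data (`P`, `c`, `alpha`, `kappa`) are made-up assumptions for illustration.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustration only, not from the paper).
# P[a][s, s'] = transition probability under action a; c[a][s] = immediate cost.
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),
     np.array([[0.5, 0.5], [0.9, 0.1]])]
c = [np.array([1.0, 2.0]), np.array([1.5, 0.5])]
alpha = 0.9   # discount factor
kappa = 0.5   # semideviation weight in [0, 1]

def mean_semideviation(p, z, kappa):
    """One-step coherent risk measure: E[Z] + kappa * E[(Z - E[Z])_+]
    under the distribution p on the successor states."""
    m = p @ z
    return m + kappa * (p @ np.maximum(z - m, 0.0))

def risk_averse_value_iteration(P, c, alpha, kappa, tol=1e-10):
    """Fixed-point iteration for the risk-averse Bellman equation
    v(s) = min_a { c_a(s) + alpha * rho(P_a(s, .), v) }."""
    n = P[0].shape[0]
    v = np.zeros(n)
    while True:
        q = np.array([[c[a][s] + alpha * mean_semideviation(P[a][s], v, kappa)
                       for a in range(len(P))] for s in range(n)])
        v_new = q.min(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmin(axis=1)
        v = v_new
```

Because the mean–semideviation measure is monotone and translation-equivariant, the risk-averse Bellman operator remains a contraction with modulus `alpha`, so the iteration converges; setting `kappa = 0` recovers the risk-neutral expected-cost recursion.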

Keywords

Dynamic risk measures · Time consistency · Dynamic programming · Multistage stochastic programming

Mathematics Subject Classification

90C15 · 90C39 · 90C40


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2018

Authors and Affiliations

  1. RUTCOR, Rutgers University, Piscataway, USA
  2. Department of Management Science and Information Systems, Rutgers University, Piscataway, USA