Annals of Operations Research, Volume 176, Issue 1, pp. 7–39

Inventory management with partially observed nonstationary demand

Article

Abstract

We consider a continuous-time model for inventory management with Markov-modulated, nonstationary demand. We introduce active learning by assuming that the state of the world is unobserved and must be inferred by the manager. We also assume that demands are observed only when they are completely met. We first derive the explicit filtering equations and pass to an equivalent fully observed impulse control problem in terms of the sufficient statistics: the a posteriori probability process and the current inventory level. We then solve this equivalent formulation and directly characterize an optimal inventory policy. We also describe a computational procedure for calculating the value function and the optimal policy, and we present two numerical illustrations.
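To make the filtering step concrete, the sketch below shows a minimal discrete-time approximation of the a posteriori probability (belief) update for a hidden Markov-modulated Poisson demand process. It is illustrative only and is not the authors' explicit continuous-time filtering equations; in particular it ignores demand censoring (partially met demands). The generator matrix Q, the per-state demand rates lam, and the step size dt are all assumed values chosen for the example.

```python
import numpy as np

# Illustrative sketch: Euler-discretized Bayesian belief update for a demand
# process whose Poisson arrival rate is modulated by an unobserved Markov chain.
# Q, lam, and dt are assumptions made for this example, not values from the paper.
Q = np.array([[-0.5, 0.5],
              [0.3, -0.3]])      # generator of the unobserved environment chain (rows sum to zero)
lam = np.array([1.0, 4.0])       # Poisson demand rate in each hidden state
dt = 0.01                        # time-discretization step

def filter_step(pi, arrival):
    """One step of the a posteriori probability update.

    pi      : current belief over hidden states (nonnegative, sums to one)
    arrival : True if a demand arrival was observed during the last dt
    """
    # Prediction: drift the belief with the hidden chain's dynamics.
    pi = pi + dt * pi @ Q
    # Correction: reweight by the likelihood of the observation.
    if arrival:
        pi = pi * lam                  # an arrival is more likely in high-rate states
    else:
        pi = pi * np.exp(-lam * dt)    # no arrival favors low-rate states
    return pi / pi.sum()               # renormalize to a probability vector

# Usage: simulate demand from the high-rate state and watch the belief concentrate on it.
rng = np.random.default_rng(0)
pi = np.array([0.5, 0.5])
for _ in range(1000):
    arrival = rng.random() < lam[1] * dt   # demand generated while the hidden state is 1
    pi = filter_step(pi, arrival)
print("posterior over hidden states:", pi)
```

In the paper's fully observed reformulation, a belief vector of this kind, together with the current inventory level, forms the state on which the impulse (ordering) policy and the value function are computed.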

Keywords

Inventory management · Markov-modulated Poisson process · Hidden Markov model · Partially observable demand · Censored demand



Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Department of Mathematics, University of Michigan, Ann Arbor, USA
  2. Department of Statistics and Applied Probability, University of California Santa Barbara, Santa Barbara, USA
