Summary
For the two-state version of a quality control model introduced by Ross [1971], we derive upper and lower bounds on the minimal expected total discounted cost, identify regions of nonoptimality for each of the three actions (produce, inspect, replace), and construct nearly optimal decision rules that are easy to implement.
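The abstract describes a partially observed two-state (good/bad) production system with the actions produce, inspect, and replace, and a discounted-cost criterion. The sketch below is not the paper's model or method: it is a hypothetical belief-state value iteration under invented assumptions (deterioration probability `Q`, per-period defect cost `C_D` when the state is bad, inspection cost `C_I` revealing the end-of-period state, replacement cost `C_R` restoring the good state, discount factor `BETA`), with all parameter values made up for illustration.

```python
# Hedged sketch: value iteration over a discretized belief space for a
# two-state quality control model. The Bellman operator and all numbers
# are illustrative assumptions, not taken from Waldmann (1982) or Ross (1971).

Q, BETA = 0.15, 0.9             # assumed deterioration prob., discount factor
C_D, C_I, C_R = 4.0, 0.5, 3.0   # assumed defect, inspection, replacement costs
N = 200                         # belief grid: p_i = i / N, p = P(state is bad)

def drift(p):
    """Belief update with no observation: a good unit turns bad w.p. Q."""
    return p + (1.0 - p) * Q

def interp(v, p):
    """Linear interpolation of grid values v at a belief p in [0, 1]."""
    x = min(max(p, 0.0), 1.0) * N
    i = min(int(x), N - 1)
    return v[i] + (x - i) * (v[i + 1] - v[i])

def bellman(v):
    """One value-iteration sweep; returns updated values and argmin actions."""
    new, act = [], []
    for i in range(N + 1):
        p = i / N
        pn = drift(p)  # belief after one unobserved production period
        produce = p * C_D + BETA * interp(v, pn)
        # inspect: produce, then pay C_I to observe the end-of-period state,
        # so the next period starts with belief 1 (bad) or 0 (good)
        inspect = p * C_D + C_I + BETA * (pn * v[N] + (1 - pn) * v[0])
        # replace: install a good unit, then produce this period from p = 0
        replace = C_R + BETA * interp(v, drift(0.0))
        costs = {"produce": produce, "inspect": inspect, "replace": replace}
        a = min(costs, key=costs.get)
        new.append(costs[a])
        act.append(a)
    return new, act

v = [0.0] * (N + 1)
for _ in range(500):
    v_new, actions = bellman(v)
    done = max(abs(a - b) for a, b in zip(v_new, v)) < 1e-9
    v = v_new
    if done:
        break
```

Under these assumptions the converged `actions` array partitions the belief interval into regions per action, in the spirit of the nonoptimality regions mentioned above, and crude bounds follow from stationary policies (e.g. always producing costs at most `C_D / (1 - BETA)` from any belief).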
References
Albright, S.C.: Structural Results for Partially Observable Markov Decision Processes. Oper. Res. 27, 1979, 1041–1053.
Hinderer, K.: Foundations of Non-stationary Dynamic Programming with Discrete Time Parameter. Lecture Notes in Operations Research and Mathematical Systems, Vol. 33. Berlin-Heidelberg-New York 1970.
Ross, S.M.: Quality Control under Markovian Deterioration. Management Sci. 17, 1971, 587–596.
Schäl, M.: Conditions for Optimality in Dynamic Programming and for the Limit of n-stage Optimal Policies to be Optimal. Z. Wahrscheinlichkeitstheorie verw. Gebiete 32, 1975, 179–196.
Cite this article
Waldmann, K.H. On two-state quality control under Markovian deterioration. Metrika 29, 249–260 (1982). https://doi.org/10.1007/BF01893384