Performance of Lookahead Control Policies in the Face of Abstractions and Approximations
- Cite this paper as:
- Levner I., Bulitko V., Madani O., Greiner R. (2002) Performance of Lookahead Control Policies in the Face of Abstractions and Approximations. In: Koenig S., Holte R.C. (eds) Abstraction, Reformulation, and Approximation. SARA 2002. Lecture Notes in Computer Science, vol 2371. Springer, Berlin, Heidelberg
This paper formulates image interpretation as a Markov Decision Process (MDP), highlighting the key assumptions behind the formulation. State abstraction, value-function and action approximations, and lookahead search are presented as necessary solution methodologies. We view image interpretation as a dynamic control problem in which the next vision operator is selected adaptively based on the problem-solving state at hand. The control policy therefore maps problem-solving states to operators, aiming to minimize total problem-solving time while reliably interpreting the image. Real-world domains such as image interpretation typically have very large state spaces, which must be abstracted to be manageable by today's information-processing systems. In addition, the optimal value function (V*) used to evaluate state quality is generally unavailable, so approximations must be used in conjunction with state abstraction. The performance of the system is therefore directly tied to the abstractions and approximations employed.
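The abstract's combination of an approximate value function with lookahead search can be sketched as a depth-limited greedy lookup: expand the operator tree a few plies, back up the approximate value at the frontier, and pick the operator leading to the best backed-up value. The sketch below is a minimal illustration under assumed names (`v_hat`, `apply_op`, `operators` are hypothetical, not the authors' implementation), and it ignores stochastic transitions and operator costs for brevity.

```python
def lookahead_value(state, depth, v_hat, successors):
    """Evaluate a state by expanding `depth` plies of the operator tree
    and backing up the approximate value v_hat at the frontier."""
    if depth == 0:
        return v_hat(state)
    # Greedy (max) backup over all successor states.
    return max(
        lookahead_value(s, depth - 1, v_hat, successors)
        for s in successors(state)
    )

def select_operator(state, depth, v_hat, operators, apply_op):
    """Control policy: choose the operator whose successor state
    maximizes the depth-limited lookahead value."""
    def successors(s):
        return [apply_op(s, op) for op in operators]
    return max(
        operators,
        key=lambda op: lookahead_value(
            apply_op(state, op), depth - 1, v_hat, successors
        ),
    )

# Toy usage: states are integers, operators increment or double,
# and v_hat is the identity (all illustrative stand-ins).
ops = [lambda s: s + 1, lambda s: s * 2]
chosen = select_operator(3, 2, lambda s: s, ops, lambda s, op: op(s))
```

With a two-ply lookahead from state 3, doubling (3 → 6, then best frontier value 12) beats incrementing (3 → 4, best frontier value 8), so the policy selects the doubling operator; a deeper lookahead can compensate for a cruder `v_hat`, which is exactly the trade-off the paper studies.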