Performance of Lookahead Control Policies in the Face of Abstractions and Approximations
This paper formulates image interpretation as a Markov Decision Process (MDP), highlighting the key assumptions underlying the MDP formulation. State abstraction, value-function and action approximations, and lookahead search are presented as necessary solution methodologies. We view image interpretation as a dynamic control problem in which the optimal vision operator is selected responsively based on the problem-solving state at hand. The control policy therefore maps problem-solving states to operators in an attempt to minimize the total problem-solving time while reliably interpreting the image. Real-world domains, such as image interpretation, usually have very large state spaces that require abstraction to be manageable by today's information processing systems. In addition, the optimal value function (V*) used to evaluate state quality is generally unavailable, so approximations must be used in conjunction with state abstraction. The performance of the system is therefore directly related to the types of abstractions and approximations present.
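The control scheme described above can be illustrated with a minimal sketch of one-ply lookahead operator selection against an approximate value function. All names here (`State`, `approx_value`, the operators) are illustrative assumptions for exposition, not the authors' implementation:

```python
# Sketch: selecting a vision operator by one-ply lookahead with an
# approximate value function. Names and the linear value model are
# assumptions made for illustration only.

from typing import Callable, List


class State:
    """Abstracted problem-solving state (placeholder feature vector)."""

    def __init__(self, features: List[float]):
        self.features = features


def approx_value(state: State) -> float:
    # Stand-in for the approximation of V*: a simple linear
    # combination of abstracted state features.
    weights = [0.5, -0.2, 0.1]
    return sum(w * f for w, f in zip(weights, state.features))


def lookahead_policy(
    state: State,
    operators: List[Callable[[State], State]],
) -> Callable[[State], State]:
    """Return the operator whose successor state maximizes the
    approximate value function (one-ply lookahead)."""
    return max(operators, key=lambda op: approx_value(op(state)))
```

Deeper lookahead would expand successors recursively before applying the value approximation at the frontier; the one-ply case above shows the basic control loop of mapping a problem-solving state to the next operator.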