Accurate Interpolation in Appearance-Based Pose Estimation
One problem in appearance-based pose estimation is the need for many training examples, i.e. images of the object in a large number of known poses. Some invariance can be obtained by considering translations, rotations, and scale changes in the image plane, but the remaining degrees of freedom are often handled simply by sampling the pose space densely enough. This work presents a method for accurate interpolation between training views using local linear models. Local soft orientation histograms are used as the view representation. The derivative of this representation with respect to the image-plane transformations is computed, and a Gauss-Newton optimization is used to refine all pose parameters simultaneously, yielding an accurate estimate.
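The Gauss-Newton refinement described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `gauss_newton_pose`, the numerical Jacobian, and the generic descriptor function `f` are all assumptions standing in for the paper's analytic derivatives of the soft orientation histograms.

```python
import numpy as np

def gauss_newton_pose(f, p0, f_query, n_iter=20, eps=1e-5):
    """Hypothetical sketch: refine pose parameters p so that the
    predicted view descriptor f(p) matches the query descriptor f_query.

    f       -- callable mapping a pose vector to a descriptor vector
               (stand-in for the local-linear-model view prediction)
    p0      -- initial pose estimate (e.g. from the nearest training view)
    f_query -- descriptor computed from the query image
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = f(p) - f_query  # residual between predicted and query views
        # Central-difference Jacobian of the descriptor w.r.t. the pose;
        # the paper instead derives this analytically for its representation.
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (f(p + dp) - f(p - dp)) / (2 * eps)
        # Gauss-Newton step: least-squares solution of J * delta = -r
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + delta
        if np.linalg.norm(delta) < 1e-10:
            break
    return p
```

Because all pose parameters enter the same residual, one step updates them jointly, which is the "simultaneous optimization" the abstract refers to.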
Keywords: Augmented Reality, Query Image, Weighting Kernel, Gradient Magnitude, Local Linear Model