Pose Sampling for Efficient Model-Based Recognition

  • Clark F. Olson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4842)

Abstract

In model-based object recognition and pose estimation, it is common for the set of extracted image features to be much larger than the set of object model features owing to clutter in the image. However, another class of recognition problems has a large object model of which only a portion is visible in the image, so that a small set of image features can be extracted, most of which are salient. In this case, reducing the effective complexity of the object model is more important than handling image clutter. We describe techniques that accomplish this by sampling the space of object positions. A subset of the object model is considered for each sampled pose, reducing the complexity of the method from cubic to linear in the number of extracted image features. We have integrated this technique into a real-time system for recognizing craters on planetary bodies.
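The abstract's central idea can be illustrated with a small sketch. The details below are not from the paper: the function names, the restriction to 2-D point features, translation-only poses, and a rectangular visibility window are all simplifying assumptions made for illustration. The point the sketch shows is the complexity claim: for each sampled pose only the visible subset of a large model is considered, so the per-pose matching work is linear in the number of extracted image features rather than driven by the full model size.

```python
import math
import random

def visible_subset(model_pts, pose, window):
    """Model features that fall inside the image window under the given
    pose (a pure 2-D translation here), expressed in image coordinates."""
    tx, ty = pose
    w, h = window
    return [(x - tx, y - ty) for (x, y) in model_pts
            if tx <= x <= tx + w and ty <= y <= ty + h]

def score_pose(model_pts, image_pts, pose, window, tol=2.0):
    """Count image features that have a visible model feature within tol.
    Only the pose's visible subset of the model is examined, so the work
    is linear in the number of extracted image features."""
    visible = visible_subset(model_pts, pose, window)
    return sum(1 for (u, v) in image_pts
               if any(math.hypot(u - mx, v - my) <= tol
                      for (mx, my) in visible))

def pose_sampling_match(model_pts, image_pts, window, n_samples=500, seed=0):
    """Sample candidate poses over the model; keep the best-scoring one."""
    rng = random.Random(seed)
    xs = [p[0] for p in model_pts]
    ys = [p[1] for p in model_pts]
    best_pose, best_score = None, -1
    for _ in range(n_samples):
        pose = (rng.uniform(min(xs), max(xs) - window[0]),
                rng.uniform(min(ys), max(ys) - window[1]))
        s = score_pose(model_pts, image_pts, pose, window)
        if s > best_score:
            best_pose, best_score = pose, s
    return best_pose, best_score
```

In the paper's crater-recognition setting the model would instead be a large database of landmark features and the pose space would reflect the camera geometry, but the structure is the same: sample a pose, extract the model subset it makes visible, and match only against that subset.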

Keywords

Object recognition, model features, object models, planetary bodies, extracted image features

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Clark F. Olson
  1. University of Washington Bothell, Computing and Software Systems, 18115 Campus Way NE, Box 358534, Bothell, WA 98011-8246
