
Extracting the affine transformation from texture moments

  • Jun Sato
  • Roberto Cipolla
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 801)

Abstract

In this paper we propose a novel, efficient and geometrically intuitive method to compute the four components of an affine transformation from the change in simple statistics of images of texture. In particular, we show how changes in the first, second and third moments of edge orientation and changes in density are directly related to the rotation (curl), scale (divergence) and deformation components of an affine transformation. A simple implementation is described which does not require point, edge or contour correspondences to be established. It is tested on a wide range of repetitive and non-repetitive visual textures which are neither isotropic nor homogeneous. As a demonstration of the power of this technique, the estimated affine transforms are used as the first stage in shape from texture and structure from motion applications.
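For illustration only (this is a minimal sketch, not the authors' moment-based estimation procedure), the code below shows the standard first-order decomposition of the linear part A of a 2-D affine transformation x' = A x + t into the curl (rotation), divergence (scale) and two deformation components named in the abstract; the function name, conventions and example numbers are assumptions.

```python
# Minimal sketch, not the authors' texture-moment estimator: decompose the
# linear part A of a 2-D affine transformation x' = A x + t into its
# curl (rotation), divergence (scale) and deformation components via the
# standard first-order invariants of the displacement gradient G = A - I.
import numpy as np

def affine_components(A):
    """Curl, divergence and the two deformation components of a 2x2 matrix A."""
    G = A - np.eye(2)                 # displacement gradient of the affine map
    divergence = G[0, 0] + G[1, 1]    # isotropic expansion (scale change)
    curl = G[1, 0] - G[0, 1]          # vorticity (in-plane rotation)
    def1 = G[0, 0] - G[1, 1]          # stretch along x, compression along y
    def2 = G[0, 1] + G[1, 0]          # stretch/compression along the diagonals
    return curl, divergence, def1, def2

# Example (hypothetical numbers): a small rotation combined with uniform scaling.
theta, s = 0.1, 1.05
A = s * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
print(affine_components(A))
```

For this rotation-plus-scaling example the two deformation components are exactly zero, while the curl and divergence terms capture the rotation and scale change; a pure shear would instead show up in the deformation components.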

Keywords

Affine Transformation · Surface Orientation · Texture Element · Moment Matrix · Edge Orientation


Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • Jun Sato (1)
  • Roberto Cipolla (1)
  1. Department of Engineering, University of Cambridge, Cambridge, England
