A Statistical Parts-Based Appearance Model of Inter-subject Variability

  • Matthew Toews
  • D. Louis Collins
  • Tal Arbel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4190)

Abstract

In this article, we present a general statistical parts-based model for representing the appearance of an image set, applied to the problem of inter-subject MR brain image matching. In contrast with global image representations such as active appearance models, the parts-based model consists of a collection of localized image parts whose appearance, geometry and occurrence frequency are quantified statistically. The parts-based approach explicitly addresses the case where one-to-one correspondence does not exist between subjects due to anatomical differences, as parts are not expected to occur in all subjects. The model can be learned automatically, discovering structures that appear with statistical regularity in a large set of subject images, and can be robustly fit to new images, all in the presence of significant inter-subject variability. As parts are derived from generic scale-invariant features, the framework can be applied in a wide variety of image contexts, in order to study the commonality of anatomical parts or to group subjects according to the parts they share. Experimentation shows that a parts-based model can be learned from a large set of MR brain images, and used to determine parts that are common within the group of subjects. Preliminary results indicate that the model can be used to automatically identify distinctive features for inter-subject image registration despite large changes in appearance.
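The paper itself includes no source code; the following Python sketch is purely illustrative of the kind of parts-based model the abstract describes, in which each part pools scale-invariant features that recur across subjects and records statistics on appearance, geometry and occurrence frequency. The class and function names, the diagonal-Gaussian assumptions, the thresholds and the greedy grouping strategy are all assumptions introduced here for illustration, not the authors' method.

import numpy as np

# Illustrative sketch only (not the authors' implementation).
# A "part" pools scale-invariant features (e.g. SIFT-like keypoints with
# descriptors and (x, y, scale) geometry in a common reference space) that
# recur across subjects, and stores statistics on appearance, geometry and
# occurrence frequency. Feature extraction itself is assumed to happen elsewhere.

class Part:
    """One localized image part with appearance, geometry and occurrence statistics."""

    def __init__(self, descriptors, geometries, n_subjects):
        descriptors = np.asarray(descriptors, dtype=float)
        geometries = np.asarray(geometries, dtype=float)
        # Appearance: diagonal Gaussian over the matched feature descriptors.
        self.app_mean = descriptors.mean(axis=0)
        self.app_var = descriptors.var(axis=0) + 1e-6
        # Geometry: diagonal Gaussian over (x, y, scale) in the reference space.
        self.geo_mean = geometries.mean(axis=0)
        self.geo_var = geometries.var(axis=0) + 1e-6
        # Occurrence frequency: fraction of subjects in which the part was found.
        self.occurrence = len(descriptors) / float(n_subjects)

    def log_likelihood(self, descriptor, geometry):
        """Log-likelihood that a candidate feature is an instance of this part."""
        def gauss_ll(x, mean, var):
            x = np.asarray(x, dtype=float)
            return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)
        return (gauss_ll(descriptor, self.app_mean, self.app_var)
                + gauss_ll(geometry, self.geo_mean, self.geo_var))


def learn_parts(features_per_subject, sim_threshold=0.5, min_subjects=3):
    """Greedily group features that recur across subjects into candidate parts.

    features_per_subject: list (one entry per subject) of lists of
    (descriptor, geometry) pairs, with geometries in a common reference space.
    """
    n_subjects = len(features_per_subject)
    parts = []
    for seed_desc, seed_geo in features_per_subject[0]:
        descs, geos = [seed_desc], [seed_geo]
        for other in features_per_subject[1:]:
            if not other:
                continue
            # Nearest feature in this subject by Euclidean descriptor distance.
            dists = [np.linalg.norm(np.asarray(d, dtype=float) -
                                    np.asarray(seed_desc, dtype=float))
                     for d, _ in other]
            best = int(np.argmin(dists))
            if dists[best] < sim_threshold:
                descs.append(other[best][0])
                geos.append(other[best][1])
        # Keep only structures that appear with some regularity across subjects.
        if len(descs) >= min_subjects:
            parts.append(Part(descs, geos, n_subjects))
    return parts

Fitting the model to a new image would then amount to scoring each candidate feature against each part via log_likelihood, weighted by the part's occurrence frequency; again, this is a sketch of the general idea rather than the procedure used in the paper.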



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Matthew Toews¹
  • D. Louis Collins²
  • Tal Arbel¹
  1. Centre for Intelligent Machines, McGill University, Montréal, Canada
  2. McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Canada
