International Journal of Computer Vision, Volume 56, Issue 1, pp. 7–16

Weakly Supervised Learning of Visual Models and Its Application to Content-Based Retrieval

  • Cordelia Schmid, INRIA Rhône-Alpes



This paper presents a method for weakly supervised learning of visual models. The visual model is based on a two-layer image description: a set of “generic” descriptors and their distribution over neighbourhoods. “Generic” descriptors represent sets of similar rotation-invariant feature vectors. Statistical spatial constraints describe the neighbourhood structure and make the description more discriminant. The joint probability of the frequencies of “generic” descriptors over a neighbourhood is multi-modal and is represented by a set of “neighbourhood-frequency” clusters. The image description is rotationally invariant, robust to model deformations, and efficiently characterizes “appearance-based” visual structure. Model features are selected as the distinctive clusters: those common in the positive examples and rare in the negative ones. Visual models are retrieved and localized using a probabilistic score. Experimental results for “textured” animals and faces show very good performance for both retrieval and localization.
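The two-layer description can be illustrated with a minimal sketch (not the authors' code). Under the assumption that local feature vectors and their image positions are given, layer one quantizes the features into “generic” descriptor labels with k-means, and layer two computes, for each point, the frequency histogram of those labels over a fixed-radius neighbourhood; the histograms are clustered again to yield “neighbourhood-frequency” clusters. All data, sizes, and the radius below are hypothetical.

```python
# Illustrative sketch of a two-layer image description.
# Hypothetical data: 200 image points with 8-D rotation-invariant features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(200, 2))   # (x, y) locations
features = rng.normal(size=(200, 8))             # local feature vectors

# Layer 1: "generic" descriptors = k-means labels over feature vectors.
n_generic = 10
labels = KMeans(n_clusters=n_generic, n_init=10,
                random_state=0).fit_predict(features)

# Layer 2: per-point frequency of each generic label within a neighbourhood.
radius = 20.0  # hypothetical neighbourhood radius in pixels
hists = np.zeros((len(positions), n_generic))
for i, p in enumerate(positions):
    nbrs = np.linalg.norm(positions - p, axis=1) <= radius
    counts = np.bincount(labels[nbrs], minlength=n_generic)
    hists[i] = counts / counts.sum()

# The multi-modal joint distribution of label frequencies is summarized
# by clustering the histograms into "neighbourhood-frequency" clusters.
nf_clusters = KMeans(n_clusters=5, n_init=10,
                     random_state=0).fit_predict(hists)
```

In the paper's setting, distinctive neighbourhood-frequency clusters (frequent in positive, rare in negative examples) would then be selected as model features and scored probabilistically; that selection step is not shown here.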

Keywords: visual model, two-layer image description, weakly supervised learning