Machine Learning, Volume 89, Issue 1, pp 87–122

Sequential approaches for learning datum-wise sparse representations

  • Gabriel Dulac-Arnold (UPMC, LIP6, Université Pierre et Marie Curie) — corresponding author
  • Ludovic Denoyer (UPMC, LIP6, Université Pierre et Marie Curie)
  • Philippe Preux (LIFL (UMR CNRS) & INRIA Lille Nord-Europe, Université de Lille)
  • Patrick Gallinari (UPMC, LIP6, Université Pierre et Marie Curie)


In supervised classification, data representation is usually considered at the dataset level: one looks for the "best" representation of the data, assumed to be the same for all points in the data space. We propose a different approach in which the representation used for classification is tailored to each datum. One immediate goal is to obtain sparse datum-wise representations: our approach learns to build a representation specific to each datum that contains only a small subset of the features, thus allowing classification to be fast and efficient. This representation is obtained via a sequential decision process that chooses, one step at a time, which features to acquire before classifying a particular point; this process is learned with algorithms based on reinforcement learning.
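The sequential decision process described above can be viewed as an episodic MDP over a single datum: at each step the policy either acquires one more feature (paying a sparsity penalty) or stops and predicts a class. The following is a minimal sketch of that loop; all names (`episode`, `toy_policy`, `sparsity_penalty`) and the specific reward values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Sketch (assumed names/rewards): datum-wise classification as an MDP.
# State:   the values observed so far plus a mask of acquired features.
# Actions: acquire one more feature, or stop and predict a class.
# Reward:  -sparsity_penalty per acquired feature, +1 for a correct label.

def episode(x, y, policy, n_features, n_classes, sparsity_penalty=0.1):
    """Run one sequential-acquisition episode on a single datum x with label y."""
    acquired = np.zeros(n_features, dtype=bool)
    observed = np.zeros(n_features)
    reward = 0.0
    while True:
        action = policy(observed, acquired)
        if action < n_features:            # action: acquire feature `action`
            acquired[action] = True
            observed[action] = x[action]
            reward -= sparsity_penalty     # pay for each acquired feature
        else:                              # action: predict class (action - n_features)
            label = action - n_features
            reward += 1.0 if label == y else 0.0
            return label, reward, int(acquired.sum())

# Toy hand-written policy: acquire feature 0, then predict by its sign.
def toy_policy(observed, acquired):
    if not acquired[0]:
        return 0                           # acquire feature 0
    n_features = len(acquired)
    return n_features + (0 if observed[0] < 0 else 1)

label, reward, n_used = episode(np.array([-2.0, 5.0]), 0, toy_policy,
                                n_features=2, n_classes=2)
# One feature acquired, correct class 0: reward = 1.0 - 0.1 = 0.9
```

In the paper's setting the hand-written `toy_policy` would be replaced by a policy learned with reinforcement learning, so that the trade-off between accuracy and the number of acquired features is optimized per datum.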

The proposed method performs well on an ensemble of medium-sized sparse classification problems. It offers an alternative to global sparsity approaches and is a natural framework for sequential classification problems. The method extends easily to a whole family of sparsity-related problems that would otherwise require developing specific solutions. This is the case in particular for cost-sensitive and limited-budget classification, where feature acquisition is costly and often performed sequentially. Finally, our approach can handle the non-differentiable loss functions and combinatorial optimization encountered in more complex feature selection problems.
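In the cost-sensitive variant mentioned above, the uniform sparsity penalty is naturally replaced by a per-feature acquisition cost. A small sketch of that reward, with hypothetical cost values (not taken from the paper):

```python
import numpy as np

# Hedged sketch: cost-sensitive reward, where each feature carries its own
# acquisition cost and the episode reward subtracts the total cost of the
# features acquired before prediction. Costs below are made up for illustration.
feature_costs = np.array([0.05, 0.30, 0.10])  # hypothetical per-feature costs
acquired = np.array([True, False, True])      # features the policy chose to buy
correct = True                                # the final prediction was right

# Reward = classification reward minus the summed cost of acquired features.
reward = (1.0 if correct else 0.0) - feature_costs[acquired].sum()
# 1.0 - (0.05 + 0.10) = 0.85
```

Because only the reward changes, the same sequential-acquisition machinery covers both the uniform-sparsity and the budgeted setting.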


Keywords: Classification · Feature selection · Sparsity · Sequential models · Reinforcement learning