Learning Theory

Volume 3559 of the series Lecture Notes in Computer Science pp 366-381

Leaving the Span

  • Manfred K. Warmuth, Computer Science Department, University of California
  • S. V. N. Vishwanathan, Machine Learning Program, National ICT Australia



We discuss a simple sparse linear problem that is hard to learn with any algorithm that uses a linear combination of the training instances as its weight vector. The hardness holds even if we allow the learner to embed the instances into any higher dimensional feature space (and use a kernel function to define the dot product between the embedded instances). These algorithms are inherently limited by the fact that after seeing k instances only a weight space of dimension k can be spanned.
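The span limitation described above can be illustrated with a small sketch (not the paper's construction): a plain perceptron kept in dual form, where the weight vector is by definition a linear combination of the instances seen so far, so after k instances it lies in a subspace of dimension at most k.

```python
import numpy as np

# Illustrative example (not from the paper): a perceptron kept in dual
# form. The weight vector w is always alpha @ X, i.e. a linear
# combination of the training instances, so its rank is capped by the
# number of instances seen.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 10))          # 5 instances in R^10
y = np.sign(rng.standard_normal(5))

alpha = np.zeros(len(X))                  # dual coefficients
for t, (x, label) in enumerate(zip(X, y)):
    w = alpha @ X                         # w lies in span(X[:t+1])
    if label * (w @ x) <= 0:              # mistake -> perceptron update
        alpha[t] += label

w = alpha @ X
# The spanned weight space has dimension at most k = 5 here:
assert np.linalg.matrix_rank(X) <= len(X)
```

Replacing the dot products with a kernel changes the feature space but not this structural fact: the weight vector remains a combination of (embedded) instances.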

Our hardness result is surprising because the same problem can be efficiently learned using the exponentiated gradient (EG) algorithm: now the component-wise logarithms of the weights are essentially a linear combination of the training instances. This algorithm enforces additional constraints on the weights (all must be non-negative and sum to one), and in some cases these constraints alone force the rank of the weight space to grow as fast as 2^k.
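The EG update for linear regression can be sketched as follows; the learning rate eta and the toy target below are assumptions for illustration, not values from the paper. Each weight is multiplied by an exponential of the (scaled) gradient component and then renormalized, so the weights stay on the probability simplex and their logarithms move linearly in the instances.

```python
import numpy as np

# Minimal sketch of the exponentiated gradient (EG) update for the
# squared loss (y_hat - y)^2. Parameters (eta, the target) are
# illustrative assumptions.
def eg_update(w, x, y, eta=0.5):
    y_hat = w @ x
    factors = np.exp(-2 * eta * (y_hat - y) * x)   # component-wise factors
    w_new = w * factors
    return w_new / w_new.sum()                     # renormalize to the simplex

w = np.full(4, 0.25)                    # uniform start on the simplex
x = np.array([1.0, 0.0, 0.0, 0.0])      # sparse target: first component
for _ in range(50):
    w = eg_update(w, x, y=1.0)

# The constraints are maintained throughout:
assert w.min() >= 0 and abs(w.sum() - 1) < 1e-9
```

Note that the multiplicative form of the update is exactly what lets the weights leave the span of the instances: the log-weights, not the weights themselves, are a linear combination of the inputs.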