Scaling Large Learning Problems with Hard Parallel Mixtures

  • Ronan Collobert
  • Yoshua Bengio
  • Samy Bengio
Conference paper

DOI: 10.1007/3-540-45665-1_2

Part of the Lecture Notes in Computer Science book series (LNCS, volume 2388)
Cite this paper as:
Collobert R., Bengio Y., Bengio S. (2002) Scaling Large Learning Problems with Hard Parallel Mixtures. In: Lee SW., Verri A. (eds) Pattern Recognition with Support Vector Machines. Lecture Notes in Computer Science, vol 2388. Springer, Berlin, Heidelberg

Abstract

A challenge for statistical learning is to deal with large data sets, e.g. in data mining. Popular learning algorithms such as Support Vector Machines have training time at least quadratic in the number of examples, which makes them impractical for problems with a million examples. We propose a “hard parallelizable mixture” methodology which yields significantly reduced training time through modularization and parallelization: the training data is iteratively partitioned by a “gater” model in such a way that it becomes easy to learn an “expert” model separately in each region of the partition. A probabilistic extension, in which the gater is represented by a set of generative models, allows all pieces of the model to be trained locally. For SVM experts, training time appears empirically to grow only linearly with the number of examples, while generalization performance can be enhanced. For the probabilistic version of the algorithm, the iterative procedure provably decreases a cost function that is an upper bound on the negative log-likelihood.
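The iterative loop described above (partition the data, train one expert per region, re-partition, repeat) can be illustrated with a short sketch. The Python code below is a hypothetical illustration, not the authors' implementation: ridge-regularized linear least-squares models stand in for the paper's SVM experts, a random initial partition and reassignment by per-example squared error stand in for the learned gater, and all names (`fit_expert`, `hard_mixture_fit`, `n_experts`, `n_iters`) are assumptions made for the example.

```python
import numpy as np

def fit_expert(X, y):
    # Ridge-regularized least squares on one region of the partition
    # (a stand-in for training an SVM expert on that region).
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ y)

def hard_mixture_fit(X, y, n_experts=4, n_iters=5, seed=0):
    rng = np.random.default_rng(seed)
    # Start from a random hard partition; the paper instead uses a gater model.
    assign = rng.integers(0, n_experts, size=len(X))
    for _ in range(n_iters):
        # Each expert is trained only on the examples currently assigned to it,
        # so per-expert training stays cheap and experts can be trained in parallel.
        experts = [fit_expert(X[assign == k], y[assign == k])
                   for k in range(n_experts)]
        # Re-partition: send every example to the expert that models it best
        # (a simple proxy for the gater's reassignment step).
        errors = np.stack([(X @ w - y) ** 2 for w in experts], axis=1)
        assign = errors.argmin(axis=1)
    return experts, assign

# Toy usage: piecewise-linear data that a single linear model fits poorly.
X = np.random.default_rng(1).normal(size=(1000, 3))
y = np.where(X[:, 0] > 0, X @ [1.0, 2.0, 0.0], X @ [-3.0, 0.0, 1.0])
experts, assign = hard_mixture_fit(X, y)
```

Because each expert only ever sees its own region, the cost of the inner training step is governed by the region size rather than the full data set size, which is the source of the reported speed-up.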


Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Ronan Collobert (1)
  • Yoshua Bengio (1)
  • Samy Bengio (2)
  1. DIRO, Université de Montréal, Montréal, Canada
  2. IDIAP, Martigny, Switzerland
