Scaling Large Learning Problems with Hard Parallel Mixtures

  • Ronan Collobert
  • Yoshua Bengio
  • Samy Bengio
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2388)

Abstract

A challenge for statistical learning is to deal with large data sets, e.g. in data mining. Popular learning algorithms such as Support Vector Machines have training time that is at least quadratic in the number of examples, making them impractical for problems with a million examples. We propose a “hard parallelizable mixture” methodology which yields significantly reduced training time through modularization and parallelization: the training data is iteratively partitioned by a “gater” model in such a way that it becomes easy to learn an “expert” model separately in each region of the partition. A probabilistic extension and the use of a set of generative models allow the gater to be represented so that all pieces of the model are trained locally. For SVM experts, training time is empirically found to grow approximately linearly with the number of examples, while generalization performance can be improved. For the probabilistic version of the algorithm, the iterative training procedure provably decreases a cost function that is an upper bound on the negative log-likelihood.
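As a rough illustration of the training loop described above, the Python sketch below alternates between (i) fitting one expert per region of a hard partition, (ii) fitting a gater to reproduce the current assignment, and (iii) reassigning each example to the expert that handles it best. The choice of SVM experts with probability outputs, an MLP gater, the number of experts, and the reassignment criterion are assumptions made for this example, not the paper's exact configuration.

import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def train_hard_mixture(X, y, n_experts=4, n_iters=5, seed=0):
    # Hypothetical hard-mixture trainer; X is assumed to be a 2D NumPy array.
    rng = np.random.default_rng(seed)
    n = len(X)
    assign = rng.integers(0, n_experts, size=n)   # random initial hard partition
    experts = [SVC(kernel="rbf", probability=True) for _ in range(n_experts)]
    gater = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500)

    for _ in range(n_iters):
        # 1) Train each expert only on its current subset
        #    (this per-region training is the parallelizable part).
        for k in range(n_experts):
            idx = np.where(assign == k)[0]
            if len(idx) > 0 and len(np.unique(y[idx])) > 1:
                experts[k].fit(X[idx], y[idx])
        # 2) Train the gater to reproduce the current hard assignment.
        gater.fit(X, assign)
        # 3) Reassign each example to the expert that scores its true label highest.
        scores = np.full((n, n_experts), -np.inf)
        for k, ex in enumerate(experts):
            if hasattr(ex, "classes_"):           # skip experts that were never fit
                proba = ex.predict_proba(X)
                for j, c in enumerate(ex.classes_):
                    mask = (y == c)
                    scores[mask, k] = proba[mask, j]
        assign = scores.argmax(axis=1)
    return gater, experts

def predict(gater, experts, X):
    # At test time the gater routes each example to a single expert.
    routes = gater.predict(X)
    return np.array([experts[k].predict(x[None, :])[0] for x, k in zip(X, routes)])

In this sketch each expert only ever sees a fraction of the data, which is what keeps per-expert SVM training cheap and embarrassingly parallel; the gater is the only component fit on the full training set.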

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Ronan Collobert (1)
  • Yoshua Bengio (1)
  • Samy Bengio (2)
  1. DIRO, Université de Montréal, Montréal, Canada
  2. IDIAP, Martigny, Switzerland
