Machine Learning, Volume 85, Issue 1, pp 149–173

Boosted multi-task learning

  • Olivier Chapelle
  • Pannagadatta Shivaswamy
  • Srinivas Vadrevu
  • Kilian Weinberger
  • Ya Zhang
  • Belle Tseng

DOI: 10.1007/s10994-010-5231-6

Cite this article as:
Chapelle, O., Shivaswamy, P., Vadrevu, S. et al. Mach Learn (2011) 85: 149. doi:10.1007/s10994-010-5231-6

Abstract

In this paper we propose a novel algorithm for multi-task learning with boosted decision trees. We learn several different learning tasks with a joint model, explicitly addressing their commonalities through shared parameters and their differences with task-specific ones. This enables implicit data sharing and regularization. Our algorithm is derived using the relationship between ℓ1-regularization and boosting. We evaluate our learning method on web-search ranking data sets from several countries. Here, multi-task learning is particularly helpful as data sets from different countries vary widely in size because of the cost of editorial judgments. Further, the proposed method obtains state-of-the-art results on a publicly available multi-task dataset. Our experiments validate that learning various tasks jointly can lead to significant improvements in performance with surprising reliability.
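To make the idea of shared versus task-specific parameters concrete, the following is a minimal sketch of multi-task gradient boosting with regression stumps. It is an illustration of the general technique only, not the authors' algorithm: at each round a stump is fit either on the pooled residuals of all tasks (a shared component) or on one task's residuals alone (a task-specific component), and the option with the lower total squared error is kept. All function names here are hypothetical.

```python
def fit_stump(xs, rs):
    """Return (sse, threshold, left_mean, right_mean) for the best
    one-split regression stump on residuals rs."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, rs) if x <= t]
        right = [r for x, r in zip(xs, rs) if x > t]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        sse = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    return best

def stump_predict(stump, x):
    _, t, lv, rv = stump
    return lv if x <= t else rv

def boost_multitask(tasks, rounds=20, lr=0.5):
    """tasks: {name: (xs, ys)}. Returns {name: [stumps]}; each task's
    model accumulates both shared and its own task-specific stumps."""
    keys = list(tasks)
    models = {k: [] for k in keys}
    resid = {k: list(tasks[k][1]) for k in keys}
    for _ in range(rounds):
        # Shared candidate: one stump fit on the pooled residuals of all tasks.
        pooled_x = [x for k in keys for x in tasks[k][0]]
        pooled_r = [r for k in keys for r in resid[k]]
        shared = fit_stump(pooled_x, pooled_r)

        # Task-specific candidates: a stump fit on one task alone; total error
        # is that stump's SSE plus the other tasks' unchanged residual SSE.
        def total_if_specific(k):
            stump = fit_stump(tasks[k][0], resid[k])
            others = sum(r * r for j in keys if j != k for r in resid[j])
            return stump[0] + others, stump, k

        best_spec = min((total_if_specific(k) for k in keys),
                        key=lambda p: p[0])
        if shared[0] <= best_spec[0]:
            # Shared stump wins: append it to every task's model.
            for k in keys:
                models[k].append(shared)
                resid[k] = [r - lr * stump_predict(shared, x)
                            for x, r in zip(tasks[k][0], resid[k])]
        else:
            _, stump, k = best_spec
            models[k].append(stump)
            resid[k] = [r - lr * stump_predict(stump, x)
                        for x, r in zip(tasks[k][0], resid[k])]
    return models

def predict(model, x, lr=0.5):
    """Shrunken sum of all stumps (shared and specific) in a task's model."""
    return sum(lr * stump_predict(s, x) for s in model)
```

When the tasks are closely related, most rounds select the shared stump, so small tasks effectively borrow training signal from large ones; task-specific stumps absorb the remaining per-task differences.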

Keywords

Multi-task learning · Boosting · Decision trees · Web search · Ranking

Copyright information

© The Author(s) 2010

Authors and Affiliations

  • Olivier Chapelle (1)
  • Pannagadatta Shivaswamy (2)
  • Srinivas Vadrevu (1)
  • Kilian Weinberger (3)
  • Ya Zhang (4)
  • Belle Tseng (1)
  1. Yahoo! Labs, Sunnyvale, USA
  2. Department of Computer Science, Cornell University, Ithaca, USA
  3. Washington University, Saint Louis, USA
  4. Shanghai Jiao Tong University, Shanghai, China