Learning from Relevant Tasks Only

  • Samuel Kaski
  • Jaakko Peltonen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4701)

Abstract

We introduce a problem called relevant subtask learning, a variant of multi-task learning. The goal is to build a classifier for a task-of-interest that has too little data of its own. Data are also available from other tasks, but only some of those tasks are relevant, in the sense that they contain samples classified in the same way as in the task-of-interest. The problem is how to use this "background data" to improve the classifier for the task-of-interest. We show how to solve the problem for logistic regression classifiers, and demonstrate that the solution outperforms a comparable multi-task learning model. The key is to assume that the data of every task is a mixture of relevant and irrelevant samples, and to model the irrelevant part with a model flexible enough that it does not distort the model of the relevant data.
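To make the modelling idea concrete, below is a minimal sketch of one way to realize it, not the authors' exact estimator: each background task's data is treated as a mixture of samples relevant to the task-of-interest (modelled by a shared logistic regression) and irrelevant samples (modelled by a flexible per-task logistic regression), with the mixture fitted by EM. The function name, the EM scheme, and the use of scikit-learn are illustrative assumptions; labels are assumed to be 0/1 with both classes present in every task.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_relevant_subtask(X_task, y_task, background, n_iter=50):
        # X_task, y_task: the (small) task-of-interest data, labels in {0, 1}.
        # background: list of (X_j, y_j) pairs, one per background task,
        # with both classes occurring in each task.
        # Returns the shared "relevant" classifier, the per-task "irrelevant"
        # classifiers, and pi_j, the estimated fraction of relevant samples
        # in background task j.
        shared = LogisticRegression().fit(X_task, y_task)
        irrelevant = [LogisticRegression().fit(Xb, yb) for Xb, yb in background]
        pis = [0.5] * len(background)

        for _ in range(n_iter):
            # E-step: posterior probability that each background sample was
            # generated by the shared (relevant) model rather than the
            # task-specific (irrelevant) one.
            resps = []
            for j, (Xb, yb) in enumerate(background):
                idx = np.arange(len(yb))
                p_rel = shared.predict_proba(Xb)[idx, yb]
                p_irr = irrelevant[j].predict_proba(Xb)[idx, yb]
                r = pis[j] * p_rel / (pis[j] * p_rel + (1.0 - pis[j]) * p_irr)
                resps.append(r)

            # M-step: refit the shared model on the task-of-interest data plus
            # responsibility-weighted background data; refit each irrelevant
            # model on its own task with the complementary weights.
            X_all = np.vstack([X_task] + [Xb for Xb, _ in background])
            y_all = np.concatenate([y_task] + [yb for _, yb in background])
            w_all = np.concatenate([np.ones(len(y_task))] + resps)
            shared.fit(X_all, y_all, sample_weight=w_all)
            for j, (Xb, yb) in enumerate(background):
                w_irr = np.clip(1.0 - resps[j], 1e-3, None)
                irrelevant[j].fit(Xb, yb, sample_weight=w_irr)
                pis[j] = float(resps[j].mean())

        return shared, irrelevant, pis

Given the fitted models, predictions for the task-of-interest come from the shared classifier alone, e.g. shared.predict(X_new), while the estimated pi_j indicate which background tasks the procedure found relevant.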

Keywords

multi-task learning, relevant subtask learning


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Samuel Kaski (1)
  • Jaakko Peltonen (1)

  1. Laboratory of Computer and Information Science, Helsinki University of Technology, P.O. Box 5400, FI-02015 TKK, Finland