
Multi-Task Learning Using Shared and Task Specific Information

  • Conference paper
Neural Information Processing (ICONIP 2012)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7665)


Abstract

Multi-task learning solves multiple related learning problems simultaneously by sharing some common structure to improve the generalization performance of each task. We propose a novel approach to multi-task learning which captures task similarity through a shared basis vector set. The variability across tasks is captured through task-specific basis vector sets. We use a sparse support vector machine (SVM) algorithm to select the basis vector sets for the tasks. The approach results in a sparse model in which prediction is done using very few examples. The effectiveness of our approach is demonstrated through experiments on synthetic and real multi-task datasets.
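A rough picture of the per-task prediction structure described in the abstract is sketched below. This is a minimal illustration, not the paper's implementation: the RBF kernel, the coefficient values, and all names (rbf_kernel, predict, shared_basis, task_basis) are assumptions made for the sketch, and the paper's actual basis-vector selection via a sparse SVM is not reproduced here.

```python
# Minimal sketch (assumed, not the paper's code): a per-task score that sums
# contributions from a shared basis vector set and a task-specific one.
import numpy as np


def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel between a point x and a basis vector z (assumed kernel choice)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))


def predict(x, shared_basis, shared_coef, task_basis, task_coef, bias=0.0):
    """Decision value for one task.

    shared_basis, task_basis: small sets of basis vectors (in the paper these
    would be the few examples selected by the sparse SVM procedure).
    shared_coef, task_coef: their expansion coefficients; bias: task offset.
    """
    shared_part = sum(a * rbf_kernel(x, z) for a, z in zip(shared_coef, shared_basis))
    task_part = sum(b * rbf_kernel(x, z) for b, z in zip(task_coef, task_basis))
    return shared_part + task_part + bias


if __name__ == "__main__":
    # Toy usage: 3-dimensional inputs, two shared and one task-specific basis vector.
    rng = np.random.default_rng(0)
    x_new = rng.normal(size=3)
    score = predict(
        x_new,
        shared_basis=rng.normal(size=(2, 3)), shared_coef=[0.5, -0.3],
        task_basis=rng.normal(size=(1, 3)), task_coef=[0.8],
    )
    print("predicted label:", int(np.sign(score)))
```

Because the score depends only on the retained basis vectors, prediction cost stays low, which is the sparsity the abstract emphasizes.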




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Srijith, P.K., Shevade, S. (2012). Multi-Task Learning Using Shared and Task Specific Information. In: Huang, T., Zeng, Z., Li, C., Leung, C.S. (eds) Neural Information Processing. ICONIP 2012. Lecture Notes in Computer Science, vol 7665. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34487-9_16

  • DOI: https://doi.org/10.1007/978-3-642-34487-9_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34486-2

  • Online ISBN: 978-3-642-34487-9

  • eBook Packages: Computer Science, Computer Science (R0)
