Kernel Online Multi-task Learning

  • Conference paper
Computational Intelligence, Cyber Security and Computational Models

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 412)


Abstract

Many real-world learning problems can be divided into a number of dependent subtasks. The conventional machine learning strategy treats each learning problem as a single unit and does not exploit information from the tasks closely related to it. The Multi-Task Learning (MTL) paradigm addresses this limitation: the model learns a given problem jointly with its associated tasks, which improves its generalization capacity. In this paper we propose a mathematical framework for multi-task learning based on a kernel online learning approach. We applied the proposed algorithm to a synthetic dataset as well as a real-world dataset, and the results were found to be promising.
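The full paper is behind the paywall below, so the exact algorithm is not reproduced here. As a rough, hypothetical illustration of the general idea the abstract describes — online kernel learning shared across related tasks — the sketch below combines a perceptron-style online update with an Evgeniou–Pontil-style multi-task kernel, in which all tasks share a common component and each task keeps a task-specific one. The class name, the `rho` coupling parameter, and the update rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian base kernel on the input features.
    return np.exp(-gamma * np.sum((x - z) ** 2))

def mt_kernel(x, t, z, s, n_tasks, rho=1.0, gamma=1.0):
    # Multi-task kernel (Evgeniou-Pontil style, an assumption here):
    # every pair of tasks shares a 1/T component, and examples from
    # the same task get an extra rho-weighted contribution.
    return (1.0 / n_tasks + rho * (t == s)) * rbf(x, z, gamma)

class KernelOnlineMTL:
    """Illustrative online kernel perceptron over a multi-task kernel."""

    def __init__(self, n_tasks, rho=1.0, gamma=1.0):
        self.n_tasks, self.rho, self.gamma = n_tasks, rho, gamma
        self.sv = []  # stored (example, task, coefficient) triples

    def score(self, x, t):
        # Kernel expansion over all examples stored so far.
        return sum(a * mt_kernel(x, t, z, s, self.n_tasks, self.rho, self.gamma)
                   for z, s, a in self.sv)

    def fit_one(self, x, t, y):
        # Perceptron-style online step: store the example only on a mistake.
        if y * self.score(x, t) <= 0:
            self.sv.append((x, t, float(y)))

    def predict(self, x, t):
        return 1 if self.score(x, t) >= 0 else -1
```

Because every stored example contributes to predictions for *all* tasks through the shared 1/T component, information learned on one task transfers to its related tasks, which is the generalization benefit the abstract refers to.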



Author information

Correspondence to S. Sumitra.


Copyright information

© 2016 Springer Science+Business Media Singapore

About this paper

Cite this paper

Sumitra, S., Aravindh, A. (2016). Kernel Online Multi-task Learning. In: Senthilkumar, M., Ramasamy, V., Sheen, S., Veeramani, C., Bonato, A., Batten, L. (eds) Computational Intelligence, Cyber Security and Computational Models. Advances in Intelligent Systems and Computing, vol 412. Springer, Singapore. https://doi.org/10.1007/978-981-10-0251-9_7

  • DOI: https://doi.org/10.1007/978-981-10-0251-9_7

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-0250-2

  • Online ISBN: 978-981-10-0251-9

  • eBook Packages: Engineering (R0)
