Abstract
Many real-world learning problems can be divided into a number of dependent subtasks. Conventional machine learning treats each learning problem as an isolated unit and does not exploit information from the tasks closely related to it. Multi-task learning (MTL) addresses this shortcoming: the model learns a target problem jointly with its associated tasks, and such joint modeling improves the generalization capacity of the model. In this paper we propose a mathematical framework for multi-task learning based on a kernel online learning approach. We applied the proposed algorithm to a synthetic dataset as well as a real-world dataset, and the results were found to be promising.
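The paper's exact formulation is not reproduced on this page, so as a rough illustration only, the sketch below shows one standard way to combine kernel online learning with multi-task coupling: a kernel perceptron run over (input, task) pairs, using a multi-task kernel that mixes a shared component (weighted by a hypothetical coupling parameter `mu`) with a task-specific component, in the spirit of regularized multi-task kernels. The class name `KernelMTPerceptron` and all parameter choices are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian RBF base kernel on the inputs.
    return np.exp(-gamma * np.sum((x - z) ** 2))

def mt_kernel(x, s, z, t, mu=1.0, gamma=1.0):
    # Illustrative multi-task kernel: a shared component (weight 1/mu)
    # plus a task-specific component active only when tasks s and t match.
    return (1.0 / mu + float(s == t)) * rbf(x, z, gamma)

class KernelMTPerceptron:
    """Sketch of an online kernel perceptron over (input, task) pairs."""

    def __init__(self, mu=1.0, gamma=1.0):
        self.mu, self.gamma = mu, gamma
        self.support = []  # (x, task, label) triples stored on mistakes

    def predict(self, x, task):
        score = sum(y * mt_kernel(x, task, z, t, self.mu, self.gamma)
                    for (z, t, y) in self.support)
        return 1 if score >= 0 else -1

    def fit_one(self, x, task, y):
        # Perceptron rule: add the example to the expansion only on a mistake,
        # so every task's prediction benefits from every stored example.
        if self.predict(x, task) != y:
            self.support.append((x, task, y))
```

Because the multi-task kernel couples all tasks through the shared component, a mistake made while serving one task also shifts the predictions for the others, which is the mechanism by which related tasks help each other generalize.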
© 2016 Springer Science+Business Media Singapore
Sumitra, S., Aravindh, A. (2016). Kernel Online Multi-task Learning. In: Senthilkumar, M., Ramasamy, V., Sheen, S., Veeramani, C., Bonato, A., Batten, L. (eds) Computational Intelligence, Cyber Security and Computational Models. Advances in Intelligent Systems and Computing, vol 412. Springer, Singapore. https://doi.org/10.1007/978-981-10-0251-9_7
Print ISBN: 978-981-10-0250-2
Online ISBN: 978-981-10-0251-9