Collaborating Differently on Different Topics: A Multi-Relational Approach to Multi-Task Learning
Multi-task learning offers a way to benefit from the synergy among multiple related prediction tasks by modeling them jointly. Current multi-task techniques assume that tasks share the same relationship uniformly across all features. This assumption seldom holds in practice, as tasks may be related along some features but not others. To address this problem, we propose a new multi-task learning model that learns separate task relationships along different features. This added flexibility gives our model finer, feature-level control over the joint modeling of tasks. We formulate the model as an optimization problem and provide an efficient, iterative solution. We illustrate the behavior of the proposed model on a synthetic dataset in which we induce varied feature-dependent task relationships: positive, negative, and none. Using four real datasets, we evaluate the effectiveness of the proposed model on a range of multi-task regression and classification problems and demonstrate its superiority over other state-of-the-art multi-task learning models.
Keywords: Root Mean Square Error, Acute Myocardial Infarction, Feature Subset, Joint Modeling, Task Parameter