Machine Learning, Volume 28, Issue 1, pp. 41–75

Multitask Learning

  • Rich Caruana

Abstract

Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help the other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. We demonstrate multitask learning in three domains, explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
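The shared-representation idea can be made concrete with a small example. Below is a minimal NumPy sketch, not the paper's code: one backprop net whose hidden layer is shared by two related binary tasks, so each task's error signal shapes the common representation. The toy data, network sizes, and learning rate are illustrative assumptions.

    # Minimal MTL backprop sketch: a shared hidden layer, one output per task.
    # Illustrative assumption, not the paper's original code or data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: two related binary tasks derived from the same inputs.
    X = rng.normal(size=(200, 10))
    w_true = rng.normal(size=10)
    Y = np.stack([(X @ w_true > 0).astype(float),
                  (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)],
                 axis=1)

    n_in, n_hidden, n_tasks = 10, 16, 2
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))    # shared weights
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_tasks)) # one output unit per task

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for epoch in range(500):
        H = np.tanh(X @ W1)   # shared hidden representation
        P = sigmoid(H @ W2)   # per-task predictions
        # Summed cross-entropy gradients: every task's error signal
        # flows back into the same shared hidden layer.
        dZ2 = (P - Y) / len(X)
        dW2 = H.T @ dZ2
        dH = dZ2 @ W2.T
        dW1 = X.T @ (dH * (1 - H ** 2))  # tanh derivative
        W1 -= lr * dW1
        W2 -= lr * dW2

    acc = ((sigmoid(np.tanh(X @ W1) @ W2) > 0.5) == Y).mean(axis=0)
    print("per-task training accuracy:", acc)

A single-task net would discard the second task's training signal; here that signal acts as an inductive bias on the shared hidden layer.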

Keywords

inductive transfer · parallel transfer · multitask learning · backpropagation · k-nearest neighbor · kernel regression · supervised learning · generalization
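The abstract also describes MTL for case-based methods such as k-nearest neighbor and kernel regression. A minimal sketch of that idea, under stated assumptions: the tasks share a weighted distance metric, and the feature weights are chosen to minimize leave-one-out kernel-regression error summed over all tasks rather than the main task alone. The Gaussian kernel, the random search, and the toy data below are illustrative assumptions, not the paper's exact algorithm.

    # Hedged sketch of case-based MTL: learn feature weights for a distance
    # metric shared by all tasks, scored by leave-one-out error over all tasks.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 6))
    # Two related targets that depend mostly on the first three features.
    Y = np.stack([X[:, :3].sum(axis=1),
                  X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=100)], axis=1)

    def loo_error(w):
        """Leave-one-out kernel-regression error under feature weights w."""
        D2 = ((X[:, None, :] - X[None, :, :]) ** 2 * w).sum(axis=-1)
        K = np.exp(-D2)
        np.fill_diagonal(K, 0.0)  # leave each case out of its own prediction
        pred = (K @ Y) / K.sum(axis=1, keepdims=True)
        return ((pred - Y) ** 2).mean()  # averaged over BOTH tasks

    best_w, best_e = np.ones(6), loo_error(np.ones(6))
    for _ in range(300):  # crude random search over candidate metrics
        w = np.abs(best_w + 0.2 * rng.normal(size=6))
        e = loo_error(w)
        if e < best_e:
            best_w, best_e = w, e
    print("learned feature weights:", np.round(best_w, 2))

Because both tasks depend on the same relevant features, the extra task's leave-one-out error pushes the shared metric toward weighting those features heavily, which is the case-based analogue of a shared hidden representation.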

References

  1. Abu-Mostafa, Y. S. (1990). “Learning from Hints in Neural Networks,” Journal of Complexity, 6(2), pp. 192–198.
  2. Abu-Mostafa, Y. S. (1993). “Hints and the VC Dimension,” Neural Computation, 5(2).
  3. Abu-Mostafa, Y. S. (1995). “Hints,” Neural Computation, 7, pp. 639–671.
  4. Baluja, S. & Pomerleau, D. A. (1995). “Using the Representation in a Neural Network's Hidden Layer for Task-Specific Focus of Attention,” Proceedings of the 1995 International Joint Conference on Artificial Intelligence, IJCAI-95, Montreal, Canada, pp. 133–139.
  5. Baxter, J. (1994). “Learning Internal Representations,” Ph.D. Thesis, The Flinders University of South Australia.
  6. Baxter, J. (1995). “Learning Internal Representations,” Proceedings of the 8th ACM Conference on Computational Learning Theory (COLT-95), Santa Cruz, CA.
  7. Baxter, J. (1996). “A Bayesian/Information Theoretic Model of Bias Learning,” Proceedings of the 9th International Conference on Computational Learning Theory (COLT-96), Desenzano del Garda, Italy.
  8. Breiman, L. & Friedman, J. H. (1995). “Predicting Multivariate Responses in Multiple Linear Regression,” ftp://ftp.stat.berkeley.edu/pub/users/breiman/curds-whey-all.ps.Z.
  9. Caruana, R. (1993). “Multitask Learning: A Knowledge-Based Source of Inductive Bias,” Proceedings of the 10th International Conference on Machine Learning, ML-93, University of Massachusetts, Amherst, pp. 41–48.
  10. Caruana, R. (1994). “Multitask Connectionist Learning,” Proceedings of the 1993 Connectionist Models Summer School, pp. 372–379.
  11. Caruana, R. (1995). “Learning Many Related Tasks at the Same Time with Backpropagation,” Advances in Neural Information Processing Systems 7 (Proceedings of NIPS-94), pp. 656–664.
  12. Caruana, R., Baluja, S., & Mitchell, T. (1996). “Using the Future to ‘Sort Out’ the Present: Rankprop and Multitask Learning for Medical Risk Prediction,” Advances in Neural Information Processing Systems 8 (Proceedings of NIPS-95), pp. 959–965.
  13. Caruana, R. & de Sa, V. R. (1997). “Promoting Poor Features to Supervisors: Some Inputs Work Better as Outputs,” to appear in Advances in Neural Information Processing Systems 9 (Proceedings of NIPS-96).
  14. Caruana, R. (1997). “Multitask Learning,” Ph.D. Thesis, School of Computer Science, Carnegie Mellon University.
  15. Cooper, G. F. & Herskovits, E. (1992). “A Bayesian Method for the Induction of Probabilistic Networks from Data,” Machine Learning, 9, pp. 309–347.
  16. Cooper, G. F., Aliferis, C. F., Ambrosino, R., Aronis, J., Buchanan, B. G., Caruana, R., Fine, M. J., Glymour, C., Gordon, G., Hanusa, B. H., Janosky, J. E., Meek, C., Mitchell, T., Richardson, T., & Spirtes, P. (1997). “An Evaluation of Machine Learning Methods for Predicting Pneumonia Mortality,” Artificial Intelligence in Medicine, 9, pp. 107–138.
  17. Craven, M. & Shavlik, J. (1994). “Using Sampling and Queries to Extract Rules from Trained Neural Networks,” Proceedings of the 11th International Conference on Machine Learning, ML-94, Rutgers University, New Jersey, pp. 37–45.
  18. Davis, I. & Stentz, A. (1995). “Sensor Fusion for Autonomous Outdoor Navigation Using Neural Networks,” Proceedings of the IEEE Intelligent Robots and Systems Conference.
  19. Dent, L., Boticario, J., McDermott, J., Mitchell, T., & Zabowski, D. (1992). “A Personal Learning Apprentice,” Proceedings of the 1992 National Conference on Artificial Intelligence.
  20. de Sa, V. R. (1994). “Learning Classification with Unlabelled Data,” Advances in Neural Information Processing Systems 6 (Proceedings of NIPS-93), pp. 112–119.
  21. Dietterich, T. G., Hild, H., & Bakiri, G. (1990). “A Comparative Study of ID3 and Backpropagation for English Text-to-Speech Mapping,” Proceedings of the 7th International Conference on Machine Learning, pp. 24–31.
  22. Dietterich, T. G., Hild, H., & Bakiri, G. (1995). “A Comparison of ID3 and Backpropagation for English Text-to-Speech Mapping,” Machine Learning, 18(1), pp. 51–80.
  23. Dietterich, T. G. & Bakiri, G. (1995). “Solving Multiclass Learning Problems via Error-Correcting Output Codes,” Journal of Artificial Intelligence Research, 2, pp. 263–286.
  24. Fine, M. J., Singer, D., Hanusa, B. H., Lave, J., & Kapoor, W. (1993). “Validation of a Pneumonia Prognostic Index Using the MedisGroups Comparative Hospital Database,” American Journal of Medicine.
  25. Fisher, D. H. (1987). “Conceptual Clustering, Learning from Examples, and Inference,” Proceedings of the 4th International Workshop on Machine Learning.
  26. Ghahramani, Z. & Jordan, M. I. (1994). “Supervised Learning from Incomplete Data Using an EM Approach,” Advances in Neural Information Processing Systems 6 (Proceedings of NIPS-93), pp. 120–127.
  27. Ghahramani, Z. & Jordan, M. I. (1997). “Mixture Models for Learning from Incomplete Data,” Computational Learning Theory and Natural Learning Systems, Vol. IV, R. Greiner, T. Petsche, & S. J. Hanson (eds.), Cambridge, MA: MIT Press, pp. 67–85.
  28. Ghosn, J. & Bengio, Y. (1997). “Multi-Task Learning for Stock Selection,” to appear in Advances in Neural Information Processing Systems 9 (Proceedings of NIPS-96).
  29. Hinton, G. E. (1986). “Learning Distributed Representations of Concepts,” Proceedings of the 8th Annual Conference of the Cognitive Science Society, pp. 1–12.
  30. Holmstrom, L. & Koistinen, P. (1992). “Using Additive Noise in Back-propagation Training,” IEEE Transactions on Neural Networks, 3(1), pp. 24–38.
  31. Jordan, M. & Jacobs, R. (1994). “Hierarchical Mixtures of Experts and the EM Algorithm,” Neural Computation, 6, pp. 181–214.
  32. Koller, D. & Sahami, M. (1996). “Toward Optimal Feature Selection,” Proceedings of the 13th International Conference on Machine Learning, ICML-96, Bari, Italy, pp. 284–292.
  33. Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1989). “Backpropagation Applied to Handwritten Zip-Code Recognition,” Neural Computation, 1, pp. 541–551.
  34. Little, R. J. A. & Rubin, D. B. (1987). Statistical Analysis with Missing Data, Wiley, New York.
  35. Liu, H. & Setiono, R. (1996). “A Probabilistic Approach to Feature Selection—A Filter Solution,” Proceedings of the 13th International Conference on Machine Learning, ICML-96, Bari, Italy, pp. 319–327.
  36. Martin, J. D. (1994). “Goal-directed Clustering,” Proceedings of the 1994 AAAI Spring Symposium on Goal-directed Learning.
  37. Martin, J. D. & Billman, D. O. (1994). “Acquiring and Combining Overlapping Concepts,” Machine Learning, 16, pp. 1–37.
  38. Mitchell, T. (1980). “The Need for Biases in Learning Generalizations,” Rutgers University: CBM-TR-117.
  39. Mitchell, T., Caruana, R., Freitag, D., McDermott, J., & Zabowski, D. (1994). “Experience with a Learning Personal Assistant,” Communications of the ACM: Special Issue on Agents, 37(7), pp. 80–91.
  40. Munro, P. W. & Parmanto, B. (1997). “Competition Among Networks Improves Committee Performance,” to appear in Advances in Neural Information Processing Systems 9 (Proceedings of NIPS-96).
  41. Omohundro, S. M. (1996). “Family Discovery,” Advances in Neural Information Processing Systems 8 (Proceedings of NIPS-95), pp. 402–408.
  42. O'Sullivan, J. & Thrun, S. (1996). “Discovering Structure in Multiple Learning Tasks: The TC Algorithm,” Proceedings of the 13th International Conference on Machine Learning, ICML-96, Bari, Italy, pp. 489–497.
  43. Pomerleau, D. A. (1992). “Neural Network Perception for Mobile Robot Guidance,” Carnegie Mellon University: CMU-CS-92-115.
  44. Pratt, L. Y., Mostow, J., & Kamm, C. A. (1991). “Direct Transfer of Learned Information Among Neural Networks,” Proceedings of AAAI-91.
  45. Pratt, L. Y. (1992). “Non-literal Transfer Among Neural Network Learners,” Colorado School of Mines: MCS92-04.
  46. Quinlan, J. R. (1986). “Induction of Decision Trees,” Machine Learning, 1, pp. 81–106.
  47. Quinlan, J. R. (1992). C4.5: Programs for Machine Learning, Morgan Kaufmann Publishers.
  48. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). “Learning Representations by Back-propagating Errors,” Nature, 323, pp. 533–536.
  49. Sejnowski, T. J. & Rosenberg, C. R. (1986). “NETtalk: A Parallel Network that Learns to Read Aloud,” Johns Hopkins University: JHU/EECS-86/01.
  50. Sharkey, N. E. & Sharkey, A. J. C. (1992). “Adaptive Generalisation and the Transfer of Knowledge,” University of Exeter: R257.
  51. Sill, J. & Abu-Mostafa, Y. (1997). “Monotonicity Hints,” to appear in Advances in Neural Information Processing Systems 9 (Proceedings of NIPS-96).
  52. Simard, P., Victorri, B., Le Cun, Y., & Denker, J. (1992). “Tangent Prop—A Formalism for Specifying Selected Invariances in an Adaptive Neural Network,” Advances in Neural Information Processing Systems 4 (Proceedings of NIPS-91), pp. 895–903.
  53. Spirtes, P., Glymour, C., & Scheines, R. (1993). Causation, Prediction, and Search, Springer-Verlag, New York.
  54. Suddarth, S. C. & Kergosien, Y. L. (1990). “Rule-injection Hints as a Means of Improving Network Performance and Learning Time,” Proceedings of the 1990 EURASIP Workshop on Neural Networks, pp. 120–129.
  55. Suddarth, S. C. & Holden, A. D. C. (1991). “Symbolic-neural Systems and the Use of Hints for Developing Complex Systems,” International Journal of Man-Machine Studies, 35(3), pp. 291–311.
  56. Thrun, S. & Mitchell, T. (1994). “Learning One More Thing,” Carnegie Mellon University: CS-94-184.
  57. Thrun, S. (1995). “Lifelong Learning: A Case Study,” Carnegie Mellon University: CS-95-208.
  58. Thrun, S. (1996a). “Is Learning the N-th Thing Any Easier Than Learning the First?,” Advances in Neural Information Processing Systems 8 (Proceedings of NIPS-95), pp. 640–646.
  59. Thrun, S. (1996b). Explanation-Based Neural Network Learning: A Lifelong Learning Approach, Kluwer Academic Publishers.
  60. Tresp, V., Ahmad, S., & Neuneier, R. (1994). “Training Neural Networks with Deficient Data,” Advances in Neural Information Processing Systems 6 (Proceedings of NIPS-93), pp. 128–135.
  61. Valdes-Perez, R. & Simon, H. (1994). “A Powerful Heuristic for the Discovery of Complex Patterned Behavior,” Proceedings of the 11th International Conference on Machine Learning, ML-94, Rutgers University, New Jersey, pp. 326–334.
  62. Waibel, A., Sawai, H., & Shikano, K. (1989). “Modularity and Scaling in Large Phonemic Neural Networks,” IEEE Transactions on Acoustics, Speech and Signal Processing, 37(12), pp. 1888–1898.

Copyright information

© Kluwer Academic Publishers 1997

Authors and Affiliations

  • Rich Caruana
  1. School of Computer Science, Carnegie Mellon University, Pittsburgh
