
Image Morphing: Transfer Learning between Tasks That Have Multiple Outputs

  • Daniel L. Silver
  • Liangliang Tu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7310)

Abstract

Prior work has reported the benefit of transfer learning on domains of single-output tasks for classification or the prediction of a scalar. We investigate the use of transfer learning on a domain of tasks where each task has multiple outputs (i.e., the output is a vector). Multiple Task Learning (MTL) and Context-sensitive Multiple Task Learning (csMTL) neural networks are considered for a domain of image transformation tasks. Models are developed to transform images of neutral human faces into corresponding images of angry, happy, and sad faces. The MTL approach proves problematic because the size of the network grows as a multiplicative function of the number of outputs and the number of tasks. Empirical results show that csMTL neural networks are capable of developing models superior to single-task learning models when beneficial transfer occurs from one or more secondary tasks.
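The abstract contrasts MTL, whose output layer must grow as (number of tasks × outputs per task), with csMTL, which keeps a single image-sized output vector and instead selects the task through extra context inputs. The following is a minimal, hypothetical Python sketch of that idea; all layer sizes, variable names, and the task encoding are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical csMTL-style forward pass (illustrative only, not the authors' code):
# the task is chosen by appending a one-hot "context" vector to the input,
# so one shared output layer serves every image transformation task.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 30 * 30   # assumed flattened grey-scale image size
n_tasks = 3          # e.g. neutral -> angry, happy, sad
n_hidden = 64        # assumed hidden-layer size

# A single set of weights shared by all tasks.
W1 = rng.normal(0, 0.1, (n_hidden, n_pixels + n_tasks))
W2 = rng.normal(0, 0.1, (n_pixels, n_hidden))

def csmtl_forward(image, task_id):
    """Transform a neutral-face image under the task selected by task_id."""
    context = np.eye(n_tasks)[task_id]       # one-hot task indicator
    x = np.concatenate([image, context])     # context-sensitive input
    h = np.tanh(W1 @ x)                      # shared hidden representation
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))   # one image-sized output vector

neutral = rng.random(n_pixels)
happy_prediction = csmtl_forward(neutral, task_id=1)
print(happy_prediction.shape)                # (900,)
```

Under an MTL architecture the output layer would instead need n_tasks * n_pixels units (2700 here), which is the multiplicative growth the abstract identifies as problematic.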



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Daniel L. Silver ¹
  • Liangliang Tu ¹
  1. Jodrey School of Computer Science, Acadia University, Wolfville, Canada
