Will Outlier Tasks Deteriorate Multitask Deep Learning?

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10635)

Abstract

Most multitask deep learning approaches today use different but correlated tasks to improve performance by sharing features common to the tasks. What happens if we use outlier tasks instead of related tasks? Will they deteriorate performance? In this paper, we explore the influence of outlier tasks on multitask deep learning through carefully designed experiments. We compare the accuracy and convergence rate of a single-task convolutional neural network (STCNN) and an outlier multitask convolutional neural network (OMTCNN) on facial attribute recognition and hand-written digit recognition. We show that outlier tasks constrain each other in a multitask network without parameter redundancy and degrade performance. We also find that image-recognition tasks such as facial attribute recognition and hand-written digit recognition may not be true outlier tasks: they share common features in the bottom layers, since each can use the other's first convolutional layer in place of its own without any loss of accuracy.
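
For illustration only, the sketch below (PyTorch; not the authors' code) shows one way to realize the experimental idea described above: two unrelated ("outlier") tasks sharing the first convolutional layer of a small network, with task-specific branches on top. All layer sizes, input shapes, and class counts are assumptions, not values from the paper.

    # A minimal sketch, assuming 28x28 grayscale inputs for both tasks,
    # 40 facial attributes, and 10 digit classes (illustrative values only).
    import torch
    import torch.nn as nn

    class OutlierMultitaskCNN(nn.Module):
        def __init__(self, num_attributes=40, num_digits=10):
            super().__init__()
            # Shared first convolutional layer: the abstract reports that the
            # two tasks can swap this layer without accuracy loss, suggesting
            # the lowest-level features are common to both.
            self.shared_conv1 = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Task-specific branches (sizes follow from the assumed 28x28 input).
            self.face_branch = nn.Sequential(
                nn.Conv2d(16, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(32 * 7 * 7, num_attributes),
            )
            self.digit_branch = nn.Sequential(
                nn.Conv2d(16, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(32 * 7 * 7, num_digits),
            )

        def forward(self, x, task):
            # Both tasks pass through the shared bottom layer, then diverge.
            h = self.shared_conv1(x)
            return self.face_branch(h) if task == "face" else self.digit_branch(h)

    if __name__ == "__main__":
        model = OutlierMultitaskCNN()
        x = torch.randn(8, 1, 28, 28)        # dummy batch of 28x28 grayscale images
        print(model(x, task="digit").shape)  # torch.Size([8, 10])
        print(model(x, task="face").shape)   # torch.Size([8, 40])

A single-task baseline (STCNN) in this setup would simply keep one branch and train the first convolutional layer for that task alone, which is the comparison the paper's experiments draw.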

Keywords

Outlier tasks · Multitask learning · Deep learning

Notes

Acknowledgement

This work was funded by the National Natural Science Foundation of China (No. 61170155), the Shanghai Innovation Action Plan Project (No. 16511101200), and the Open Project Program of the National Laboratory of Pattern Recognition (No. 201600017).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. School of Computer Engineering and Science, Shanghai University, Shanghai, China