
Learning Multiple Views with Orthogonal Denoising Autoencoders

  • TengQi Ye
  • Tianchun Wang
  • Kevin McGuinness
  • Yu Guo
  • Cathal Gurrin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9516)

Abstract

Multi-view learning techniques are necessary when data is described by multiple distinct feature sets, because single-view learning algorithms tend to overfit such high-dimensional data. Prior successful approaches have followed either the consensus or the complementarity principle. Recent work has focused on learning both the shared and private latent spaces of the views in order to exploit both principles. However, these methods cannot ensure that the latent spaces are strictly independent, since they merely encourage orthogonality in their objective functions. Moreover, little work has explored representation learning techniques for multi-view learning. In this paper, we use denoising autoencoders to learn shared and private latent spaces under orthogonality constraints that disconnect each private latent space from the remaining views. Instead of computationally expensive optimization, we adapt the backpropagation algorithm to train our model.
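To make the idea concrete, the sketch below shows a minimal two-view denoising autoencoder with one shared latent code and one private code per view, trained end-to-end by plain backpropagation. This is an illustration only, not the authors' implementation: the PyTorch framework, layer sizes, Gaussian corruption, and the soft Frobenius-norm orthogonality penalty (applied between the shared and private codes and between the two private codes) are assumptions chosen to show one common way such constraints can be realised.

```python
# Illustrative sketch (assumed formulation, not the paper's code): a two-view
# denoising autoencoder with a shared code and per-view private codes, where
# orthogonality is encouraged by a soft penalty and training uses backprop only.
import torch
import torch.nn as nn

class OrthogonalDAE(nn.Module):
    def __init__(self, d1, d2, d_shared=32, d_private=16):
        super().__init__()
        # encoders: both views feed the shared code; each view has its own private code
        self.enc_shared = nn.Linear(d1 + d2, d_shared)
        self.enc_priv1 = nn.Linear(d1, d_private)
        self.enc_priv2 = nn.Linear(d2, d_private)
        # decoders: each view is reconstructed from [shared code, its private code]
        self.dec1 = nn.Linear(d_shared + d_private, d1)
        self.dec2 = nn.Linear(d_shared + d_private, d2)

    def forward(self, x1, x2, noise=0.2):
        # denoising: corrupt the inputs before encoding
        x1_n = x1 + noise * torch.randn_like(x1)
        x2_n = x2 + noise * torch.randn_like(x2)
        s = torch.tanh(self.enc_shared(torch.cat([x1_n, x2_n], dim=1)))
        p1 = torch.tanh(self.enc_priv1(x1_n))
        p2 = torch.tanh(self.enc_priv2(x2_n))
        r1 = self.dec1(torch.cat([s, p1], dim=1))
        r2 = self.dec2(torch.cat([s, p2], dim=1))
        return s, p1, p2, r1, r2

def loss_fn(x1, x2, s, p1, p2, r1, r2, lam=1e-2):
    recon = ((r1 - x1) ** 2).mean() + ((r2 - x2) ** 2).mean()
    # soft orthogonality penalty between shared and private codes, and between
    # the two private codes (an assumed surrogate for strict independence)
    ortho = ((s.T @ p1) ** 2).mean() + ((s.T @ p2) ** 2).mean() + ((p1.T @ p2) ** 2).mean()
    return recon + lam * ortho

# minimal training loop on random data, standing in for two real feature sets
x1, x2 = torch.randn(256, 100), torch.randn(256, 80)
model = OrthogonalDAE(100, 80)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(x1, x2, *model(x1, x2))
    loss.backward()   # ordinary backpropagation, no specialised constrained solver
    opt.step()
```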

Keywords

Denoising autoencoder · Autoencoder · Representation learning · Multi-view learning · Multimedia fusion

Acknowledgement

The research was supported by the Irish Research Council (IRCSET) under Grant Number GOIPG/2013/330. The authors wish to acknowledge the DJEI/DES/SFI/HEA Irish Centre for High-End Computing (ICHEC) for the provision of computational facilities and support.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • TengQi Ye (1)
  • Tianchun Wang (2)
  • Kevin McGuinness (1)
  • Yu Guo (3)
  • Cathal Gurrin (1)

  1. Insight Centre for Data Analytics, Dublin City University, Dublin, Ireland
  2. School of Software, TNList, Tsinghua University, Beijing, China
  3. Department of Computer Science, City University of Hong Kong, Hong Kong, China
