Multiple-Task Learning and Knowledge Transfer Using Generative Adversarial Capsule Nets

  • Ancheng Lin
  • Jun Li
  • Lujuan Zhang
  • Zhenyuan Ma
  • Weiqi Luo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11320)

Abstract

It is common for practical data to have multiple attributes of interest. For example, a picture can be characterized in terms of its content, e.g. the categories of the objects it contains, while at the same time its style, such as photo-realistic or artistic, is also relevant. This work is motivated by taking advantage of all available sources of information about the data, including those not directly related to the target of analysis.

We propose an explicit and effective knowledge representation and transfer architecture for image analytics, employing capsules for deep neural network training based on generative adversarial nets (GAN). The adversarial scheme helps discover capsule representations of the data in which different semantic meanings occupy respective dimensions of the capsules. The representation includes a subset of variables specialized for the target task, obtained by eliminating information about the irrelevant aspects. We theoretically show that this elimination is achieved by mixing the conditional distributions of the represented data. Empirical evaluations show the proposed method is effective for both standard domain-transfer recognition tasks and zero-shot transfer.
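The core idea, a representation whose task-specific subset is purged of the irrelevant attribute, can be illustrated with a deliberately simplified stand-in. The sketch below does not use capsules or the paper's GAN-based mixing of conditional distributions; instead it trains a linear encoder against a gradient-reversal adversary (a related adversarial representation-learning technique) on synthetic data whose content and style dimensions are known by construction. All names, dimensions, and the toy data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy data: each sample carries a task label ("content") and a nuisance
# label ("style"). Input dims 0-1 encode content, dims 2-3 encode style.
n = 512
content = rng.integers(0, 2, n)
style = rng.integers(0, 2, n)
x = np.stack([content, 1 - content, style, 1 - style], axis=1).astype(float)
x += 0.1 * rng.standard_normal(x.shape)

W_enc = 0.1 * rng.standard_normal((4, 2))  # encoder -> task-specific z
W_cls = 0.1 * rng.standard_normal((2, 2))  # content classifier on z
W_adv = 0.1 * rng.standard_normal((2, 2))  # style adversary on z
lr, lam = 0.5, 1.0

for _ in range(400):
    z = x @ W_enc
    p_cls = softmax(z @ W_cls)
    p_adv = softmax(z @ W_adv)

    # Cross-entropy gradients w.r.t. the two sets of logits.
    g_cls = p_cls.copy(); g_cls[np.arange(n), content] -= 1; g_cls /= n
    g_adv = p_adv.copy(); g_adv[np.arange(n), style] -= 1; g_adv /= n

    # Encoder follows the task gradient but REVERSES the adversary's,
    # pushing z to become uninformative about style.
    g_z = g_cls @ W_cls.T - lam * (g_adv @ W_adv.T)

    W_cls -= lr * z.T @ g_cls
    W_adv -= lr * z.T @ g_adv  # adversary keeps trying to read style
    W_enc -= lr * x.T @ g_z

z = x @ W_enc
task_acc = (softmax(z @ W_cls).argmax(1) == content).mean()
style_acc = (softmax(z @ W_adv).argmax(1) == style).mean()
# Content should remain decodable from z; style should drop toward chance.
print(round(task_acc, 2), round(style_acc, 2))
```

In the paper's terms, `z` plays the role of the task-specialized subset of the capsule representation: the adversarial pressure makes its conditional distributions given the two style values hard to tell apart, which is one concrete way of "eliminating information about the irrelevant aspects".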

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Ancheng Lin (1)
  • Jun Li (2)
  • Lujuan Zhang (3)
  • Zhenyuan Ma (3)
  • Weiqi Luo (4)
  1. School of Computer Sciences, Guangdong Polytechnic Normal University, Guangzhou, China
  2. School of Software and Centre for Artificial Intelligence, Faculty of Engineering and Information Technology, University of Technology Sydney, Broadway, Australia
  3. School of Mathematics and System Sciences, Guangdong Polytechnic Normal University, Guangzhou, China
  4. College of Information Science and Technology, Jinan University, Guangzhou, China