Automatic Inference of Cross-Modal Connection Topologies for X-CNNs

  • Laurynas Karazija
  • Petar Veličković
  • Pietro Liò
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10878)

Abstract

This paper introduces a way to learn cross-modal convolutional neural network (X-CNN) architectures from a base convolutional network (CNN) and the training data, reducing the design cost and enabling cross-modal networks to be applied in sparse-data environments. Two approaches for building X-CNNs are presented. The base approach learns the topology in a data-driven manner, using measurements performed on the base CNN and the supplied data. The iterative approach optimises the topology further through a combined procedure that learns the topology and trains the network simultaneously. The approaches were evaluated against examples of hand-designed X-CNNs and their base variants, showing superior performance and, in some cases, gaining an additional 9% of accuracy. We further argue that the presented methodology takes less time than any manual approach, whilst also significantly reducing the design complexity. The application of the methods is fully automated and implemented in the Xsertion library (code is publicly available at https://github.com/karazijal/xsertion).
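For concreteness, the sketch below hand-wires the kind of two-modality X-CNN topology that the paper's approaches infer automatically. It is written against tf.keras and is illustrative only: the input shapes, filter counts, projection widths and cross-connection placement are assumptions made for this example, and it does not show the Xsertion API itself.

    # A minimal hand-wired two-modality X-CNN sketch (tf.keras). All sizes
    # and connection placements here are illustrative assumptions; choosing
    # them automatically from the base CNN and the data is what the paper's
    # base and iterative approaches do.
    from tensorflow.keras import layers, Model

    def stage(x, filters):
        # One stage of a modality-specific "superlayer": conv + pool.
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return layers.MaxPooling2D(2)(x)

    y_in = layers.Input(shape=(32, 32, 1), name="luma")     # Y plane
    uv_in = layers.Input(shape=(32, 32, 2), name="chroma")  # U, V planes

    # Stage 1: each modality flows through its own stream.
    y, uv = stage(y_in, 32), stage(uv_in, 32)

    # Cross-modal connections: 1x1-conv projections of each stream are
    # merged (by concatenation) into the other stream.
    y_proj = layers.Conv2D(16, 1, activation="relu")(y)
    uv_proj = layers.Conv2D(16, 1, activation="relu")(uv)
    y = layers.Concatenate()([y, uv_proj])
    uv = layers.Concatenate()([uv, y_proj])

    # Stage 2, then merge both streams into a shared classifier head.
    y, uv = stage(y, 64), stage(uv, 64)
    merged = layers.Concatenate()([y, uv])
    out = layers.Dense(10, activation="softmax")(
        layers.GlobalAveragePooling2D()(merged))  # e.g. 10 CIFAR classes

    model = Model(inputs=[y_in, uv_in], outputs=out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

Per the abstract, the base approach replaces the hand choices above with measurements taken on the base CNN and the supplied data, while the iterative approach refines them jointly with network training.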

Keywords

Deep learning · Model selection and structure learning · Optimisation algorithms · Evolutionary neural networks

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Laurynas Karazija (1)
  • Petar Veličković (1)
  • Pietro Liò (1)
  1. Computer Laboratory, University of Cambridge, Cambridge, UK