On Deep Set Learning and the Choice of Aggregations

  • Maximilian Soelch
  • Adnan Akhundov
  • Patrick van der Smagt
  • Justin Bayer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11727)

Abstract

Recently, it has been shown that many functions on sets can be represented by sum decompositions. These decompositions easily lend themselves to neural approximations, extending the applicability of neural nets to set-valued inputs—Deep Set learning. This work investigates a core component of the Deep Set architecture: aggregation functions. We suggest and examine alternatives to commonly used aggregation functions, including learnable recurrent aggregation functions. Empirically, we show that Deep Set networks are highly sensitive to the choice of aggregation function: beyond improved performance, we find that learnable aggregations lower hyper-parameter sensitivity and generalize better to out-of-distribution input sizes.
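The sum decomposition underlying Deep Set learning represents a set function as f(X) = ρ(∑_{x∈X} φ(x)), where φ and ρ are neural networks and the sum may be swapped for other aggregation functions. The following is a minimal sketch of such a network with a swappable aggregation, assuming PyTorch; the module names, layer sizes, and the recurrent variant are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a sum-decomposition (Deep Set) network, assuming PyTorch.
    # It computes f(X) = rho(agg_{x in X} phi(x)); the paper studies the choice
    # of `agg`. All names and sizes here are illustrative, not the authors' code.
    import torch
    import torch.nn as nn

    class DeepSet(nn.Module):
        def __init__(self, in_dim=3, hidden=64, out_dim=1, agg="sum"):
            super().__init__()
            # phi: encoder applied to every set element independently
            self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
            # rho: maps the aggregated representation to the output
            self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, out_dim))
            # hypothetical learnable recurrent aggregation (one of many options)
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.agg = agg

        def forward(self, x):  # x: (batch, set_size, in_dim)
            h = self.phi(x)
            if self.agg == "sum":      # permutation-invariant
                z = h.sum(dim=1)
            elif self.agg == "mean":   # permutation-invariant
                z = h.mean(dim=1)
            elif self.agg == "max":    # permutation-invariant
                z = h.max(dim=1).values
            elif self.agg == "lstm":
                # Recurrent aggregation: read the set as a sequence, keep the
                # final state. Order-dependent as written, so invariance must
                # be encouraged, e.g. by shuffling elements during training.
                out, _ = self.lstm(h)
                z = out[:, -1]
            return self.rho(z)

    net = DeepSet(agg="mean")
    y = net(torch.randn(8, 20, 3))  # batch of 8 sets, 20 elements of dim 3 each

Because φ is shared across elements and the aggregation collapses the set dimension, the same network handles sets of any size, which is what makes generalization to out-of-distribution input sizes a meaningful test.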

Keywords

Set functions · Deep learning · Representation learning


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Maximilian Soelch (1)
  • Adnan Akhundov (1)
  • Patrick van der Smagt (1)
  • Justin Bayer (1)

  1. argmax.ai, Volkswagen Group Machine Learning Research Lab, Munich, Germany