Modeling 3D Shapes by Reinforcement Learning

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12355)

Abstract

We explore how to enable machines to model 3D shapes like human modelers using deep reinforcement learning (RL). In 3D modeling software such as Maya, a modeler usually creates a mesh model in two steps: (1) approximating the shape with a set of primitives; (2) editing the meshes of the primitives to create detailed geometry. Inspired by such artist-based modeling, we propose a two-step neural framework based on RL to learn 3D modeling policies. By taking actions and collecting rewards in an interactive environment, the agents first learn to parse a target shape into primitives and then to edit the geometry. To train the modeling agents effectively, we introduce a novel training algorithm that combines heuristic policy, imitation learning, and reinforcement learning. Our experiments show that the agents learn good policies to produce regular and structure-aware mesh models, demonstrating the feasibility and effectiveness of the proposed RL framework.
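The two-step pipeline described above can be sketched with a toy example. Everything below is an illustrative assumption, not the authors' actual environment or agents: `ToyEnv` stands in for the interactive modeling environment (here a set of occupied grid cells, with the change in intersection-over-union as the per-step reward), `prim_agent` plays the role of the primitive-fitting stage, and `mesh_agent` the geometry-editing stage.

```python
import random

class ToyEnv:
    """Minimal stand-in for the interactive modeling environment.

    The 'target' shape is a set of occupied grid cells; the reward for
    an action is the resulting change in intersection-over-union (IoU).
    """
    def __init__(self, target_cells):
        self.target = set(target_cells)
        self.model = set()              # cells currently covered by the model

    def iou(self):
        inter = len(self.model & self.target)
        union = len(self.model | self.target)
        return inter / union if union else 0.0

    def step(self, cells):
        """Apply an action (toggle a set of cells) and return the reward."""
        before = self.iou()
        self.model ^= set(cells)        # toggle the chosen cells
        return self.iou() - before

def prim_agent(env, n_steps=10, rng=None):
    """Stage 1 (illustrative): greedily place 'primitives' (single cells)."""
    rng = rng or random.Random(0)
    for _ in range(n_steps):
        candidate = [rng.choice(sorted(env.target))]
        if env.step(candidate) < 0:     # undo actions that lower the reward
            env.step(candidate)

def mesh_agent(env):
    """Stage 2 (illustrative): 'edit geometry' by pruning cells off-target."""
    for cell in list(env.model - env.target):
        env.step([cell])

env = ToyEnv(target_cells=[(0, 0), (0, 1), (1, 0)])
prim_agent(env)
mesh_agent(env)
```

The sketch only captures the control flow (act, collect a reward, refine in a second stage); the paper's agents operate on actual primitives and mesh edits and are trained with the combined heuristic/imitation/RL scheme mentioned in the abstract.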

Acknowledgements

We thank Roy Subhayan and Agrawal Dhruv for their help on data preprocessing and Angela Dai for the voice-over of the video. We also thank Armen Avetisyan, Changjian Li, Nenglun Chen, Zhiming Cui for their discussions and comments. This work was supported by a TUM-IAS Rudolf Mößbauer Fellowship, the ERC Starting Grant Scan2CAD (804724), and the German Research Foundation (DFG) Grant Making Machine Learning on Static and Dynamic 3D Data Practical.

Supplementary material

Supplementary material 1 (PDF, 299 KB)

Supplementary material 2 (MP4, 20,893 KB)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. The University of Hong Kong, Pok Fu Lam, Hong Kong
  2. Technical University of Munich, Munich, Germany