3D Conceptual Design Using Deep Learning

  • Zhangsihao Yang
  • Haoliang Jiang
  • Lan Zou
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 943)

Abstract

This article proposes a data-driven methodology for fast design support that generates and develops novel designs spanning multiple object categories. The methodology implements two state-of-the-art Variational Autoencoders, one per 3D data representation, together with a self-defined loss function. The loss function, which incorporates the outputs of individual layers of the autoencoder, combines latent features drawn from different 3D model categories. This article explains in detail how the Princeton ModelNet40 database, a comprehensive, clean collection of 3D CAD models of common objects, is used. After the original 3D mesh files are converted to voxel and point cloud representations, each autoencoder is fed data of a consistent dimension. The novelty lies in leveraging deep learning as an efficient latent-feature extractor to explore unknown design spaces. The output is expected to show a clear, smooth interpolation between models from different categories, yielding new shapes. This article covers (1) the theoretical ideas, (2) the implementation of the Variational Autoencoders to extract implicit features from input shapes, (3) the output shapes obtained during training in selected domains for both 3D voxel data and 3D point cloud data, and (4) conclusions and future work toward more ambitious goals. A minimal sketch of the feature-mixing loss appears below.
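The loss described in the abstract resembles a perceptual (layer-activation) loss applied to a 3D VAE. The following is a minimal sketch, not the authors' implementation: it assumes PyTorch, a 32³ occupancy grid, and illustrative layer sizes and loss weights (alpha, beta), and only illustrates the idea of mixing intermediate encoder activations from two shape categories inside the training loss.

```python
# Hypothetical sketch of a 3D voxel VAE with a feature-mixing loss.
# All layer sizes, the 32^3 resolution, and the weights alpha/beta are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoxelVAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc1 = nn.Conv3d(1, 32, 4, stride=2, padding=1)   # 32^3 -> 16^3
        self.enc2 = nn.Conv3d(32, 64, 4, stride=2, padding=1)  # 16^3 -> 8^3
        self.fc_mu = nn.Linear(64 * 8 ** 3, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 ** 3, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 8 ** 3)
        self.dec1 = nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1)
        self.dec2 = nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1)

    def encode(self, x):
        h1 = F.relu(self.enc1(x))   # intermediate activations are kept
        h2 = F.relu(self.enc2(h1))  # for the feature-mixing loss below
        flat = h2.flatten(1)
        return self.fc_mu(flat), self.fc_logvar(flat), (h1, h2)

    def decode(self, z):
        h = self.fc_dec(z).view(-1, 64, 8, 8, 8)
        return torch.sigmoid(self.dec2(F.relu(self.dec1(h))))

def blended_loss(model, x_a, x_b, alpha=0.5, beta=1e-3):
    """Reconstruction + KL for shape A, plus a term that pulls the
    reconstruction's encoder activations toward a blend of A's and B's."""
    mu, logvar, feats_a = model.encode(x_a)
    _, _, feats_b = model.encode(x_b)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
    recon = model.decode(z)
    rec_loss = F.binary_cross_entropy(recon, x_a)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    _, _, feats_r = model.encode(recon)  # re-encode the reconstruction
    feat_loss = sum(F.mse_loss(fr, alpha * fa + (1 - alpha) * fb)
                    for fr, fa, fb in zip(feats_r, feats_a, feats_b))
    return rec_loss + beta * kl + feat_loss
```

In training, x_a and x_b would be occupancy grids (values in [0, 1]) drawn from two ModelNet40 categories; sweeping alpha from 0 to 1 would move the reconstruction's features between the two categories, which is one way to realize the cross-category shape blending the abstract describes.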

Keywords

Design support · Data analysis · 3D representation · Generative model · Computer vision

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Carnegie Mellon University, Pittsburgh, USA