House-GAN: Relational Generative Adversarial Networks for Graph-Constrained House Layout Generation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12346)

Abstract

This paper proposes a novel graph-constrained generative adversarial network whose generator and discriminator are built upon a relational architecture. The main idea is to encode the constraint into the graph structure of its relational networks. We demonstrate the proposed architecture on a new house layout generation problem, whose task is to take an architectural constraint as a graph (i.e., the number and types of rooms together with their spatial adjacency) and produce a set of axis-aligned bounding boxes of rooms. We measure the quality of generated house layouts with three metrics: realism, diversity, and compatibility with the input graph constraint. Our qualitative and quantitative evaluations over 117,000 real floorplan images demonstrate that the proposed approach outperforms existing methods and baselines. We will publicly share all our code and data.
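To make the abstract's setup concrete, the sketch below shows how an architectural constraint can be represented as a graph of typed room nodes with adjacency edges, and how one relational message-passing step might update per-room features by pooling connected and non-connected rooms separately. This is a minimal, hypothetical NumPy illustration; the room types, feature dimensions, and weight matrix are illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

# Hypothetical bubble diagram: 4 rooms with type ids and adjacency edges.
room_types = [0, 1, 1, 2]          # e.g. 0=living room, 1=bedroom, 2=kitchen
edges = {(0, 1), (0, 2), (0, 3)}   # the living room touches every other room

n = len(room_types)

def neighbors(i):
    """Rooms adjacent to room i in the constraint graph."""
    return [j for j in range(n) if j != i and
            ((i, j) in edges or (j, i) in edges)]

rng = np.random.default_rng(0)
d = 8
# Node feature = room-type embedding concatenated with a noise vector,
# so the same constraint graph can yield diverse layouts.
type_emb = rng.normal(size=(3, d))
x = np.stack([np.concatenate([type_emb[t], rng.normal(size=d)])
              for t in room_types])              # shape (n, 2d)

# One relational message-passing step: pool features of connected and
# non-connected rooms separately, then apply a shared linear map.
W = rng.normal(size=(6 * d, 2 * d)) * 0.1        # illustrative weights

def mpn_step(x):
    out = np.empty_like(x)
    for i in range(n):
        nb = neighbors(i)
        non = [j for j in range(n) if j != i and j not in nb]
        pool = lambda idx: x[idx].sum(0) if idx else np.zeros(x.shape[1])
        msg = np.concatenate([x[i], pool(nb), pool(non)])
        out[i] = np.tanh(msg @ W)                # bounded updated feature
    return out

x = mpn_step(x)
print(x.shape)  # (4, 16)
```

In a full generator, several such steps would be followed by a decoder that regresses an axis-aligned bounding box per room node; the discriminator would run a mirrored relational network over boxes rendered back onto the graph.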

Keywords

GAN · Graph-constrained · Layout Generation · Floorplan

Notes

Acknowledgement

This research is partially supported by NSERC Discovery Grants, NSERC Discovery Grants Accelerator Supplements, and DND/NSERC Discovery Grant Supplement. We would like to thank architects and students for participating in our user study.

Supplementary material

500725_1_En_10_MOESM1_ESM.pdf (6.9 MB)
Supplementary material 1 (PDF, 6.9 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Simon Fraser University, Burnaby, Canada
  2. Autodesk Research, San Francisco, USA
