Neural Design Network: Graphic Layout Generation with Constraints

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12348)

Abstract

Graphic design is essential for visual communication, and layout is fundamental to composing attractive designs. Unlike pixel-level image synthesis, layout generation must satisfy mutual relations among the desired components. We propose a method for design layout generation that satisfies user-specified constraints. The proposed neural design network (NDN) consists of three modules: the first predicts a graph with complete relations from a graph with user-specified relations, the second generates a layout from the predicted graph, and the third fine-tunes the predicted layout. Quantitative and qualitative experiments demonstrate that the generated layouts are visually similar to real design layouts. We also construct real designs from the predicted layouts to better assess their visual quality, and we demonstrate a practical application in layout recommendation.
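
To make the three-module pipeline concrete, the following is a minimal, hypothetical PyTorch sketch. The class names, embedding width, relation vocabulary, single round of message passing, and the argmax-based graph completion are illustrative assumptions made for this sketch only; the paper's actual modules are learned graph networks with variational components, which are deliberately simplified here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NUM_RELATIONS = 4  # assumed vocabulary: {0: unspecified, 1: above, 2: below, 3: left-of}
    EMB = 32           # assumed embedding width

    class RelationPredictor(nn.Module):
        """Module 1: complete the relation graph from user-specified relations."""
        def __init__(self):
            super().__init__()
            self.head = nn.Linear(2 * EMB, NUM_RELATIONS)  # score every ordered pair

        def forward(self, node_feats, partial_rels):
            # node_feats: (N, EMB); partial_rels: (N, N) relation ids, 0 = unspecified
            n = node_feats.size(0)
            pair = torch.cat([node_feats.unsqueeze(1).expand(n, n, EMB),
                              node_feats.unsqueeze(0).expand(n, n, EMB)], dim=-1)
            logits = self.head(pair)                        # (N, N, NUM_RELATIONS)
            known = F.one_hot(partial_rels, NUM_RELATIONS).float()
            known[..., 0] = 0.0                             # no bias for unspecified pairs
            return (logits + 10.0 * known).argmax(-1)       # keep known relations, fill the rest

    class LayoutGenerator(nn.Module):
        """Module 2: predict a bounding box (x, y, w, h) per component."""
        def __init__(self):
            super().__init__()
            self.rel_emb = nn.Embedding(NUM_RELATIONS, EMB)
            self.box_head = nn.Sequential(nn.Linear(EMB, EMB), nn.ReLU(),
                                          nn.Linear(EMB, 4), nn.Sigmoid())

        def forward(self, node_feats, rels):
            # One message-passing round: each component averages its outgoing relation embeddings.
            msgs = self.rel_emb(rels).mean(dim=1)           # (N, EMB)
            return self.box_head(node_feats + msgs)         # (N, 4) boxes in [0, 1]

    class LayoutRefiner(nn.Module):
        """Module 3: fine-tune the layout with a residual box correction."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(4, EMB), nn.ReLU(), nn.Linear(EMB, 4))

        def forward(self, boxes):
            return (boxes + self.net(boxes)).clamp(0.0, 1.0)

    # Usage: three components, one user-specified relation, the rest unspecified.
    feats = torch.randn(3, EMB)                             # stand-in for component embeddings
    partial = torch.zeros(3, 3, dtype=torch.long)
    partial[0, 1] = 1                                       # "component 0 is above component 1"
    completed = RelationPredictor()(feats, partial)         # module 1: complete the graph
    boxes = LayoutGenerator()(feats, completed)             # module 2: graph -> layout
    print(LayoutRefiner()(boxes))                           # module 3: refined (N, 4) layout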

Acknowledgements

This work is supported in part by the NSF CAREER Grant #1149783.

Supplementary material

Supplementary material 1: 504435_1_En_29_MOESM1_ESM.pdf (PDF, 296 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Google Research, Mountain View, USA
  2. University of California, Merced, USA
  3. Yonsei University, Seoul, South Korea
  4. Georgia Institute of Technology, Atlanta, USA
