Sensing and Imaging


New Product Design with Automatic Scheme Generation

  • Yong Dai
  • Yi Li
  • Li-Jun Liu
Original Paper


Abstract

Traditionally, design schemes have been drawn manually or with the aid of graphics software, both of which are labor-intensive and time-consuming. The design field calls for product design methods that are highly efficient and can meet the increasingly individualized demands of customized manufacturing. This paper introduces a design approach that helps designers improve the efficiency of product design through automatic sample generation. The approach consists of two parts: design scheme generation and sketch inversion. For design scheme generation, generative adversarial networks are adopted to extract features from existing product images and to generate new design schemes based on these features, followed by post-processing steps including wire-frame design and sketch design. For sketch inversion, sketches are generated and paired with their corresponding color images to train a sketch-inversion model; with this model, hand-drawn sketches are transferred into color design schemes. Watch design is taken as an example to validate the effectiveness of the proposed method on both stages. Experimental results demonstrate that the proposed design scheme generation method can generate new product design schemes, and that the subsequent sketch inversion process can transfer them into high-quality color design schemes.
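As a toy illustration of the adversarial objective underlying the scheme-generation stage (not the paper's actual implementation), the sketch below computes the standard non-saturating GAN losses for a single real/fake discriminator-logit pair; the scalar logits stand in for real network outputs, which would be averaged over a minibatch.

```python
import math

def sigmoid(x):
    """Logistic function: maps a discriminator logit to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def gan_losses(d_real_logit, d_fake_logit):
    """Non-saturating GAN losses for one real/fake logit pair.

    d_real_logit: discriminator output on a real product image.
    d_fake_logit: discriminator output on a generated design scheme.
    """
    d_real = sigmoid(d_real_logit)  # D's belief the real image is real
    d_fake = sigmoid(d_fake_logit)  # D's belief the generated image is real
    # Discriminator is trained to push d_real toward 1 and d_fake toward 0.
    d_loss = -(math.log(d_real) + math.log(1.0 - d_fake))
    # Generator (non-saturating form) is trained to push d_fake toward 1.
    g_loss = -math.log(d_fake)
    return d_loss, g_loss

# A confident discriminator (real logit +2, fake logit -2) has a low
# discriminator loss but gives the generator a large loss to descend.
d_loss, g_loss = gan_losses(2.0, -2.0)
```

At equilibrium the generator's samples become indistinguishable from real product images, which is what lets the extracted features drive novel design schemes.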


Keywords: Design scheme generation · Wire frame design · Sketch inversion



Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 61772186).



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Electrical and Information Engineering, Hunan University, Changsha, China
  2. School of Design, Hunan University, Changsha, China
  3. Key Laboratory of Visual Perception and Artificial Intelligence of Hunan Province, Changsha, China
