Camera Obscurer: Generative Art for Design Inspiration

  • Dilpreet Singh
  • Nina Rajcic
  • Simon Colton
  • Jon McCormack
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11453)

Abstract

We investigate the use of generated decorative art as a source of inspiration for design tasks. Using visual similarity search for image retrieval, the Camera Obscurer app enables rapid searching of tens of thousands of generated abstract images of various types. The seed for a visual similarity search is a given image, and the retrieved generated images share some visual similarity with the seed. Implemented on a hand-held device, the app lets users photograph their surroundings and use those photos to search the archive of generated images and other image archives. Being abstract in nature, the retrieved images supplement the seed image rather than replace it, providing different visual stimuli including shapes, colours, textures and juxtapositions, in addition to affording their own interpretations. This approach can therefore provide inspiration for a design task, with the abstract images suggesting new ideas that might give direction to a graphic design project. We describe a crowdsourcing experiment with the app to estimate user confidence in retrieved images, and a pilot study in which Camera Obscurer provided inspiration for a design task. These experiments have enabled us to identify future improvements, and to begin to understand sources of visual inspiration for design tasks.
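
For concreteness, below is a minimal sketch of the kind of retrieval pipeline the abstract describes: each archive image is embedded once with an off-the-shelf pretrained CNN, and a seed photo is then matched against the archive by cosine similarity over those embeddings. This is an illustrative assumption, not the authors' implementation; the model choice (ResNet-18 via PyTorch/torchvision), the brute-force index and all file paths are hypothetical.

    # Minimal sketch (not the authors' implementation) of a CNN-feature
    # visual similarity search: embed every archive image once, then rank
    # the archive by cosine similarity to a seed photo.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained backbone with the classification head replaced by an
    # identity, so the network outputs a 512-d feature vector.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(path: str) -> torch.Tensor:
        """Map an image file to an L2-normalised feature vector."""
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        v = backbone(x).squeeze(0)
        return v / v.norm()

    # Offline step: embed the archive of generated images once.
    archive_paths = ["art/0001.png", "art/0002.png"]  # hypothetical paths
    index = torch.stack([embed(p) for p in archive_paths])

    # Online step: embed the seed photo and return the k nearest images.
    def retrieve(seed_path: str, k: int = 2) -> list[str]:
        sims = index @ embed(seed_path)  # cosine similarity of unit vectors
        top = torch.topk(sims, min(k, len(archive_paths))).indices.tolist()
        return [archive_paths[i] for i in top]

At the scale of tens of thousands of images mentioned in the abstract, an exhaustive dot product over unit-normalised vectors is typically fast enough; an approximate nearest-neighbour index would be the natural substitution at larger scales.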

Acknowledgements

We would like to thank the participants in the pilot study for their time and energy, members of SensiLab for their very useful feedback on the Camera Obscurer app, and the anonymous reviewers for their helpful comments.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. SensiLab, Faculty of IT, Monash University, Melbourne, Australia
  2. Game AI Group, EECS, Queen Mary University of London, London, UK
