
Camera Obscurer: Generative Art for Design Inspiration

Conference paper in Computational Intelligence in Music, Sound, Art and Design (EvoMUSART 2019)

Abstract

We investigate using generated decorative art as a source of inspiration for design tasks. Using a visual similarity search for image retrieval, the Camera Obscurer app enables rapid searching of tens of thousands of generated abstract images of various types. The seed for a visual similarity search is a given image, and the retrieved generated images share some visual similarity with the seed. Implemented on a hand-held device, the app enables users to photograph their surroundings and use these photos to search through the archive of generated images and other image archives. Being abstract in nature, the retrieved images supplement the seed image rather than replace it, providing different visual stimuli including shapes, colours, textures and juxtapositions, in addition to affording their own interpretations. This approach can therefore provide inspiration for a design task, with the abstract images suggesting new ideas that might give direction to a graphic design project. We describe a crowdsourcing experiment with the app to estimate user confidence in retrieved images, and a pilot study in which Camera Obscurer provided inspiration for a design task. These experiments have enabled us to identify future improvements, and to begin to understand sources of visual inspiration for design tasks.
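The visual similarity search described in the abstract can be illustrated with a minimal sketch: images are represented as feature vectors, and retrieval returns the archive entries whose vectors are closest (here by cosine similarity) to the seed image's vector. This is not the authors' implementation; the function names and the toy 4-dimensional features below are invented for the example, standing in for real image descriptors such as CNN activations.

```python
import numpy as np

def build_index(feature_vectors):
    """L2-normalise the archive's feature vectors so that a dot
    product with a normalised seed equals cosine similarity."""
    feats = np.asarray(feature_vectors, dtype=np.float64)
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    return feats / np.clip(norms, 1e-12, None)

def retrieve(index, seed_vector, k=3):
    """Return the k archive entries most similar to the seed,
    as (index, cosine similarity) pairs, best first."""
    seed = np.asarray(seed_vector, dtype=np.float64)
    seed = seed / max(np.linalg.norm(seed), 1e-12)
    sims = index @ seed
    top = np.argsort(-sims)[:k]
    return [(int(i), float(sims[i])) for i in top]

# Toy archive of five "images", each a 4-D feature vector.
archive = build_index([
    [1.0, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
])

# A seed "photo" close to the first two archive entries.
print(retrieve(archive, [1.0, 0.05, 0.0, 0.0], k=2))
```

In practice an archive of tens of thousands of images would use an approximate nearest-neighbour index rather than the exhaustive dot product shown here, but the interface is the same: a seed vector in, a ranked list of visually similar images out.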



Acknowledgements

We would like to thank the participants in the pilot study for their time and energy, members of SensiLab for their very useful feedback on the Camera Obscurer app, and the anonymous reviewers for their helpful comments.

Author information

Corresponding author

Correspondence to Simon Colton.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Singh, D., Rajcic, N., Colton, S., McCormack, J. (2019). Camera Obscurer: Generative Art for Design Inspiration. In: Ekárt, A., Liapis, A., Castro Pena, M.L. (eds) Computational Intelligence in Music, Sound, Art and Design. EvoMUSART 2019. Lecture Notes in Computer Science, vol 11453. Springer, Cham. https://doi.org/10.1007/978-3-030-16667-0_4

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-16667-0_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-16666-3

  • Online ISBN: 978-3-030-16667-0

  • eBook Packages: Computer Science (R0)
