Predicting Artist Drawing Activity via Multi-camera Inputs for Co-creative Drawing

  • Conference paper
  • First Online:
Towards Autonomous Robotic Systems (TAROS 2021)

Abstract

This paper presents the results of computer vision experiments in the perception of an artist drawing with analog media (pen and paper), with the aim of contributing towards a human-robot co-creative drawing system. Using data gathered from user studies with artists and illustrators, two types of CNN models were designed and evaluated. Both models take multi-camera images of the drawing surface as input. One model predicts the artist's activity (e.g. whether or not they are drawing); the other predicts the position of the pen on the canvas. Results for different combinations of input sources are presented. The overall mean accuracy is 95% (std: 7%) for predicting when the artist is present and 68% (std: 15%) for predicting when the artist is drawing. The model predicts the pen's position on the drawing canvas with a mean squared error (in normalised units) of 0.0034 (std: 0.0099). These results contribute towards the development of an autonomous robotic system that is aware of an artist at work via camera-based input. In addition, this benefits the artist with a more fluid physical-to-digital workflow for creative content creation.
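To make the setup described in the abstract concrete, the sketch below shows one way such models could be assembled in TensorFlow/Keras (the framework referenced in the paper's notes). It is not the authors' published architecture: the number of cameras, image resolution, layer widths, and the fusion of both tasks into a single two-headed network are illustrative assumptions, whereas the paper evaluates separate models for activity and pen position.

```python
# Minimal sketch (assumed architecture, not the authors' published model):
# a small convolutional encoder per camera view, feature concatenation, and
# two heads -- a binary "is the artist drawing?" classifier and a regressor
# for the normalised (x, y) pen position on the canvas.
from tensorflow.keras import layers, Model

NUM_CAMERAS = 2            # assumption: e.g. an overhead and a side camera
IMG_SHAPE = (120, 160, 3)  # assumption: downscaled RGB frames

def camera_branch(name):
    """One small convolutional encoder per camera view."""
    inp = layers.Input(shape=IMG_SHAPE, name=f"{name}_input")
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    return inp, x

inputs, features = zip(*[camera_branch(f"cam{i}") for i in range(NUM_CAMERAS)])
merged = layers.Concatenate()(list(features))
merged = layers.Dense(64, activation="relu")(merged)

# Head 1: drawing activity (binary classification).
activity = layers.Dense(1, activation="sigmoid", name="activity")(merged)
# Head 2: pen position in normalised [0, 1] canvas coordinates (regression).
position = layers.Dense(2, activation="sigmoid", name="position")(merged)

model = Model(inputs=list(inputs), outputs=[activity, position])
model.compile(
    optimizer="adam",
    loss={"activity": "binary_crossentropy", "position": "mse"},
    metrics={"activity": "accuracy"},
)
model.summary()
```

Reporting accuracy on the activity head and mean squared error on the position head would then mirror the metrics quoted in the abstract.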

This research is supported through an EPSRC (UK) DTP Studentship “Collaborative Drawing Systems”, Grant Reference EP/N509498/1.


Notes

  1. https://robots.ieee.org/robots/baxter/.

  2. Raspberry Pi Camera Module V2: https://www.raspberrypi.org/products/camera-module-v2/.

  3. Intel RealSense Depth Camera SR305: https://www.intelrealsense.com/depth-camera-sr305/.

  4. https://www.tensorflow.org/.


Author information


Corresponding author

Correspondence to Chipp Jansen.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Jansen, C., Sklar, E. (2021). Predicting Artist Drawing Activity via Multi-camera Inputs for Co-creative Drawing. In: Fox, C., Gao, J., Ghalamzan Esfahani, A., Saaj, M., Hanheide, M., Parsons, S. (eds) Towards Autonomous Robotic Systems. TAROS 2021. Lecture Notes in Computer Science, vol 13054. Springer, Cham. https://doi.org/10.1007/978-3-030-89177-0_23


  • DOI: https://doi.org/10.1007/978-3-030-89177-0_23

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89176-3

  • Online ISBN: 978-3-030-89177-0

  • eBook Packages: Computer Science, Computer Science (R0)
