Esquisse: Using 3D Models Staging to Facilitate the Creation of Vector-Based Trace Figures

  • Axel Antoine
  • Sylvain Malacria
  • Nicolai Marquardt
  • Géry Casiez
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11747)

Abstract

Trace figures are contour drawings of people and objects that capture the essence of scenes without the visual noise of photos or other visual representations. Their focus and clarity make them ideal representations to illustrate designs or interaction techniques. In practice, creating those figures is a tedious task requiring advanced skills, even when creating the figures by tracing outlines based on photos. To mediate the process of creating trace figures, we introduce the open-source tool Esquisse. Informed by our taxonomy of 124 trace figures, Esquisse provides an innovative 3D model staging workflow, with specific interaction techniques that facilitate 3D staging through kinematic manipulation, anchor points and posture tracking. Our rendering algorithm (including stroboscopic rendering effects) creates vector-based trace figures of 3D scenes. We validated Esquisse with an experiment where participants created trace figures illustrating interaction techniques, and results show that participants quickly managed to use and appropriate the tool.
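The paper's rendering algorithm is not reproduced here, but the core idea behind turning a staged 3D scene into a vector-based trace figure can be illustrated with a minimal, self-contained sketch: extract the silhouette edges of a mesh (edges where one adjacent face points toward the viewer and the other away) and emit them as SVG strokes. This is a generic hedged illustration, not Esquisse's actual implementation; the cube mesh, the `silhouette_edges` and `to_svg` helpers, and the orthographic projection are all assumptions introduced for the example.

```python
import math
from collections import defaultdict

# Unit cube: vertex index encodes coordinates as x*4 + y*2 + z.
VERTS = [(float(x), float(y), float(z))
         for x in (0, 1) for y in (0, 1) for z in (0, 1)]
SQUARES = [
    (0, 1, 3, 2), (4, 5, 7, 6),  # x = 0, x = 1
    (0, 1, 5, 4), (2, 3, 7, 6),  # y = 0, y = 1
    (0, 2, 6, 4), (1, 3, 7, 5),  # z = 0, z = 1
]
# Triangulate each square face; winding is irrelevant because
# outward_normal() re-orients every normal away from the cube centre.
FACES = [t for (a, b, c, d) in SQUARES for t in ((a, b, c), (a, c, d))]
CENTER = (0.5, 0.5, 0.5)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(u, v): return sum(x * y for x, y in zip(u, v))
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def outward_normal(tri):
    a, b, c = (VERTS[i] for i in tri)
    n = cross(sub(b, a), sub(c, a))
    centroid = tuple(sum(p) / 3.0 for p in zip(a, b, c))
    # Flip the normal if it points toward the cube centre.
    if dot(n, sub(centroid, CENTER)) < 0:
        n = tuple(-x for x in n)
    return n

def silhouette_edges(view):
    """Edges whose two adjacent faces face opposite sides of `view`."""
    edge_faces = defaultdict(list)
    for tri in FACES:
        for i in range(3):
            e = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edge_faces[e].append(tri)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2
            and dot(outward_normal(fs[0]), view)
              * dot(outward_normal(fs[1]), view) < 0]

def project(p, view):
    """Orthographic projection onto a plane perpendicular to `view`."""
    d = tuple(x / math.sqrt(dot(view, view)) for x in view)
    u = cross(d, (0.0, 0.0, 1.0))
    u = tuple(x / math.sqrt(dot(u, u)) for x in u)
    v = cross(d, u)
    return dot(p, u), dot(p, v)

def to_svg(edges, view, scale=100.0):
    lines = []
    for a, b in edges:
        x1, y1 = project(VERTS[a], view)
        x2, y2 = project(VERTS[b], view)
        lines.append(f'<line x1="{x1*scale:.1f}" y1="{y1*scale:.1f}" '
                     f'x2="{x2*scale:.1f}" y2="{y2*scale:.1f}" stroke="black"/>')
    return ('<svg xmlns="http://www.w3.org/2000/svg">'
            + ''.join(lines) + '</svg>')

view = (1.0, 2.0, 3.0)
edges = silhouette_edges(view)
print(len(edges))  # 6 edges: the hexagonal outline of the cube
print(to_svg(edges, view)[:60])
```

For a convex shape like this cube, the silhouette is simply its outline; tools in the spirit of Esquisse additionally handle occlusion, interior contours, and stylization, which this sketch deliberately omits.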

Keywords

Trace figures · 3D models staging · Vector graphics · Blender


Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  • Axel Antoine (1)
  • Sylvain Malacria (2)
  • Nicolai Marquardt (3)
  • Géry Casiez (1)
  1. Univ. Lille, CNRS, Centrale Lille, Inria, UMR 9189 - CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille, Lille, France
  2. Inria Lille - Nord Europe, Lille, France
  3. University College London, London, UK