Abstract
Gesture- and speech-based interaction offers designers a powerful technique for creating 3D CAD models. Previous studies of gesture-based modeling have employed author-defined gestures, which may not be very user friendly. The aim of this study was to collect a data set of user-generated gestures and accompanying voice commands for 3D modeling, for form exploration in the conceptual architectural design phase. We conducted an experiment with 41 subjects to elicit their preferences in using gestures and speech for twelve 3D CAD modeling referents. In this paper we present the different types of gestures we found and report user preferences for gestures and speech. Findings from this study will inform the design of a speech- and gesture-based CAD modeling interface.
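The abstract does not say how user preferences across the twelve referents were quantified, but gesture elicitation studies of this kind commonly summarize consensus per referent with an agreement score: the proposals for a referent are grouped into sets of identical gestures, and each group contributes the square of its share of all proposals. A minimal sketch (the `agreement_score` function and the example labels are illustrative, not taken from this chapter):

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one referent, as commonly used in
    gesture elicitation studies: sum over identical-gesture
    groups Pi of (|Pi| / |P|)^2, where P is all proposals."""
    total = len(proposals)
    counts = Counter(proposals)
    return sum((n / total) ** 2 for n in counts.values())

# Hypothetical labels for a referent such as "extrude",
# from 10 participants: 6 agree, 3 agree, 1 is unique.
labels = ["pull-up"] * 6 + ["pinch-lift"] * 3 + ["point"]
print(round(agreement_score(labels), 2))  # 0.46
```

The score ranges from 1/|P| (every participant proposed a different gesture) up to 1.0 (perfect agreement), so it gives a single comparable number per referent.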
Acknowledgements
This research is supported by SUTD-MIT International Design Centre (IDC) grant number IDG21500109, under the Sustainable Built Environment Grand Challenge, and Visualization and Prototyping Design Research Thrust.
Copyright information
© 2018 Springer Nature Singapore Pte Ltd.
Cite this chapter
Tunçer, B., & Khan, S. (2018). User defined conceptual modeling gestures. In J. H. Lee (Ed.), Computational Studies on Cultural Variation and Heredity (KAIST Research Series). Springer, Singapore. https://doi.org/10.1007/978-981-10-8189-7_10
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-8188-0
Online ISBN: 978-981-10-8189-7
eBook Packages: Computer Science (R0)