MAGIC Summoning: Towards Automatic Suggesting and Testing of Gestures with Low Probability of False Positives During Use

Chapter
Part of the Springer Series on Challenges in Machine Learning book series (SSCML)

Abstract

Gestures for interfaces should be short, pleasing, intuitive, and easily recognized by a computer. However, it is a challenge for interface designers to create gestures easily distinguishable from users’ normal movements. Our tool MAGIC Summoning addresses this problem. Given a specific platform and task, we gather a large database of unlabeled sensor data captured in the environments in which the system will be used (an “Everyday Gesture Library,” or EGL). The EGL is quantized and indexed via multi-dimensional Symbolic Aggregate approXimation (SAX) to enable quick searching. MAGIC exploits the SAX representation of the EGL to suggest gestures with a low likelihood of false triggering. Suggested gestures are ordered by brevity and simplicity, freeing the interface designer to focus on the user experience. Once a gesture is selected, MAGIC can output synthetic examples of the gesture to train a chosen classifier (for example, a hidden Markov model). If the interface designer proposes his own gesture and provides several examples, MAGIC estimates how accurately that gesture can be recognized and how often it will trigger falsely by comparing it against the natural movements in the EGL. We demonstrate MAGIC’s effectiveness in gesture selection and its helpfulness in creating accurate gesture recognizers.
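
To make the indexing step concrete, the sketch below shows the standard SAX quantization of a single sensor-axis window: z-normalize the samples, reduce them to segment means via Piecewise Aggregate Approximation (PAA), and map each mean to a letter using breakpoints that divide the standard normal distribution into equiprobable regions. This is a minimal Python illustration, not MAGIC’s actual implementation; the function name sax_word and the parameters n_segments and alphabet_size are hypothetical, and one way to handle the multi-dimensional case the abstract mentions is to compute one word per sensor axis and concatenate them.

  import numpy as np
  from statistics import NormalDist

  def sax_word(window, n_segments=8, alphabet_size=4):
      # Quantize one sensor-axis window into a SAX word.
      x = np.asarray(window, dtype=float)
      std = x.std()
      # z-normalize so the Gaussian breakpoints below apply;
      # treat near-constant windows as all zeros.
      x = (x - x.mean()) / std if std > 1e-8 else np.zeros_like(x)
      # PAA: the mean of each equal-length segment.
      # Assumes len(window) is divisible by n_segments.
      paa = x.reshape(n_segments, -1).mean(axis=1)
      # Breakpoints carving the standard normal into equiprobable regions.
      bps = [NormalDist().inv_cdf(i / alphabet_size)
             for i in range(1, alphabet_size)]
      # Map each segment mean to a letter: 'a' for the lowest region, etc.
      return "".join(chr(ord("a") + s) for s in np.searchsorted(bps, paa))

  # Example: a 32-sample window of one accelerometer axis becomes
  # an 8-letter word over the alphabet {a, b, c, d}.
  window = np.sin(np.linspace(0, 2 * np.pi, 32))
  print(sax_word(window))

Because nearby movements quantize to the same or similar words, counting how often a candidate gesture’s word appears in the indexed EGL gives a quick proxy for its false-trigger risk.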

Keywords

Gesture recognition · Gesture spotting · False positives · Continuous recognition

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. GVU & School of Interactive Computing, Georgia Institute of Technology, Atlanta, USA
