Understanding How Non-experts Collect and Annotate Activity Data

  • Naomi Johnson
  • Michael Jones
  • Kevin Seppi
  • Lawrence Thatcher
Chapter
Part of the Springer Series in Adaptive Environments book series (SPSADENV)

Abstract

Inexpensive, low-power sensors and microcontrollers are widely available along with tutorials about how to use them in systems that sense the world around them. Despite this progress, it remains difficult for non-experts to design and implement event recognizers that find events in raw sensor data streams. Such a recognizer might identify specific events, such as gestures, from accelerometer or gyroscope data and be used to build an interactive system. While it is possible to use machine learning to learn event recognizers from labeled examples in sensor data streams, non-experts find it difficult to label events using sensor data alone. We combine sensor data and video recordings of example events to create a better interface for labeling examples. Non-expert users were able to collect video and sensor data and then quickly and accurately label example events using the video and sensor data together. We include 3 example systems based on event recognizers that were trained from examples labeled using this process.
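The labeling step the abstract describes, mapping event intervals annotated in video onto a time-synchronized sensor stream, can be sketched as follows. This is a minimal illustration only; the function name, the `(start, end, label)` annotation format, and the 50 Hz sample rate are assumptions for the sketch, not the chapter's actual implementation.

```python
# Sketch: turn video-based annotations into per-sample labels for a
# synchronized sensor stream, producing training data for a recognizer.

SAMPLE_RATE_HZ = 50  # assumed accelerometer sampling rate

def label_samples(num_samples, annotations, rate=SAMPLE_RATE_HZ):
    """Map (start_s, end_s, label) intervals taken from a video
    annotation tool onto per-sample labels for the sensor stream."""
    labels = ["none"] * num_samples
    for start_s, end_s, label in annotations:
        first = int(start_s * rate)
        last = min(int(end_s * rate), num_samples - 1)
        for i in range(first, last + 1):
            labels[i] = label
    return labels

# Example: a "wave" gesture seen in the video from 1.0 s to 1.2 s
labels = label_samples(100, [(1.0, 1.2, "wave")])
print(labels[50], labels[55], labels[61])  # wave wave none
```

The labeled samples can then be grouped into fixed-size windows and fed to any standard supervised learner.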

Keywords

Interactive learning · Toolkit · Annotation · Computational devices · Machine learning · Internet of things

Acknowledgements

This work was supported by NSF grant IIS-1406578.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Naomi Johnson, University of Virginia, Charlottesville, USA
  • Michael Jones, Brigham Young University, Provo, USA
  • Kevin Seppi, Brigham Young University, Provo, USA
  • Lawrence Thatcher, Brigham Young University, Provo, USA
