
GERMIC: Application of Gesture Recognition Model with Interactive Correction to Manual Grading Tasks

  • Kohei Yamamoto
  • Fumiya Kan
  • Kazuya Murao
  • Masahiro Mochizuki
  • Nobuhiko Nishio
Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 240)

Abstract

Gesture-based recognition is one of the most intuitive ways to input information, as it does not require cumbersome explicit operations. However, recognition is performed on a user's continuous motion, leaving no opportunity for the user to retry or correct an input. We propose a practical gesture recognition model with a mechanism for interactively correcting recognition errors, and we apply it to a manual grading task to verify its effectiveness. Our system, named GERMIC, consists of two major modules: handwriting recognition and interactive correction. Recognition is realized with image feature extraction and a convolutional neural network, while the interactive correction mechanism is invoked on demand by a user-issued trigger. GERMIC monitors, tracks, and stores information on the user's grading activity and generates output based on the collected recognition results. Compared with conventional manual grading, GERMIC shortens the total task time by 24.7% and demonstrates the effectiveness of the model with interactive correction in two real-world user environments.
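As a concrete illustration of the two modules described above, the following Python sketch shows a small convolutional classifier for a cropped handwritten mark together with a wrapper that falls back to an interactive-correction callback when the prediction confidence is low. This is a minimal sketch under stated assumptions, not the authors' implementation: the network shape, class count, confidence threshold, and the `ask_user` callback are illustrative and do not come from the paper.

```python
# Minimal sketch of CNN-based mark recognition with an on-demand
# interactive-correction fallback. All sizes and the 0.9 threshold
# are illustrative assumptions, not values reported by GERMIC.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MarkCNN(nn.Module):
    """Small CNN classifying a 28x28 grayscale crop of a handwritten mark."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(32 * 7 * 7, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 14x14 -> 7x7
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)                           # raw class scores


def recognize_with_correction(model, crop, ask_user, threshold: float = 0.9):
    """Return a label, asking the user only when the CNN is unsure.

    `ask_user` is a hypothetical callback standing in for the
    interactive-correction UI; it is invoked on demand when the
    softmax confidence drops below `threshold`.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(crop.unsqueeze(0)), dim=1)
        conf, label = probs.max(dim=1)
    if conf.item() < threshold:
        return ask_user(crop, int(label))  # user confirms or overrides
    return int(label)
```

In this sketch the correction step is confidence-triggered; in GERMIC the correction mechanism is invoked by a user-issued trigger, so the callback here only stands in for that interaction.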

Keywords

Handwriting recognition · Recognition error correction

Notes

Acknowledgement

This research has been supported by the Kayamori Foundation of Informational Science Advancement.


Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2018

Authors and Affiliations

  • Kohei Yamamoto (1)
  • Fumiya Kan (1)
  • Kazuya Murao (1)
  • Masahiro Mochizuki (2)
  • Nobuhiko Nishio (1)
  1. College of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
  2. Research Organization of Science and Technology, Ritsumeikan University, Kusatsu, Japan
