GERMIC: Application of Gesture Recognition Model with Interactive Correction to Manual Grading Tasks

Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 240)

Abstract

Gesture-based input is one of the most intuitive ways to enter information and is free of cumbersome operations. Conventionally, however, recognition is performed on a user's continuous motion with no provision for retrial or correction. We propose a practical gesture recognition model with an interactive mechanism for correcting recognition errors, and we apply it to a manual grading task to verify its effectiveness. Our system, named GERMIC, consists of two major modules: handwriting recognition and interactive correction. Recognition is realized with image feature extraction and a convolutional neural network, and the interactive correction mechanism is invoked on demand by a user-based trigger. GERMIC monitors, tracks, and stores information on the user's grading task and generates output based on the collected recognition information. Compared with conventional manual grading, GERMIC significantly shortens the total time for completing the task, by 24.7%, and demonstrates the effectiveness of the interactive correction model in two real-world user environments.
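
The full text is not included in this preview, so the sketch below is only a rough illustration of the pipeline the abstract describes: a small convolutional recognizer for handwritten marks (using TensorFlow and MNIST, both cited in the notes below), together with a hypothetical correct_prediction() hook standing in for GERMIC's user-triggered correction. The layer sizes, the record structure, and the helper names are assumptions made for illustration, not the authors' implementation.

# Illustrative sketch only: a small CNN classifier for handwritten marks in the
# spirit of GERMIC's recognition module, trained on MNIST. The layer sizes and
# the correct_prediction() helper are assumptions, not the authors' design.
import numpy as np
import tensorflow as tf

def build_recognizer(num_classes: int = 10) -> tf.keras.Model:
    """A minimal CNN over 28x28 grayscale crops of handwritten marks."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def correct_prediction(record: dict, corrected_label: int) -> dict:
    """Hypothetical on-demand correction: the user overrides a stored result."""
    record["label"] = corrected_label
    record["corrected_by_user"] = True
    return record

if __name__ == "__main__":
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., np.newaxis].astype("float32") / 255.0
    x_test = x_test[..., np.newaxis].astype("float32") / 255.0

    model = build_recognizer()
    model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.1)

    # Recognize one mark and store the result, analogous to GERMIC storing
    # per-answer recognition information for later aggregation.
    probs = model.predict(x_test[:1])
    record = {"label": int(np.argmax(probs)), "corrected_by_user": False}

    # If the user triggers the correction interface, overwrite the stored label.
    record = correct_prediction(record, corrected_label=int(y_test[0]))
    print(record)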

Keywords

  • Handwriting recognition
  • Recognition error correction


Notes

  1. Xerox: https://www.xerox.com/.
  2. Remark: http://remarksoftware.com/products/office-omr/.
  3. OpenCV: http://opencv.org/opencv-3-2.html.
  4. TensorFlow: https://www.tensorflow.org/.
  5. The MNIST database of handwritten digits: http://yann.lecun.com/exdb/mnist/.


Acknowledgement

This research has been supported by the Kayamori Foundation of Informational Science Advancement.

Author information

Corresponding author

Correspondence to Kohei Yamamoto.


Copyright information

© 2018 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Yamamoto, K., Kan, F., Murao, K., Mochizuki, M., Nishio, N. (2018). GERMIC: Application of Gesture Recognition Model with Interactive Correction to Manual Grading Tasks. In: Murao, K., Ohmura, R., Inoue, S., Gotoh, Y. (eds) Mobile Computing, Applications, and Services. MobiCASE 2018. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 240. Springer, Cham. https://doi.org/10.1007/978-3-319-90740-6_6

  • DOI: https://doi.org/10.1007/978-3-319-90740-6_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-90739-0

  • Online ISBN: 978-3-319-90740-6

  • eBook Packages: Computer Science, Computer Science (R0)