The Application of Deep Learning in Automated Essay Evaluation

  • Shili Ge
  • Xiaoxiao Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11984)

Abstract

The shift from Automated Essay Scoring (AES) to Automated Essay Evaluation (AEE) indicates that natural language processing (NLP) researchers are responding positively to demands from the language teaching field. Besides a precise evaluative score, writers and teachers need more feedback from AEE software about writing content and language use. This requirement can be met by neural-network-based deep learning techniques. Deep learning has been applied with great success in many NLP tasks, such as machine translation, sentiment analysis, question answering, and automatic summarization. Neural-network-based deep learning is well suited to AES research and development, since AES mainly requires a precise score of writing quality: with accurately human-scored essays as input, deep learning can produce a scoring model as output. AEE, however, requires more than a score, and deep learning can be used to select linguistically meaningful features of writing quality and to apply them in constructing AEE models. Related experiments have already shown the feasibility of this approach, and further research is worth exploring.
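To make the AES-as-supervised-learning idea above concrete, the following is a minimal sketch in Python/Keras that learns a scoring model from human-scored essays. The embedding-LSTM-regression design, the hyperparameters, and the toy data are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch: learn an essay-scoring model from human-scored essays.
# Architecture, hyperparameters, and toy data are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 10_000  # assumed vocabulary size after tokenization
MAX_LEN = 500        # assumed maximum essay length in tokens

def build_scorer():
    """Map a token-ID sequence to a score in [0, 1] (rescaled to the rubric)."""
    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, 50),       # learn word representations
        layers.LSTM(128),                       # encode the whole essay
        layers.Dense(1, activation="sigmoid"),  # regress a normalized score
    ])
    model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
    return model

# Toy stand-ins for tokenized essays and min-max-normalized human scores.
essays = np.random.randint(1, VOCAB_SIZE, size=(64, MAX_LEN))
scores = np.random.rand(64, 1)

model = build_scorer()
model.fit(essays, scores, epochs=2, batch_size=16, verbose=0)
predicted = model.predict(essays[:2])  # the scoring model applied to essays
```

Note the contrast the abstract draws: a pipeline like this yields only a holistic score (the AES setting), whereas AEE would additionally need interpretable, linguistically meaningful features that can be fed back to writers and teachers.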

Keywords

Automated Essay Evaluation · Automated Essay Scoring · Deep learning · Neural network · Natural Language Processing

Notes

Acknowledgements

This work is financially supported by the Science and Technology Project of Guangdong Province, China (2017A020220002), the Graduate Education Innovation Plan of Guangdong Province (2018JGXM39), and the fund of the Center for Translation Studies, Guangdong University of Foreign Studies.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Shili Ge ¹
  • Xiaoxiao Chen ¹

  1. Guangdong University of Foreign Studies, Guangzhou, China