Issues and Challenges for Implementing Writing Analytics at Higher Education

  • Duygu Bektik


Effective written communication is an essential skill that promotes educational success for undergraduates. One of the key requirements of good academic writing in higher education is that students develop a critical mind and learn how to construct sound arguments in their discipline. Writing analytics focuses on the measurement and analysis of written texts to improve the teaching and learning of writing, and it is being developed at the intersection of fields such as automated assessment and computational linguistics. Since writing is a deeply human activity, its association with computational formulations is double-edged. This chapter discusses issues and challenges for implementing writing analytics in higher education, drawing on theoretical considerations that emerge from a literature review and on an example application. It includes findings from empirical research conducted with academic tutors at the Open University, UK, on adopting writing analytics to support their feedback processes. These findings reveal the preconceptions academic tutors hold about the use of writing analytics, specifically concerns centred on privacy and ethics.


Keywords: Writing analytics · Discourse-centric learning analytics · Academic writing · Natural language processing · Automated text analysis



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. The Open University, Milton Keynes, UK
