Reading and Writing, Volume 30, Issue 4, pp 691–718

Associated effects of automated essay evaluation software on growth in writing quality for students with and without disabilities

Joshua Wilson

Abstract

The present study examined growth in writing quality associated with feedback provided by an automated essay evaluation system called PEG Writing. Equal numbers of students with disabilities (SWD) and typically developing (TD) students matched on prior writing achievement were sampled (n = 1196 total). Data from a subsample of students (n = 655) were used to investigate evidence of transfer to improved first-draft performance on a follow-up writing prompt. Three-level hierarchical linear modeling was used. Findings indicated that SWD produced first drafts of lesser quality than TD students but grew at a faster rate, closing the gap in writing quality after five revisions. However, these effects were moderated by school quality and the availability of internet-connected devices in schools. There was no evidence of transfer for either group. Results document a positive association between use of PEG Writing and growth in writing quality for SWD, and underscore the importance of sufficient technology resources for maximizing this growth.

Keywords

Automated essay evaluation · Project Essay Grade (PEG) Writing · Students with disabilities · Writing quality

Notes

Acknowledgments

This research was supported in part by a Delegated Authority contract from Measurement Incorporated® to University of Delaware (EDUC432914150001). The opinions expressed in this paper are those of the author and do not necessarily reflect the positions or policies of this agency, and no official endorsement by it should be inferred.

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. School of Education, University of Delaware, Newark, USA