Assessing Text-Based Writing of Low-Skilled College Students

  • Dolores Perin
  • Mark Lauterbach


The problem of poor writing skills at the postsecondary level is large and troubling. This study investigated the writing skills of low-skilled adults attending college developmental education courses by determining whether variables from an automated scoring system predicted human scores on writing quality rubrics. The human-scored measures were a holistic quality rating for a persuasive essay and an analytic quality score for a written summary. Both writing samples were based on texts on psychology and sociology topics related to content taught at the introductory undergraduate level. The study is a modified replication of McNamara et al. (Written Communication, 27(1), 57–86, 2010), who identified several Coh-Metrix variables from five linguistic classes that reliably predicted group membership (high versus low proficiency) based on human quality scores on persuasive essays written by average-achieving college students. When discriminant analyses and ANOVAs failed to replicate the McNamara et al. findings, the current study proceeded to analyze all of the variables in the five Coh-Metrix classes. This larger analysis identified 10 variables that predicted human-scored writing proficiency. Essay and summary scores were predicted by different automated variables. Implications for instruction and for the future use of automated scoring to understand the writing of low-skilled adults are discussed.
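The replication step described above, a discriminant analysis predicting high- versus low-proficiency group membership from linguistic features, can be sketched as follows. This is an illustrative sketch on synthetic data, not the authors' analysis: the five features are hypothetical stand-ins for Coh-Metrix indices, and the group separation is assumed for demonstration.

```python
import numpy as np

# Illustrative sketch only: synthetic stand-ins for Coh-Metrix linguistic
# indices, NOT the study's data. Two groups (low/high proficiency) are
# assumed to differ by a mean shift on each of five hypothetical features.
rng = np.random.default_rng(0)
n = 60
low = rng.normal(0.0, 1.0, size=(n, 5))
high = rng.normal(0.8, 1.0, size=(n, 5))
X = np.vstack([low, high])
y = np.array([0] * n + [1] * n)

# Fisher's two-class linear discriminant: project onto the direction that
# best separates the group means relative to pooled within-group covariance.
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
S_w = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
w = np.linalg.solve(S_w, mu1 - mu0)

# Classify each writing sample at the midpoint of the projected group means.
threshold = w @ (mu0 + mu1) / 2.0
pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"training classification accuracy: {accuracy:.2f}")
```

In the study's setting, a failure to replicate corresponds to such a discriminant function classifying low-skilled writers' essays no better than chance when restricted to the originally reported variables.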


Writing skills · Automated scoring · Adult students · Persuasive essay · Written summary



The writing samples analyzed in this study were collected under a grant from the Bill & Melinda Gates Foundation to the Community College Research Center, Teachers College, Columbia University, for a project entitled “Analysis of Statewide Developmental Education Reform: Learning Assessment Study.” Special thanks to Jian-Ping Ye, Geremy Grant and Natalie Portillo for assistance with data entry.


  1. Abdi, A., Idris, N., Alguliyev, R. M., & Alguliyev, R. M. (2016). An automated summarization assessment algorithm for identifying summarizing strategies. PLoS ONE, 11(1), 1–34. doi: 10.1371/journal.pone.0145809.
  2. Acker, S. R. (2008). Preparing high school students for college-level writing: using an e-portfolio to support a successful transition. The Journal of General Education, 57(1), 1–15. doi: 10.1353/jge.0.0012.
  3. Bailey, T. R., Jeong, D.-W., & Cho, S.-W. (2010). Referral, enrollment, and completion in developmental education sequences in community colleges. Economics of Education Review, 29(2), 255–270.
  4. Brace, N., Kemp, R., & Snelgar, R. (2012). SPSS for psychologists: a guide to data analysis using (5th ed.). New York, NY: Routledge.
  5. Bridgeman, B. (2013). Human ratings and automated essay evaluation. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: current applications and new directions (pp. 221–232). New York, NY: Routledge.
  6. Bridgeman, B., & Carlson, S. B. (1984). Survey of academic writing tasks. Written Communication, 1(2), 247–280. doi: 10.1177/0741088384001002004.
  7. Brown, A. L., & Day, J. D. (1983). Macrorules for summarizing texts: the development of expertise. Journal of Verbal Learning and Verbal Behavior, 22, 1–14.
  8. Brown, J. I., Fishco, V. V., & Hanna, G. S. (1993). The Nelson-Denny reading test, forms G and H. Itasca, IL: Riverside/Houghton-Mifflin.
  9. Burstein, J., Tetreault, J., & Madnani, N. (2013). The e-rater® automated essay scoring system. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: current applications and new directions. New York, NY: Routledge.
  10. Burstein, J., Holtzman, S., Lentini, J., Molloy, H., Shore, J., Steinberg, J., … Elliot, N. (2014). Genre research and automated writing evaluation: using the lens of genre to understand exposure and readiness in teaching and assessing school and workplace writing. Paper presented at the National Council on Measurement in Education (NCME), April 2014, Philadelphia, PA.
  11. Carretti, B., Motta, E., & Re, A. M. (2016). Oral and written expression in children with reading comprehension difficulties. Journal of Learning Disabilities, 49(1), 65–76. doi: 10.1177/0022219414528539.
  12. Cohen, A. M., Brawer, F. B., & Kisker, C. B. (2013). The American community college (6th ed.). Boston, MA: Wiley.
  13. Comer, D. K., & White, E. M. (2016). Adventuring into MOOC writing assessment: challenges, results, and possibilities. College Composition and Communication, 67(3), 318–359.
  14. Crossley, S. A., & McNamara, D. S. (2009). Computational assessment of lexical differences in L1 and L2 writing. Journal of Second Language Writing, 18(2), 119–135.
  15. Crossley, S. A., & McNamara, D. S. (2011). Predicting second language writing proficiency: the roles of cohesion and linguistic sophistication. Journal of Research in Reading. doi: 10.1111/j.1467-9817.2010.01449.x.
  16. Crossley, S. A., Weston, J. L., Sullivan, S. T. M., & McNamara, D. S. (2011). The development of writing proficiency as a function of grade level: a linguistic analysis. Written Communication, 28(3), 282–311. doi: 10.1177/0741088311410188.
  17. Crossley, S. A., Salsbury, T., & McNamara, D. S. (2012). Predicting the proficiency level of language learners using lexical indices. Language Testing, 29(2), 243–263. doi: 10.1177/0265532211419331.
  18. Crossley, S. A., Kyle, K., & McNamara, D. S. (2016). The development and use of cohesive devices in L2 writing and their relations to judgments of essay quality. Journal of Second Language Writing, 32, 1–16. doi: 10.1016/j.jslw.2016.01.003.
  19. Danzak, R. L. (2011). The integration of lexical, syntactic, and discourse features in bilingual adolescents’ writing: an exploratory approach. Language, Speech, and Hearing Services in Schools, 42(4), 491–505.
  20. De La Paz, S. (2005). Effects of historical reasoning instruction and writing strategy mastery in culturally and academically diverse middle school classrooms. Journal of Educational Psychology, 97(2), 139–156. doi: 10.1037/0022-0663.97.2.139.
  21. De La Paz, S., Ferretti, R., Wissinger, D., Yee, L., & MacArthur, C. A. (2012). Adolescents’ disciplinary use of evidence, argumentative strategies, and organizational structure in writing about historical controversies. Written Communication, 29(4), 412–454. doi: 10.1177/0741088312461591.
  22. Deane, P. (2013). On the relation between automated essay scoring and modern views of the writing construct. Assessing Writing, 18(1), 7–24. doi: 10.1016/j.asw.2012.10.002.
  23. Deane, P., & Quinlan, T. (2010). What automated analyses of corpora can tell us about students’ writing skills. Journal of Writing Research, 2(2), 151–177. doi: 10.17239/jowr-2010.02.02.4.
  24. Elliot, N., Deess, P., Rudniy, A., & Joshi, K. (2012). Placement of students into first-year writing courses. Research in the Teaching of English, 46(3), 285–313.
  25. Fallahi, C. R. (2012). Improving the writing skills of college students. In E. L. Grigorenko, E. Mambrino, & D. D. Preiss (Eds.), Writing: a mosaic of new perspectives (pp. 209–219). New York, NY: Psychology Press.
  26. Ferretti, R. P., MacArthur, C. A., & Dowdy, N. S. (2000). The effects of an elaborated goal on the persuasive writing of students with learning disabilities and their normally achieving peers. Journal of Educational Psychology, 92(4), 694–702. doi: 10.1037/0022-0663.92.4.694.
  27. Ferretti, R. P., Andrews-Weckerly, S., & Lewis, W. E. (2007). Improving the argumentative writing of students with learning disabilities: descriptive and normative considerations. Reading and Writing Quarterly, 23(3), 267–285.
  28. Ferretti, R. P., Lewis, W. E., & Andrews-Weckerly, S. (2009). Do goals affect the structure of students’ argumentative writing strategies? Journal of Educational Psychology, 101(3), 577–589. doi: 10.1037/a0014702.
  29. Golder, C., & Coirier, P. (1994). Argumentative text writing: developmental trends. Discourse Processes, 18(2), 187–210. doi: 10.1080/01638539409544891.
  30. Graesser, A. C., McNamara, D. S., Louwerse, M. M., & Cai, Z. (2004). Coh-Metrix: analysis of text on cohesion and language. Behavior Research Methods, Instruments, & Computers, 36(2), 193–202.
  31. Graham, S. (1999). Handwriting and spelling instruction for students with learning disabilities: a review. Learning Disability Quarterly, 22(2), 78–98. doi: 10.2307/1511268.
  32. Graham, S., Hebert, M., Sandbank, M. P., & Harris, K. R. (2014). Assessing the writing achievement of young struggling writers: application of generalizability theory. Learning Disability Quarterly (online first). doi: 10.1177/0731948714555019.
  33. Hale, G., Taylor, C., Bridgeman, B., Carson, J., Kroll, B., & Kantor, R. (1996). A study of writing tasks assigned in academic degree programs (RR-95-44, TOEFL-RR-54). Princeton, NJ: Educational Testing Service.
  34. Hillocks, G. (2011). Teaching argument writing, grades 6–12: supporting claims with relevant evidence and clear reasoning. Portsmouth, NH: Heinemann.
  35. Holtzman, J. M., Elliot, N., Biber, C. L., & Sanders, R. M. (2005). Computerized assessment of dental student writing skills. Journal of Dental Education, 69(2), 285–295.
  36. Hoxby, C. M., & Turner, S. (2015). What high-achieving low-income students know about college. The American Economic Review, 105(5), 514–517. doi: 10.1257/aer.p20151027.
  37. Hughes, K. L., & Scott-Clayton, J. (2011). Assessing developmental assessment in community colleges (CCRC Working Paper No. 19). New York, NY: Community College Research Center, Teachers College, Columbia University.
  38. Kiuhara, S., O’Neill, R., Hawken, L., & Graham, S. (2012). The effectiveness of teaching 10th-grade students STOP, AIMS, and DARE for planning and drafting persuasive text. Exceptional Children, 78(3), 335–355.
  39. Klobucar, A., Elliot, N., Deess, P., Rudniy, O., & Joshi, K. (2013). Automated scoring in context: rapid assessment for placed students. Assessing Writing, 18(1), 62–84. doi: 10.1016/j.asw.2012.10.001.
  40. Li, M., & Kirby, J. R. (2016). The effects of vocabulary breadth and depth on English reading. Applied Linguistics, 36(5), 611–634. doi: 10.1093/applin/amu007.
  41. MacArthur, C. A., & Lembo, L. (2009). Strategy instruction in writing for adult literacy learners. Reading and Writing: An Interdisciplinary Journal, 22(9), 1021–1039.
  42. MacArthur, C. A., & Philippakos, Z. (2010). Instruction in a strategy for compare-contrast writing. Exceptional Children, 76(4), 438–456.
  43. MacArthur, C. A., & Philippakos, Z. A. (2012). Strategy instruction with college basic writers: a design study. In C. Gelati, B. Arfé, & L. Mason (Eds.), Issues in writing research (pp. 87–106). Padova: CLEUP.
  44. MacArthur, C. A., & Philippakos, Z. A. (2013). Self-regulated strategy instruction in developmental writing: a design research project. Community College Review, 41(2), 176–195. doi: 10.1177/0091552113484580.
  45. MacArthur, C. A., Philippakos, Z. A., & Ianetta, M. (2015). Self-regulated strategy instruction in college developmental writing. Journal of Educational Psychology, 107(3), 855–867. doi: 10.1037/edu0000011.
  46. MacArthur, C. A., Philippakos, Z. A., & Graham, S. (2016). A multicomponent measure of writing motivation with basic college writers. Learning Disability Quarterly, 39(1), 31–43. doi: 10.1177/0731948715583115.
  47. Magliano, J. P., & Graesser, A. C. (2012). Computer-based assessment of student-constructed responses. Behavior Research Methods (Online), 44(3), 608–621. doi: 10.3758/s13428-012-0211-3.
  48. Mason, L. H., Davison, M. D., Hammer, C. S., Miller, C. A., & Glutting, J. J. (2013). Knowledge, writing, and language outcomes for a reading comprehension and writing intervention. Reading and Writing: An Interdisciplinary Journal, 26(7), 1133–1158. doi: 10.1007/s11145-012-9409-0.
  49. Mateos, M., Martin, E., Villalon, R., & Luna, M. (2008). Reading and writing to learn in secondary education: online processing activity and written products in summarizing and synthesizing tasks. Reading and Writing: An Interdisciplinary Journal, 21, 675–697.
  50. McCarthy, P. M., & Jarvis, S. (2007). Vocd: a theoretical and empirical evaluation. Language Testing, 24(4), 459–488. doi: 10.1177/0265532207080767.
  51. McNamara, D. S., Crossley, S. A., & McCarthy, P. M. (2010). Linguistic features of writing quality. Written Communication, 27(1), 57–86. doi: 10.1177/0741088309351547.
  52. McNamara, D. S., Crossley, S. A., & Roscoe, R. (2013). Natural language processing in an intelligent writing strategy tutoring system. Behavior Research Methods (Online), 45(2), 499–515. doi: 10.3758/s13428-012-0258-1.
  53. McNamara, D. S., Graesser, A. C., McCarthy, P. M., & Cai, Z. (2014). Automated evaluation of text and discourse with Coh-Metrix. New York, NY: Cambridge University Press.
  54. Miller, L. C., Russell, C. L., Cheng, A.-L., & Skarbek, A. J. (2015). Evaluating undergraduate nursing students’ self-efficacy and competence in writing: effects of a writing intensive intervention. Nurse Education in Practice, 15(3), 174–180. doi: 10.1016/j.nepr.2014.12.002.
  55. National Center for Education Statistics. (2012). The nation’s report card: writing 2011 (NCES 2012–470). Washington, D.C.: Institute of Education Sciences, U.S. Department of Education.
  56. National Governors’ Association and Council of Chief State School Officers. (2010). Common core state standards: English language arts and literacy in history/social studies, science, and technical subjects. Washington, DC: Author.
  57. Newell, G. E., Beach, R., Smith, J., & VanDerHeide, J. (2011). Teaching and learning: argumentative reading and writing: a review of research. Reading Research Quarterly, 46(3), 273–304. doi: 10.1598/RRQ.46.3.4.
  58. Nussbaum, E. M., & Schraw, G. (2007). Promoting argument-counterargument integration in students’ writing. The Journal of Experimental Education, 76(1), 59–92.
  59. O’Neill, P., Adler-Kassner, L., Fleischer, C., & Hall, A. (2012). Creating the framework for success in postsecondary writing. College English, 74(6), 520–533.
  60. Olinghouse, N. G. (2008). Student- and instruction-level predictors of narrative writing in third-grade students. Reading and Writing: An Interdisciplinary Journal, 21(1–2), 3–26.
  61. Olinghouse, N. G., & Leaird, J. T. (2009). The relationship between measures of vocabulary and narrative writing quality in second- and fourth-grade students. Reading and Writing: An Interdisciplinary Journal, 22, 545–565.
  62. Olinghouse, N. G., & Wilson, J. (2013). The relationship between vocabulary and writing quality in three genres. Reading and Writing: An Interdisciplinary Journal, 26(1), 45–65. doi: 10.1007/s11145-012-9392-5.
  63. Parsad, B., & Lewis, L. (2003). Remedial education at degree-granting postsecondary institutions in fall 2000: Statistical analysis report (NCES 2004–010). Washington, D.C.: U.S. Department of Education, National Center for Education Statistics.
  64. Perelman, L. (2013). Critique of Mark D. Shermis & Ben Hamner, “Contrasting state-of-the-art automated scoring of essays: analysis.” Journal of Writing Assessment, 6(1), not paginated.
  65. Perin, D., & Greenberg, D. (1993). Relationship between literacy gains and length of stay in basic education program for health care workers. Adult Basic Education, 3(3), 171–186.
  66. Perin, D., Keselman, A., & Monopoli, M. (2003). The academic writing of community college remedial students: text and learner variables. Higher Education, 45(1), 19–42.
  67. Perin, D., Bork, R. H., Peverly, S. T., & Mason, L. H. (2013). A contextualized curricular supplement for developmental reading and writing. Journal of College Reading and Learning, 43(2), 8–38.
  68. Perin, D., Raufman, J. R., & Kalamkarian, H. S. (2015). Developmental reading and English assessment in a researcher-practitioner partnership (CCRC Working Paper No. 85). New York, NY: Community College Research Center, Teachers College, Columbia University.
  69. Ramineni, C. (2013). Validating automated essay scoring for online writing placement. Assessing Writing, 18(1), 40–61. doi: 10.1016/j.asw.2012.10.005.
  70. Reilly, E. D., Stafford, R. E., Williams, K. M., & Corliss, S. B. (2014). Evaluating the validity and applicability of automated essay scoring in two massive open online courses. International Review of Research in Open and Distance Learning, 15(5), 83–99.
  71. Sampson, V., Grooms, J., & Walker, J. P. (2011). Argument-driven inquiry as a way to help students learn how to participate in scientific argumentation and craft written arguments: an exploratory study. Science Education, 95(2), 217–257.
  72. Shanahan, T. (2016). Relationships between reading and writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (2nd ed., pp. 194–207). New York, NY: Guilford.
  73. Shermis, M. D., & Hamner, B. (2013). Contrasting state-of-the-art automated scoring of essays. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: current applications and new directions (pp. 313–346). New York, NY: Routledge.
  74. Shermis, M. D., Burstein, J., Elliot, N., Miel, S., & Foltz, P. W. (2016). Automated writing evaluation: an expanding body of knowledge. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (2nd ed., pp. 395–409). New York, NY: Guilford.
  75. Weigle, S. C. (2013). English language learners and automated scoring of essays: critical considerations. Assessing Writing, 18(2), 85–99. doi: 10.1016/j.asw.2012.10.006.
  76. Westby, C., Culatta, B., Lawrence, B., & Hall-Kenyon, K. (2010). Summarizing expository texts. Topics in Language Disorders, 30(4), 275–287. doi: 10.1097/TLD.0b013e3181ff5a88.
  77. Williamson, G. (2008). A text readability continuum for postsecondary readiness. Journal of Advanced Academics, 19(4), 602–632. doi: 10.4219/jaa-2008-832.
  78. Wilson, J., Olinghouse, N. G., McCoach, D. B., Santangelo, T., & Andrada, G. N. (2016). Comparing the accuracy of different scoring methods for identifying sixth graders at risk of failing a state writing assessment. Assessing Writing, 27(1), 11–23. doi: 10.1016/j.asw.2015.06.003.
  79. Winerip, M. (2012, April 22). Facing a robo-grader? Just keep obfuscating mellifluously. New York Times.
  80. Wissinger, D. R., & De La Paz, S. (2015). Effects of critical discussions on middle school students’ written historical arguments. Journal of Educational Psychology (online first). doi: 10.1037/edu0000043.
  81. Wolfe, C. R. (2011). Argumentation across the curriculum. Written Communication, 28(1), 193–219. doi: 10.1177/0741088311399236.
  82. Wolfe, C. R., Britt, M. A., & Butler, J. A. (2009). Argumentation schema and the myside bias in written argumentation. Written Communication, 26(2), 183–209. doi: 10.1177/0741088309333019.
  83. Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III tests of achievement and tests of cognitive abilities. Itasca, IL: Riverside Publishing.
  84. Zhang, R. (2015). A Coh-Metrix study of writings by majors of mechanic engineering in the vocational college. Theory and Practice in Language Studies, 5(9), 1929–1934. doi: 10.17507/tpls.0509.23.

Copyright information

© International Artificial Intelligence in Education Society 2016

Authors and Affiliations

  1. Teachers College, Columbia University, New York, USA
  2. Brooklyn College, City University of New York, Brooklyn, USA
