Assessing Text-Based Writing of Low-Skilled College Students
Poor writing skills at the postsecondary level are a large and troubling problem. This study investigated the writing skills of low-skilled adults attending college developmental education courses by determining whether variables from an automated scoring system were predictive of human scores on writing quality rubrics. The human-scored measures were a holistic quality rating for a persuasive essay and an analytic quality score for a written summary. Both writing samples were based on texts on psychology and sociology topics related to content taught at the introductory undergraduate level. The study is a modified replication of McNamara et al. (Written Communication, 27(1), 57–86, 2010), who identified several Coh-Metrix variables from five linguistic classes that reliably predicted group membership (high versus low proficiency) based on human quality scores on persuasive essays written by average-achieving college students. When discriminant analyses and ANOVAs failed to replicate the McNamara et al. findings, the current study proceeded to analyze all of the variables in the five Coh-Metrix classes. This larger analysis identified 10 variables that predicted human-scored writing proficiency. Essay and summary scores were predicted by different automated variables. Implications for instruction and for the future use of automated scoring to understand the writing of low-skilled adults are discussed.
Keywords: Writing skills · Automated scoring · Adult students · Persuasive essay · Written summary
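To make the analytic approach concrete, the sketch below illustrates in Python the general technique the abstract describes: one-way ANOVAs testing whether individual automated linguistic indices differ between proficiency groups, followed by a discriminant analysis predicting group membership from the indices jointly. This is a minimal sketch under stated assumptions; the index names and simulated data are hypothetical stand-ins for Coh-Metrix output, not the study's actual variables or software (the study's analyses were run in SPSS; cf. Brace et al. 2012).

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # hypothetical number of writing samples

# 0 = low proficiency, 1 = high proficiency, e.g., groups formed from
# human holistic quality ratings (hypothetical labels).
group = rng.integers(0, 2, size=n)

# Simulated automated indices; the names are illustrative stand-ins for
# Coh-Metrix-style measures, not the study's actual variables.
indices = pd.DataFrame({
    "word_frequency": rng.normal(3.0 - 0.3 * group, 0.5, n),
    "sentence_length": rng.normal(14.0 + 2.0 * group, 3.0, n),
    "referential_cohesion": rng.normal(0.40 + 0.05 * group, 0.10, n),
})

# One-way ANOVA per index: does its mean differ across the two groups?
for col in indices.columns:
    f, p = f_oneway(indices.loc[group == 0, col],
                    indices.loc[group == 1, col])
    print(f"{col}: F = {f:.2f}, p = {p:.4f}")

# Discriminant analysis: how well do the indices jointly predict group
# membership?  Cross-validation gives an honest classification rate.
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, indices.values, group, cv=5).mean()
print(f"Mean cross-validated classification accuracy: {accuracy:.2f}")
```

In the published analysis, the grouping came from human rubric scores and the candidate predictors from five Coh-Metrix linguistic classes; the cross-validated accuracy above plays the role of the discriminant function's classification rate.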
The writing samples analyzed in this study were collected under a grant from the Bill & Melinda Gates Foundation to the Community College Research Center, Teachers College, Columbia University, for a project entitled “Analysis of Statewide Developmental Education Reform: Learning Assessment Study.” Special thanks to Jian-Ping Ye, Geremy Grant, and Natalie Portillo for assistance with data entry.
- Brace, N., Kemp, R., & Snelgar, R. (2012). SPSS for psychologists: A guide to data analysis (5th ed.). New York, NY: Routledge.
- Bridgeman, B. (2013). Human ratings and automated essay evaluation. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: current applications and new directions (pp. 221–232). New York, NY: Routledge.
- Brown, J. I., Fishco, V. V., & Hanna, G. S. (1993). The Nelson-Denny reading test, forms G and H. Itasca, IL: Riverside/Houghton-Mifflin.
- Burstein, J., Tetreault, J., & Madnani, N. (2013). The e-rater® automated essay scoring system. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: current applications and new directions. New York, NY: Routledge.
- Burstein, J., Holtzman, S., Lentini, J., Molloy, H., Shore, J., Steinberg, J., … Elliot, N. (2014). Genre research and automated writing evaluation: using the lens of genre to understand exposure and readiness in teaching and assessing school and workplace writing. Paper presented at the National Council on Measurement in Education (NCME), April 2014, Philadelphia, PA.
- Cohen, A. M., Brawer, F. B., & Kisker, C. B. (2013). The American community college (6th ed.). Boston, MA: Wiley.
- Comer, D. K., & White, E. M. (2016). Adventuring into MOOC writing assessment: challenges, results, and possibilities. College Composition and Communication, 67(3), 318–359.
- De La Paz, S., Ferretti, R., Wissinger, D., Yee, L., & MacArthur, C. A. (2012). Adolescents’ disciplinary use of evidence, argumentative strategies, and organizational structure in writing about historical controversies. Written Communication, 29(4), 412–454. doi: 10.1177/0741088312461591.
- Elliot, N., Deess, P., Rudniy, A., & Joshi, K. (2012). Placement of students into first-year writing courses. Research in the Teaching of English, 46(3), 285–313.
- Fallahi, C. R. (2012). Improving the writing skills of college students. In E. L. Grigorenko, E. Mambrino, & D. D. Preiss (Eds.), Writing: a mosaic of new perspectives (pp. 209–219). New York, NY: Psychology Press.
- Ferretti, R. P., MacArthur, C. A., & Dowdy, N. S. (2000). The effects of an elaborated goal on the persuasive writing of students with learning disabilities and their normally achieving peers. Journal of Educational Psychology, 92(4), 694–702. doi: 10.1037/0022-0663.92.4.694.
- Hale, G., Taylor, C., Bridgeman, B., Carson, J., Kroll, B., & Kantor, R. (1996). A study of writing tasks assigned in academic degree programs (RR-95-44, TOEFL-RR-54). Princeton, NJ: Educational Testing Service.
- Hillocks, G. (2011). Teaching argument writing, grades 6–12: supporting claims with relevant evidence and clear reasoning. Portsmouth, NH: Heinemann.
- Holtzman, J. M., Elliot, N., Biber, C. L., & Sanders, R. M. (2005). Computerized assessment of dental student writing skills. Journal of Dental Education, 69(2), 285–295.
- Hughes, K. L., & Scott-Clayton, J. (2011). Assessing developmental assessment in community colleges (CCRC Working Paper No. 19). New York, NY: Community College Research Center, Teachers College, Columbia University.
- MacArthur, C. A., & Philippakos, Z. A. (2012). Strategy instruction with college basic writers: a design study. In C. Gelati, B. Arfé, & L. Mason (Eds.), Issues in writing research (pp. 87–106). Padova: CLEUP.
- Mason, L. H., Davison, M. D., Hammer, C. S., Miller, C. A., & Glutting, J. J. (2013). Knowledge, writing, and language outcomes for a reading comprehension and writing intervention. Reading and Writing: An Interdisciplinary Journal, 26(7), 1133–1158. doi: 10.1007/s11145-012-9409-0.
- McNamara, D. S., Graesser, A. C., McCarthy, P. M., & Cai, Z. (2014). Automated evaluation of text and discourse with Coh-Metrix. New York, NY: Cambridge University Press.
- National Center for Education Statistics. (2012). The nation’s report card: writing 2011 (NCES 2012–470). Washington, D.C.: Institute of Education Sciences, U.S. Department of Education. Available at http://nces.ed.gov/nationsreportcard/pdf/main2011/2012470.pdf.
- National Governors’ Association and Council of Chief State School Officers. (2010). Common core state standards: English language arts and literacy in history/social studies, science, and technical subjects. Washington, DC: Author. Available at http://www.corestandards.org/.
- O’Neill, P., Adler-Kassner, L., Fleischer, C., & Hall, A. (2012). Creating the framework for success in postsecondary writing. College English, 74(6), 520–533.
- Parsad, B., & Lewis, L. (2003). Remedial education at degree-granting postsecondary institutions in fall 2000: Statistical analysis report (NCES 2004–010). Washington D.C.: U.S. Department of Education, National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubs2004/2004010.pdf.
- Perelman, L. (2013). Critique of Mark D. Shermis & Ben Hamner, “Contrasting state-of-the-art automated scoring of essays: analysis.” Journal of Writing Assessment, 6(1), not paginated. Available at http://www.journalofwritingassessment.org/article.php?article=69.
- Perin, D., & Greenberg, D. (1993). Relationship between literacy gains and length of stay in basic education program for health care workers. Adult Basic Education, 3(3), 171–186.
- Perin, D., Raufman, J. R., & Kalamkarian, H. S. (2015). Developmental reading and English assessment in a researcher-practitioner partnership (CCRC Working Paper No. 85). New York, NY: Community College Research Center, Teachers College, Columbia University. Available at http://ccrc.tc.columbia.edu/publications/developmental-reading-english-assessment-researcher-practitioner-partnership.html.
- Reilly, E. D., Stafford, R. E., Williams, K. M., & Corliss, S. B. (2014). Evaluating the validity and applicability of automated essay scoring in two massive open online courses. International Review of Research in Open and Distance Learning, 15(5), 83–99.
- Shanahan, T. (2016). Relationships between reading and writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (2nd ed., pp. 194–207). New York, NY: Guilford.
- Shermis, M. D., & Hamner, B. (2013). Contrasting state-of-the-art automated scoring of essays. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: current applications and new directions (pp. 313–346). New York, NY: Routledge.
- Shermis, M. D., Burstein, J., Elliot, N., Miel, S., & Foltz, P. W. (2016). Automated writing evaluation: an expanding body of knowledge. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (2nd ed., pp. 395–409). New York, NY: Guilford.
- Wilson, J., Olinghouse, N. G., McCoach, D. B., Santangelo, T., & Andrada, G. N. (2016). Comparing the accuracy of different scoring methods for identifying sixth graders at risk of failing a state writing assessment. Assessing Writing, 27(1), 11–23. doi: 10.1016/j.asw.2015.06.003.
- Winerip, M. (2012, April 22). Facing a robo-grader? Just keep obfuscating mellifluously. New York Times. Retrieved from http://www.nytimes.com/2012/04/23/education/robo-readers-used-to-grade-test-essays.html.
- Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III tests of achievement and tests of cognitive abilities. Itasca, IL: Riverside Publishing.