Reading and Writing, Volume 26, Issue 1, pp 45–65

The relationship between vocabulary and writing quality in three genres

  • Natalie G. Olinghouse
  • Joshua Wilson

Abstract

The purpose of this study was to examine the role of vocabulary in writing across three genres. Fifth graders (N = 105) wrote three compositions: story, persuasive, and informative. Each composition revolved around the topic of outer space to control for background knowledge. Compositions were scored for holistic writing quality and for several vocabulary constructs: diversity, maturity, elaboration, academic words, content words, and register. The results indicated that students vary their vocabulary use by genre. Story text showed higher diversity than informative text and higher maturity than persuasive text. Persuasive text showed higher diversity than informative text and higher register than both other genres. Informative text included more content words and elaboration than the other two genres, as well as more maturity than persuasive text. Additionally, multiple regression and commonality analysis indicated that the vocabulary constructs related to writing quality differed by genre. For story text, vocabulary diversity was a unique predictor; for persuasive text, content words and register were unique predictors. Finally, for informative text, content words was the strongest unique predictor, explaining almost all of the total variance in the five-factor model, although maturity was also a unique predictor.
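For readers unfamiliar with these measures, the logic behind two of the vocabulary constructs can be illustrated in a few lines of code. The sketch below is not the authors' procedure: the study relied on established instruments (e.g., a vocd-style diversity index and an academic word list), while this version uses a naive tokenizer, a raw type-token ratio, and a five-word placeholder academic list purely for illustration.

```python
# Illustrative only: crude versions of the "diversity" and "academic words"
# constructs. The placeholder word list and tokenizer are assumptions here,
# not the instruments used in the study.
import re

ACADEMIC_WORDS = {"analyze", "data", "environment", "research", "theory"}  # tiny placeholder list

def tokenize(text: str) -> list[str]:
    """Lowercase word tokens; a real analysis would use a proper tokenizer."""
    return re.findall(r"[a-z']+", text.lower())

def type_token_ratio(tokens: list[str]) -> float:
    """Diversity as unique words / total words. Raw TTR is length-sensitive,
    which is why indices such as vocd-D or MTLD are preferred in practice."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def academic_word_proportion(tokens: list[str]) -> float:
    """Share of tokens that appear on an academic word list."""
    hits = sum(1 for t in tokens if t in ACADEMIC_WORDS)
    return hits / len(tokens) if tokens else 0.0

tokens = tokenize("Astronauts research the space environment and analyze data from probes.")
print(f"diversity (TTR): {type_token_ratio(tokens):.2f}")
print(f"academic words:  {academic_word_proportion(tokens):.2f}")
```

The analytic step can be sketched the same way. Commonality analysis partitions a regression model's R² into variance unique to each predictor and variance shared among predictors; a predictor's unique contribution is the drop in R² when it is removed from the full model. The example below runs on fabricated data (the predictor names are placeholders, not the study's dataset) and lumps all shared components together, whereas a full commonality analysis would further partition the shared variance across every predictor subset.

```python
# Minimal commonality-analysis sketch on fabricated data.
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """OLS R-squared, with an intercept column added to X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 105                                     # sample size matching the study
X = rng.normal(size=(n, 3))                 # placeholder predictors
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)  # fabricated quality scores

full = r_squared(X, y)
names = ["diversity", "content words", "register"]
uniques = [full - r_squared(np.delete(X, i, axis=1), y) for i in range(len(names))]
for name, u in zip(names, uniques):
    print(f"unique R^2 for {name}: {u:.3f}")
print(f"shared R^2 (all common components combined): {full - sum(uniques):.3f}")
```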

Keywords

Writing · Vocabulary · Genre

Acknowledgments

This study was funded in part by the University of Connecticut’s Large Faculty Grant program.

Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  1. Educational Psychology, University of Connecticut, Storrs, USA
