Automatic representation of knowledge structure: enhancing learning through knowledge structure reflection in an online course
Summary writing is an important skill that students use throughout their academic careers; writing supports reading and vocabulary skills as well as the acquisition of content knowledge. This exploratory and development-oriented investigation appraises the recently released online writing system Graphical Interface of Knowledge Structure (GIKS), which provides structural feedback on students’ essays as network graphs for reflection and revision. Is the quality of students’ summary essays better with GIKS relative to other common approaches? Using the learning materials, treatments, and procedure of a dissertation by Sarwar (Doctoral Thesis, University of Ottawa, 2012), adapted for this setting, Grade 10 students (n = 180) read one of three physics lesson texts each week over a three-week period and wrote a summary essay of it. They then immediately received one of three counterbalanced treatments — reflection with GIKS, solving physics problems as multiple-choice questions, or viewing video information — and finally rewrote the summary essay. All three treatments showed pre-to-post essay improvement in the central concepts subgraph structure that almost exactly matched the results obtained in the previous dissertation. GIKS with reflection obtained the largest improvement, due to the largest increase in relevant links and the largest decrease in irrelevant links. The different treatments led to different knowledge structures in systematic ways. These findings confirm those of Sarwar (2012) and support the use of GIKS as immediate, focused formative feedback that supports summary writing in online settings.
Keywords: Knowledge structure · Reflection · Writing · GIKS · Feedback
Kyung Kim acknowledges support by the Pennsylvania State University’s Center for Online Innovation in Learning (Grant No. 05-042-23 UP10010).
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
- Bangert-Drowns, R. L., Hurley, M. M., & Wilkinson, B. (2004). The effects of school-based writing-to-learn interventions on academic achievement: A meta-analysis. Review of Educational Research, 74, 29–58.
- Clariana, R. B. (2010). Multi-decision approaches for eliciting knowledge structure. In D. Ifenthaler, P. Pirnay-Dummer, & N. M. Seel (Eds.), Computer-based diagnostics and systematic analysis of knowledge (pp. 41–59). New York: Springer.
- Clariana, R. B., Wolfe, M. B., & Kim, K. (2014). The influence of narrative and expository lesson text structures on knowledge structures: Alternate measures of knowledge structure. Educational Technology Research and Development, 62(5), 601–616. https://doi.org/10.1007/s11423-014-9348-3.
- Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21–29.
- Coştu, B., & Ayas, A. (2005). Evaporation in different liquids: Secondary students’ conceptions. Research in Science & Technological Education, 23(1), 75–97.
- DiCerbo, K. E. (2007). Knowledge structures of entering computer networking students and their instructors. Journal of Information Technology Education, 6(1), 263–277.
- Fesel, S. S., Segers, E., Clariana, R. B., & Verhoeven, L. (2015). Quality of children’s knowledge representations in digital text comprehension: Evidence from pathfinder networks. Computers in Human Behavior, 48, 135–146.
- Graham, S., & Hebert, M. (2010). Writing to read: A report from Carnegie Corporation of New York. Evidence for how writing can improve reading. New York: Carnegie Corporation. https://www.carnegie.org/media/filer_public/9d/e2/9de20604-a055-42da-bc00-77da949b29d7/ccny_report_2010_writing.pdf.
- Ifenthaler, D. (2010). Relational, structural, and semantic analysis of graphical representations and concept maps. Educational Technology Research and Development, 58(1), 81–97.
- Johnson-Laird, P. N. (2004). The history of mental models. In K. Manktelow & M. C. Chung (Eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 179–212). New York: Psychology Press.
- Jonassen, D. H., Beissner, K., & Yacci, M. (1993). Structural knowledge: Techniques for representing, conveying, and acquiring structural knowledge. Hillsdale, NJ: Lawrence Erlbaum Associates.
- Kim, K. (2017a). Visualizing first and second language interactions in science reading: A knowledge structure network approach. Language Assessment Quarterly, 14, 328–345.
- Koul, R., Clariana, R. B., & Salehi, R. (2005). Comparing several human and computer-based methods for scoring concept maps and essays. Journal of Educational Computing Research, 32(3), 261–273.
- Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7–19.
- Li, P., & Clariana, R. B. (2018). Reading comprehension in L1 and L2: An integrative approach. Journal of Neurolinguistics, 45. Retrieved from http://blclab.org/wp-content/uploads/2018/04/Li_Clariana_2018.pdf.
- Mørch, A. I., Engeness, I., Cheng, V. C., Cheung, W. K., & Wong, K. C. (2017). EssayCritic: Writing to learn with a knowledge-based design critiquing system. Educational Technology & Society, 20(2), 213–223.
- Mun, Y. (2015). The effect of sorting and writing tasks on knowledge structure measure in bilinguals’ reading comprehension. Master’s Thesis. Retrieved from https://scholarsphere.psu.edu/files/x059c7329.
- Ong, W. J. (1982). Orality and literacy: The technologizing of the word. London: Methuen.
- Osborne, R., & Wittrock, M. (1985). The Generative Learning Model and its implications for science education. Studies in Science Education, 12, 59–87.
- Ozuru, Y., Briner, S., Kurby, C. A., & McNamara, D. S. (2013). Comparing comprehension measured by multiple-choice and open-ended questions. Canadian Journal of Experimental Psychology, 67(3), 215–227.
- Sarwar, G. S. (2012). Comparing the effect of reflections, written exercises, and multimedia instruction to address learners’ misconceptions using structural assessment of knowledge. Doctoral Thesis, University of Ottawa.
- Sarwar, G. S., & Trumpower, D. L. (2015). Effects of conceptual, procedural, and declarative reflection on students’ structural knowledge in physics. Educational Technology Research and Development, 63(2), 185–201.
- Spector, J., & Koszalka, T. (2004). The DEEP methodology for assessing learning in complex domains. Final report to the National Science Foundation Evaluative Research and Evaluation. Syracuse, NY: Syracuse University.
- Su, I.-H., & Hung, P.-H. (2010). Validity study on automatic scoring methods for the summarization of scientific articles. Paper presented at the 7th conference of the International Test Commission, 19–21 July 2010, Hong Kong. Retrieved from https://bib.irb.hr/datoteka/575883.itc_programme_book_-final_2.pdf.
- Tippett, C. D. (2010). Refutation text in science education: A review of two decades of research. International Journal of Science and Mathematics Education, 8(6), 951–970.
- Trumpower, D. L., & Sarwar, G. S. (2010). Effectiveness of structural feedback provided by Pathfinder networks. Journal of Educational Computing Research, 43(1), 7–24.
- Van Dijk, T. A., & Kintsch, W. (1983). Strategies of discourse comprehension. New York: Academic Press.
- Zimmerman, W. A., Kang, H. B., Kim, K., Gao, M., Johnson, G., Clariana, R., et al. (2018). Computer-automated approach for scoring short essays in an introductory statistics course. Journal of Statistics Education, 26(1), 40–47.