Open Community Authoring of Targeted Worked Example Problems
Open collaborative authoring systems such as Wikipedia are growing in use and impact. How well does this model work for the development of educational resources? In particular, can volunteers contribute materials of sufficient quality? Can they create resources that meet students’ specific learning needs and connect with their personal characteristics? Our experiment explored these questions using a novel web-based tool for authoring worked examples. Participants were professional teachers (math and non-math) and amateurs, randomly assigned to either the basic tool or an enhanced version that prompted authors to create materials for a specific (fictitious) student. We found that although contributions differed by teaching status, all three groups made worthwhile contributions, and that targeting a specific student led contributors to author materials with greater potential to engage students. The experiment suggests that community authoring of educational resources is a feasible development model and can enable new levels of personalization.