Open Community Authoring of Targeted Worked Example Problems

  • Turadg Aleahmad
  • Vincent Aleven
  • Robert Kraut
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5091)

Abstract

Open collaborative authoring systems such as Wikipedia are growing in use and impact. How well does this model work for the development of educational resources? In particular, can volunteers contribute materials of sufficient quality? Could they create resources that meet students’ specific learning needs and engage their personal characteristics? Our experiment explored these questions using a novel web-based tool for authoring worked examples. Participants were professional teachers (math and non-math) and amateurs, randomly assigned either to the basic tool or to an enhanced version that prompted authors to create materials for a specific (fictitious) student. We find that, while contributions differ by teaching status, all three groups produce worthwhile materials, and that targeting a specific student leads contributors to author materials with greater potential to engage students. The experiment suggests that community authoring of educational resources is a feasible development model and can enable new levels of personalization.

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Turadg Aleahmad 1
  • Vincent Aleven 1
  • Robert Kraut 1

  1. Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, USA
