Technology, Knowledge and Learning, Volume 19, Issue 1–2, pp 205–220

Educational Data Mining and Learning Analytics: Applications to Constructionist Research

Integrative Review

Abstract

Constructionism can be a powerful framework for teaching complex content to novices. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn this content in contextualized, personally meaningful ways. In this paper, we investigate the relevance of a set of approaches broadly called "educational data mining" or "learning analytics" (henceforth, EDM) to providing a basis for quantitative research on constructionist learning that does not abandon the richness seen as essential by many researchers in that paradigm. We suggest that EDM may have the potential to support research that is meaningful and useful not only to researchers working actively in the constructionist tradition but also to wider communities. Finally, we explore potential collaborations between researchers in the EDM and constructionist traditions; such collaborations have the potential to enhance the ability of constructionist researchers to make rich inferences about learning and learners, while providing EDM researchers with many interesting new research questions and challenges.

Keywords

Constructionism · Educational data mining · Learning analytics · Design of learning environments · Project-based learning

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  • Matthew Berland (1)
  • Ryan S. Baker (2)
  • Paulo Blikstein (3)

  1. Department of Curriculum and Instruction, University of Wisconsin–Madison, Madison, USA
  2. Teachers College, Columbia University, New York, USA
  3. Graduate School of Education, Stanford University, Stanford, USA