A Propagation-Based Method of Estimating Students’ Concept Understanding

  • Rafael López-García
  • Makoto P. Kato
  • Katsumi Tanaka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10539)


Abstract

In this paper, we introduce a method for estimating the degree of students' understanding of concepts and relationships while they learn from digital text materials online. To achieve our goal, we first define a semantic network that represents the knowledge in a material. Second, we model students' behavior as the sequence of relationships they read in the material, and we build a probabilistic model of relationship understanding. We also define inference rules to add new relationships to the network. Third, we simulate the propagation of new concept understanding through the network using a method based on Biased PageRank, extending it with a representation of prior knowledge and a weighting of each concept's contribution according to the uniqueness of its relationships. Finally, we describe an experiment comparing our method against a method without propagation and a method in which propagation is inversely proportional to the distance between concepts. Our method shows significant improvement over the others, providing evidence that concept understanding propagates through the entire network.
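The propagation step described above rests on Biased (personalized) PageRank: a bias vector concentrates teleport mass on concepts the student is assumed to already understand, and iteration spreads that mass along the network's relationships. The sketch below illustrates only this generic mechanism; the paper's specific extensions (prior-knowledge modeling and relationship-uniqueness weighting) are not reproduced, and all names (`biased_pagerank`, `adjacency`, `prior`, `damping`) are illustrative, not taken from the paper.

```python
# Minimal Biased PageRank sketch over a small concept network.
# adjacency: dict mapping each concept to the concepts it links to.
# prior: nonnegative weights for concepts assumed already understood
#        (these form the personalization / bias vector).

def biased_pagerank(adjacency, prior, damping=0.85, iters=100, tol=1e-10):
    nodes = list(adjacency)
    total = sum(prior.get(n, 0.0) for n in nodes)
    bias = {n: prior.get(n, 0.0) / total for n in nodes}
    rank = dict(bias)  # start from the bias distribution
    for _ in range(iters):
        # teleport mass always returns to the biased concepts
        new = {n: (1.0 - damping) * bias[n] for n in nodes}
        for n in nodes:
            out = adjacency[n]
            if out:
                # spread this concept's rank evenly over its successors
                share = damping * rank[n] / len(out)
                for m in out:
                    new[m] += share
            else:
                # dangling concept: redistribute its rank via the bias vector
                for m in nodes:
                    new[m] += damping * rank[n] * bias[m]
        diff = sum(abs(new[n] - rank[n]) for n in nodes)
        rank = new
        if diff < tol:
            break
    return rank

# Tiny concept chain: understanding A supports B, which supports C.
graph = {"A": ["B"], "B": ["C"], "C": []}
scores = biased_pagerank(graph, prior={"A": 1.0})
```

With the bias placed entirely on concept A, rank still reaches B and C through the links, which is the intuition behind "propagation of understanding" in the abstract.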


Keywords: Learning data analytics · Concept understanding · Biased PageRank



This work was supported by JSPS KAKENHI Grant Numbers JP15H01718 and JP26700009.

We would also like to thank Profs. Masatoshi Yoshikawa and Roi Blanco for their invaluable advice on this research.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Social Informatics, Graduate School of Informatics, Kyoto University, Kyoto, Japan
