An Approach to Measuring the Difficulty of Learning Activities

  • Francisco J. Gallego-Durán
  • Rafael Molina-Carmona
  • Faraón Llorens-Largo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9753)

Abstract

In any learning environment, training activities are the basis for learning. Students need to practice to develop new skills and to improve previously acquired abilities. Each student has specific needs based on their previous knowledge and personal skills. Allocating a proper activity to a particular student consists of selecting a training activity that fits the student's skills and knowledge. This allocation is particularly important: students who are assigned a training activity that is too hard tend to abandon it rather than make the effort needed to complete it, whereas an activity that is too easy poses no challenge and the learning outcomes tend to be very limited. A motivating activity, suitable for a given student, should therefore be neither too easy nor too difficult. The problem arises when trying to measure or estimate the difficulty of a given training activity. Our proposal is a definition of the difficulty of a learning activity that can be used to measure the learning cost for a general learner. As a first step, the desirable features and the intrinsic limitations of a difficulty function are identified, so that a mathematical definition can be obtained quite straightforwardly. The result is an intuitive, understandable and objectively measurable way to determine the difficulty of a learning activity.
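To make the idea of an "objectively measurable" difficulty concrete, the sketch below shows one simple, hedged way such a function could be computed from observed learner data. It is illustrative only and does not reproduce the paper's definition: the function name `estimate_difficulty`, the use of per-learner completion costs, and the reference `budget` parameter are assumptions introduced here for demonstration.

```python
# Illustrative sketch (not the authors' formulation): one simple way to obtain
# an objective difficulty value for a learning activity is to aggregate the
# observed cost (attempts, minutes, ...) that a sample of learners needed
# before completing it.

from statistics import mean

def estimate_difficulty(costs_to_complete, budget):
    """Estimate a difficulty value in [0, 1] from observed learner costs.

    costs_to_complete : per-learner costs (attempts, minutes, ...);
                        use float('inf') for learners who gave up.
    budget            : reference cost considered a reasonable effort.

    Returns 0 for an activity every learner completes immediately, and
    values approaching 1 as more learners exceed the budget or abandon it.
    """
    if not costs_to_complete:
        raise ValueError("need at least one observation")
    # Clip each cost at the budget so learners who abandon the activity
    # count as maximal (but finite) cost.
    clipped = [min(cost, budget) for cost in costs_to_complete]
    return mean(clipped) / budget

# Example: five learners, one of whom abandoned the activity,
# with a reference budget of 10 attempts.
print(estimate_difficulty([2, 4, 6, 10, float("inf")], budget=10))  # 0.64
```

Under these assumptions the resulting value is easy to interpret (average fraction of the effort budget consumed) and can be compared across activities, which is the kind of intuitive, measurable property the paper argues a difficulty function should have.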

Keywords

Difficulty estimation · Learning activity


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Francisco J. Gallego-Durán (1)
  • Rafael Molina-Carmona (1)
  • Faraón Llorens-Largo (1)

  1. Cátedra Santander-UA de Transformación Digital, Universidad de Alicante, Alicante, Spain