An Educational System for Learning Search Algorithms and Automatically Assessing Student Performance

  • Foteini Grivokostopoulou
  • Isidoros Perikos
  • Ioannis Hatzilygeroudis

Abstract

In this paper, we first present an educational system that assists students in learning, and tutors in teaching, search algorithms, a core artificial intelligence topic. Learning is achieved through a wide range of learning activities. Algorithm visualizations demonstrate the operational functionality of algorithms according to the principles of active learning; for example, a visualization can pause and ask the student to specify the next step or to explain how the algorithm reached a decision. Similarly, interactive exercises help students learn to apply algorithms in a step-by-step, interactive way: students apply an algorithm to an example case, specifying its steps interactively, with the system's guidance and help when necessary. Next, we present the assessment approaches integrated in the system, which aim to assist tutors in assessing student performance, to reduce their marking workload, and to provide immediate and meaningful feedback to students. Automatic assessment proceeds in four stages, which constitute a general assessment framework. First, the system computes the similarity between the student's answer and the correct answer using the edit distance metric. Second, it identifies the type of the answer, based on an introduced answer categorization scheme related to the completeness and accuracy of an answer, also taking student carelessness into account. Third, the types of errors made are identified, based on an introduced error categorization scheme. Finally, the answer is marked automatically, based on its type, the edit distance, and the types of errors made. To assess the learning effectiveness of the system, an extensive evaluation study was conducted under real classroom conditions; the experiment showed very encouraging results. Furthermore, to evaluate the performance of the assessment system, we compared the assessment mechanism against expert (human) tutors. A total of 400 student answers were assessed by three tutors, and the results showed very good agreement between the automatic assessment system and the tutors.
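
The first assessment stage relies on the standard edit distance (Levenshtein) metric. The paper gives no implementation, so the following is a minimal Python sketch only: it assumes an answer is encoded as a sequence of algorithm steps (e.g., node expansions), and the function name and node labels are illustrative, not taken from the paper.

```python
def edit_distance(student_steps, correct_steps):
    """Levenshtein distance between two step sequences: the minimum
    number of insertions, deletions and substitutions needed to turn
    one sequence into the other."""
    m, n = len(student_steps), len(correct_steps)
    # dp[i][j] = distance between the first i student steps
    # and the first j correct steps
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if student_steps[i - 1] == correct_steps[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

# Hypothetical example: a student traces a search but swaps two
# expansions and omits the final node (labels are illustrative).
correct = ["A", "B", "C", "D", "E", "F"]
student = ["A", "C", "B", "D", "E"]
print(edit_distance(student, correct))  # 3: two substitutions, one missing step
```

Under this encoding, a distance of 0 indicates a fully correct step sequence, and larger values indicate how far the student's trace diverges from the correct one, which the later stages could then interpret.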

Keywords

Artificial intelligence curriculum · Search algorithms · Automated assessment · Intelligent tutoring system · Algorithm visualization

Copyright information

© International Artificial Intelligence in Education Society 2016

Authors and Affiliations

  • Foteini Grivokostopoulou (1)
  • Isidoros Perikos (1)
  • Ioannis Hatzilygeroudis (1, corresponding author)

  1. Department of Computer Engineering & Informatics, School of Engineering, University of Patras, Patras, Greece